Look at using SQLite. Or at least start with that, and only consider PostgreSQL if you 'outgrow' it. PostgreSQL is the best multi-reader, multi-writer, multi-user database out there, but with the load you expect you don't need the extra DBA overhead of managing it. Note that SQLite serializes writes (only one writer at a time), but individual writes are fast enough that it can sustain thousands of writes per second with low latency -- and you can test that yourself to find out where the limits are for your workload.

Plus you can start prototyping with Tcl as the scripting language (or Perl, Python or Ruby, but those are not as good for this use-case -- no flaming please). Tcl can load SQLite as a library directly into the Tcl shell. Awesome. If you 'outgrow' SQLite in any way, migrating to PostgreSQL should be relatively easy.

FYI - I attended the Tcl/Tk conference in early November and Richard Hipp, the SQLite creator, gave a full-day tutorial on SQLite internals. Very impressive software.
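For example, here's roughly what using SQLite from a Tcl shell looks like -- the sqlite3 package below is SQLite's bundled Tcl binding; the file name and schema are just for illustration:

    package require sqlite3              ;# SQLite's bundled Tcl binding

    # Open (or create) a database file; "db" becomes a new Tcl command.
    sqlite3 db ./app.db

    db eval {CREATE TABLE IF NOT EXISTS users(id INTEGER PRIMARY KEY, name TEXT)}

    # Tcl variables in the SQL are bound as parameters, not string-substituted.
    set name "alice"
    db eval {INSERT INTO users(name) VALUES($name)}

    # Iterate over result rows; column values land in the array "row".
    db eval {SELECT id, name FROM users} row {
        puts "$row(id): $row(name)"
    }

    db close

That's the whole API surface you need to get started -- no server process, no connection strings.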

The thought is: always start with the simplest capability that meets your requirements -- PG is awesome, but it does require care and feeding by a DBA. Here's a tool to migrate from SQLite to PostgreSQL if and when it becomes necessary; if your schema is reasonably clean, it shouldn't be a hardship:

https://github.com/dimitri/pgloader
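In the simplest case the migration is a one-liner -- the file path and connection string here are placeholders, so check the pgloader docs for your setup:

    pgloader ./app.db postgresql://user@localhost/appdb

pgloader reads the SQLite schema, creates the corresponding PostgreSQL tables, and copies the data over.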

My suggestion: script a test that checks throughput at the number of concurrent connections you expect to need, and base your decision on that (a rough sketch of such a test is below). FYI - I use Fossil SCM over git (built on SQLite by the same author, as it happens). It is awesome.

https://www.fossil-scm.org/home/doc/trunk/www/index.wiki
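Here's a minimal single-process sketch of that throughput test in Tcl -- the file name, table, and row count are arbitrary, and a real concurrency test would run several copies of this from separate processes, since SQLite serializes writers:

    package require sqlite3

    sqlite3 db ./bench.db
    db eval {PRAGMA journal_mode=WAL}    ;# WAL lets readers run alongside the writer
    db eval {CREATE TABLE IF NOT EXISTS bench(id INTEGER PRIMARY KEY, payload TEXT)}

    set n 10000
    # Time n inserts batched in one transaction; [time] reports microseconds.
    set usec [lindex [time {
        db transaction {
            for {set i 0} {$i < $n} {incr i} {
                db eval {INSERT INTO bench(payload) VALUES($i)}
            }
        }
    }] 0]
    puts "[format %.0f [expr {$n / ($usec / 1e6)}]] inserts/sec"
    db close

Run it with different batch sizes, with and without the transaction wrapper, and from several processes at once to see where contention starts to bite -- then you'll know whether SQLite covers your load or it's time for PostgreSQL.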