Look at using SQLite. Or at least start with it, and only consider PostgreSQL if you 'outgrow' it. PostgreSQL is the best multi-reader, multi-writer, multi-user database out there, but with the load you expect you don't need the extra DBA overhead of managing it. SQLite serializes writes (one writer at a time), but it can still sustain thousands of writes per second on typical hardware; test it yourself to find out where the limits are for your workload. Plus you can start prototyping with Tcl as the scripting language (or Perl, Python, or Ruby, but those are not as good for this use case -- no flaming, please). Tcl can load SQLite as a library directly into the Tcl shell. Awesome. If you do 'outgrow' SQLite in any way, migrating to PostgreSQL should be relatively easy.

FYI -- I attended the Tcl/Tk conference in early November, and Richard Hipp, the creator of SQLite, gave a full-day tutorial on SQLite internals. Very impressive software.
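To show how little ceremony an embedded SQLite database needs, here is a minimal sketch using Python's built-in sqlite3 module (the doc suggests Tcl, where the equivalent is just as short, but Python is shown here); the table and column names are my own invention:

```python
import sqlite3

# In-memory database for illustration; pass a file path instead for persistence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO events (payload) VALUES (?)", ("hello",))
conn.commit()

rows = conn.execute("SELECT payload FROM events").fetchall()
print(rows)  # [('hello',)]
conn.close()
```

No server process, no credentials, no DBA work: the whole database engine lives inside the script's process, which is exactly why it is so convenient for prototyping.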
If you do need to migrate later, pgloader can load a SQLite database into PostgreSQL for you: https://github.com/dimitri/pgloader
My suggestion: script the workload you need to test, measure throughput at the number of concurrent connections you expect, and base your decision on that. FYI -- I use Fossil SCM over Git. It is awesome.