How do any writes end up on other horizontally scaled machines though? To me the whole point of a database on another machine is that it is the single point of truth that many horizontally scaled servers can write to and read each others' updates from. If you don't need that, you might as well read the entire dataset into memory and be done with it.

I know TFA says that you can "soon" automagically replicate your sqlite db to another server, but it only allows writes on a single server and all the others will be read-only. Now you need to think about how to route all write traffic to that one app server. And all writes to that server will still take several milliseconds (possibly more, since S3 is eventually consistent) to propagate to all the replicas.

In short, a 100x latency improvement for reads is great but a bit of a red herring: if you have read-only traffic you don't need sqlite replication in the first place, and if you do have write traffic, then routing it through S3 will certainly not give you a 100x latency improvement over Postgres or MySQL. Litestream is definitely on my radar, but as a continuous backup system for small apps ("small" meaning it runs and will always run on a single box) rather than a wholesale replacement of traditional client-server databases.

PS: Congrats Ben!

I do understand the point of running SQLite in-process to speed up reads.

I do not understand why SQLite must also handle intense write load with HA, failover, etc.

I would rather have the best of both worlds: a proper DB server (say, Postgres) replicated to super-fast and simple read replicas in SQLite on every node.

(My ideal case would be some kind of natural sharding where each node keeps its own updates, or just a highly available data-browsing app, with data in SQLite files updated as entire files, like a deployment.)

> a proper DB server (say, Postgres) replicated to super-fast and simple read replicas in SQLite on every node.

Having your replicas run different database software than your master seems to me like asking for, at the very least, weird edge-case bugs.

Is this something anyone does? Interested to hear experiences if so!

I've not done this but it's intriguing; potentially a best-of-all-worlds solution.

I think "proper" automatic replication is not possible given the mismatch between Postgres and SQLite - not everything in Postgres maps to a thing that is possible in SQLite.

That said, there are a variety of ways to get data out of Postgres, and a variety of ways to get things into SQLite.

You could periodically export CSVs (or whatever) from Postgres and import them into SQLite.
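As a rough sketch of that kind of periodic job (the connection string, file path, and "users" table are made up for illustration, and you'd schedule it with cron or similar), you can even skip the CSV intermediate and copy rows straight across in Python:

    # Periodic Postgres -> SQLite copy, rebuilding the replica table each run.
    # Assumes psycopg2 is installed; all names and paths here are illustrative.
    import sqlite3
    import psycopg2

    PG_DSN = "dbname=app user=app host=db.internal"  # assumed connection string
    SQLITE_PATH = "/var/lib/app/replica.db"          # assumed replica location

    def sync_users():
        pg = psycopg2.connect(PG_DSN)
        lite = sqlite3.connect(SQLITE_PATH)
        try:
            # Pull the current rows out of Postgres.
            with pg.cursor() as cur:
                cur.execute("SELECT id, email, created_at FROM users")
                rows = cur.fetchall()
            # Rebuild the SQLite table wholesale; blunt but simple, and fine
            # for modest datasets.
            lite.execute("DROP TABLE IF EXISTS users")
            lite.execute(
                "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, created_at TEXT)"
            )
            lite.executemany(
                "INSERT INTO users VALUES (?, ?, ?)",
                [(id_, email, str(ts)) for id_, email, ts in rows],
            )
            lite.commit()
        finally:
            lite.close()
            pg.close()

    if __name__ == "__main__":
        sync_users()

For bigger tables you'd obviously want to copy only rows changed since the last run, and write to a temp table and swap it in if readers hit the file while the sync is running.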

Or you could do a more realtime sync by using a Postgres foreign data wrapper like this one: https://github.com/pgspider/sqlite_fdw which would let you write directly to SQLite. Combine that with database triggers on the Postgres side of things and you've got something pretty close to realtime replication.
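Here's roughly what that could look like, driven from Python only so the snippet is self-contained (the statements are plain Postgres SQL; the server/table/column names and the file path are made up, this only mirrors inserts, CREATE EXTENSION usually needs superuser, and the SQLite file has to live somewhere the Postgres server itself can reach):

    # One-time setup: point sqlite_fdw at a SQLite file and mirror inserts
    # into it via a trigger. Names and paths are illustrative assumptions.
    # EXECUTE FUNCTION trigger syntax needs Postgres 11+.
    import psycopg2

    SETUP_SQL = """
    CREATE EXTENSION IF NOT EXISTS sqlite_fdw;

    CREATE SERVER IF NOT EXISTS sqlite_replica
        FOREIGN DATA WRAPPER sqlite_fdw
        OPTIONS (database '/var/lib/app/replica.db');

    CREATE FOREIGN TABLE IF NOT EXISTS users_replica (
        id    integer,
        email text
    ) SERVER sqlite_replica OPTIONS (table 'users');

    CREATE OR REPLACE FUNCTION mirror_user_insert() RETURNS trigger AS $$
    BEGIN
        INSERT INTO users_replica (id, email) VALUES (NEW.id, NEW.email);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    DROP TRIGGER IF EXISTS users_mirror ON users;
    CREATE TRIGGER users_mirror
        AFTER INSERT ON users
        FOR EACH ROW EXECUTE FUNCTION mirror_user_insert();
    """

    with psycopg2.connect("dbname=app user=app host=db.internal") as conn:
        with conn.cursor() as cur:
            cur.execute(SETUP_SQL)

Updates and deletes would need their own triggers, and a failed write to the SQLite side would abort the original Postgres transaction, which may or may not be what you want.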

Those sorts of solutions wouldn't be as robust as "real" replication (specifically, what happens when one of the SQLite replicas is unavailable? do you catch things up later?) but could be very useful for a lot of scenarios. You could have Postgres importing gobs of data, and "replicating" it over to your read-only reporting server which uses SQLite as a data source.