But... for an app running multiple instances, the article suggests a lot of extra complexity. Managing that complexity makes less sense to me than just firing up a managed database server (RDS, or whatever).
Even if you're self hosting, I think running a MySQL/Postgres cluster is a lot less complex than the options this article calls out.
edit: and more to the point - MySQL / Postgres replication is boring. It's done by a ton of deployments, and isn't going to surprise you, at least at smaller scale.
> One huge benefit to SQLite is the fact that it runs as an embedded part of your application
But all production workloads I've been involved with over the last ≈5 years have either been containerised apps offloading state to databases/caches/blob storage etc., or have been working towards that.
What I do think would be awesome is an embeddable Postgres library/binary that could use a single state file on your local filesystem for development and a networked database in production. That would give me the benefits I like most about SQLite locally, without having to deal with files in production.
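To make the shape of that wish concrete, here's a purely hypothetical sketch; the embedded URL scheme and any library behind it don't exist and are made up for illustration. The point is just the single configuration switch the comment describes.

```python
# Hypothetical sketch only: no real embedded-Postgres package or URL scheme is
# implied. The idea is one connection string, backed by a local state file in
# development and a networked server in production.
import os

def database_url() -> str:
    # Production: a real networked Postgres, injected via the environment.
    if url := os.environ.get("DATABASE_URL"):
        return url
    # Development: an embedded engine owning a single local state file,
    # the way SQLite does today (scheme is invented for illustration).
    return "postgresql+embedded:///./dev-state.pgdata"
```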
I wired Postgres up for our local dev. I don't believe in mocking the database, so all our tests and local dev run against a real Postgres instance.
The main tricks (a rough sketch follows the list):
- Store the Postgres installation in a known location.
- Set the dynamic loader search path to the installation's lib dir, e.g., LD_LIBRARY_PATH.
- Don't run CI as root (or patch out Postgres' check for root)
- Create and cache the data directory based on the source code migrations.
- Use a copy-on-write file clone (e.g., cp --reflink, or clonefile on APFS) to duplicate the cached data directory for each individual test.
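Putting those tricks together, here's a minimal sketch of the data-directory caching and per-test cloning as a pytest fixture. The install path, migrations directory, superuser name, and cache location are assumptions for illustration, not the actual setup described above.

```python
import hashlib
import os
import subprocess
from pathlib import Path

import pytest

PG_HOME = Path("/opt/postgres-16")   # assumed "known location" of the Postgres install
MIGRATIONS = Path("db/migrations")   # assumed migrations directory
CACHE_ROOT = Path(".pgcache")        # where prepared data directories are cached


def _migrations_digest() -> str:
    """Hash the migration files so the cached data dir is keyed by schema version."""
    h = hashlib.sha256()
    for f in sorted(MIGRATIONS.glob("*.sql")):
        h.update(f.read_bytes())
    return h.hexdigest()[:16]


def _prepare_template() -> Path:
    """Run initdb once per schema version and cache the result."""
    template = CACHE_ROOT / _migrations_digest()
    if not template.exists():
        CACHE_ROOT.mkdir(exist_ok=True)
        # Point the dynamic loader at the bundled libs instead of system ones.
        env = {**os.environ, "LD_LIBRARY_PATH": str(PG_HOME / "lib")}
        subprocess.run(
            [str(PG_HOME / "bin" / "initdb"), "-D", str(template), "-U", "test"],
            check=True, env=env,
        )
        # Migrations would be applied here against a temporarily started server.
    return template


@pytest.fixture
def pg_data_dir(tmp_path: Path) -> Path:
    """Give each test its own copy-on-write clone of the cached data dir."""
    template = _prepare_template()
    datadir = tmp_path / "pgdata"
    # --reflink=auto clones cheaply on filesystems that support it (btrfs, XFS);
    # on macOS/APFS the equivalent is `cp -c`.
    subprocess.run(
        ["cp", "-R", "--reflink=auto", str(template), str(datadir)], check=True
    )
    os.chmod(datadir, 0o700)  # Postgres refuses group/other-accessible data dirs
    return datadir
```

Each test then starts its own server against its private clone, so tests can trash the database freely without affecting one another.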
One thing I'd like to pursue is to store the Postgres data dir in SQLite [1]. Then, I can reset the "file system" using SQL after each test instead of copying the entire datadir.