What does HackerNews think of sqlite_fdw?

SQLite Foreign Data Wrapper for PostgreSQL

Language: PLpgSQL

#23 in PostgreSQL
#34 in SQL
Postgres has foreign data wrappers which can kinda achieve this (see also: https://github.com/pgspider/sqlite_fdw), but I've been surprised by how well SQLite is already supported by many tools: sometimes it comes for free since it's used in testing. Hooking up stuff like Litestream can work really well too.
I've not done this but it's intriguing; potentially a best-of-all-worlds solution.

I think "proper" automatic replication is not possible given the mismatch between Postgres and SQLite - not everything in Postgres maps to a thing that is possible in SQLite.

That said, there are a variety of ways to get data out of Postgres, and a variety of ways to get things into SQLite.

You could periodically export CSVs or whatever from Postgres and periodically import them into SQLite.
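A minimal sketch of that CSV round-trip, assuming a hypothetical `events` table (the paths and table name are illustrative, not from the original comment):

```sql
-- Postgres side: dump the table to CSV.
-- (Server-side COPY needs filesystem privileges on the server;
--  from psql you can use \copy for a client-side file instead.)
COPY events TO '/tmp/events.csv' WITH (FORMAT csv, HEADER);

-- SQLite side (sqlite3 shell dot-commands, not SQL):
-- .mode csv
-- .import --skip 1 /tmp/events.csv events
-- (--skip needs a reasonably recent sqlite3; older versions
--  would import the header row as data)
```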

Or you could do a more realtime sync by using a Postgres foreign data wrapper like this one: https://github.com/pgspider/sqlite_fdw which would let you write directly to SQLite. Combine that with database triggers on the Postgres side of things and you've got something pretty close to realtime replication.
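A rough sketch of that trigger-based setup, again with a hypothetical `events` table; the server name, file path, and column list are made up for illustration, and the `database`/`table` options follow the sqlite_fdw README:

```sql
-- Load the wrapper and point a server at a SQLite file.
CREATE EXTENSION sqlite_fdw;

CREATE SERVER sqlite_replica FOREIGN DATA WRAPPER sqlite_fdw
    OPTIONS (database '/var/data/replica.db');

CREATE FOREIGN TABLE events_sqlite (
    id   bigint,
    body text
) SERVER sqlite_replica OPTIONS (table 'events');

-- Mirror inserts on the local Postgres table into SQLite.
CREATE FUNCTION mirror_event() RETURNS trigger AS $$
BEGIN
    INSERT INTO events_sqlite (id, body) VALUES (NEW.id, NEW.body);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_mirror
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE FUNCTION mirror_event();
```

Note the trigger makes every Postgres insert wait on the SQLite write, which is part of why this isn't as robust as real replication.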

Those sorts of solutions wouldn't be as robust as "real" replication (specifically, what happens when one of the SQLite replicas is unavailable? do you catch things up later?) but could be very useful for a lot of scenarios. You could have Postgres importing gobs of data, and "replicating" it over to your read-only reporting server which uses SQLite as a data source.

Assuming you're doing bulk inserts and can restart the insert on any failure, one tactic is to disable all consistency and durability storage features during the insert. This heavily reduces the I/O operations needed and the need to wait for them to complete — assuming iops are the resource you're constrained on.

Steps:

- use ext2 or even tmpfs as the filesystem; this avoids journaling (and the copy-on-write overhead you'd pay on something like ZFS); mount with options async,nobarrier

- set tables to be UNLOGGED to disable Postgres WAL
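The UNLOGGED step looks like this; the table name is illustrative, and `synchronous_commit` is an extra knob in the same spirit that the comment doesn't mention:

```sql
-- Create the staging table with no WAL logging (not crash-safe:
-- an unlogged table is truncated after a crash).
CREATE UNLOGGED TABLE bulk_load (id bigint, payload text);

-- Or flip an existing table (this rewrites the whole table):
ALTER TABLE bulk_load SET UNLOGGED;

-- Related per-session setting: don't wait for WAL flush on commit.
SET synchronous_commit = off;
```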

This got things fast enough for my last use case. But some further ideas, since at some point you become CPU/memory bound:

- you could try sharding by partitioning the table

- another trick is to use SQLite as a backing store for inserts (it is quite fast if you turn off all logging) and then query it from Postgres via an FDW: https://github.com/pgspider/sqlite_fdw
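The SQLite-as-backing-store idea can be sketched like this. The PRAGMAs are the standard way to turn off SQLite's durability features; the server name and path are hypothetical, and the `IMPORT FOREIGN SCHEMA` form is taken from the sqlite_fdw README:

```sql
-- SQLite side (run in the sqlite3 shell before the bulk load):
PRAGMA journal_mode = OFF;  -- no rollback journal
PRAGMA synchronous = OFF;   -- don't wait for fsync
-- ...then do the bulk INSERTs, ideally in one big transaction...

-- Postgres side: attach the SQLite file and query it.
CREATE EXTENSION IF NOT EXISTS sqlite_fdw;
CREATE SERVER fast_store FOREIGN DATA WRAPPER sqlite_fdw
    OPTIONS (database '/var/data/fast_store.db');
IMPORT FOREIGN SCHEMA public FROM SERVER fast_store INTO public;
```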