What are the practical limits (not theoretical limits) for LiteFS? Are we talking 100's of GB, or something smaller?

We're targeting databases of 1 to 10 GB right now and those seem to work pretty well overall. I'm sure we'll expand that target over time as LiteFS matures though.

Most of the requests I've seen have been to support a lot of smaller databases (e.g. hundreds or thousands of 1GB databases) rather than one huge 100GB database. You can do interesting things like sharding. Or if you're a SaaS company, you could do one SQLite database per customer. That has some nice isolation properties, and it eases SQLite's single-writer restriction since your writes are spread across multiple databases.
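
Roughly, the per-customer pattern looks like this (a minimal Go sketch; the mount path and file-naming scheme are just placeholders, not anything LiteFS requires):

```go
package tenantdb

import (
	"database/sql"
	"path/filepath"
	"sync"

	_ "github.com/mattn/go-sqlite3" // any SQLite driver works
)

// TenantDBs hands out one SQLite database per customer. Each customer
// gets an independent database file, so writes for one customer never
// contend with another customer's writer lock.
type TenantDBs struct {
	mu  sync.Mutex
	dir string // e.g. a LiteFS FUSE mount such as /litefs (placeholder)
	dbs map[string]*sql.DB
}

func New(dir string) *TenantDBs {
	return &TenantDBs{dir: dir, dbs: map[string]*sql.DB{}}
}

// DB returns the per-customer database, opening it on first use.
func (t *TenantDBs) DB(tenantID string) (*sql.DB, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if db, ok := t.dbs[tenantID]; ok {
		return db, nil
	}
	path := filepath.Join(t.dir, tenantID+".db")
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	t.dbs[tenantID] = db
	return db, nil
}
```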

Hi Ben

> We're targeting databases of 1 to 10 GB right now and those seem to work pretty well overall.

What are some reasons you reckon that the current setup won't scale beyond 10GB? Or, is it some arbitrary threshold beyond which you folks don't stress test things?

Also, if I may, you mentioned on Twitter that this work was 3 months in the making with 100s of PRs. Leaving aside stability-related bugs, which earlier design decisions caused painful bugs / roadblocks? Consequently, what things majorly surprised you in a way that perhaps has altered your approach / outlook towards this project or engineering in general?

Thanks.

> What are some reasons you reckon that the current setup won't scale beyond 10GB?

It's more of an arbitrary threshold right now. A lot of the testing we do right now is chaos testing, where we frequently kill nodes to ensure that the cluster recovers correctly, and we test a range of database sizes within that threshold. Larger databases should work fine, but you also run into SQLite's single-writer limitation. Also, the majority of databases we see in the wild are less than 10GB.
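
Conceptually, those chaos tests boil down to a loop like this (hypothetical harness types, not LiteFS's actual test code; the real tests also kill the primary to exercise failover, which this sketch skips):

```go
package chaos

import (
	"math/rand"
	"testing"
	"time"
)

// Node and Cluster are hypothetical stand-ins for a test harness.
type Node interface {
	Kill()
	Restart() error
	Checksum() (string, error) // checksum of the on-disk database
}

type Cluster struct {
	Primary  Node
	Replicas []Node
}

// KillAndVerify repeatedly kills a random replica, restarts it, and
// asserts every replica converges back to the primary's state.
func KillAndVerify(t *testing.T, c *Cluster, rounds int) {
	for i := 0; i < rounds; i++ {
		victim := c.Replicas[rand.Intn(len(c.Replicas))]
		victim.Kill()
		if err := victim.Restart(); err != nil {
			t.Fatalf("restart: %v", err)
		}
		time.Sleep(5 * time.Second) // allow catch-up replication
		want, err := c.Primary.Checksum()
		if err != nil {
			t.Fatal(err)
		}
		for _, n := range c.Replicas {
			got, err := n.Checksum()
			if err != nil {
				t.Fatal(err)
			}
			if got != want {
				t.Fatalf("replica diverged: got %s, want %s", got, want)
			}
		}
	}
}
```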

> Leaving aside stability-related bugs, which earlier design decisions caused painful bugs / roadblocks?

So far the design decisions have held up pretty well. Most of the PRs were either stability-related or WAL-related. That being said, the design is pretty simple: we convert transactions into files, ship those files to other nodes, and replay them.
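
To make that concrete, here's a minimal sketch of the replay side, assuming a simple (page number, page data) frame layout. It illustrates the idea only; it's not our actual transaction file format:

```go
package replay

import (
	"encoding/binary"
	"io"
	"os"
)

const pageSize = 4096 // assumed fixed page size for this sketch

// ReplayTxnFile applies every frame in a transaction file to the
// local copy of the database. Each frame is a 4-byte big-endian
// page number followed by the full page contents.
func ReplayTxnFile(db *os.File, txn io.Reader) error {
	page := make([]byte, pageSize)
	for {
		var pgno uint32
		if err := binary.Read(txn, binary.BigEndian, &pgno); err != nil {
			if err == io.EOF {
				return nil // end of transaction file
			}
			return err
		}
		if _, err := io.ReadFull(txn, page); err != nil {
			return err
		}
		// SQLite numbers pages from 1, so page N lives at (N-1)*pageSize.
		off := int64(pgno-1) * pageSize
		if _, err := db.WriteAt(page, off); err != nil {
			return err
		}
	}
}
```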

We recently added LZ4 compression (which will be in the next release). There was a design issue there with how we were streaming data that we had to fix up: we had relied on the internal data format of our transaction files to delineate them, but that would have meant decompressing each file just to find its boundaries. We had to alter our streaming protocol a bit to do chunk encoding instead.
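
The framing is basically length-prefixed chunks, something like this (a sketch only; the exact wire format and the choice of lz4 library here are illustrative):

```go
package stream

import (
	"bytes"
	"encoding/binary"
	"io"

	"github.com/pierrec/lz4/v4"
)

// writeChunk frames one compressed transaction file as
// [4-byte big-endian length][compressed payload], so the receiver can
// find the boundary without decompressing anything.
func writeChunk(w io.Writer, txn []byte) error {
	var buf bytes.Buffer
	zw := lz4.NewWriter(&buf)
	if _, err := zw.Write(txn); err != nil {
		return err
	}
	if err := zw.Close(); err != nil {
		return err
	}
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(buf.Len()))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(buf.Bytes())
	return err
}

// readChunk reads one frame and decompresses it back into a
// transaction file.
func readChunk(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	body := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	if _, err := io.ReadFull(r, body); err != nil {
		return nil, err
	}
	return io.ReadAll(lz4.NewReader(bytes.NewReader(body)))
}
```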

I think our design decisions will be tested more once we expand to doing pure serverless & WASM implementations. I'm curious how things will hold up then.

> Consequently, what things majorly surprised you in a way that perhaps has altered your approach / outlook towards this project or engineering in general?

One thing that's surprised me is that we originally wrote LiteFS to be used with Consul so it could dynamically change its primary node. We kinda threw in our "static" leasing implementation for one of our internal use cases. But it turns out that for a lot of ancillary cache use cases, the static leasing works great! Losing write availability for a couple seconds during a deploy isn't necessarily a big deal for all applications.
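
To show what I mean by the difference, here's a conceptual sketch (hypothetical interface, not our internal API):

```go
package lease

import (
	"context"
	"errors"
)

// Leaser decides which node may act as the primary (the only writer).
type Leaser interface {
	// Acquire returns nil if this node now holds the primary lease.
	Acquire(ctx context.Context) error
	PrimaryURL() string
}

// StaticLeaser hard-codes the primary: the node configured as the
// candidate always wins, and everyone else always forwards writes to
// it. If the primary is down, writes are simply unavailable until it
// comes back, e.g. for a couple seconds during a deploy.
type StaticLeaser struct {
	IsCandidate bool
	Primary     string
}

func (l *StaticLeaser) Acquire(ctx context.Context) error {
	if l.IsCandidate {
		return nil
	}
	return errors.New("not the static primary")
}

func (l *StaticLeaser) PrimaryURL() string { return l.Primary }

// A Consul-backed Leaser would instead acquire a session-backed lock
// on a well-known key, letting any candidate node take over as
// primary when the current one dies.
```

The static version needs no external coordination service at all, which is what makes it attractive for those simpler deploys.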

Have you compared LZ4 to other compression algorithms, zstd for example? ( https://github.com/phiresky/sqlite-zstd )

Given that LiteFS operates at the filesystem layer via FUSE, have you weighed it against designs that use features built into certain filesystems? For example, I've considered a similar system design based on a single ZFS primary node that streams ZFS snapshots to reader nodes. With a coordination service (e.g. Consul) it could still allow for the whole node-promotion process.
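
Roughly the replication step I have in mind, as a Go sketch wrapping the ZFS CLI (dataset and host names are placeholders):

```go
package zfsrepl

import (
	"fmt"
	"os/exec"
)

// shipSnapshot takes a new snapshot of the primary's dataset and
// streams the incremental delta since the previous snapshot to a
// reader node by piping `zfs send -i` into `zfs receive` over SSH.
func shipSnapshot(dataset, prev, cur, replicaHost string) error {
	snap := fmt.Sprintf("%s@%s", dataset, cur)
	if err := exec.Command("zfs", "snapshot", snap).Run(); err != nil {
		return fmt.Errorf("snapshot: %w", err)
	}
	send := exec.Command("zfs", "send", "-i", prev, snap)
	recv := exec.Command("ssh", replicaHost, "zfs", "receive", "-F", dataset)
	pipe, err := send.StdoutPipe()
	if err != nil {
		return err
	}
	recv.Stdin = pipe
	if err := recv.Start(); err != nil {
		return err
	}
	if err := send.Run(); err != nil {
		return fmt.Errorf("send: %w", err)
	}
	return recv.Wait()
}
```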