Uhm... my experience from a large project that used SQLite was that we were hit hard by SQLite only allowing one write transaction at a time. That is madness for any web app, really.
Why does everyone seem so hyped on this when it can't really work properly IRL? If you have large amounts of data that need to be stored, the app would die instantly, or leave all your users waiting for their changes to be saved.
What am I missing?
- Most transactions are read-only
- "Large" applications can usually be sharded by account. This means 1 file per account, and can easily be put on the most optimal geolocation of the account
- You can defer locking until commit, allowing multiple concurrent writers ( https://www.sqlite.org/cgi/src/doc/begin-concurrent/doc/begi... ). This is good enough for most applications anyway.
- SQLite is simple, fast enough for almost anything, supports a good set of features and datatypes, and is very easy to embed.
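To illustrate the sharding-by-account point, here is a minimal sketch in Clojure with next.jdbc (the directory layout, account name, and invoices table are all made up for illustration); the only real trick is deriving the db-spec from the account:

    (require '[next.jdbc :as jdbc])

    ;; Hypothetical helper: one SQLite file per account.
    (defn account-db [account-id]
      {:dbtype "sqlite"
       :dbname (str "/data/accounts/" account-id ".db")})

    ;; Writers for different accounts never contend for the same file,
    ;; because each account has its own database (and its own write lock).
    (jdbc/execute! (account-db "acme")
                   ["INSERT INTO invoices (total, status) VALUES (?, ?)" 120 "open"])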
Why would I bake all of those assumptions and limitations into my system, though, just on the hope that it won't ever become a problem?
such a rewrite is so much more predictable an endeavor than building the initial solution that it's a great problem to have :)
meanwhile, your UI doesn't have to change, and a lot of your other glue code or business-logic code doesn't have to change either, IF you haven't hardcoded direct calls to SQLite everywhere in your program :)
e.g. I used HoneySQL with great success! My queries were safely assembled from Clojure data structures, and I had a single function that formatted them to the desired SQL dialect (H2DB/SQLite/MySQL/Postgres), executed them, parsed the results back into Clojure data structures, and even took care of lazily paginating through long result sets, without burdening the consumer of the data with such details.
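Something along these lines (a minimal sketch using HoneySQL 2.x and next.jdbc; the table, columns, and db file name are invented, and my real wrapper also handled dialect selection and pagination):

    (require '[honey.sql :as sql]
             '[next.jdbc :as jdbc])

    ;; The query is plain Clojure data, with no SQL strings scattered around.
    (def open-invoices
      {:select [:id :total]
       :from   [:invoices]
       :where  [:= :status "open"]})

    ;; HoneySQL turns the map into a parameterized query vector...
    (sql/format open-invoices)
    ;; => ["SELECT id, total FROM invoices WHERE status = ?" "open"]

    ;; ...which next.jdbc runs against whatever backend the db-spec points at,
    ;; so swapping SQLite for Postgres is mostly a matter of changing the db-spec.
    (jdbc/execute! {:dbtype "sqlite" :dbname "app.db"}
                   (sql/format open-invoices))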