What does HackerNews think of rqlite?

The lightweight, distributed relational database built on SQLite

Language: Go

To have any hope of HA with SQLite, you’d have to use something like rqlite [0], and at that point you’re already building a far more complex system than Cassandra + Redis.

They didn’t build a DB, they taped some products together with middleware. That isn’t to say what they did is bad, just that it’s not “let’s write a DB from scratch” as the headline implies.

[0]: https://github.com/rqlite/rqlite

Seems similar to https://github.com/rqlite/rqlite or https://dqlite.io.

Would be interesting to see a breakdown of the differences.

This project comes to mind: https://github.com/rqlite/rqlite. But I've never used it, and I'm not sure it would still count as "pure SQLite" like the OP advocated.
There's rqlite (https://github.com/rqlite/rqlite), which looks cool on the surface, but... it's a layer on top of SQLite, at which point you should probably think long and hard about whether it's still the right tool or whether you should switch to e.g. Postgres.
Meaning something like rqlite[1]? The age of fat desktop clients all connecting back to a central SQL server is long behind us, so yeah, there is probably little reason beyond fun for something like that these days. But fun is a reason!

[1] https://github.com/rqlite/rqlite

rqlite[0] does not use the SQLite API, but it does handle replication.

0. https://github.com/rqlite/rqlite

Has anyone tried https://github.com/rqlite/rqlite? How well can it handle horizontal scalability? With the added distributed features, is it as good as any other RDBMS (Postgres or MySQL)?
rqlite[1] author here. To be clear, rqlite uses SQLite in a completely conventional manner. Nothing about the distributed nature of rqlite affects SQLite itself, since each rqlite node runs its own complete copy of SQLite.

This bug can affect anybody using an in-memory version of a SQLite database. That was the point of writing the C unit test.
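The point about each node holding its own complete copy can be sketched with Python's stdlib `sqlite3` (this is an illustrative analogy, not the author's C unit test, which isn't shown here): every connection to `":memory:"` gets a private, fully independent database.

```python
import sqlite3

# Each connection to ":memory:" opens its own private database,
# loosely analogous to each rqlite node running its own complete
# copy of SQLite.
node_a = sqlite3.connect(":memory:")
node_b = sqlite3.connect(":memory:")

node_a.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
node_a.execute("INSERT INTO kv VALUES ('greeting', 'hello')")

# node_b knows nothing about node_a's table: the databases are separate.
tables_b = node_b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables_b)  # []
```

In rqlite itself, it is Raft replication of the statement log, not any shared storage, that keeps those independent copies in agreement.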

[1] https://github.com/rqlite/rqlite

Once you need to graduate from one large server (which will take you a long way in many cases), there are tools like [rqlite](https://github.com/rqlite/rqlite) that can handle clustering. With WAL mode enabled, SQLite can handle a surprising amount of traffic that would fit a lot of use cases. If latency is important to you, it's going to be hard to beat SQLite for many workloads.
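The WAL-mode switch mentioned above is a one-line pragma. A minimal sketch using Python's stdlib `sqlite3` (the temp-file path is just a stand-in, since WAL requires a file-backed database):

```python
import os
import sqlite3
import tempfile

# WAL mode needs a file-backed database; a temp file stands in here.
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

# Switch from the default rollback journal to write-ahead logging.
# In WAL mode readers don't block the writer (and vice versa), which
# is what lets SQLite absorb a surprising amount of traffic.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```

The mode is persistent: once set, the database stays in WAL mode across future connections until it is explicitly changed back.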
This is nice; I've used Litestream for a personal project. I wonder how it compares to something like rqlite [1] with larger datasets.

[1] https://github.com/rqlite/rqlite

https://github.com/rqlite/rqlite might have the answer; it uses the Raft consensus protocol to resolve time-based conflicts like that.
rqlite[1] author here. Just so we're clear, I wouldn't consider rqlite a fork in any sense. That rqlite uses plain vanilla SQLite is one of its key advantages, IMHO. Users never have to worry that they're not running the real SQLite source.

That said, there are some things that would be much easier to do with changes to the SQLite source. But I think keeping rqlite on top of pure SQLite is still the right choice.

[1]: https://github.com/rqlite/rqlite

This is distributed SQLite 3, running (I assume at least partially managed?) LiteFS[5] for you. Which is pretty cool!

What I'd like to have seen is how this compares to things like rqlite[1] or Cloudflare's D1[2], addressed directly in the article.

That said, I think this is pretty good for things like read replicas. I know the sales pitch here is as a full database, and I don't disagree with it; if I were starting from scratch today and could use this, I would totally give it a try and benchmark/test accordingly. However, I can't speak to that use case directly.

What I can speak to, however, is that most workloads already have a database of some kind set up, and it's typically not SQLite (MySQL or PostgreSQL seem most common). This is a great way to make very - insanely, really - fast read replicas of your data across regions. You can use an independent Raft[3][4] implementation to do this on write. If your database supports it, you can even trigger replication directly from a write to the database itself (I think Aurora has this ability, and I think - don't quote me! - PostgreSQL can do this natively via an extension that kicks off a background job).

To that point, in my experience one thing SQLite is actually really good at is storing JSON blobs. I have successfully used it to replicate JSON representations of read-only data, cutting read times significantly for APIs because the data is "pre-baked". The lightweight nature of SQLite lets you - if you wanted to do this naively - just spawn a new database for each customer and transform their data ahead of time. It's like AOT compilation for your data.
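The JSON-blob pattern can be sketched in a few lines with Python's stdlib `sqlite3`, assuming a SQLite build with the JSON1 functions available (the table name and record shape here are made up for illustration):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (customer_id TEXT PRIMARY KEY, doc TEXT)")

# "Pre-bake" the API representation as one JSON blob per customer.
record = {"name": "Acme", "plan": "pro", "limits": {"seats": 25}}
conn.execute("INSERT INTO docs VALUES (?, ?)", ("acme", json.dumps(record)))

# json_extract (from SQLite's JSON1 functions) reads individual fields
# out of the blob without re-parsing the whole document in app code.
seats = conn.execute(
    "SELECT json_extract(doc, '$.limits.seats') FROM docs WHERE customer_id = ?",
    ("acme",),
).fetchone()[0]
print(seats)  # 25
```

Reads then hit a single indexed row per customer, which is where the "pre-baked" latency win comes from.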

If you want to avoid some of the complexity of sharding (you can't always avoid it outright, but this can help cap it), this approach helps enormously in my experience. Do try before you buy!

EDIT: Looks like it's running LiteFS[5], not Litestream[0]. That was my misunderstanding.

[0]: https://litestream.io/

[1]: https://github.com/rqlite/rqlite

[2]: https://blog.cloudflare.com/introducing-d1/

[3]: https://raft.github.io/

[4]: https://raft.github.io/#implementations

[5]: https://github.com/superfly/litefs