One interesting thing you can do with CouchDB is build a webapp where users specify their own database and credentials, and everything works over HTTP(S). That's pretty unique. I'd love to see a SaaS built on CouchDB whose "on-premise" offering just means the user provides their own database. I'm not sure how payment would work, though. Perhaps some verification proxy?
Firebase is the gold standard for offline apps (as a service). CouchDB replaces Cloud Firestore, and Keycloak replaces Authentication. I haven't seen OSS equivalents of Cloud Functions, ML Kit, and the other pieces (e.g. In-App Messaging and Cloud Messaging). It'd be nice to have the entire Firebase stack bundled as a group of OSS projects, including CouchDB.
Sad to see that per-document access control didn't make it into 3.0. Hopefully it'll be in 3.1.
I didn't understand. You mean it's unique because it works over HTTP(S)?
I mean that your CouchDB instance itself is exposed at a host and port, your application's data can be stored there, and there's a native HTTP-based API for accessing that data. Contrast that with most databases, where you need a driver and the data is accessible only from the back end.
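Roughly, fetching a document is just an authenticated HTTP GET. A minimal sketch (the host, database name, document ID, and credentials below are all placeholders):

```ts
// Reading a document straight from CouchDB's HTTP API.
const res = await fetch("https://couch.example.com/journal/some-doc-id", {
  headers: { Authorization: "Basic " + btoa("user:password") },
});
const doc = await res.json(); // CouchDB returns the document as plain JSON
console.log(doc._id, doc._rev);
```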
To take that further, I really like the idea of running something like PouchDB locally and letting it sync with a remote CouchDB using the replication protocol.
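The basic shape of that, as a sketch (the remote URL is a made-up placeholder):

```ts
import PouchDB from "pouchdb";

// Local database in the browser (backed by IndexedDB) syncing to a remote CouchDB.
const local = new PouchDB("notes");
const remote = new PouchDB("https://couch.example.com/notes");

// Live, bidirectional replication that retries after network failures.
local.sync(remote, { live: true, retry: true })
  .on("change", (info) => console.log("replicated", info.direction))
  .on("error", (err) => console.error("sync error", err));
```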
That's where I fell in love with the CouchDB world. Building offline-first databases in PouchDB, and letting them sync to any HTTP address that speaks the replication protocol, is sometimes a dream. In practice there are so many hurdles, sadly. (CORS, CSPs, firewalls, not enough things speaking the replication protocol that should, ...)
I'm just getting into PouchDB and am liking it. I love the idea, for sure. I ran into a replication issue when running through a proxy, something to do with sessions being cached, but that was more the proxy's fault.
My biggest current concern is client-side search and storage size. I've been developing a private journalling/notes app with some fairly particular bells and whistles for my own personal use. Although it's mostly just text, I want to store a lot of text. I would much prefer that search not happen on the server, as I'd like to encrypt all data that hits the server.
Have you used pouchdb-quick-search? If so, in your experience, can it handle full-text search on about 1,000-5,000 documents with about 10 KB of text each?
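For reference, the plugin's API looks roughly like this; the "title" and "body" field names are hypothetical, and the `build: true` pre-indexing step is optional:

```ts
import PouchDB from "pouchdb";
// The plugin ships without TypeScript types, so require() is typical here.
PouchDB.plugin(require("pouchdb-quick-search"));

const db = new PouchDB("journal");

// Optionally build a persistent index up front so the first query isn't slow.
await (db as any).search({ fields: ["title", "body"], build: true });

// Full-text query across the indexed fields.
const results = await (db as any).search({
  query: "mountain hike",
  fields: ["title", "body"],
  include_docs: true,
});
console.log(results.rows.length, "matches");
```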
Ideally I'd also like to store data URIs for PNG sketches, and maybe photos. But I know photos especially would balloon the database size quite a bit, and I'm worried I'd hit client-side storage limits extremely quickly (I think I read that some mobile devices have a 50 MB limit, but I haven't researched it that thoroughly yet).
Generally I'm using a `folder/structure/ULID` approach to keys, and it's really easy with start_key and end_key on allDocs to grab an entire "folder" at a time. I've had some pretty large "folders" and not seen much trouble. At this point the biggest application I've worked on pulls a lot of folders into Redux on startup, and so far (knock on wood) performance seems strong. (ULIDs [1] are similar to GUIDs, but are timestamp-ordered lexicographically, so synchronizations leave a stable sort within the folder when just pulling in _id order.)
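A sketch of that range query; the `journal/2020/` folder name is made up, and note PouchDB spells the options `startkey`/`endkey`:

```ts
import PouchDB from "pouchdb";

const db = new PouchDB("app");

// Grab an entire "folder" of documents in one range query.
// "\ufff0" is a high Unicode sentinel that closes off the key-prefix range.
const page = await db.allDocs({
  startkey: "journal/2020/",
  endkey: "journal/2020/\ufff0",
  include_docs: true,
});
// Rows come back sorted by _id, so ULID suffixes give a stable time order.
console.log(page.rows.map((row) => row.id));
```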
At least for the queries I've run and the needs my applications have had, PouchDB is as fast as or faster than the equivalent server-side queries (accounting for HTTPS time of flight), especially now that all modern browsers have good IndexedDB support. (I had some performance concerns previously when things fell back to WebSQL or worse, such as various iOS IndexedDB polyfills built on top of bad WebSQL polyfill implementations, and a brief attempt, which did not go well, to use Couchbase Mobile on iOS only.)
Photos have been the bane of my applications' existence, but not for client-side reasons. I had PouchDB on top of IndexedDB handling hundreds of photos without breaking a sweat, and the size limits all have nice opt-in permission dialogs for IndexedDB if you exceed them.

Where I found all of the pain in working with photos was server-side. CouchDB supports binary attachments, but the replication protocol is really dumb about handling them. Trying to replicate/synchronize photos was always plagued by HTTP timeouts due to hideously bloated JSON requests (attachments often get serialized as Base64), to the point where I was restricting PouchDB to synchronizing a single document at a time (and that was painfully slow). Binary attachments would balloon CouchDB's own B-tree files badly, and its homegrown database engine is not great with that (sharding in 3.0 would help, presumably).

Other replication-protocol servers had their own interesting limits on binary attachments. Couchbase in my tests didn't handle them well either, and Cloudant turned out to have non-obvious attachment size limits that would result in errors, though at least their documentation kindly pointed out that Cloudant was not intended to be a good Blob store and recommended against using binary attachments (despite CouchDB "supporting" them). (It sounds like the proposed move to FoundationDB in CouchDB 4.0 would also hugely shake up the binary attachment game. The 8 MB document limit already rules out some of the photos I was seeing from iOS/Android cameras.)
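For what it's worth, "a single document at a time" maps onto PouchDB's real `batch_size` replication option; a minimal sketch, with a placeholder URL:

```ts
import PouchDB from "pouchdb";

const local = new PouchDB("photos");

// Throttle replication so each HTTP request carries a single document
// (and its Base64-inflated attachments), trading throughput for reliability.
await local.replicate.to("https://couch.example.com/photos", {
  batch_size: 1,    // one document per request
  batches_limit: 1, // don't buffer additional batches in memory
});
```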
I'd imagine you'd have all the same replication problems with large data URIs (it was the Base64 encoding during transfers that seemed to be the biggest trouble), without the benefit of how well PouchDB handles binary attachments (thanks to how well modern browsers' IndexedDB implementations handle binary Blobs).
The approach I've been slowly moving towards is: `_local` documents (which don't replicate) with attached photos in PouchDB; metadata documents that do replicate, carrying name, date, captions, a ULID, resource paths/bucket IDs (and comments or whatever else makes sense), plus a Blurhash [2] so there's at least a placeholder to show when photos haven't replicated; and side-banding photo replication to some other Blob storage option (S3 or Azure Storage). It's somewhat disappointing to need two entirely different replication paths (and to have to secure both) and multiple storage systems in play, but I haven't found a better approach.
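A rough sketch of what such a replicated metadata document might look like; every field name here is illustrative, not a PouchDB or CouchDB convention:

```ts
import PouchDB from "pouchdb";

// Hypothetical shape for the metadata document that does replicate.
interface PhotoMeta {
  _id: string;        // e.g. "photos/<ULID>" for stable folder-style ordering
  name: string;
  takenAt: string;    // ISO timestamp
  caption?: string;
  blurhash: string;   // compact placeholder to render before the blob arrives
  bucketPath: string; // where the full-size image lives in S3/Azure Storage
}

const db = new PouchDB<PhotoMeta>("photos");

await db.put({
  _id: "photos/01ARZ3NDEKTSV4RRFFQ69G5FAV",
  name: "IMG_1234.jpg",
  takenAt: "2020-02-27T10:00:00Z",
  blurhash: "LEHV6nWB2yk8pyo0adR*.7kCMdnj",
  bucketPath: "photos/01ARZ3NDEKTSV4RRFFQ69G5FAV.jpg",
});
```

The point of the split is that this small document syncs cheaply over the replication protocol, while the heavy image bytes travel over whatever the Blob store's own upload/download path is.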