While I don’t have enough knowledge of the wider implications of this, it does impact something I was experimenting with last year.

The FoundationDB rewrite would introduce a size limit on document attachments; there currently isn't one. Arguably attachments are a rarely used feature, but I found a useful use case for them.

I combined the Yjs CRDT toolkit with CouchDB (PouchDB on the client) to handle sync conflicts automatically. Each Couch document was an export of the current state of the Yjs doc (for indexing and search), with all changes made via Yjs. The Yjs doc itself was then attached to the Couch document as an attachment. When there was a sync conflict, the Yjs docs from the conflicting revisions were merged and re-exported to create a new version. The issue is that the FoundationDB rewrite would limit the attachment size, which makes this architecture more difficult. It's partly why I ultimately put the project on hold.
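For the curious, a minimal sketch of the shape of it (browser-flavoured; `saveNote` and `resolveConflicts` are illustrative names I'm making up here, but the Yjs and PouchDB calls are the real public APIs):

```js
import * as Y from 'yjs'
import PouchDB from 'pouchdb'

const db = new PouchDB('notes')

// Hypothetical helper: store a searchable export of the Yjs doc as the
// document body, and the binary CRDT state as an attachment.
async function saveNote (id, ydoc, rev) {
  const doc = {
    _id: id,
    text: ydoc.getText('text').toString(), // plain export, for views/search
    _attachments: {
      'state.yjs': {
        content_type: 'application/octet-stream',
        data: new Blob([Y.encodeStateAsUpdate(ydoc)])
      }
    }
  }
  if (rev) doc._rev = rev // omit on first write
  await db.put(doc)
}

// Hypothetical helper: on a sync conflict, merge the CRDT state from every
// conflicting revision into one Y.Doc, drop the losing branches, and write
// the merged export back as a new winning revision.
async function resolveConflicts (id) {
  const doc = await db.get(id, { conflicts: true })
  if (!doc._conflicts) return
  const merged = new Y.Doc()
  for (const rev of [doc._rev, ...doc._conflicts]) {
    const blob = await db.getAttachment(id, 'state.yjs', { rev }) // Blob in the browser
    Y.applyUpdate(merged, new Uint8Array(await blob.arrayBuffer()))
    if (rev !== doc._rev) await db.remove(id, rev) // delete the losing branch
  }
  await saveNote(id, merged, doc._rev)
}
```

Because Yjs merges are commutative, it doesn't matter which order the conflicting revisions are applied in; every replica converges on the same merged state.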

(Slight aside: a CouchDB-like DB with native support for a CRDT toolkit such as Yjs or Automerge would be awesome. When syncing replicas, you could exchange state vectors and then send only the missing changes, rather than the whole document.)
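To illustrate with Yjs itself (this is the library's actual sync API, just run between two in-memory docs rather than two databases):

```js
import * as Y from 'yjs'

// Two replicas of the same document.
const a = new Y.Doc()
const b = new Y.Doc()
a.getText('text').insert(0, 'hello')

// b sends its (tiny) state vector describing what it already has...
const whatBHas = Y.encodeStateVector(b)
// ...and a replies with only the changes b is missing, not the whole doc.
const diff = Y.encodeStateAsUpdate(a, whatBHas)
Y.applyUpdate(b, diff)

console.log(b.getText('text').toString()) // 'hello'
```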

But is the size limit small enough to affect realistic usage? And don't you run into performance issues with a CRDT implemented in JavaScript, running in the browser, on large files?

So yes, a particularly large document is not the norm, but it can happen.

JavaScript CRDTs can be quite performant; see the Yjs benchmarks: https://github.com/dmonad/crdt-benchmarks