At one company we had one-off requests to get some statistics out of our MongoDB.

Writing the proper queries proved pretty tricky whenever GROUP BY/JOINs were involved, so I used online SQL -> Mongo query converters.

But then I realized that PostgreSQL has a nice JSONB type (with index support for subfields). So I put all the Mongo data into tables with a single JSONB column (or two columns, id + data, if you prefer).

It turned out to be much faster for running analytical queries :)

a "document" is just a row

a "collection" is just a table
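
A minimal sketch of that mapping (table and field names here are illustrative, not from my original setup):

```sql
-- One Mongo collection becomes one table: id + JSONB document.
CREATE TABLE orders (
    id   text  PRIMARY KEY,  -- the Mongo _id
    data jsonb NOT NULL      -- the rest of the document
);

-- GIN index so queries on subfields can use an index.
CREATE INDEX orders_data_idx ON orders USING gin (data);

-- A GROUP BY that would be awkward as a Mongo aggregation pipeline:
SELECT data->>'status'                 AS status,
       count(*)                        AS orders,
       sum((data->>'total')::numeric)  AS revenue
FROM   orders
GROUP  BY data->>'status';
```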

Since FerretDB is built on top of Postgres, could it be an extension? So that one can mix some FerretDB tables and some general SQL tables in the same database (with proper foreign key constraints between them, etc.)?
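
To illustrate the mixed use case (this is not how FerretDB lays out its tables today, just a sketch of what FK constraints between a JSONB "collection" and a plain SQL table could look like — a stored generated column lifts a document field into a real column that can carry the constraint):

```sql
-- A plain SQL table next to a JSONB "collection".
CREATE TABLE customers (
    customer_id text PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE orders (
    id          text  PRIMARY KEY,
    data        jsonb NOT NULL,
    -- Extract a document field into a stored column so it can reference customers.
    customer_id text GENERATED ALWAYS AS (data->>'customer_id') STORED
                REFERENCES customers (customer_id)
);
```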

Yes, building an extension is something we are looking at, and I am glad to hear that you would be interested in this kind of mixed use case. Would definitely make things interesting, and more performant.

Pretty exciting!

What about optionally validating some columns with JSON Schema? Perhaps using https://github.com/supabase/pg_jsonschema - is using other Postgres extensions supported in FerretDB? (If not, maybe it's feasible to incorporate the pg_jsonschema code into FerretDB?)
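
For example, assuming pg_jsonschema is installed, its `jsonb_matches_schema(schema, instance)` function can back a CHECK constraint on the document column (table and schema here are made up for illustration):

```sql
CREATE EXTENSION pg_jsonschema;

ALTER TABLE orders
    ADD CONSTRAINT orders_data_valid CHECK (
        jsonb_matches_schema(
            '{
                "type": "object",
                "required": ["status", "total"],
                "properties": {
                    "status": {"type": "string"},
                    "total":  {"type": "number"}
                }
            }',
            data
        )
    );
```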