No, I had a sour experience in 2009 when it ate my data. The devs were rather cavalier about it ("there's a warning on the download page", though I had installed it through apt), and then it ate my data again when the OOM killer killed its process.

I didn't like a database project being so lax about persistence, so I never used it again.

I wrote the most popular non-MongoDB-employee answer to the Stack Overflow question "to what extent are 'lost data' criticisms still valid of MongoDB?"

My answer contains a history of my experiences with MongoDB that is pretty similar to yours:

https://stackoverflow.com/a/18269939/123671

I feel like MongoDB is now actually a pretty stable product, simply through time and investment, but I will never trust the company after it spent a decade using our data for beta testing.

That's my attitude as well. RethinkDB, in comparison, had a much better attitude of "reliable first, fast later". Unfortunately, it turned out that when you're a database, it doesn't matter how much data you lose, only how fast you are while losing it.

The PostgreSQL community is a nice counterexample: a gigantic and growing market share, and very, very reliable.

Listening to the community and using Postgres is my biggest regret. In hindsight, given our scale, any database would have worked. There is no built-in solution for high availability across multiple VPSes, and a single server isn't enough availability for me.
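
For context on that HA point: Postgres does ship streaming replication, but promoting a standby is a manual step; automatic failover requires external tooling such as Patroni or repmgr. Here's a minimal sketch of a two-VPS primary/standby setup, assuming PostgreSQL 12 or later; the IPs and the "replicator" role are placeholders, not anything from the post above:

    # On the primary (postgresql.conf); wal_level defaults to
    # 'replica' on modern versions anyway
    wal_level = replica
    max_wal_senders = 10

    # On the primary (pg_hba.conf): let the standby connect
    # for replication
    host  replication  replicator  10.0.0.2/32  scram-sha-256

    # On the standby: clone the primary; -R writes primary_conninfo
    # and creates standby.signal so the node starts as a replica
    pg_basebackup -h 10.0.0.1 -U replicator -D /var/lib/postgresql/data -R

    # Failover is the part Postgres leaves to you: promote by hand
    pg_ctl promote -D /var/lib/postgresql/data

Everything up to that last line is built in; deciding when to run the last line, and repointing clients at the new primary, is exactly what tools like Patroni automate, which is the gap the comment is complaining about.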