This is predictable. A Mastodon instance, last I checked, involved installing a constellation of a half-dozen databases and services. That's insane. It should be a single binary.
PostgreSQL and Redis are a constellation of a half-dozen databases?
Sorry, I wasn't precise: it's two databases (Postgres, Redis), an HTTP frontend (nginx), and specific versions of two interpreted language runtimes (Ruby, Node). The point stands.
While Mastodon can be used for small deployments (e.g. a server-of-one), it is really geared toward professional use: providing a service to thousands of users and scaling up horizontally. Looking beyond the technical requirements, many of Mastodon's features echo this: account management, reporting, moderation. In that context, I really do not think that "uses a reverse proxy in the front" and "has a database and a cache store" is a factor.
Mastodon advertises itself as being a self-hosted project.
> Your *self-hosted*, globally interconnected microblogging community
https://github.com/mastodon/mastodon
--
> I really do not think that "uses a reverse proxy in the front" and "has a database and a cache store" is a factor.
These things matter at scale, but the typical Mastodon instance comes nowhere near that scale.
A reverse proxy is relevant beyond O(10k) RPS; less than that, and a single process directly serving requests is more suitable. A separate database is relevant beyond O(10M) records; less than that, and any embedded DB, or, frankly, direct filesystem access with any encoding format, is more suitable. A caching layer is relevant at the next order of magnitude for both of those dimensions; less than that, and there's no need.
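To make the embedded-DB point concrete, here's a minimal sketch using SQLite from Python's standard library (Python and the `posts` table are just for illustration): the database runs in-process, so there is no separate server to install, configure, or keep alive.

```python
import sqlite3

# In-process database: no separate daemon, no connection credentials,
# no second service to operate. ":memory:" keeps the demo ephemeral;
# a real deployment would pass a file path instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO posts (body) VALUES (?)", ("hello, fediverse",))
conn.commit()

rows = conn.execute("SELECT id, body FROM posts").fetchall()
print(rows)  # [(1, 'hello, fediverse')]
```

At O(10M) records and below, this kind of embedded store handles the workload of a typical small instance without any of the operational overhead of a networked database.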
Mastodon should, by default, ship as a single statically compiled binary with no runtime language requirements, and it should manage its data storage directly on disk.
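A rough sketch of the shape being argued for: one process terminates HTTP itself and writes records straight to disk. This uses Python's stdlib purely for illustration (the actual proposal is a compiled static binary), and the `/post` endpoint and one-file-per-record layout are made-up stand-ins.

```python
import json, os, tempfile, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical single-process server: no nginx in front, no
# Postgres/Redis behind. Records land directly on the filesystem.
DATA_DIR = tempfile.mkdtemp()
COUNTER = {"n": 0}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        COUNTER["n"] += 1
        # One JSON file per record: "direct filesystem access with
        # any encoding format", as argued above.
        with open(os.path.join(DATA_DIR, f"{COUNTER['n']}.json"), "wb") as f:
            f.write(body)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/post",
    data=json.dumps({"body": "hello"}).encode(),
)
resp = urllib.request.urlopen(req)
stored = json.load(open(os.path.join(DATA_DIR, "1.json")))
print(resp.status, stored)  # 201 {'body': 'hello'}
server.shutdown()
```

Well under O(10k) RPS, a single process like this serves requests directly with nothing else to deploy, which is the whole point for a server-of-one.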