Thanks for satisfying my curiosity! Also, congrats on your success!

> Yes, my user-facing servers are proxying the files to the users.

I've never operated a service as large as yours, so take my question with a grain of salt: I'm wondering whether it would make sense to split the actual file front-end servers off from the user-facing servers (going for a redirect approach instead of proxying), since the requirements for serving the UI (low latency, low bandwidth) are so different from those for serving files (high bandwidth, latency not an issue). In theory, the traffic load from the files could negatively impact UI latency, leading to perceived sluggishness of the website. But perhaps that's not an issue in practice?
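To make the redirect idea concrete, here's a minimal sketch of what the UI server could do instead of proxying: hash the file ID to a dedicated file host and answer with a 302 instead of streaming the bytes. The node list and URL scheme are made up for illustration.

```python
import hashlib

# Hypothetical dedicated file-serving hosts (not real pixeldrain hostnames).
STORAGE_NODES = ["files1.example.com", "files2.example.com", "files3.example.com"]

def redirect_url(file_id: str, nodes=STORAGE_NODES) -> str:
    """Map a file ID to a file host and return the URL the UI server
    would send back in a 302 Location header instead of proxying."""
    # Stable hash so the same file always maps to the same node.
    digest = hashlib.sha256(file_id.encode()).digest()
    node = nodes[int.from_bytes(digest[:4], "big") % len(nodes)]
    return f"https://{node}/file/{file_id}"
```

The client then makes a second request to the file host, which is exactly the extra round trip (and extra public surface) the proxying approach avoids.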

Since you mentioned elsewhere that you want to move into content delivery: what kind of content delivery do you have in mind? At the moment I can only think of either classic CDNs (but that's a few orders of magnitude larger) or ads (but that's an entirely different area).

Proxying the files has a number of benefits.

- The first is that I can have all my API endpoints under one domain. This simplifies downloading as you don't need to make a separate request to figure out where the file is stored.

- The storage servers that Hetzner sells only have 1 Gbps of bandwidth, which runs out very quickly when a file goes viral. The 10 Gbps caching servers do a lot of heavy lifting here, and they also help the disks in the storage nodes last longer.

- I can also decide to switch to a different storage system on my storage nodes whenever I want. I have been considering deploying Reed-Solomon encoding for a while. That would make it impossible to link directly to a single storage server as a single file would also be distributed.

- Sending out this much data uses a lot of RAM for TCP send buffers. Installing this RAM on a single content delivery node is cheaper than installing it on every storage server.
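To illustrate the Reed-Solomon point: once a file is striped across shards on different servers, no single server holds the whole file, so direct links to one storage node stop making sense. Here's a toy version of the striping idea using a single XOR parity shard (RAID-4 style; real Reed-Solomon generalizes this so multiple lost shards can be recovered):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_stripes(data: bytes, n_data: int) -> list[bytes]:
    """Split data into n_data equal shards plus one XOR parity shard.
    Each shard would live on a different storage server."""
    shard_len = -(-len(data) // n_data)  # ceiling division
    padded = data.ljust(shard_len * n_data, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(n_data)]
    shards.append(reduce(xor, shards))  # parity shard
    return shards

def recover_shard(shards: list[bytes], lost: int) -> bytes:
    """Rebuild one missing shard: XOR of all surviving shards."""
    return reduce(xor, (s for i, s in enumerate(shards) if i != lost))
```

Reading the file back requires contacting several servers and reassembling the stripes, which is why a caching/delivery layer in front becomes the natural place to hand out bytes.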

To prevent the bandwidth load from affecting UI speed, I have a rate limiter on the download API which slows downloads when the uplink reaches 95% capacity. This way there is always some bandwidth left for the HTML and database communication.
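That backpressure scheme could be sketched roughly like this (all numbers and the linear scale-down are illustrative; the real limiter presumably reads the NIC counters and may shape traffic differently):

```python
def download_rate_limit(uplink_used: float, uplink_capacity: float,
                        normal_limit: float) -> float:
    """Return the download speed cap in bytes/sec.

    Below 95% uplink utilization downloads run at the normal cap;
    above it the cap shrinks toward zero so there is always headroom
    left for HTML and API traffic."""
    utilization = uplink_used / uplink_capacity
    if utilization < 0.95:
        return normal_limit
    # Scale the cap down linearly over the last 5% of capacity.
    remaining = max(0.0, 1.0 - utilization) / 0.05
    return normal_limit * remaining
```

The nice property is that UI traffic never has to compete for the last slice of the uplink: downloads absorb the squeeze instead.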

With regards to content delivery: I want to use pixeldrain to serve static files. Nothing like the fancy site-wrapping tech that Cloudflare uses. The idea is that users can have a file tree on pixeldrain, somewhat like Dropbox. They can copy the direct download link to a file and use it to embed videos, audio and pictures in their own websites. Because this is a lot simpler than other CDN services, I can offer it at a very competitive price.

> That would make it impossible to link directly to a single storage server as a single file would also be distributed.

Check out the Reed-Solomon implementation in SeaweedFS (https://github.com/chrislusf/seaweedfs/). Small files can still be served from a single server.

It's also efficient for small files, which an image store requires.