Cool! Some thoughts from a former Twitch engineer:

- Probably the hardest part of running these things is managing outbound bandwidth costs. You'll need to either limit inbound bitrate or transcode video down to a manageable rate, or else you'll quickly spend a lot of money shipping 4K video to viewers.

- Right now, your nginx host does both ingest and playback, if I understand it right. You might want to separate the two: it makes maintenance easier, and it lets you scale much better. At the moment, a single stream's viewership probably maxes out on the CPU capacity of the one nginx host that is ingesting the stream, transcoding it, and delivering it. If you had multiple nginx hosts that could deliver the already-transcoded stream, you could scale much further (see the sketch after this list).

- Please don't keep using RTMP. RTMP is so stateful that it's pretty hard to manage, it doesn't have a spec, it doesn't have implementations on many devices, and its security is, uh, _weak_. Big players are forced to keep using it because telling their broadcasters to change is hard, but you don't have that problem. You might consider accepting an alternative modern protocol.

- You'll almost certainly need admin and moderation tools soon. Expect lots of pirate streams, as well as some horrific content. You can't run a live streaming platform without admin tools.

- Beware DDoS attacks. This setup looks very, very, very easy to take down, as-is...
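
To make the ingest/playback split concrete, here's a minimal sketch of what a playback edge could look like in nginx. The hostnames, paths, and cache settings are all placeholders; the point is just that adding viewers then mostly adds load on the edges, not on the origin doing ingest and transcoding:

```
# hypothetical playback edge (inside the http {} block):
# cache HLS output fetched from the ingest/transcode origin
proxy_cache_path /var/cache/hls levels=1:2 keys_zone=hls:10m max_size=1g;

server {
    listen 80;

    location /hls/ {
        proxy_pass http://origin.example.com;  # the host doing ingest + transcode
        proxy_cache hls;
        proxy_cache_valid 200 2s;  # short TTL: playlists change every few seconds
        proxy_cache_lock on;       # collapse concurrent misses into one origin fetch
    }
}
```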

Some great points! I built a similar project (think online lectures; it never went anywhere) in spring when the pandemic hit. I highly recommend building a platform like this if you have some spare time, as it involves lots of interesting engineering/architecture challenges all over the stack.

IMHO, from a technical viewpoint it's mostly integration work, since all the really hard parts have already been done by excellent third-party tools and libraries: you'll probably have some kind of RTMP bridge (nginx [1]), wrap ffmpeg for transcoding, serve the segments through some kind of caching reverse proxy, and play them back using something like Shaka Player [2]. It took me a few weeks to get a prototype running inside a K8s cluster where OAuth-ed users could publish RTMPS streams that other authed users could watch via HLS in a somewhat scalable way. It was surprisingly easy to build this using Elixir/Phoenix LiveView.

Some thoughts on your comment:
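
To give a rough idea of what that integration work looks like, here's a stripped-down sketch of the nginx-rtmp ingest/transcode side. All ports, bitrates, and the /auth endpoint are made up for illustration, and the real setup had more going on (multiple renditions, the auth callback wired into Phoenix, etc.):

```
rtmp {
    server {
        listen 1935;

        application ingest {
            live on;
            # ask a (hypothetical) auth backend to approve the stream key
            on_publish http://127.0.0.1:8080/auth;
            # transcode the inbound stream down to a capped bitrate
            exec ffmpeg -i rtmp://127.0.0.1/ingest/$name
                -c:v libx264 -b:v 2500k -maxrate 2500k -bufsize 5000k
                -c:a aac -b:a 128k
                -f flv rtmp://127.0.0.1/hls/$name;
        }

        application hls {
            live on;
            hls on;             # write out HLS playlists and segments
            hls_path /var/hls;
            hls_fragment 4s;
        }
    }
}
```

The playlists and segments under /var/hls are then just static files, which is what makes the caching-reverse-proxy part (and the ingest/playback split from the parent comment) so straightforward.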

> - Probably the hardest part of running these things is managing outbound bandwidth costs.

This. As others noted, you may be getting around this by using something like Hetzner's/Scaleway's/OVH's offerings. However, I don't think they'll be too happy if you really use that much bandwidth all the time. And you can probably forget about using IaaS from one of the bigger cloud platforms unless you've negotiated special terms.

> - You'll almost certainly need admin and moderation tools soon.

That's one of the main reasons why I never offered a public instance. It's probably best to host a platform like this in a non-public context, such as for members of a single org. Just look at what happened to Firefox Send...

> - Please don't keep using RTMP [...] and its security is, uh, _weak_.

Yes, it's a bit of a pain to work with, but AFAIK you can wrap RTMP inside a TLS session (RTMPS), which OBS supports. I think I just exposed a stunnel instance back then, which forwarded to an nginx instance that handled authentication/authorization and forwarding to the transcoding backend. This way you won't leak any stream keys or video data. Please correct me if I'm wrong; if you have any additional pointers regarding RTMP security, I'd be highly interested!
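
For reference, the stunnel side of that is only a few lines. It looked something like this (ports and cert/key paths are placeholders):

```
; stunnel.conf on the ingest host: terminate TLS, hand plain RTMP to nginx
; (ports and cert/key paths below are placeholders)
[rtmps]
accept  = 0.0.0.0:1936
connect = 127.0.0.1:1935
cert    = /etc/stunnel/fullchain.pem
key     = /etc/stunnel/privkey.pem
```

Broadcasters then point OBS at rtmps://host:1936/..., the stream key never crosses the wire in the clear, and nginx only ever sees plain RTMP from localhost.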

Also, as others pointed out, SRT may be just around the corner. I think we're in for some interesting times as LL-DASH/CMAF and similar technologies become more widely supported. Additionally, there are interesting projects like [3] going the WebRTC route for delivering low-latency streams.
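
If you want to play with SRT today without touching the rest of an RTMP-based pipeline, one low-effort option is to bridge it in with ffmpeg. A hypothetical one-liner, assuming an ffmpeg build with libsrt and a local RTMP ingest application like the one above (the port and the "demo" stream name are placeholders):

```
# listen for an SRT publisher (e.g. OBS pointed at srt://host:9000)
# and re-publish the stream, unchanged, into the local RTMP ingest
ffmpeg -i "srt://0.0.0.0:9000?mode=listener" \
       -c copy -f flv "rtmp://127.0.0.1/ingest/demo"
```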

[1] https://github.com/arut/nginx-rtmp-module

[2] https://github.com/google/shaka-player

[3] https://www.ovenmediaengine.com