I'm curious... between Whereby and Jitsi and I assume other browser-based video solutions relying on WebRTC...
...how big is the barrier these days to building a "videoconferencing platform" supporting millions of people... that runs on a single server?
Because if all you need to do is build a pretty website that essentially just keeps track of meeting names, plus the names and IP addresses of participants...
...while each client P2P-streams their full-res video stream whenever they're speaking or another participant has them pinned... and every client P2P-streams a low-res video stream to power the thumbnails (with similar decisions about which client is the main audio source at any moment, or picking a single peer to act as the audio mixer)...
What else is there to do, really?
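To make the "keeps track of meeting names and participants" part concrete, here's a minimal sketch of the signaling-side bookkeeping. Everything here is hypothetical (the class and method names are made up, not any real platform's API); in practice the server would also relay SDP offers/answers and ICE candidates between the peers it introduces:

```javascript
// Minimal in-memory signaling registry: the server only needs to know
// which peers are in which meeting so it can introduce them to each other.
// (Hypothetical sketch, not any real platform's API.)
class MeetingRegistry {
  constructor() {
    this.rooms = new Map(); // meeting name -> Map(peerId -> peerInfo)
  }

  join(meeting, peerId, info) {
    if (!this.rooms.has(meeting)) this.rooms.set(meeting, new Map());
    const room = this.rooms.get(meeting);
    room.set(peerId, info);
    // Return the other participants so the newcomer can start an
    // offer/answer exchange with each of them (full mesh).
    return [...room.keys()].filter((id) => id !== peerId);
  }

  leave(meeting, peerId) {
    const room = this.rooms.get(meeting);
    if (!room) return;
    room.delete(peerId);
    if (room.size === 0) this.rooms.delete(meeting);
  }
}

const registry = new MeetingRegistry();
registry.join("standup", "alice", { ip: "203.0.113.7" });
const peersForBob = registry.join("standup", "bob", { ip: "198.51.100.4" });
console.log(peersForBob); // the peers bob should now connect to
```

The actual media never touches this server — it only brokers introductions, which is why the per-meeting cost is so small.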
(I mean obviously there's fancy stuff you can add like screensharing, chat, authentication, etc... and browser-specific bugfixes and quirks presumably...)
But are we at a point where anyone can write a functional videoconferencing platform in a week, and platforms are differentiating mainly on nicer UX and extra features?
Or is there something huge I'm missing here, where implementing WebRTC is somehow a lot harder than it seems, and/or still requires server farms to route the streams through in certain cases?
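Part of the "server farms" question comes down to upload arithmetic: in a pure P2P mesh each client sends its stream separately to every other participant, so sender bandwidth grows linearly with room size. A rough back-of-envelope using the pin/thumbnail scheme described above (the bitrates are illustrative assumptions, not measurements):

```javascript
// Back-of-envelope upload cost per client in a full P2P mesh.
// Bitrates are illustrative guesses, not measurements.
const FULL_RES_KBPS = 1500; // one pinned/active-speaker stream
const THUMB_KBPS = 150;     // one low-res thumbnail stream

// A client sends full-res to each peer who has them pinned,
// and a thumbnail stream to everyone else.
function uploadKbps(participants, pinnedBy) {
  const others = participants - 1;
  return pinnedBy * FULL_RES_KBPS + (others - pinnedBy) * THUMB_KBPS;
}

for (const n of [4, 10, 30]) {
  // worst case for the active speaker: everyone has them pinned
  console.log(`${n} participants:`, uploadKbps(n, n - 1), "kbps upstream");
}
```

At 30 participants the active speaker would need roughly 43 Mbps of upstream under these assumptions, which is beyond most residential uplinks. That's the usual reason platforms route media through an SFU for larger rooms: each client uploads one copy and the server fans it out.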
I'd be interested in seeing a P2P serverless option
https://github.com/Qbix/Platform
Just install it, and video conferencing is one of the free features out of the box.
All you do is call Q.Streams.WebRTC.start()