As this is HN, could you elaborate on the technologies you used to build this platform?

Sure thing.

The video and voice chat is powered by WebRTC. The getUserMedia API allows a browser to access the webcam and microphone of the device. I used my own simple-peer (https://github.com/feross/simple-peer) library to make WebRTC a bit easier to work with.
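To make that concrete, here's a minimal sketch of how getUserMedia and simple-peer fit together on the browser side. The `startCall` function name, the `sendSignal` callback, and the `video#remote` selector are illustrative, not from the actual app; `SimplePeer` is assumed to come from the bundled simple-peer library.

```javascript
/* global SimplePeer, navigator, document */
// Media constraints: ask for both camera and microphone.
const constraints = { video: true, audio: true }

// Illustrative entry point. `sendSignal` is whatever function delivers
// signaling data to the other peer (e.g. over a WebSocket).
async function startCall (sendSignal) {
  // Prompts the user for webcam + microphone access.
  const stream = await navigator.mediaDevices.getUserMedia(constraints)

  // `initiator: true` on the side that starts the call.
  const peer = new SimplePeer({ initiator: true, stream })

  // simple-peer emits 'signal' with offer/answer/ICE data that must be
  // relayed to the remote peer out-of-band.
  peer.on('signal', data => sendSignal(JSON.stringify(data)))

  // The remote side's media arrives as a MediaStream we can render.
  peer.on('stream', remoteStream => {
    document.querySelector('video#remote').srcObject = remoteStream
  })

  return peer
}
```

The remote peer does the mirror image of this (no `initiator: true`, and it feeds received signal data into `peer.signal(...)`).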

The server is Node.js. I'm using Next.js for the first time on this project. I usually use a custom Express server for my projects. I'm a fan of several of Next.js's decisions -- it feels really nice to use, if a bit limiting sometimes.

I use the 'ws' package (https://github.com/websockets/ws) to implement a WebSocket server that handles signaling, i.e. helping the peers get connected over WebRTC. Once peers are connected, all audio and video is transferred directly in a peer-to-peer fashion.
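The signaling server's only job is to relay opaque signal messages (offers, answers, ICE candidates) between peers. A rough sketch of that relay logic, with the room name and function names being illustrative; the relay core only needs a `send` method, so the `ws` wiring is shown as a comment:

```javascript
// Minimal signaling relay: forward each peer's WebRTC signal data to the
// other sockets in the same room.
const rooms = new Map() // roomId -> Set of sockets

function join (roomId, socket) {
  if (!rooms.has(roomId)) rooms.set(roomId, new Set())
  rooms.get(roomId).add(socket)
}

function relay (roomId, sender, message) {
  // Send the offer/answer/ICE payload to everyone in the room except the sender.
  for (const socket of rooms.get(roomId) || []) {
    if (socket !== sender) socket.send(message)
  }
}

// With the 'ws' package this plugs in roughly as:
//   const { WebSocketServer } = require('ws')
//   const wss = new WebSocketServer({ port: 8080 })
//   wss.on('connection', ws => {
//     join('lobby', ws)
//     ws.on('message', msg => relay('lobby', ws, msg.toString()))
//   })
```

The server never inspects the payloads; it just shuttles them until the peers have enough information to connect directly.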

Sometimes a direct connection can't be established (restrictive NATs or firewalls, typically), so to improve WebRTC's reliability you need a TURN server that relays the traffic in those situations. I used coturn (https://github.com/coturn/coturn) for that.
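For reference, a minimal turnserver.conf along these lines might look like the following. The hostname, credentials, and port ranges are placeholders, not the real deployment's values:

```
# Illustrative coturn config -- realm and credentials are placeholders.
listening-port=3478
tls-listening-port=5349
realm=example.com

# Long-term credential mechanism with a static user.
# (coturn also supports time-limited credentials via use-auth-secret.)
lt-cred-mech
user=webrtc:changeme

# Restrict the UDP relay port range.
min-port=49152
max-port=65535
```

On the client side, the TURN server gets passed to the RTCPeerConnection (simple-peer exposes this via its `config` option) as an `iceServers` entry, something like `{ urls: 'turn:turn.example.com:3478', username: 'webrtc', credential: 'changeme' }`.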

Lastly, I used Chakra UI (https://chakra-ui.com/) as my React component library.

Really happy with how easy the app has been to build.

The most difficult part was getting it to work on Safari for iOS. I spent about 50% of the effort working around various bugs in the Safari media stack. https://twitter.com/feross/status/1263544033135038464

Hope this was informative!