Depending on the use case, you could think of it as an alternative to WebRTC with lower-level control, but honestly it's a lot more open-ended than that.
I’m trying to understand whether it’s mainly a replacement for specific WebRTC use cases, or more of a building block for new kinds of real-time systems.
WebRTC doesn't work so well for low-latency broadcast. Your choices right now are: use WebRTC and deploy selective forwarding units, which means building something custom, likely spinning up a bunch of geographically distributed virtual machines and figuring out signalling and whatnot; or use HLS so you can ride on standard HTTP CDN tech, but add orders of magnitude of latency.
MoQ should allow for a standardized CDN stack, i.e. a more abstract service: instead of spinning up VMs, you just use some company's CDN offering and tell it where to get the media from.
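To make that concrete, here's a minimal sketch of what a publisher talking to such a relay could look like from a browser, using the standard WebTransport API over QUIC. The relay URL and the one-object-per-stream framing are assumptions for illustration; actual MoQ defines its own control messages and object headers on top of WebTransport/QUIC.

```typescript
// Sketch: publish media "objects" to a relay over WebTransport (QUIC).
// relayUrl and the one-object-per-stream framing are illustrative only;
// real MoQ layers its own control messages and object headers on top.
async function publish(relayUrl: string, frames: AsyncIterable<Uint8Array>) {
  const wt = new WebTransport(relayUrl); // standard browser API
  await wt.ready;

  for await (const frame of frames) {
    // One QUIC stream per object: the relay can drop or deprioritize a
    // late object without head-of-line blocking the rest.
    const stream = await wt.createUnidirectionalStream();
    const writer = stream.getWriter();
    await writer.write(frame);
    await writer.close();
  }
  wt.close();
}
```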
There are a lot of other little issues with WebRTC for certain applications. For example, last I tried it, browsers will subtly speed up audio/video to keep everything in sync, and there are scenarios where you'd rather just let the viewer fall behind a bit and skip ahead later (say you're listening to music; speeding it up isn't ideal).
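For contrast, here's a rough sketch of that alternative policy: leave the playback rate alone, and jump ahead in one seek when the viewer falls too far behind the live edge. The thresholds and the liveEdgeS parameter are made up for illustration.

```typescript
// Sketch of the alternative policy: never change playback rate; if the
// viewer has fallen too far behind the live edge, skip forward instead.
// Thresholds are illustrative; a real player would also resync after the seek.
const MAX_BEHIND_S = 3;    // how far behind live we tolerate
const TARGET_BEHIND_S = 1; // where we land after a skip

function maybeSkipAhead(video: HTMLVideoElement, liveEdgeS: number) {
  const behind = liveEdgeS - video.currentTime;
  if (behind > MAX_BEHIND_S) {
    // Skip ahead in one jump rather than subtly speeding up the media,
    // which audibly distorts e.g. music.
    video.currentTime = liveEdgeS - TARGET_BEHIND_S;
  }
}
```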
Or say you want to have a group call and capture each participant's audio individually, then edit it together later for something like a podcast. It's been a while since I tried this, but I recall it being pretty difficult with WebRTC: all the mixing happened inside the browser's libwebrtc, and I had really limited control over it.
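If you do have access to each remote track, one partial workaround is to record per participant yourself with the standard MediaRecorder API. Here's a rough sketch, assuming you can grab each participant's MediaStreamTrack from the peer connection's track event; note the audio has still passed through libwebrtc's jitter buffer and processing, so this only partly addresses the complaint.

```typescript
// Sketch: record each remote participant's audio as a separate file for
// later editing, instead of relying on the browser's internal mix.
// Participant IDs and the chunking interval are illustrative.
function recordParticipant(id: string, track: MediaStreamTrack): MediaRecorder {
  const recorder = new MediaRecorder(new MediaStream([track]), {
    mimeType: "audio/webm;codecs=opus",
  });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // One standalone file per participant, ready for a DAW or editor.
    const blob = new Blob(chunks, { type: "audio/webm" });
    console.log(`participant ${id}: ${blob.size} bytes recorded`);
  };
  recorder.start(1000); // emit a chunk every second
  return recorder;
}
```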
context: QUIC was originally an acronym as well (Quick UDP Internet Connections)
M: Media
O: over
Q: QUIC