Issues:
-- residential/mobile internet service has asymmetric upload/download speeds.
-- ISPs are suspicious of heavy uploads originating from residential homes; such connections could be flagged as "servers" in violation of the ISP's TOS, or reclassified into "business tier" pricing, thereby increasing the bill.
-- too much latency for live mega-events (World Cup, etc.) because of the asymmetry mentioned above. (Your neighbor tweets that Spain scored the winning goal before your P2P stream receives the last packets showing it happened.) BitTorrent-style latency is fine for downloads, but not for live mega-events.
I don't think P2P down to residential is realistic. However, P2P between commercial entities looks more workable. If Akamai projects that their CDNs can't handle the next mega event, they contract with Amazon AWS CDNs to handle some of the spillover bandwidth. And/or work with ISPs (Comcast, FiOS edge servers) to act as CDNs. Yes, it involves competitors partnering with each other but this looks more feasible than coordinating residential internet service to deliver the extra bandwidth.
As an analogy, residential neighbors don't transfer supplemental electricity to each other but power stations in Canada might supply extra peak electricity to power stations in New York.
http://en.wikipedia.org/wiki/IP_multicast or http://en.wikipedia.org/wiki/Broadcast_address
The point being that if you have a stream of data that is the same for a large number of recipients, you can send the data down the tubes once, and it's then shared by recipients.
Although it doesn't solve the general case (people wanting to watch the same thing, but out of order).
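To make the "send once" point concrete, here is a back-of-the-envelope sketch (hypothetical numbers, not from the article) of server-side bandwidth for N viewers of a single live stream, unicast vs. multicast:

```javascript
// Unicast: the origin pushes one copy of the stream per recipient.
function unicastBandwidthMbps(viewers, streamMbps) {
  return viewers * streamMbps;
}

// Multicast: the origin pushes one copy total; routers fan it out downstream.
function multicastBandwidthMbps(_viewers, streamMbps) {
  return streamMbps;
}

// 1 million viewers of a 1.5 Mbps stream: 1.5 Tbps of unicast egress,
// versus a single 1.5 Mbps copy at the multicast source.
```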
> -- ISPs are suspicious of heavy uploads originating from residential homes and could be flagged as "servers" in violation of ISP's TOS or reclassified as "business tier pricing" thereby increasing the billing amount.
Yes, but ISPs would love to partner with P2P to reduce their upstream bills. There are two problems currently. First, there is no single protocol or system, so maintaining one cache box per service is crazy (YouTube/Google, Netflix, HBO, Amazon, Hulu... all proprietary). Second, even though the savings could pay off in the long run, there's no immediate payoff for the setup cost.
This could all be solved with a standardized protocol and system. But sadly the big standards bodies, the IEEE in particular, are a bureaucratic mess filled with industry-sponsored members.
The Internet is going backwards. Something's gotta give.
I am the founder of the company providing the pluginless peer-to-peer video streaming service the article is talking about (www.streamroot.io).
>-- residential/mobile internet service has asymmetric upload/download speeds.
Indeed, but it is not so much of an issue: 1. The average video stream bitrate today is 1.5 Mbps, which is closer to the upstream limits than the downstream limits. 2. Our solution is hybrid: if you can't get all the data from other peers, you just get the rest from the CDN, so even someone with a 100 kbps uplink still contributes to the swarm. 3. With fiber, more and more people have a 100+ Mbps uplink, and each of them can serve dozens of other peers, which partially balances the asymmetry.
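The hybrid idea in point 2 can be sketched in a few lines (illustrative only; function and field names are hypothetical, not Streamroot's API): each viewer takes as much as peers can supply and tops up the rest from the CDN.

```javascript
// Split a stream's bitrate between the peer swarm and the CDN.
// Whatever the peers' combined uplink cannot cover falls back to the CDN.
function splitSources(streamKbps, peerUplinksKbps) {
  const peerCapacity = peerUplinksKbps.reduce((sum, up) => sum + up, 0);
  const fromPeers = Math.min(streamKbps, peerCapacity);
  return { fromPeers, fromCdn: streamKbps - fromPeers };
}

// A 1500 kbps stream with two peers offering 100 and 500 kbps of uplink:
// peers cover 600 kbps, the CDN covers the remaining 900 kbps -- the
// 100 kbps peer still contributes, exactly as described above.
```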
>-- ISPs are suspicious of heavy uploads originating from residential homes and could be flagged as "servers" in violation of ISP's TOS or reclassified as "business" thereby increasing the billing amount.
This must differ depending on your country and ISP, but the upside compared to a full P2P service like BitTorrent is that you share video segments only while you are on the streamer's webpage (as soon as you close the tab, the P2P stops). Also, all communications are encrypted with DTLS, provided by WebRTC, so it is not so easy for ISPs to figure out what is being exchanged. And finally, we actually help the ISPs by connecting peers that use the same ISP, and that are on the same sub-networks, first, so they have fewer peering issues.
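The "same ISP, same sub-network first" matching could look something like this sketch (field names and scoring weights are my assumptions, not the actual tracker logic): rank candidate peers so same-subnet, then same-ISP peers come first, keeping traffic off the ISP's transit links.

```javascript
// Rank candidate peers for a viewer: same subnet beats same ISP,
// which beats everything else. Higher score sorts first.
function rankPeers(me, candidates) {
  const score = (p) =>
    (p.subnet === me.subnet ? 2 : 0) + (p.isp === me.isp ? 1 : 0);
  return [...candidates].sort((a, b) => score(b) - score(a));
}

// A viewer on ISP "A", subnet "x" would be connected to a same-subnet
// peer first, then a same-ISP peer, and only then a cross-ISP peer.
```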
>-- too much latency for live mega events (World Cup, etc) because of asymmetry mentioned above. (Your neighbor tweets that Spain scored the winning goal before your P2P stream received the last packets showing that it happened.) Latency for bittorrent is ok, but for live megaevents, no.
I think this is where our technology really differentiates itself from the old peer-to-peer systems: we don't add any additional latency to the stream, but just use the latency already introduced by HTTP streaming protocols like HLS or MPEG-DASH. So with or without our system, you will still have 20-30 seconds of latency behind the encoder, as is already the case with all professional live streams, like the ones used for the Super Bowl or the World Cup.
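The baseline latency of segmented HTTP streaming is easy to estimate: roughly segment length times the number of segments the player buffers before starting, P2P or not. The numbers below are illustrative (common HLS-style values), not anyone's actual configuration.

```javascript
// Inherent glass-to-glass delay contributed by segmented delivery:
// the player typically buffers several whole segments before playing.
function hlsLatencySeconds(segmentSeconds, bufferedSegments) {
  return segmentSeconds * bufferedSegments;
}

// e.g. 6-second segments with 3 segments buffered => 18 seconds,
// already in the 20-30 second ballpark mentioned above before
// encoding and CDN propagation are even counted.
```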
As for CDNs cooperating to broadcast a live stream, that is not really the spirit of the CDN market right now. Instead, what happens is that the broadcaster contracts one or several backup CDNs for their biggest events and uses them if the primary CDN breaks down. This is what happened for the Super Bowl stream: NBC had Akamai as their primary CDN and Level3 as a backup. In the end they didn't have to use the backup, but still paid both CDNs for the event!
I would like to add to Rodi's answers above:
- Yes, P2P puts a heavy load on mobile (data-plan-based) connections, but WiFi-connected smartphones/tablets are not a problem to serve.
- Latency is not the issue being discussed here (in the Twitter example above); even Twitch latency is no less than 12 seconds: http://www.reddit.com/r/Twitch/comments/2j8b4c/has_the_strea... It seems you are talking about asynchronous viewing, and that can be made to work better so that viewers at least watch with the same delay.
- The real value of P2P streaming is mostly seen by broadcasters who wish to send their online linear signals at high, uninterrupted quality, but at lower cost than their traditional model.
If it's free or just for fun, then yes, you can go this route. But people paying for an experience are, in our experience, intolerant of any interruption or quality drop.
The nice thing with P2P is that the two models are not mutually exclusive. You can have one peer that is effectively served by Akamai's CDN, and another peer that is served by you behind your poor ISP line; the protocol works the same and automatically selects whichever source offers the better bandwidth -- so in practice people will fetch from Akamai's CDN when it has capacity, and automatically switch to anybody else when Akamai alone isn't enough. You don't even need any extra logic; it is handled automatically.
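The "pick whichever source is faster" idea reduces to treating a CDN edge and peers uniformly, as sources with a measured throughput. A minimal sketch (names and the `measuredKbps` field are hypothetical):

```javascript
// Choose the source with the highest measured throughput. A CDN edge and a
// residential peer are interchangeable entries in the same list.
function pickSource(sources) {
  return sources.reduce((best, s) =>
    s.measuredKbps > best.measuredKbps ? s : best
  );
}

// While the CDN has capacity it wins; if its measured throughput drops
// below a peer's, the client transparently switches to that peer.
```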
We can only speculate whether the omission is a conscious attempt at avoiding linking to risqué programs, or whether it indicates the article is a copy-pasted press release from content providers.
[1] https://en.wikipedia.org/wiki/Popcorn_Time ; https://popcorntime.io/ seems to be the leading fork.
For a P2P system to work you need a nearest-neighbour finder (please wait 5 minutes whilst we probe your network). You then need a realtime "rationaliser" that re-routes the live stream according to how many peers in one segment are connected to another. Then you need a number of backup peers for when your current master peer disappears.
Then you have to deal with partitions, and also make sure that NAT doesn't get in the way. You'll also need to block mobile phones.
The best part is, you'll be forced to use HEVC, as the usable upload is around 2 megabits. That means a stream encoded at 750k (to allow uploading to two other users), or half that if you want 4 other peers.
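The upload-budget arithmetic above can be written out explicitly. This is just a sketch of the commenter's math; the 25% headroom factor is my assumption to reproduce their 750k figure (750 kbps × 2 peers = 1.5 Mbps under a 2 Mbps cap).

```javascript
// Highest per-stream bitrate you can serve to N peers from a given uplink,
// reserving some headroom so the uplink is never fully saturated.
function maxStreamKbps(uplinkKbps, peersServed, headroom = 0.75) {
  return Math.floor((uplinkKbps * headroom) / peersServed);
}

// 2000 kbps uplink with 25% headroom:
// 2 peers -> 750 kbps each, 4 peers -> 375 kbps each,
// matching the figures in the comment above.
```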
So no, P2P is not the answer. Iterate again.
If you don't believe me you can always try for yourself, with a Chrome/Firefox/Opera browser: http://demo.streamroot.io
How do you assess the bandwidth available now, and in the future? Is the bandwidth between two peers inside an ISP greater than between two peers from different ISPs? How do you manage peer re-routing when one disappears? What happens if more than one peer disappears?
WebRTC is designed for 1-to-1 mapping, not 1-to-many paid content delivery. Sure, you can do it, but your users aren't going to be happy you chose to engineer your way out of a bad idea. (You don't want buffering during an expensive movie/TV show/sports game.)
Especially if you are at the edge of the P2P chain. Your cache might be dependent on three other people refreshing their caches first.
They didn't describe the jitter, dropout rate, cache misses and eventual fallback. For example, was that 58% peak? Was that 58% attempted P2P use? What was the threshold? Did the users notice?
What happens when peers go away? Does that create burst loads on the cache master?
What is the success rate when they stream 4 Mbps or 8 Mbps? Or a bitrate that changes significantly? For example, going from a semi-static image to fast-moving sports (think of the transition from the commentators' studio to live action). Having dealt with these types of systems in the bad old days (think 2007-09), I can tell you that they suck hard.
All that engineering work is required to patch over an under-resourced and terribly unreliable transport, when many pre-built systems with SLAs already exist.
If you're willing to tolerate lag and degraded fidelity you can also use peer-to-peer relaying, but this only works for a very limited number of hops.
1 - While streaming normally from the CDN, also preload segments from peers. Complexity and client bloat aside, you don't lose much by trying (assuming the client has the bandwidth).
2 - Peering with folk who appear to be close. I feel that doing it by network address or IP->geo lookup might work well.
This is exactly what we are doing :) We have a hybrid P2P model that fetches video segments from other peers as long as it can, and switches back to the CDN in urgent cases. And our tracker uses GeoIP data, the ISP and some network topology elements to connect the best candidates together.
I don't think online video streaming needs to be saved.
> There’s a clear event horizon where delivery overhead outstrips even CDN capacity.
If it's so clear, please enlighten me: all I see is network architecture making ever bolder jumps to increase capacity. Look at Amazon's multiple locations, look at fiber-to-the-home, look at the server farms around the country. Look at the ever-denser build-out of cell towers.
I think if the public demands 150 million individual 10Mbps streams, it will be built.
Not that WebRTC isn't cool for what it is, but a soccer game isn't a volcano (10 people punting a ball/data around, like WebRTC, isn't the same as a massive one-way eruption of energy/data, like a volcano).
Completely a non-issue. The providers would "seed" all the videos from their own servers, too, so when there aren't enough peers seeding it, the bulk of the streaming would be offered by the provider.
Using the Pareto principle, something like 80 percent of the videos would be "long tail" with few to no seeds (other than the provider's), but would account for only 20 percent of the video traffic, so the provider's burden is greatly reduced anyway. The other 20 percent of videos would represent 80 percent of the traffic, and those are the videos that P2P streaming would help most.
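The 80/20 argument can be put into numbers. In this sketch the traffic split and the 50% P2P offload rate are illustrative assumptions, not measured figures:

```javascript
// Fraction of total traffic the provider still serves itself, given:
// - tailTraffic: share of traffic in long-tail videos (provider-seeded only)
// - hotTraffic: share of traffic in popular videos (P2P-assisted)
// - p2pOffloadOfHot: fraction of the popular traffic peers absorb
function providerShare(tailTraffic, hotTraffic, p2pOffloadOfHot) {
  return tailTraffic + hotTraffic * (1 - p2pOffloadOfHot);
}

// With a 20/80 split and peers absorbing half of the hot traffic,
// the provider serves 0.2 + 0.8 * 0.5 = 60% of total traffic instead of 100%.
```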
https://github.com/feross/webtorrent
It can stream video torrents into a <video> tag (WebM (VP8, VP9) or MP4 (H.264)).