http://xiphmont.livejournal.com/61927.html
There are a couple of comments in the comments section titled "Cisco's Costs" in which Monty says that someone he knew at Cisco told him that they had been under the licensing cap and that the OpenH264 project would increase their licensing costs.
Beyond this, Cisco is part of the 'Alliance for Open Media' (Google, Cisco, Intel, Netflix, Amazon, Microsoft, and Mozilla, the last of which funds Daala), which is building a new royalty-free codec for their needs based upon VP10, but which will make use of any useful technology its members have access to.
Early days yet:
https://chromium-review.googlesource.com/#/q/project:webm/ao...
A couple of things from their GitHub issues: it's much slower than x264 [1]
It only supports baseline profile and the developers aren't that interested in going beyond that[2]
IIRC its quality was lower than x264 even when x264 was set to baseline profile... I'll see if I can dig up some statistics.
[1] https://github.com/cisco/openh264/issues/2067 [2] https://github.com/cisco/openh264/issues/1844
That said, both Firefox and Chrome prefer VP8.
Google already ships an H.264 software decoder in Chrome, so I'm surprised they would (also) ship OpenH264. Maybe they don't want to pay extra for H.264 encoding licenses, or they want to increase compatibility with other WebRTC clients using OpenH264?
Cisco sells a number of products that use H.264 and so was already paying some fees (although not necessarily at the maximum license level) for the use of this codec.
I wonder if they will start offloading YouTube video compression onto client machines pre-upload.
Interesting thought experiment, but there just isn't a compelling reason for doing so, and many drawbacks.
* The encoding might take a long time, users are fickle, and it would unnecessarily drain power on laptops and mobiles. For smaller uploads, where users might not notice, the transcoding farm isn't impacted as much anyway.
* It wouldn't cover the range of codec/bitrate/resolution combinations that YouTube has to encode anyway.
* YouTube accepts a ton of input formats (thanks to FFmpeg), while this scheme would only be useful for the specific class of uploads where the browser has a decoder matching the upload (since you're technically transcoding).
* The matter of implementation: JS media handling APIs are fairly coarse (WebRTC especially), so even getting this to work in the browser would fall under the heading of "marvelous hack."
* OpenH264 supports a limited set of profiles (baseline only?), which doesn't compress nearly as well as other encoders, especially with multi-pass.
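To make the combinations point concrete, here's a rough sketch of how server-side output variants multiply. The specific codecs, resolutions, and audio variants below are made-up illustrations, not YouTube's actual encoding ladder:

```python
from itertools import product

# Illustrative guesses at an encoding matrix -- NOT YouTube's real ladder.
codecs = ["h264", "vp9"]
resolutions = ["144p", "240p", "360p", "480p", "720p", "1080p"]
audio_variants = ["aac-128k", "opus-160k"]

# One upload fans out into every combination the service wants to serve.
variants = list(product(codecs, resolutions, audio_variants))
print(len(variants))  # 2 * 6 * 2 = 24 outputs from a single upload
```

A client encoding one baseline-profile stream covers exactly one cell of that matrix; the transcoding farm still has to produce the rest.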
How about:
* Partially encoding the video on the client, until the user decides to close the browser?
* Encoding the video on clients run by other YouTube visitors in the background?
* The encoder could also be implemented in asm.js, I suppose, which would sidestep the problem of the browser lacking the right codec/bitrate/resolution support.
There are YouTubers that upload 20GB extremely high-bitrate files for their 1-hour shows, and Google lets them retrieve the original files on demand.
It sounds crazy given the amount of video material being uploaded to YT 24/7. No wonder Google wants new types of spinning hard drives; they must pay tens of millions per year for drives alone.
To stream everywhere, Netflix encodes each movie 120 times (gigaom.com) [2012] https://news.ycombinator.com/item?id=4946275
Encoding videos is much more CPU-intensive than decoding, and offloading to client machines is not an option. It's always better to get the source material and spend more CPU time on the server (with the option to do better later). There might still be some corner cases where you would want to encode raw or mostly uncompressed source material before upload.
(I assume it's all done on the CPU, not the GPU)
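On the "spend more CPU time on the server" point: a server-side pipeline can afford things like two-pass x264, which a one-pass baseline client encoder can't match. A minimal sketch of what that looks like (the file names and bitrate are placeholders; the flags are standard ffmpeg/libx264 options):

```python
# Sketch: assemble the two ffmpeg invocations for a two-pass libx264 encode.
# Paths and bitrate are placeholder values, not a real pipeline's settings.

def two_pass_commands(src: str, out: str, bitrate: str = "2500k"):
    common = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", bitrate]
    # Pass 1 only analyzes the input and writes a stats log; video output is discarded.
    first = common + ["-pass", "1", "-an", "-f", "null", "/dev/null"]
    # Pass 2 reads the stats log to distribute bits where the content needs them.
    second = common + ["-pass", "2", out]
    return first, second

p1, p2 = two_pass_commands("raw_upload.mov", "out_720p.mp4")
print(" ".join(p1))
print(" ".join(p2))
```

The second pass is exactly the kind of "do better later" work that only makes sense once the (mostly) original source material is sitting on the server.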