His comment immediately after describes exactly what happened:
> Even before it had ceased to exist, the MPEG engine had run out of steam – technology- and business-wise. The same obscure forces that hijacked MPEG kept it hostage to their interests, impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models, delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed into roadblocks.
Big companies abused the setup that he was responsible for. Gentlemen's agreements to work together for the benefit of all got gamed into patent landmines and it happened under his watch.
Even many of the big corps involved called out the bullshit; notably, Steve Jobs refused to release a new QuickTime until they fixed some of the most egregious parts of AAC licensing, way back in 2002.
https://www.zdnet.com/article/apple-shuns-mpeg-4-licensing-t...
It was sweet to see “over the Net”…
More context for this: Chiariglione has been extremely vocal that FRAND patent royalties are entirely necessary for the development of video compression tools, and believes royalty-free standards outpacing the ones that cost money represents the end of innovation in video codecs.
To be clear, Chiariglione isn't opposed to royalty-free standards at all; he just wants them to be deliberately worse, so that people who need better compression will pay independent researchers for it. His MPEG actually wound up trying to make such a standard: IVC. You've never heard of MPEG IVC because Samsung immediately claimed ownership over it, and ISO patent policy does not let MPEG demand disclosure of which specific patents would need to be removed, so long as the owner agrees to negotiate a license with a patent pool.
You might think at this point that Chiariglione is on the side of the patent owners, but he's actually not. In fact, it's specifically those patent owners that pushed him out of MPEG.
In the 90s, patent owners were making bank off MPEG-2 royalties, but having trouble monetizing anything newer. A patent pool never actually formed for H.263, and the one for MPEG-4 couldn't agree on a royalty free rate for Internet streaming[0]. H.264 practically is royalty free for online video, but that only happened because Google bought On2[1] and threatened to make YouTube exclusively serve VP8. The patent owners very much resent this state of affairs and successfully sabotaged efforts at MPEG to make dedicated royalty-free codecs.
The second and more pressing issue (to industry, not to us) is the fact that H.265 failed to form a single patent pool. There are actually three of them, thanks to skulduggery by Access Advance to force people to pay for the same patent license twice by promising a sweetheart licensing deal[2] to Samsung. I'm told H.266 is even more insane, mostly because Access Advance is forcing people to buy licenses in a package deal to cover up the fact that they own very little of H.266.
Chiariglione is only pro-patent-owner in the narrow sense that he believes research needs to be 'paid for'. His attempt to keep patent owners honest got him sidelined and marginalized in ISO, which is why he left. He's actually made his own standards organization, with blackjack and hookers^Wartificial intelligence. MPAI's patent policy actually requires that companies agree to 'framework licenses' - i.e. promise to actually negotiate with MPAI's own patent pool specifically. No clue if they've actually shipped anything useful.
Meanwhile, the rest of the Internet video industry coalesced around Google and Xiph's AV1 proposal. They somehow manage to do without direct royalty payments for AV1, which to me indicates that this research didn't need to be 'paid for' after all. Though, the way Chiariglione talks about AV1, you'd think it's some kind of existential threat to video encoding...
[0] Practically speaking, this meant MPEG-4 ASP was predominantly used by pirates, as legit online video sites that worked in browsers were using Flash based players, and Flash only supported H.263 and VP6.
[1] The company that made VP3 (Theora) and VP6
[2] The idea is that Samsung and other firms are "net implementer" companies. They own some of H.265, but they need to license the rest of it from MPEG-LA. So Access Advance promised those companies a super-low rate on the patents they need if they all pulled out of MPEG-LA, and they make it up by overcharging everyone else, including making them pay extra if they'd already gotten licenses from MPEG-LA before the Access companies pulled out of it.
Codec development is slow and expensive because you can't just release a new codec; you have to dance around patents.
In the early days of MPEG, codec development was difficult because most computers weren't capable of encoding video, and the field was in its infancy.
However, by the end of the '00s computers were fast enough for anybody to do video encoding R&D, and there was a ton of research to build upon. At that point MPEG's role changed from being a pioneer in the field to being an incumbent with a patent minefield, stopping others from moving the field forward.
I disagree. Video is such a large percentage of internet traffic and licensing fees are so high that it becomes possible for any number of companies to subsidize the development cost of a new codec on their own and still net a profit. Google certainly spends the most money, but they were hardly the only ones involved in AV1. At Mozilla we developed Daala from scratch and had reached performance competitive with H.265 when we stopped to contribute the technology to the AV1 process, and our team's entire budget was a fraction of what the annual licensing fees for H.264 would have been. Cisco developed Thor on their own with just a handful of people and contributed that, as well. Many other companies contributed technology on a royalty-free basis. Outside of AV1, you regularly see things like Samsung's EVC (or LC-EVC, or APV, or...), or the AVS series from the Chinese.... If the patent situation were more tenable, you would see a lot more of these.
The cost of developing the technology is not the limitation. I would argue the cost to get all parties to agree on a common standard and the cost to deploy it widely enough for people to rely on it is much higher, but people manage that on a royalty-free basis for many other standards.
Maybe you don’t remember the way the GIF format (there was no JPEG, PNG, or WebP initially) had problems with licensing, and then, years later, the scares about it potentially becoming illegal to use GIFs. Here’s a mention of some of the problems with Unisys, though I didn’t find info about these scares on Wikipedia’s GIF or CompuServe pages:
https://www.quora.com/Is-it-true-that-in-1994-the-company-wh...
Similarly, the awful history of digital content restriction technology in general (DRM, etc.). I’m not against companies trying to protect assets, but data assets have always been inherently prone to “use”, whether that use was intended or not by whoever provided the data. The problem has always been the means of dissemination, not that the data itself needed to be encoded with a lock that anyone with the key (or the means to get or make one) could unlock, nor that it should need to call home, effectively preventing the user from legitimately using the data at all.
I don't know about video codecs, but MP3 (also part of MPEG) came out of Fraunhofer and was paid for with German tax money. It should not have been patented in the first place (and wasn't in Germany).
The release of VP3 as open source predates Google's later acquisition of On2 (2010) by nearly a decade.
(I know nothing about the legal side of all this, just remembering the time period of Ubuntu circa 2005-2008).
Audio and video codecs, and document formats like PDF, are foundational to computing and modern life, from government to business, so there is a great incentive to make it all open and free.
If that stuff had worked better, Linux would have failed entirely. Instead, nearly everyone interfaces with a Linux machine probably hundreds if not thousands of times a day in some form. Maybe millions, if we consider how complex just accessing internet services is and the many servers, routers, mirrors, proxies, etc. one encounters in just a trivial app refresh. If not Linux, then the open Mach/BSD derivatives iOS uses.
And even before the ascent of Linux, we had all manner of free/open stuff informally in the '70s and '80s: shareware, open culture, etc. That led to today, where this entire medium only exists because of open standards, open source, and volunteering.
Software patents are a net loss for society. For-profit systems are less efficient than open non-profit systems. No 'middle-man' system is better than one that goes out of its way to eliminate the middle-man rent-seeker.
No, just no. We've had free community codec packs for years before Google even existed. Anyone remember CCCP?
And regarding “royalty-free” codecs, please read this: https://ipeurope.org/blog/royalty-free-standards-are-not-fre...
Not to mention the computer clusters to run all the coding sims, thousands and thousands of CPUs are needed per research team.
People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.
MPEG was reorganized because this Leonardo guy became an obstacle, and he’s been angry about it ever since. Other than that, I’d say it's business as usual in the video coding realm.
(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)
We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.
How about governments? Radar, Laser, Microwaves - all offshoots of US military R&D.
There's nothing stopping either the US or European governments from stepping up and funding academic progress again.
> THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER TO (I) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (II) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM
It's unclear whether this license covers videoconferencing for work purposes (where you are paid, but not specifically to be on that call). It seems to rule out remote tutoring.
MPEG LA probably did not have much choice here because this language requirement (or language close to it) for outgoing patent licenses is likely part of their incoming patent license agreements. It's probably impossible at this point to renegotiate and align the terms with how people actually use video codecs commercially today.
But it means that you can't get a pool license from MPEG LA that covers commercial videoconferencing, you'd have to negotiate separately with the individual patent holders.
[0] https://mpeg.chiariglione.org/standards/mpeg-7/reference-sof...
EDIT: Here is the Wikipedia page for BiM, which evidently even made it into an ISO standard [1]
Has AV1 solved this, to some extent? Although there are patent claims against it (patents for technologies that are fundamental to all the modern video codecs), it still seems better than the patent & licensing situation for h264 / h265.
Just check pirated releases of TV shows and movies.
I remember this same guy complaining that investments in the MPEG extortionist group would disappear because they couldn't fight against AV1.
He was part of a patent mafia and is only lamenting that he lost power.
Hypocrisy in its finest form.
https://blog.chiariglione.org/a-crisis-the-causes-and-a-solu...
He is not a coder, not a researcher; he is only part of the worst game there is in this industry: making money from patents and "standards" you have to pay to use, implement, or claim compatibility with.
MPEG was also joint with the video conferencing standards group within the CCITT (now the International Telecommunication Union), which generally required FRAND declarations from patent holders.
My recollection is that MPEG-LA was set up as a clearing house so that implementers could go to one licensing organization, rather than negotiating with each patent owner individually.
All the patents for MPEG-1 and MPEG-2 must have expired by now.
Besides patent gridlock, there is a fundamental economic problem with developing new video coding algorithms. It's very difficult to develop an algorithm that will halve the bit rate at the same quality, to get it implemented in hardware and software products, and to introduce it broadly in the existing video services infrastructure. Plus, doubling the compression is likely to more than double the processing required. On the other hand, within a couple of years the network engineers will double the bit rate for the same cost, and the storage engineers will double the storage for the same cost. They, like processing, follow their own Moore's Law.
So reducing cost by improving codecs is more expensive and takes more effort and time than just waiting for the processor, storage, and networking cost reductions. At least, that's been true over the three decades since MPEG-2.
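The trade-off above can be sketched as a toy cost model. Every number here is an illustrative assumption, not a real industry figure: bandwidth cost per bit halves on a fixed cadence regardless, while a new codec halves the bitrate only once, and only after a multi-year standardize-and-deploy lag.

```python
# Toy model of the "just wait for cheaper bandwidth" argument.
# All parameters are illustrative assumptions, not real figures.

def delivery_cost(year, codec_lag=5, halving_period=2.0):
    """Relative cost to deliver a fixed-quality stream in a given year."""
    # Bandwidth/storage cost per bit halves every `halving_period` years.
    bandwidth = 0.5 ** (year / halving_period)
    # A new codec halves the bitrate, but only once it ships at `codec_lag`.
    bitrate = 0.5 if year >= codec_lag else 1.0
    return bandwidth * bitrate

for year in (0, 5, 10):
    print(year, round(delivery_cost(year), 4))
```

In this toy model the codec still helps once it ships, but most of the cost reduction over a decade comes from the bandwidth curve alone, which is the point being made here.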
If you're interested in this, it's a good idea to read about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and go from there.
In general, lossless compression works by predicting the next (letter/token/frame) and then encoding the difference from the prediction in the data stream succinctly. The better you predict, the less you need to encode, the better you compress.
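To make that concrete, here is a minimal, illustrative sketch (toy data and a trivial previous-sample predictor, not any real codec): the better the predictor, the closer the residual stream is to all zeros, and the better a generic entropy coder does on it.

```python
import math
import zlib

# A smooth, slowly varying signal stands in for video samples.
signal = bytes(int(128 + 100 * math.sin(i / 50)) % 256 for i in range(4096))

# Trivial predictor: each sample is predicted to equal the previous one.
# We keep only the prediction error (mod 256), which is near zero.
residuals = bytes(
    [signal[0]] + [(signal[i] - signal[i - 1]) % 256 for i in range(1, len(signal))]
)

raw_size = len(zlib.compress(signal, 9))
residual_size = len(zlib.compress(residuals, 9))
print(raw_size, residual_size)  # the residual stream compresses far better
```

Real codecs use far better predictors (motion compensation, intra prediction, or a neural network) and far better entropy coders (arithmetic coding rather than deflate), but the shape of the pipeline is the same.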
The flip side of this is that all fields of compression have a lot to gain from progress in AI.
Fabrice Bellard's nncp (mentioned in a different comment) leads.
DCVC-RT (https://github.com/microsoft/DCVC) - a deep-learning-based video codec that claims to deliver 21% more compression than H.266.
One of the compelling edge AI use cases is to create deep-learning-based audio/video codecs on consumer hardware.
One of the large/enterprise AI use cases is to create a coding model that generates deep-learning-based audio/video codecs for consumer hardware.
This makes zero sense, right? Even if it were applicable, why would it need a standard? There is no interoperability between game servers of different games.
Copyright is cancer. The faster AI industry is going to run it into the ground, the better.
Or is it MPEG LA? https://wiki.endsoftwarepatents.org/wiki/MPEG_LA
And, boy howdy, they did.
Maybe these sorts of handshake agreements and industry collaboration were necessary to get things rolling in 198x. If so, then I thank the MPEG group for starting that work. But by 2005 or so when DivX and XviD and h264 were heating up, it was time to move beyond that model towards open interoperability.