Some history for people who are not aware: Theora became somewhat popular in free software / open source circles because, at the time, it was the best codec believed to be either free of patents or covered by patents explicitly opened up for free use. So if you were concerned about patents and their impact on free software, you'd use it. But Theora wasn't a great codec, which we always knew; it was just the best we had before Google bought and opened up VP8.
It's an interesting tradeoff. Theora was never particularly popular, so probably only a small number of sites will be impacted. But we have something of a tradition that the web platform rarely breaks things. You can mostly still use HTML from 20-30 years ago, GIF will probably stay supported in browsers forever, and I don't think there are many examples of media formats being deprecated in browsers. Even odd things like BMP are still supported.
Dropping e.g. HTML/CSS features is much harder, since there likely won't be any workaround other than running an older version of the browser.
Obviously only if I were certain some other browser could still open it, or maybe some emulator could display it. My earliest websites are 27 years old now (and live on a floppy disk); I'd hate to find that they're no longer readable in any software.
But I'm perfectly fine with my Firefox, with which I spend hours a day on the modern web, dropping support for that, when they need to shed some cruft or weight.
It's a shame Theora never "made it". The peak of its popularity is long past.
So was VP8, which Google ended up pushing at nearly the same time they were rejecting Theora as an inferior H.264 clone. The irony is that VP8 wasn't that different either. [1]
[1] https://web.archive.org/web/20150301015756/http://x264dev.mu...
The article you've linked is talking about some decoder implementation details which I'm not sure were all that relevant for end users: x264 was always the superior encoder, libvpx was "ok", and libtheora was terrible when it came to actual encoded video results. I'm not sure what the article you posted really proves on that front.
I tried to use VP9 in the past, but the DLL is something like 20-40 MB. The smallest I could find was dav1d, and it's still around 4 MB for the library DLL; on top of that, encoding AV1 and getting a good compression rate was not trivial.
I was wondering about this too. https://www.osnews.com/story/24954/us-patent-expiration-for-... seems to think 12/2027.
Would it be surprising if H.264 was replaced by something else by that point? We have multiple subsequent standards, and it seems like everyone producing or providing content would want improved codecs by then.
It's also noteworthy that of the full list of H.264 patents here:
https://scratchpad.fandom.com/wiki/MPEG_patent_lists
...the majority of them have already expired. IANAL, but since the original H.264 spec became public 20 years ago, everything in it should be usable as prior art.
Also, all existing MPEG-4 part 2 (infamous DivX etc.) patents and anything older, e.g. H.263, MPEG-1/2 and H.261, have certainly expired by now.
AV1 (itself derived from the On2 VP8 and VP9 formats) is supposed to be the answer to H.265's patent shenanigans, but support is very slow to manifest. Like, Apple only added it to the iPhone 15 - as in, the one that just came out a month ago. Hardware AV1 support in discrete GPUs similarly only landed last year with Nvidia 40 series, Intel Arc, and Radeon 7000 series cards.
Isn't AV1 the "preferred" option as a replacement, at least for those who are looking for something high quality and without patent encumbrance?
I considered libtheora; the library size is good, but the compression/visual quality is awful compared to the alternatives.
This feels like a gross misoptimization that is actively harmful to 99.999999999% of user experiences.
...and in fact it doesn't need to be, as I can say so from having written an MPEG-2 (+MPEG-1) decoder myself, whose binary turned out to be less than 16KB.
When one hears about a codec being dozens of MB, the natural instinct should be "for what?" and not "who cares?" The latter attitude is responsible for why software has gotten so much more inefficient, and serves only to line the pockets of hardware manufacturers.
Nobody uses Theora, because it's a bad codec. It was worse than H.264 back in 2009 and it hasn't been updated since. Removing support for dead formats is generally a very good idea, particularly in a web browser, because it reduces the attack surface; we have recently seen a number of major vulnerabilities caused by archaic, neglected file formats and codecs that provided almost no value to users.
Small inefficiencies add up and in the end you have the janky laggy mess that is modern software.
Otherwise, why use a bunch of larger algorithms for map routing when BFS/DFS are so simple and small?
I can just play those as they are in any browser except Safari. Painfully, macOS actually supports Vorbis, but only in a CAF (Core Audio Format) container instead of an Ogg container. Still hoping, because shipping an entire Ogg decoder in the browser with WASM works but is ugly.
Of course, I could also just re-encode them with a microservice but it’s just… bleh.
From the FF announcement linked elsewhere in these comments.
VP9 on Safari? Sure, on desktop. On mobile? Oh yeah, only via WebRTC (why???).
Want to import FLACs into Apple Music? Nope, only inferior ALAC is supported for lossless.
AV1? Only just added to iPhone 15 Pro series (not in 15 cause old SoC) and still not on Macbooks.
HEVC? Oh yeah, of course we use it as HEIC for photos and support HEVC playback in Safari, how could we not?
It was a "nifty but who really needs it?" idea
It worked, but my experience was that Ogg isn't really a well-designed container at all. Even QuickTime / MPEG-4 with its 1990s warts is more flexible and efficient. I would definitely pick Matroska today if I really wanted to torture myself with this kind of document format again.
Somebody else wrote more eloquently about Ogg's bizarre design choices:
https://hardwarebug.org/2010/03/03/ogg-objections/
"The variable overhead in the Ogg format comes from the page headers, mostly from the segment_table field. This field uses a most peculiar encoding, somewhat reminiscent of Roman numerals. In Roman times, numbers were written as a sequence of symbols, each representing a value, the combined value being the sum of the constituent values.
"The segment_table field lists the sizes of all packets in the page. Each value in the list is coded as a number of bytes equal to 255 followed by a final byte with a smaller value. The packet size is simply the sum of all these bytes. Any strictly additive encoding, such as this, has the distinct drawback of coded length being linearly proportional to the encoded value. A value of 5000, a reasonable packet size for video of moderate bitrate, requires no less than 20 bytes to encode."
- -
[1] https://github.com/pojala/twentytwenty/blob/master/twtw-ogg....
Just about everything I own can play them, including Rockbox on my Sansa Clip+.
Writing about Ogg Vorbis as if it were a historical format is silly. Sure, it wasn't adopted by streaming services, but everything on Bandcamp (for example) is available as Ogg Vorbis.
All my music is Vorbis, Opus, or ripped m4as, and browsers take it all fine.
The first version of WebM actually used Vorbis. It's the newer ones that are Opus.
But even then, in a modular environment the potential pool of engineering resources is much broader.
My laptop doesn't have a 1080p screen, but when I tried 1080p anyway, it handled some videos better than others.
In this one I only dropped 2 frames:
https://www.youtube.com/watch?v=m1jY2VLCRmY&list=PLAMlLc3Zgg...
I'm sorry for being ambiguous. I meant encoding, not decoding. In my opinion an average PC should be able to encode at least a few minutes of video in a reasonable time.
Anyway, having to resort to frame dropping (even to a negligible degree) means 100% CPU or I/O load has already been reached, doesn't it? 100% load on 720p playback sounds bizarre. I have been accustomed to video playback taking just a few percent on any old computer with Intel graphics.
> I have been accustomed to video playback taking just a few percent on any old computer with Intel graphics.
It's using something like 160% of a hyperthreaded core, so it's not even completely saturating a laptop from 2015 (8 years ago). I don't know why it needs to drop frames, though (single core too slow?).
I mean ANY phone. My phone is a $160 model from 2019 and it can play 1440p AV1 on YouTube (with a few frame drops, but nothing noticeable), even though it has a 720p screen. I'd have to dig out my OnePlus 3 (2016) to get stutter at 1440p.
Note that the OnePlus 3 would stutter on 1440p H.264 as well; it's just old.
https://www.spiedigitallibrary.org/conference-proceedings-of...
“Overall AV1 real-time playback of 720p 30fps @ 2Mbps is feasible for low-end devices with 4 threads and 1080p 30fps @ 4Mbps is feasible for high-end and mid-range devices with 4 threads using Dav1d decoder.”
The Sony Xperia 1 series does (along with a headphone jack, microSD card support, notification LED, a hardware camera button, no hole in the screen for the front camera, etc.).
"Chrome will deprecate and remove support for the Theora video codec in desktop Chrome due to emerging security risks. Theora's low (and now often incorrect) usage no longer justifies support for most users. "
(Not that it would make any sense to implement ffmpeg on top of WebCodecs on top of ffmpeg. Just needed an example.)
I hope I have not completely missed the train by focusing on other areas outside of web.
(it's actually usable to some extent)