I process photos in ProPhoto RGB, and I'm switching my workflow to always publish images to the web as Display P3, which works just fine in JPEG and WebP by attaching a color profile.
Display P3 is moderately larger than the old sRGB standard; you trade some color resolution in the "mainstream" region for more saturated greens and reds.
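Something like this should do it with ImageMagick plus cwebp (untested sketch; the DisplayP3.icc path is a placeholder for whatever P3 profile you have, and it assumes the master has its ProPhoto profile embedded):

convert master.tif -profile /path/to/DisplayP3.icc -quality 92 photo.jpg
convert master.tif -profile /path/to/DisplayP3.icc photo.png
cwebp -q 92 -metadata icc photo.png -o photo.webp   # -metadata icc keeps the profile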
4K TVs use Rec 2020, which has a huge color gamut. Because it covers a much bigger space, 8-bit color is not enough; you need 10-bit, 12-bit or more (I process in 16 bits), and neither JPEG nor WebP can handle that. AVIF can, but so can JPEG XL.
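Both of the deeper formats take high-bit-depth input directly; for example, from a 16-bit PNG export (photo16.png is a placeholder name):

cjxl -d 1 -e 7 photo16.png photo.jxl          # JPEG XL keeps the full bit depth
avifenc -d 10 -y 444 photo16.png photo.avif   # 10-bit AVIF with 4:4:4 chroma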
I know people doing synthetic tests (instead of looking at the image, they run a program that estimates how bad the compression artifacts are) are impressed with AVIF, but I've done some shootouts with JPEG/WebP/AVIF/JPEG XL where I look at the images with my own eyes.
For pictures at moderate-to-low quality (say, images for a blog) I think AVIF does very well. But I want to publish pictures I took with my mirrorless, where I work really hard to get them "tack sharp" (e.g. sometimes a 4000x6000 image from my Sony looks almost like pixel art when you blow it up), and I want people to see something consistent with that on the web. My experience is that AVIF falls down there: it does not really save bits compared to JPEG and WebP at high quality. JPEG XL gives superior compression at high quality, it supports high color depths, and it's an option I'd really like to have.
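A rough sketch of that kind of shootout (untested; each encoder's quality number means something different, so match by eye or a metric rather than by the number):

for q in 85 90 95; do
  convert src.png -quality $q q$q.jpg
  cwebp -q $q src.png -o q$q.webp
  avifenc -q $q src.png q$q.avif   # -q needs a recent libavif
  cjxl -q $q src.png q$q.jxl
done
ls -lS q*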
In all the comparisons I've seen, it's not even a contest.
"I picked this image because it's a photo with a mixture of low frequency detail (the road) and high frequency detail (parts of the car livery). Also, there are some pretty sharp changes of colour between the red and blue. And I like F1.
Roughly speaking, at an acceptable quality, the WebP is almost half the size of JPEG, and AVIF is under half the size of WebP. I find it incredible that AVIF can do a good job of the image in just 18 kB."
https://jakearchibald.com/2020/avif-has-landed/
It'd be interesting to see file size comparisons of lossless AVIF vs. JPEG's "almost lossless" quality-100 setting, but I haven't run across any yet.
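It's easy to run yourself, assuming avifenc and ImageMagick are installed (photo.png is a placeholder):

avifenc --lossless photo.png photo-lossless.avif
convert photo.png -quality 100 photo-q100.jpg
ls -l photo-lossless.avif photo-q100.jpg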
They're just showing that it does a less offensive job than WebP or JPEG of smoothly erasing detail to get file sizes down to tiny levels? But "tiniest size with least offense" is VERY different from "best size with greatest detail."
The compression artifacts in the self-shadows of the red bit of the car to the left of the driver's head look awful to me. It's true that the compression artifact blends in pretty well and you might think the car really looks like that but personally I can't unsee things like that once I look at them in comparison.
The thing is, it's that play of reflections and shadows that makes an expensive sports car look so sexy.
I don't want the internet to look like that F1 car to save a few milliseconds or $0.0000001 for the company sending the pic to me.
I certainly don't want my family photos to look like that.
I don't want the photographs on the internet to become much faster, much cheaper and much worse. I'd like them to become a bit faster, a bit cheaper, but also more crisp, vivid, emotionally engaging, and realistic.
So it's unsurprising that they have pushed the format optimized for "as few bits as we can get away with before things look too terrible" rather than actually improving quality and extending capabilities.
https://afontenot.github.io/image-formats-comparison/#end-of...
It seems AVIF has better compression at lower bit rates. At high bit rates they seem similar. AVIF especially shines for pictures with large homogeneous surfaces like the sky.
However, AVIF is missing some important features, such as progressive image loading. The maximum resolution is apparently also quite limited.
AVIF is missing JPEG XL's ability to re-encode existing JPEGs losslessly and reversibly with a reduction in file size, which may prove a serious advantage for JPEG XL. AVIF also lacks anything like https://jpegxl.info/art/. :-)
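That round trip is trivially testable with libjxl's tools (photo.jpg is a placeholder); the reconstruction should be byte-identical:

cjxl --lossless_jpeg=1 photo.jpg photo.jxl
djxl photo.jxl roundtrip.jpg
cmp photo.jpg roundtrip.jpg && echo byte-identical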
Is that defined by some standard in order to be declared "4K", or is it just what seems to be happening because all/most of the panel makers threw it in?
My main monitor is a Dell that is very close to Adobe RGB, which is great for print work because it covers the CMYK gamut well.
I am interested in getting something better, but it is not clear to me that you can really get a Rec 2020 computer monitor other than a crazy expensive one from Dolby. Maybe I've got to download a bunch of monitor profiles so I can know what various monitors really support; I've already developed a system for simulating how channel separation works for red-cyan stereograms even on monitors I don't have.
A better TV has been on the agenda too, except somehow people keep giving me free TVs on the trailing edge, such as a Walmart TV with great sound (better than many sound bars) whose backlight burned out; then I got gifted a Samsung which sucks but is working fine in my TV nook downstairs. My main AV room doesn't have room for anything bigger than what I've got unless I move everything, and I don't have a good plan for that…
In theory you can use any color primaries with any video resolution on a computer (NOT on a TV, as those normally support only the mainstream standards) as long as the color space metadata is properly set. In practice, some software ignores the metadata, or the metadata gets lost in the video processing chain. So in general, 4K video uses Rec 2020, HD/FHD uses Rec 709, and SD uses Rec 601, for maximum compatibility.
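For example, with ffmpeg you can tag a Rec 2020/PQ stream while re-encoding (illustrative values; these flags only label the stream, they don't convert the pixels):

ffmpeg -i input.mp4 -c:v libx265 -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc output.mp4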
Don't get too hung up on picking a file format then. All sorts of middleboxes, CDNs, and edge network acceleration systems can potentially "right-size" your image for what the requesting device can handle optimally.
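From the client side that negotiation is just the Accept header; e.g. with curl (example.com is a placeholder):

curl -sI -H 'Accept: image/avif,image/webp,image/*' https://example.com/photo -o /dev/null -w '%{content_type}\n'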
Can you share some examples of such images/fragments?
Or maybe they are just trying to keep feature parity with Safari.
At this point adding new {image, audio, video, compression} codecs to browsers is probably a net negative, unless there's a good chance they get deployed across the entire browser ecosystem. Safari is generally the browser that's most conservative about implementing anything new, so their support makes a huge difference in the viability of getting the format universally supported.
I love this phrase, thank you.
According to Google's study [1], by a Google engineer, JPEG XL is nowhere near good enough compared to AVIF.
None of the above facts have changed since Google Chrome's decision on JPEG XL.
/S
[1] https://storage.googleapis.com/avif-comparison/index.html
But it's still more of a response than Google has given in ages.
There's a browser joke here somewhere, I'm sure.
The fact that we have to get on our knees and plead for their consideration versus just fork and ship should make you ill. No compression without representation or some such.
> @Reporter Could you please confirm the OS details.
> As the issue seems similar to crbug.com/1178058 adding firsching to cc list for more inputs.
I get that most bandwidth goes to video, but it would still be nice to have a great modern standard for images.
WebP is obsolete. It's still based on the VP8 codec, which in video was replaced by VP9 a long time ago. AVIF is based on AV1, which absorbed what would have been VP10. So WebP is a few generations behind in the VPx lineage, and is no match for modern codecs.
After a bit of searching, it's unclear what degree of "patent risk" comes with JPEG XL. JPEG historically was subject to patent troll lawsuits until the patent expired in 2006.
Please note that it's not enough for there to be a "royalty-free reference implementation" of JPEG XL, even if it's licensed with Apache 2.0, because you can't be sure from a glance that the Apache license patent grant includes all relevant patents. If you care about open source and free formats, you should look for two things: a comprehensive patent pool transferred to the standards body AND a royalty-free patent license to anyone with no strings attached.
The game here is that companies with potential claims over some techniques used within codecs have an incentive to withhold their patents from the official pool until years after adoption. Then they sue the biggest users of the codec (like Google) for obscene sums of money. That's why ALL of the patents used in a codec must be assigned to the standards body for open licensing, and you have to be SURE that none are withheld. This is difficult.
AVIF (and its standards body, the AOM) was created in part (I believe) to solve this very problem. All the major tech companies are members, and they've effectively agreed to a patent truce with regard to codecs.
This is arguably the most important commercial concern in distributing a browser for free that includes codecs. If you ship unlicensed codecs, some random company can crawl out of the woodwork five years later and sue you for a billion dollars.
In my view, AVIF only needs to be competitive on compression and quality. Its patent risk is so low that it is the obvious choice. AVIF is truly open, there are multiple implementations, and its reason for existing is to solve the codec patent problem.
Source: I was near the activities within Netflix that helped found the AOM.
Disclaimer: I'm not a lawyer and this isn't legal advice; also I'm several years out of date w.r.t. JPEG XL specifically, so I'd be happy to be corrected about the relevant patent risk. Maybe someone has better info?
JPEG XL decides the codes at encoding time and does context modeling the same way as WebP lossless and Brotli, by deciding which entropy codes to use explicitly.
Microsoft's rANS patent supposedly centers on updating the rANS codes at decoding time (based on past symbols). This is slightly better for density, but much slower, and may negate the speed benefits that rANS brings. For practical implementations the JPEG XL/Brotli way is quite a bit better.
If the new issue gets closed, then it just reaffirms that the Chromium team doesn't care about this feature request. If the new issue somehow convinces the team to do something about it, then it shows that the team is utterly dysfunctional because their decision-making is more influenced by whether you say "pretty pretty please" in the right way than by the content of the discussion.
The new information here seems to be 'most people thought the previous decision was bad', rather than 'please I really want this'. Changing an old decision because most people think it was bad is not a sign of utter dysfunction.
Using it on the web is one thing, but getting better compression for my family photos would also probably be a win, and I suspect it would be possible to build a pipeline for viewing/editing that would be fairly transparent.
Combined with GNU parallel, I did this:
# Losslessly repack existing JPEGs (bit-exact reversible):
find . -type f -iname '*.jpg' -print0 | parallel -0 cjxl --lossless_jpeg=1 {} {.}.jxl
# PNGs to lossless JPEG XL (-d 0 means distance 0, i.e. lossless):
find . -type f -iname '*.png' -print0 | parallel -0 cjxl -d 0 {} {.}.jxl
# Lossless WebPs: decode to PNG first, then to lossless JPEG XL:
find . -type f -iname '*.webp' -print0 | parallel -0 'dwebp -o {.}.png {} && cjxl -d 0 {.}.png {.}.jxl'
JPEGs get losslessly recompressed with JPEG XL. PNGs and (lossless) WebPs get converted to lossless JPEG XL. There are a few holdout programs that I think are missing support (Blender3D springs to mind), but in the rare instances where that's a problem, I could probably set up a quick shortcut or some hooks to run cjxl on the fly and convert back to JPEG/PNG temporarily for whatever operation I need to do.
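A hypothetical shim could be as simple as this (untested; some-editor is a placeholder and assumed to block until it is closed):

tmp=$(mktemp --suffix=.png)
djxl photo.jxl "$tmp"        # JXL -> temporary PNG
some-editor "$tmp"           # edit with a JXL-unaware program
cjxl -d 0 "$tmp" photo.jxl   # re-encode losslessly over the original
rm -f "$tmp"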
https://www.techspot.com/news/98355-google-deprecating-jpeg-...
Lack of popularity hasn't stopped them from supporting WebP and AVIF.
Does JPEG XL allow encoders to switch between the DCT and modular modes on a per-macroblock basis, or is it just on a per-channel basis?
If it's the former then I can see this offering a lot of utility over other image formats because you'd be able to disable the DCT on high-contrast macroblocks and finally be done with all those god-awful "checkerboard" artifacts around the edges of objects.
But if it's merely on a per-channel basis then I'm not sure I see the point, since I can already use a different format when I need lossless encoding; if anything, JXL would become an annoyance because I can't tell whether a JXL image is lossless based on the file's extension.
It was discussed here a few weeks ago: https://news.ycombinator.com/item?id=36801448
Tricky but it can be indeed done on a per-macroblock basis. The encoding itself is fixed per frame, but JPEG XL mandates zero-duration frames to be merged with the prior frame, so multiple frames with different encodings can be used for that. In fact I believe patches already work like this.
Two of the 8x8 transforms are extremely local. One is called IDENTITY and the other DCT2x2. It is very difficult to produce ringing artefacts when using these transforms.
When going to higher quality settings in libjxl, it tends to favor the DCT2x2 quite a bit.
This is in VarDCT -- not modular coding.
Now several formats are competing, most notably AVIF (which is basically a single AV1-compressed video frame) and JPEG XL. JPEG XL might be slightly better in some cases (as AVIF is based on a video codec), and most importantly it's backwards compatible with JPEG. This means we can re-encode 30 years of JPEGs to JPEG XL without image degradation. Wide support would help immensely to make the format standard; otherwise everybody will just continue to use JPEG. Google is somewhat against this, as they already support AV1 and thus don't need to maintain a separate codec for JPEG XL.
However Mozilla has not completely dropped the feature like Google.
1. Binary size cost, in my experience working on Firefox this is in the 100s of KiB range when adding a new decoder.
2. Ongoing costs increased compile times, new integration tests, functional tests and so forth. Keeping those tests passing and non-flaky.
3. Once something is accepted into the web ecosystem the intention is to support it for 10s of years if not forever. Web feature deprecation is quite slow, ex <keygen> & <blink>. The web has not deprecated a primary image format.
4. Security: a 'new' binary format is a place for security vulnerabilities, crashes, and hangs. The web is an actively hostile place for web browsers.
That's the cost for the maintainers. Codecs are historically one of the most problematic sources of security issues (they're complex code that handles malicious downloaded files) and supporting a new one is a rather big maintenance burden for everyone involved.
And if Chrome gets backdoored by a JXL library security hole, everyone will blame Google for it.
If, by any chance, supporting JXL becomes too much of a burden, everyone will again blame Google for being evil if they ever remove it from Chrome.
I bet that after (re)introduction, most of the people yelling for it won't actually convert their JPEGs to XL. Just like almost no one whining about Reader actually uses or pays for any of the alternatives.
The idea is converting workflows to JPEG XL (and particularly to enable uses for which JPEG isn’t suitable and even AVIF is supposedly less optimal), not converting existing JPEGs, mainly.
It's always a pain in the ass when you discover your phone has actually been saving your photos as heic or webp or avif or whatever and hardly anything will open them.
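At least the escape hatches are one-liners, assuming libheif's, libwebp's, and libavif's command-line tools are installed (file names are placeholders):

heif-convert IMG_1234.heic IMG_1234.jpg
dwebp picture.webp -o picture.png
avifdec picture.avif picture.png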
I could understand wanting to improve JPEG in the age of dial-up and 1.44MB floppy disks - 60% smaller images could have been a great benefit in those days. But today, even if I'm taking 30 photos every day at 4K resolution, it'd take about 20 years to fill up a $50 1TB disk (30 × 365 × 20 ≈ 219,000 photos at roughly 4.5MB each ≈ 1TB).
The other benefits of the format might be great for some specialist applications, but options like billion-pixel-wide images, 32 bits per channel and 4099 channels ready for medical imaging only get a shrug from me. I doubt my browser is going to start displaying 4099 channel images.
I just wish we could get rid of heic, webp and avif at the same time.
60% smaller images are great for hosting providers. We have ample storage and bandwidth compared to the 90s, but it still ain't that cheap.
you also very clearly don't care about the entire internet experience, at all, whatsoever.
Edit: 60% space savings only available in the age of the floppy.... what? 60% cost savings when serving multiple terabytes of image data is useless?
You seem to view everything through the extremely tiny lens of a photographer or something... pun intended
AVIF was the first format accepted by the web that supports HDR (not already tone-mapped HDR, true HDR.) Which maybe you don't personally care about, but is something that fundamentally cannot be done with existing JPEG and PNG implementations.
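For example, a true-HDR AVIF can be encoded at 10 bits with BT.2020 primaries and the PQ transfer (CICP 9/16/9); a sketch assuming a PQ-encoded source and a recent libavif:

avifenc -d 10 --cicp 9/16/9 hdr_pq.png hdr.avif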
AVIF might not have happened, and the above paragraph might have read "HEIC", if HEVC had had licensing terms similar to H.264's. But there's no predicting that stuff before it happens.
Everything about computer imagery is pretty sadly limited when compared to the capabilities of human eyes and brains. And for quite some time now the ends of the pipeline (camera sensors and computer displays) have been improving, but are bottlenecked by the middle of the pipeline (image formats).