There are some groups in the anime scene whose sole purpose is to take releases from other groups and convert them into standard h.264 video that can play on basically any device.
With that said, I have to ask why these groups are interested in 10-bit when I'm essentially certain they cannot view in 10-bit. Only workstation GPUs (Quadro and FirePro) output 10-bit (consumer GPUs are intentionally crippled to 8-bit), and I can't really think of any monitors with true 10-bit panels under about $1000 (though there are many with 10-bit processing, which is nice but doesn't get you to 10-bit monitoring). There are some output boxes intended for video production that can get around the GPU problem, but by the time you've got a full 10-bit environment you're at $1500 bare minimum, which seems excessive for most consumer consumption.
So I guess what I'm asking is: are these groups interested in 10-bit because it's better and more desirable (a placebo quality effect), or are they actively watching these in full 10-bit environments?
But it works, for the same reason that audio engineers use 64-bit signal pipelines inside their DAW even though nearly all output equipment (and much input equipment) is 16-bit, which is already at the limit of human perception.
If you have a 16-bit signal path, then every device on the path gets 16 bits of input and 16 bits of output. So every device in the path rounds to the nearest 16-bit value, which introduces an error of up to ±0.5 steps per device.
However, if you do a lot of those back to back, the errors accumulate. If you have 32 stages in your signal path and each is ±0.5, the total is ±16. Ideally some of them cancel out, but in the worst case you're actually off by 16. "Off by 16" is equivalent to "off by log2(16) = 4 bits". So now you don't have 16-bit audio, you have 12-bit audio, because the bottom 4 bits are junk. And 12-bit audio is no longer beyond the limit of human perception.
Instead, if you do all the math at 64-bit, you still have 4 bits of error, but they're way down in bits 60-64 where nobody can ever hear them. Then you chop down to 16-bit at the very end and the quality is better. You can have a suuuuuuuper long signal path that accumulates 16 or 32 or 48 bits of error and nobody notices, because you still have 16 good bits.
tl;dr rounding errors accumulate inside the encoder
The H.264 prediction loop includes some inherent rounding biases and noise injection. Using 10-bit instead of 8-bit reduces the injected noise by 12 dB, which makes a noticeable difference in quality-per-(encoded)-bit, even when the source and display are both 8-bit. I spent some time talking with Google engineers early on in the VP9 design process, and they commented that they had done some experiments, but did not see similar gains by using higher bit depths with VP9. I don't know if that's still true with the final VP9 design, though.
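The 12 dB figure follows from the standard quantization-noise rule of thumb: each extra bit of precision halves the quantization step, which improves SNR by 20·log10(2) ≈ 6.02 dB. A quick sanity check:

```python
import math

# Each extra bit halves the quantization step: 20*log10(2) ~= 6.02 dB per bit.
per_bit = 20 * math.log10(2)
gain_8_to_10 = (10 - 8) * per_bit
print(round(gain_8_to_10, 2))  # 12.04 -- matching the "12 dB" figure above
```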
I can explain that - I wrote an extensive post on this matter a year back, so I'll just reuse that with a bit of tweaking. Let's talk a bit about the medium we're working with first.
Banding is the most common issue with anime. Smooth color surfaces are plentiful, and consumer products (DVDs/BDs) made by "professionals" have a long history of terrible mastering (and then there are companies like QTEC that take terrible mastering up to eleven with their ridiculous overfiltering). As such, the fansubbing scene has a long history of video processing in an effort to increase perceived quality by fixing the various source issues.
This naturally includes debanding. However, due to the large smooth color surfaces, you pretty much always need to use dithering in order to have truly smooth-looking gradients in 8-bit. And since dithering is essentially noise to the encoder, preserving fine dither and not having the H.264 encoder introduce additional banding at the encoding stage meant that you'd have to throw a lot of extra bitrate at it. But we're talking about digital download end products here, with bitrates usually varying between 1-4 Mbps for TV 720p stuff and 2-12 Mbps for BD 720p/1080p stuff, not encodes for Blu-ray discs where the video bitrate is around 30-40 Mbps.
Because of the whole "digital download end products" thing, banding was still the most common issue with anime encodes back when everyone was doing 8-bit video, and people used a whole bunch of tricks to try to minimize it, like overlaying masked static grain on top of the video (a trick I used to use myself, and incidentally something I've later seen used in professional BDs as well - though they seem to have forgotten to properly deband it first). These tricks worked to a degree, but usually came with a cost in picture quality (not everyone liked the look of the overlaid static grain, for example). Alternatively, the videos just had banding, and that was it.
Over the years, our video processing tools got increasingly sophisticated. Nowadays the most-used debanding solutions all work in 16-bit, and you can do a whole bunch of other filtering in 16-bit too. Which is nice and all, but ultimately you still had to dither it down to 8-bit and encode it, at which point you ran into the gradient preservation issue once again.
Enter 10-bit encoding: With the extra two bits per channel, encoding smooth gradients suddenly got a lot easier. You could pass the 16-bit debanded video to the encoder and get nice and smooth gradients at much lower bitrates than what you'd need to have smooth dithered gradients with 8-bit. With the increased precision, truncation errors are also reduced and compression efficiency is increased (despite the extra two bits), so ultimately, if you're encoding at the same bitrate and settings using 8-bit and 10-bit, the latter will give you smoother gradients and more detail, and you don't really need to do any kind of processing tricks to preserve gradients anymore. Which is pretty great!
Now, obviously most people don't have 10-bit screens, so dithering the video down to 8-bit is still required at some point. However, with 10-bit encodes, this job moves from the encoder to the end user's player, which is a much nicer scenario, since you no longer need to throw a ton of bitrate at preserving the dithering inside the actual encode. The end result is that the video looks like a high-bitrate dithered 8-bit encode on an 8-bit (or lower) screen, but without the ton of bitrate actually being required.
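The whole pipeline is easy to demonstrate with a toy NumPy sketch (this is illustrative only - real players use fancier dithering than plain random noise): a shallow dark gradient quantized straight to 8-bit produces wide flat bands, 10-bit gives ~4x as many steps, and player-side dithering of the 10-bit signal down to 8-bit dissolves the bands into fine noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shallow, dark gradient across a 1920-pixel row -- the classic banding case.
ramp = np.linspace(0.10, 0.12, 1920)

# Straight quantization: 8-bit has far fewer distinct code values than 10-bit.
codes8 = np.round(ramp * 255).astype(int)
codes10 = np.round(ramp * 1023).astype(int)
print(np.unique(codes8).size, np.unique(codes10).size)

def longest_run(codes):
    """Width in pixels of the widest flat band."""
    boundaries = np.flatnonzero(np.r_[True, np.diff(codes) != 0, True])
    return np.diff(boundaries).max()

# Player-side dithering: add +/-0.5 code of noise before rounding the
# 10-bit signal down to 8-bit, trading wide bands for fine noise.
dithered8 = np.round(codes10 / 1023 * 255
                     + rng.uniform(-0.5, 0.5, ramp.size)).astype(int)

print(longest_run(codes8))     # hard rounding: bands hundreds of pixels wide
print(longest_run(dithered8))  # dithered: bands broken up into short runs
```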
So the bottom line is that even with 8-bit sources and 8-bit (or lower) consumer displays, 10-bit encoding provides notable benefits, especially for anime. And since anime encoders generally don't give a toss about hardware decoder compatibility (because hardware players are generally terrible with the advanced subtitles that fansubbers have used for a long time), there really was no reason not to switch.
Keep in mind that there is a range expansion when converting limited-range video to monitor sRGB, which magnifies any banding in the source. Also, several video players now support dithering. So the debanding effect can often be quite visible.
Also, x264 benefits from less noise in its references, which means 10-bit gets a small compression performance bump.
I have been waiting for H.265 to surface in consumer cameras, hopefully with 10- and 12-bit depths as options. Even if current monitors don't do 10 bits, there is so much information you can pull out of the extra data.
Many camera sensor chips can output 10 or 12 bits, it's a shame it doesn't get recorded on most cameras.
Hopefully Rec. 2020 on TVs and the new Blu-ray formats will also push cameras.
That's not what NVidia says:
"NVIDIA Geforce graphics cards have offered 10-bit per color out to a full screen Direct X surface since the Geforce 200 series GPUs"
http://nvidia.custhelp.com/app/answers/detail/a_id/3011/~/10...
> and I can't really think of any monitors that have 10-bit panels under about $1000
$394
http://www.amazon.com/Crossover-27QW-DP-27-Inch-2560x1440/dp...
$450
http://www.amazon.com/Achieva-Shimian-QH2700-IPSMS-Backlight...