The fundamental problem is that color space is 2D[1] (color + brightness is 3D, hence 3 subpixels on traditional displays), but monochromatic light has only 1 dimension to vary for color.
Relatedly, the page talks a lot about pixel density, but this confused me: if you swap each R, G, or B LED with an adjustable LED, you naively get a one-time 3x boost in pixel area density, which is a one-time sqrt(3)=1.73x boost in linear resolution. So I think density is really a red herring.
But they also mention mass transfer ("positioning of the red, green and blue chips to form a full-colour pixel") which plausibly is a much bigger effect: If you replace a process that needs to delicately interweave 3 distinct parts with one that lays down a grid of identical (but individually controllable) parts, you potentially get a much bigger manufacturing efficiency improvement that could go way beyond 3x. I think that's probably the better sales pitch.
That does mean a variable resolution scenario.
https://upload.wikimedia.org/wikipedia/commons/b/ba/Planckia...
This reminds me of the observation I had in high school that I could immerse LEDs in liquid nitrogen and run them at higher than usual voltage and watch the color change.
I later got a PhD in condensed matter physics but never developed a really good understanding of the phenomenon; I think it has something to do with
https://www.digikey.com/en/articles/identifying-the-causes-o...
Here is a video of people doing it
I guess you could cheat it by moving the wavelength outside the visible spectrum?
Human eyes have three different color receptors, each tuned to its own frequency range, so human color perception is already 3D. However, apart from human perception, light, just like sound, can contain any combination of frequencies (as you'd see by splitting the signal with a Fourier transform), and many animals do have more receptors than us.
We can distinguish combinations of a huge number of frequencies between 20 and 20,000 Hz.
But we can only distinguish 3 independent colors of light.
Of course our vision is vastly better than hearing for determining where the sound/light comes from.
Dynamic resolution / subpixel rendering. Retina looks really good already, not sure if the effect would be relevant or interesting but it might open up something new
Citation needed. The article doesn't say anything about how the colors are generated, and whether they can only produce one wavelength at a time.
Assuming they are indeed restricted to spectral colors, dithering could be used to increase the number of colors further. However, dithering needs at least 8 colors to cover the entire color space: red, green, blue, cyan, magenta, yellow, white, black. And two of those can't be produced using monochromatic light -- magenta and white. This would be a major problem.
https://www.youtube.com/watch?v=quB60FmzHKQ
https://web.archive.org/web/20240302185148/https://www.zephr...
Presumably you get to control hue and brightness per pixel. That only gives you access to a thin slice of the sRGB gamut (i.e. the parts of HSL where saturation is maxed out), but dithering can solve that. Coming up with good dithering algorithms could be non-trivial (e.g. you'd probably want temporal stability).
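A minimal sketch of the temporal side, assuming each pixel can only show one fully saturated color per frame (the article doesn't confirm this): alternate between two spectral colors so the time average lands on the desaturated target. The RGB stand-in values and the plain repeating duty cycle are illustrative assumptions.

```python
import numpy as np

# Temporal dithering sketch: approximate a desaturated color by rapidly
# alternating between two fully saturated colors. Assumes one color per
# pixel per frame (an assumption, not confirmed by the article).

A = np.array([1.0, 0.0, 0.0])   # linear-RGB stand-in for one spectral color
B = np.array([0.0, 1.0, 1.0])   # stand-in for a roughly complementary color
p = 0.6                         # fraction of frames that show A

frames = 600
shown = []
for n in range(frames):
    # plain repeating 5-frame duty cycle; a real implementation would
    # spread the pulses more evenly in time to reduce flicker
    phase = (n % 5) / 5.0
    shown.append(A if phase < p else B)

avg = np.mean(shown, axis=0)        # what the eye integrates over time
target = p * A + (1 - p) * B        # the desaturated color we wanted
print(avg, target)                  # the two match
```

With a fixed 5-frame cycle the average is exact; the hard part alluded to above is picking pulse patterns that stay stable frame-to-frame instead of shimmering.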
I think about it via the CIE "triangle", where pure wavelengths trace the outer edge, or even the Lab color space (L: luminance, a: green/red, b: yellow/blue), since it's more uniform in perceivable SDR color difference (delta E).
https://luminusdevices.zendesk.com/hc/article_attachments/44...
One key realization: although 1 sub-pixel can't cover the sRGB (or Rec2020) gamut, 2 can, given wavelength and brightness control, rather than the usual 3 for RGB. Realistically, this allows something like super-resolution, because your blue (and red) visual resolution is much lower than your green (eg 10-30 pix/deg rather than ~60 ppd). However, your eye's sensitivity away from the XYZ peaks is lower, so perceived brightness would fall.
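To make the two-degrees-of-freedom point concrete, here's a sketch where (wavelength, brightness) maps to XYZ. The Gaussians are crude stand-ins for the CIE matching functions, not real colorimetric data; the point is just that brightness scales XYZ uniformly, so chromaticity depends on wavelength alone and the reachable colors form a 2D slice.

```python
import numpy as np

# Sketch: with only (wavelength, brightness) per emitter, the reachable
# colors form a 2-D set (spectral locus x intensity). The Gaussians are
# crude stand-ins for the CIE x-bar, y-bar, z-bar matching functions.

def cmf(wl):
    g = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)
    return np.array([g(600, 40) + 0.3 * g(450, 20),  # x-bar (with blue lobe)
                     g(555, 45),                      # y-bar
                     1.8 * g(450, 25)])               # z-bar

def emitter_xyz(wl, brightness):
    return brightness * cmf(wl)

def chromaticity(xyz):
    return xyz[:2] / xyz.sum()

# Brightness scales XYZ uniformly, so chromaticity is set by wavelength alone:
c_dim    = chromaticity(emitter_xyz(520.0, 0.1))
c_bright = chromaticity(emitter_xyz(520.0, 1.0))
print(c_dim, c_bright)   # identical chromaticity, different brightness
```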
I guess what I'm saying is that a lot of the assumptions baked into displays have to be questioned and worked out for these kinds of pixels to get their full benefit.
If you take the "no subpixels" claim out of the article, this technology still seems useful for higher DPI and easier manufacture.
Note that even if we restrict our attention to the max-saturation curve, these pixels can't produce shades of purple/magenta (unless, as you say, they use temporal dithering or some other trick).
Even if these could produce just three wavelengths, if you can pulse them quickly and accurately enough, the effect would be accurate color reproduction (on average over a short time period).
I probably missed something in the article, though I do see ex. desaturated yellow in the photographs so I'm not sure this is accurate.
If you can't control saturation, I'm not sure dithering would help; I don't see how you'd approximate a less saturated color from a more saturated one.
HSL is extremely misleading; it's a crude approximation built for 1970s computing constraints. An analogy I've used before: think of there being a "pure" pigment, where saturation is at its peak; mixing in dark or light (changing the lightness) dilutes the pigment, causing it to lose saturation.
Unsaturated colors aren't a problem: you just mix in a bit of the opposite color. Unsaturated purples will be a challenge, because you need to mix 3 wavelengths rather than just 2.
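A quick numeric sketch of the "mix the opposite color" idea, mixing two lights linearly in XYZ. The two chromaticities are rough, assumed stand-ins for a cyan-ish and an orange-ish spectral color, not exact spectral-locus coordinates.

```python
import numpy as np

# Mixing two lights is additive in XYZ, so the mixture's chromaticity
# lies on the straight line between the two source chromaticities.
# The xy values are illustrative assumptions, not exact locus points.

def to_XYZ(x, y):
    # unit-sum XYZ for a light with chromaticity (x, y)
    return np.array([x, y, 1 - x - y])

cyanish  = (0.10, 0.30)                 # assumed ~490 nm-ish chromaticity
orangish = (0.55, 0.35)                 # assumed ~600 nm-ish chromaticity
white    = np.array([0.3127, 0.3290])   # D65 white point

mix = to_XYZ(*cyanish) + to_XYZ(*orangish)   # equal-weight additive mix
mix_xy = mix[:2] / mix.sum()

def sat(xy):
    # distance from white as a crude saturation measure
    return np.linalg.norm(np.asarray(xy) - white)

print(sat(cyanish), sat(orangish), sat(mix_xy))  # mix is far less saturated
```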
You're right though, there appear to be yellows on display. Maybe they're doing temporal dithering.
Edit: Oh wait, yellow doesn't need dithering in any case; yellow can be produced as a single wavelength. Magenta, on the other hand, would (and there does seem to be a lack of magenta on display).
That's not hugely surprising given that (I believe) LEDs have always shifted spectrum-wise a bit with drive current (well, mostly junction temperature, which can be a function of drive current.)
I guess that means they're strictly on/off devices, which seems supported by this video from someone stopping by their booth:
https://youtu.be/f0c10q2S_PQ?t=107
You can clearly see some pretty shit dithering, so I guess they haven't figured out how to do PWM-based brightness (or worse, PWM isn't possible at all?)
I guess that explains the odd fixation on pixel density that is easily 10x what your average high-dpi cell phone display has (if you consider each color to be its own pixel, ie ~250dpi x 3)
It seems like the challenge will be finding applications for something with no brightness control etc. Without that, it's useless even for a HUD display type widget.
In the meantime, if they made 5050-sized LEDs, they would probably print money...which would certainly be a good way to fund the development of brightness control.
I doubt they can. Probably the process only works (or yields) small pieces, otherwise they'd be doing exactly what you suggest.
I also notice that their blues look terrible in the provided images. Which will be a problem. I don't think they get much past 490nm or so? That would also explain why they don't talk at all about phosphors, which seem like a natural complement to this tech... I don't think they can actually pump them. Which is disappointing :(
My understanding is that this is false: available MicroLED screens (TVs) are in fact brighter than normal screens.
The issue with MicroLED is rather that it is extremely expensive to produce, as the article points out, due to the required mass transfer. Polychromatic LEDs would greatly simplify this process.
Does that in any way contradict the claim that there are large variations in brightness between microLED pixels on the same screen?
Not quite a vector display, but something organic that can be addressed with stimulators like reaction-diffusion, or Gaussian, FFT, Laplacian, and Gabor filters, Turing patterns, etc. Get fancy patterns with the lowest amount of data.
https://www.sciencedirect.com/science/article/pii/S092547739... https://onlinelibrary.wiley.com/doi/10.1111/j.1755-148X.2010...
A lot of the article is focused on how this matters for the production side of things, since combining even 10 um wafer pieces from 3 different wafers is exceedingly time consuming, which I think is the more important part. Sure, the fact that each emitter can be tuned to "any colour" might be misleading, but even if you use rapid dithering like plasma displays did, and pin each emitter to one wavelength, you suddenly have a valid path to manufacturing insanely high density microLED displays! Hopefully this becomes viable soon, so I can buy a nice vivid and high contrast display without worrying about burn in.
I imagine these displays could have color sensors attached to self-calibrate.
Or the variability is low and all you need is very precise voltages.
I think the first versions will be RGB displays with fixed colors, just no longer needing mass transfer. You could use tens of subpixels per pixel, reducing all worries about color resolution.
Make these into e.g. 1x1cm mini displays and mass transfer those into any desired display size.
That sounds like it's getting close to being a really good screen for a VR headset.
It would be very cool to have a display with adjustable color.
Some states are not accessible at a given time (voltage can tune which states are available) but my understanding is the number of states is fixed without rearranging the atoms in the material.
There are ways to compensate for perceptual drift, as modern LCD drivers do, but unless the technology avoids the burn-in issues that plague OLED, it won't matter how great it looks.
You may want to look at how DMD drivers handled the color-wheel shutter timing to increase perceptual color quality. There are always a few tricks people can try to improve the look at the cost of lower frame rates. =)
Black levels would be determined more by reflectivity of the display than illumination.
https://www.porotech.com/technology/dpt/
Demo video
Of course, it's only just now been announced, but I'd love to see what a larger scale graphic looks like with a larger array of these to understand if perceived quality is equal or better, if brightness distribution across the spectrum is consistently achieved, how pixels behave with high frame rates and how resilient they are to potential burn-in.
4K virtual monitors, here we come!
Nvidia is both a blessing and a curse in many ways for standardization... =3
I can certainly see these being useful in informational displays, such as rendering colored terminal output. The lack of subpixels should make for crisp text and bright colors.
I don't see this taking over the general purpose display industry, however, as it looks like the current design is incapable of making white.
Right now we only represent colour as combinations of red, green, and blue, when a colour signal itself is really a combination of multiple "spectral" (pure) colour waves, which can be anything in the rainbow.
Individually controllable microLEDs would change this entirely. We could visualize any color at will by combining them.
It's depressing that nowadays we have this technology yet video compression means I haven't seen a smooth gradient in a movie or TV show in years.
The human eye can't distinguish light spectra producing identical tristimulus values. Thus for display purposes [1], color can be perfectly represented by 3 scalars.
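The point here is metamerism: different spectra can produce identical tristimulus values. A sketch, using Gaussian stand-ins for the CIE matching functions (assumed shapes, not real colorimetric data): any spectrum perturbed by a null-space vector of the 3xN matching matrix looks identical to the eye.

```python
import numpy as np

# Metamerism sketch: two different spectra, identical tristimulus values.
# The Gaussians are stand-ins for the CIE matching functions.

wl = np.linspace(400, 700, 31)
g = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)
CMF = np.stack([g(600, 40), g(555, 45), g(450, 25)])   # 3 x 31 matrix

s1 = g(550, 60) + 0.2              # some smooth, strictly positive spectrum

# Any vector in the null space of CMF changes the spectrum without
# changing the tristimulus response:
_, _, Vt = np.linalg.svd(CMF)
null_vec = Vt[-1]                  # CMF @ null_vec ~ 0
s2 = s1 + 0.1 * null_vec           # 0.1 is small enough to stay positive

print(CMF @ s1)   # same 3 numbers...
print(CMF @ s2)   # ...from a different spectrum
```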
[1] lighting is where the exact spectrum matters, c.f. color rendering index
Sorry, I get excited every time I work with hyperspec stuff now and love talking about it to anyone that will listen.
That's because the points on the outer edge of the CIE diagram are pure wavelengths, and you can get to any point inside by interpolating between two of them.