Since there are no examples I can't be sure this is what I think it is, but IF it is:
I want this on a huge monitor for any 3D game, instead of clunky VR headgear or tiny smartphone AR.
Months ago I also tested this with a small 3D visualization and very crude head tracking.
The effect is damn awesome! Being able to move around in real space with the rendering adapting to it makes it so immersive, even in my very crude tests.
In my opinion the resulting 3D effect is MUCH better than viewing in stereo with one picture per eye.
Here is an example from someone else from 2007:
https://www.youtube.com/watch?v=Jd3-eiid-Uw
From 2012:
https://www.youtube.com/watch?v=h9kPI7_vhAU
Obviously this only works for one person viewing it, but does that really matter? There are a LOT of use cases where a single person views a single monitor, especially in these times. In fact, it is the standard.
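I haven't dug into these particular demos, but the usual way to get this effect is an off-axis (asymmetric) projection frustum driven by the tracked head position: the virtual "window" stays pinned to the physical screen while the eye moves. A minimal sketch with numpy, assuming head coordinates in metres relative to the screen centre (the function name and the numbers are mine, not from the demos):

```python
import numpy as np

def off_axis_frustum(head, screen_w, screen_h, near=0.1, far=100.0):
    """Asymmetric view frustum for head-coupled perspective.

    head:     (x, y, z) of the viewer's eyes in metres, relative to the
              screen centre (z = distance in front of the screen, > 0)
    screen_w, screen_h: physical screen size in metres
    Returns an OpenGL-style 4x4 projection matrix (glFrustum layout).
    """
    hx, hy, hz = head
    # Project the physical screen edges, as seen from the head position,
    # onto the near plane to get the frustum bounds.
    scale = near / hz
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Head centred in front of the screen -> symmetric, ordinary perspective.
P = off_axis_frustum((0.0, 0.0, 0.6), screen_w=0.6, screen_h=0.34)
```

Feed the tracked head position in every frame and the frustum skews accordingly; when the head is centred, the off-axis terms (row 0/1, column 2) collapse to zero and you get a plain perspective matrix.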
As for 3D, I also remember seeing a TV at Toshiba HQ in 2013 or so that had 3D without glasses, and even worked for multiple people. No idea how.
I wrote my master thesis on designing gesture interfaces (of the Xbox Kinect type) back in 2012, here's my two cents:
First of all, gesture detection still isn't reliable enough for serious input. Missing a beat one in a hundred times is acceptable in gaming settings (and even then only in more "casual" environments like party games), but for serious input the input device needs to be practically 100% reliable. A keyboard press is. A mouse click is. Detecting whether your hand is an open palm or a fist? Not so much. Hence the peripherals we typically see for VR games, which help of course. At the same time they also somewhat defeat the purpose.
A second major issue is a lack of haptic feedback. There is no such thing as touch-typing in the air.
Why this is such a big problem needs a bit of explaining: a practical way to think of our ability to manipulate our environment (literally, the manus in manipulate referring to our hands) is to think of our arms as a pair of kinematic chains[0][1]. Essentially this is a chain of ever-finer "motors", going from coarse-grained to fine-grained precision: our shoulders, our elbows, our wrists, and finally the digits of our hands. The ingenuity of this chain is that it allows for extremely fine precision (the sub-millimeter precision of our fingertips) in a large spatial volume (the reach of our arms), and it does so by having each "link" in the chain perform a bit of "error-correction" for the lower resolution of the previous link.
What does this have to do with gesture interfaces? Well, in order for that kinematic chain to work, it needs a precise feedback system to perform said error-correction. We basically have three senses for this: our visual system (that is, seeing where we are putting our hands), our haptic sense (feeling which button we're pressing with our fingertips) and our "spatial sense". The problem with the latter sense is that it is relative: I sense the sub-millimeter location of my fingers relative to my wrist. I sense the millimeter-precision location of my wrist relative to my elbow. I sense the centimeter-precision location of my elbow relative to my shoulder. So if I'm waving my hands in the air without looking, the effective "precision" they have is about as crude as the crudest link in the chain: my shoulder. Of course this spatial sense can be improved with training, but you know what we typically call people who are really good at that? Professional-level dancers. The ceiling of mastering this skill is pretty high, and there's a reason it's basically a profession all by itself (plus a ton of other things obviously, don't want to sell dancers short here).
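To put made-up numbers on that chain: if each link's proprioceptive error is roughly independent, the errors add in quadrature, so the coarsest link dominates, and anchoring the wrist on a desk removes everything upstream of it. A sketch (the millimetre figures are illustrative, not measured data):

```python
import math

# Illustrative proprioceptive uncertainty per link, in mm, for each
# joint's position *relative to the previous link* (numbers are mine).
chain = {"shoulder": 30.0, "elbow": 10.0, "wrist": 3.0, "fingers": 0.5}

def total_uncertainty(link_errors):
    # Roughly independent errors along the chain add in quadrature.
    return math.sqrt(sum(e ** 2 for e in link_errors))

in_air = total_uncertainty(chain.values())            # whole chain floats free
wrist_rested = total_uncertainty([chain["fingers"]])  # wrist anchored on desk

print(f"hands in the air : ~{in_air:.1f} mm")   # dominated by the shoulder
print(f"wrist on the desk: ~{wrist_rested:.1f} mm")
```

With these (invented) numbers the free-floating total comes out barely better than the shoulder alone, while resting the wrist leaves only sub-millimeter fingertip error, which is the point about typing below.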
Gesture input will also never be as easy on the motor skills as typing: not only does a keyboard provide haptic feedback from the keys, the precision of my fingers is relative to my wrists resting on the desk, not to my shoulders.
Games somewhat get around this by rendering a visual avatar to give us feedback, but it's not perfect. On top of that, this feedback is limited by the resolution of the gesture detection, which is ludicrously low compared to the potential precision of our limbs. And if that wasn't enough, it also needs really low latency to fool our brains and truly "feel" like an extension of our senses.
So basically, the fidelity requirements are just brutally high.
And finally, there is only a limited set of use-cases. There are basically just two big ones: "touchless" interfaces (very niche) and pointing and manipulating in 3D space (less niche, with a clear advantage over keyboard or even mouse input, but again having brutally high fidelity requirements). Because of that, as cool as gesture interfaces are, the industry-wide drive to solve all the aforementioned issues just isn't quite as high as we'd like it to be.
Lenticular 3D divides a panel's resolution along one axis by the number of viewpoints, e.g. a 4K by 2K panel with 10 viewpoints along its width has an effective resolution of roughly 400 by 2K. Okay for demos, but not practical at the moment.
In real life, we are trying to estimate depth relative to us. In order to fool our brains we need both very low latency and a high frame-rate. That was one of the major hurdles to solve for VR as well, leading to John Carmack's famous complaint that he could ping across the Atlantic and back faster than he could send a pixel from his desktop to his screen[0][1].
Anyway, back to the video: basically, when comparing 3D movement relative to the camera our brains seem to be more "forgiving".
[0] https://twitter.com/id_aa_carmack/status/193480622533120001
The effect is fantastic.
It's based on [1] and runs entirely in the browser, although it takes a moment to create the depth map. It's more of a toy project at this point. But I was surprised when I saw that Google is now doing the same thing in Google Photos [2].
[1] https://github.com/FilippoAleotti/mobilePydnet
[2] https://www.theverge.com/2020/12/15/22176313/google-photos-2...
For instance, I can only imagine right now how this technique works; I'd rather not leave that to my very imperfect imagination.
I only skimmed it, but it looks like they track the midpoint of the eyes (having rejected pupil tracking) and the eye spacing (to estimate viewer distance), and they use a webcam to do it.
When I was working on such a project, I had no way of correctly guessing the distance between the screen and eyes.
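If the eye-spacing trick works the way I'd guess, it boils down to the pinhole camera relation distance = focal_length × eye_separation / pixel_spacing, assuming an average interpupillary distance. A sketch (all names and numbers are mine, illustrative only):

```python
def viewer_distance_m(eye_px, focal_px, ipd_m=0.063):
    """Estimate camera-to-viewer distance from the pixel spacing of the
    eyes, via the pinhole model: distance = f * IPD / pixel_spacing.

    eye_px:   measured distance between the eyes in the image, in pixels
    focal_px: camera focal length in pixels (from calibration, or
              approximated from the webcam's field of view)
    ipd_m:    assumed interpupillary distance; ~63 mm is a common
              average for adults
    """
    return focal_px * ipd_m / eye_px

# e.g. 600 px focal length, eyes 60 px apart -> ~0.63 m from the camera
d = viewer_distance_m(eye_px=60, focal_px=600)
print(f"estimated viewer distance: {d:.2f} m")
```

The obvious weakness is the assumed IPD: real values span roughly 54-74 mm, so the distance estimate inherits that ~±15% error, which may be why this was only an estimate rather than a calibration step.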