I initially made this to experiment with 'faking' depth of field in CSS (check out my penultimate tweet for the demo vid and inspiration from shuding_, link at the bottom of the site).
But last night I remembered that ThreeJS exists, so I rewrote it using react-three-fiber. This was my first time playing around with it, and I'm super impressed; it's incredibly ergonomic.
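The "faking depth of field in CSS" idea boils down to mapping each layer's distance from a focal plane to a CSS `blur()` filter. A minimal sketch of that mapping, assuming depths normalized to 0..1 (function and parameter names here are my own, not from the demo):

```typescript
// Map a layer's depth to a CSS filter string, blurring layers in
// proportion to their distance from the focal plane.
// Illustrative sketch only; names and values are not from the original demo.
function fakeDofFilter(
  layerDepth: number,  // layer's z position, e.g. 0 (front) .. 1 (back)
  focalDepth: number,  // depth that should stay sharp
  maxBlurPx = 8        // blur at the farthest distance from focus
): string {
  const distance = Math.abs(layerDepth - focalDepth);
  const blur = Math.min(distance * maxBlurPx, maxBlurPx);
  return `blur(${blur.toFixed(2)}px)`;
}

// A layer sitting on the focal plane gets no blur:
fakeDofFilter(0.5, 0.5); // "blur(0.00px)"
```

Assigning the result to each layer's `style.filter` gives the out-of-focus look without any 3D engine.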
Edit: not documented, but right-click drag to pan
> Edit: not documented, but right-click drag to pan
Confirming undocumented feature. Scratching my head why ctrl-left-click on macOS doesn't enable panning, too.
But, of course, you could also look at it in other projections; tilting it around in 3D space (as done here); applying fog to shadow "distant" windows; lighting the scene from a point source so as to make the windows cast real shadows on one another (with more light let through translucent areas!); etc. I would imagine that the (ideally HTML5-embeddable) viewer for this "3D screenshot" format would do all those things.
(I do hope someone does try creating such a “3D screenshot” format and viewer, as IMHO it would have a fairly-obvious use-case: reproducing static “look-around-able” snapshots of the already-depth-mapped AR window layouts in visionOS. Being able to tack arbitrary depth-maps onto windows from a 2D desktop OS would just be a bonus.)
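The "fog to shadow distant windows" idea above can be sketched without any rendering library: a standard linear fog factor (as in three.js's `THREE.Fog`) fades each window toward the fog color based on its depth. The functions and constants below are illustrative, not from any real viewer:

```typescript
// Linear fog: full visibility at `near`, fully fogged at `far`.
// Returns 0 (no fog) .. 1 (fully fogged). Illustrative sketch only.
function fogFactor(depth: number, near: number, far: number): number {
  const t = (depth - near) / (far - near);
  return Math.min(Math.max(t, 0), 1);
}

// Blend a window's grey level toward the fog grey by that factor.
function foggedShade(shade: number, fogShade: number, factor: number): number {
  return shade * (1 - factor) + fogShade * factor;
}

// A window at the near plane is untouched; a "distant" one fades out:
fogFactor(0, 0, 10);  // 0 — fully visible
fogFactor(10, 0, 10); // 1 — fully fogged
```

A real viewer would let the renderer do this per-pixel, but the depth-to-fade mapping is the whole trick.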
I think the Amazon Fire phone also tried something similar in real time, using several front-facing eye-tracking cameras plus the gyro and accelerometer to shift the phone UI and simulate a 3D view with parallax. The old mobile Safari tab view also used to shift the tabs based on the phone's orientation.
I would love to see a "globally illuminated" UI someday, even if it's impractical. Something like all those Windows renders the Microsoft Design team put out, but in real time. Totally impractical and a poor use of electricity, but it would be cool to have path traced, soft drop-shadows.
Apple patented a ton of stuff for this probably a decade ago. It seemed at some point they were going to start procedurally rendering aqua materials and the like using recipes for lighting that could all be dynamic.
Presuming some of it made it into visionOS.
It also compensates for the fact that portrait photos are best taken from a distance with a telephoto lens. Those are best because they capture the face people remember instead of the face they literally see up close. The compensation is needed because the same lens configuration yields a much shallower depth of field up close but a much deeper one at a distance.
I'm sure this has uses, but it's hard to argue it does from the fundamentals of photography.
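The shallower-up-close, deeper-at-distance claim follows from the standard thin-lens depth-of-field formulas (hyperfocal distance, then near/far limits of sharpness). A quick sketch using illustrative full-frame numbers (85 mm lens at f/1.8, 0.03 mm circle of confusion), which are my own assumptions rather than anything from the comment:

```typescript
// Standard depth-of-field approximation via the hyperfocal distance.
// f, coc, and all distances in millimetres; N is the f-number.
// Illustrative numbers only.
function depthOfField(f: number, N: number, s: number, coc = 0.03): number {
  const H = (f * f) / (N * coc) + f;            // hyperfocal distance
  const near = (s * (H - f)) / (H + s - 2 * f); // near limit of sharpness
  const far = (s * (H - f)) / (H - s);          // far limit (valid for s < H)
  return far - near;
}

// 85mm f/1.8: millimetres of DoF up close, centimetres further back.
depthOfField(85, 1.8, 1000); // ≈ 14 mm at 1 m
depthOfField(85, 1.8, 3000); // ≈ 131 mm at 3 m
```

Same lens, same aperture: backing up from 1 m to 3 m roughly decuples the depth of field, which is exactly the effect portrait photographers exploit.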
Things like the modern Link's Awakening would have me projectile vomiting if I tried to play it.
There should never be blur, ever.
But it looks like a great visual effect to use on a marketing site, especially to highlight a specific part of a screenshot to get across whatever you want to emphasize.
You think? I thought it was just a funny quip.
It's a joke, chill
Other threads have pointed out that it might be useful for a screenshot to show somebody where to navigate in a complicated UI, without cropping out the rest of the UI (and thus removing the navigational cues), but also without making the screenshot as hard to navigate as the UI itself.
I guess it could be useful for focusing on a particular part of the screenshot, if one could mark what the important part is.
EDIT: But now that I've looked at the demo...I am not sure what I would want this for.
_You_ know what your screens look like, so you might enjoy seeing them blurred and tilted. But _I_ don't know, that's the information that I would be trying to get.
And this is why you write your websites in HTML instead of JavaScript.
Technically, HTML + CSS + user interaction can be Turing-complete: https://github.com/brandondong/css-turing-machine