Because of that, I'm also sure that eye tracking will go mainstream in other areas once the Vision Pro is released and everyone else catches on to what a great input method it is.
The issue is doubly close to my heart because my father has ALS and is nearly at the point where eye tracking will be his only means of communicating effectively with the world. While existing Tobii systems work well enough, typing with your eyes is still exhausting.
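To put rough numbers on "exhausting": each character costs a saccade over to the key plus a deliberate dwell to confirm it. A back-of-the-envelope sketch, with the timings as pure assumptions rather than measured Tobii figures:

```python
# Why dwell-based eye typing is slow and tiring: every character is a saccade
# to the key plus a sustained fixation (dwell) to "press" it. Both timings
# below are assumed for illustration.
DWELL_S = 0.8      # assumed fixation time required to confirm a key
SACCADE_S = 0.4    # assumed travel/search time between keys

seconds_per_char = DWELL_S + SACCADE_S
wpm = 60 / (seconds_per_char * 5)   # standard 5-chars-per-word convention
print(f"~{wpm:.0f} WPM")            # ~10 WPM of continuous, deliberate fixation
```

A fraction of casual keyboard speed, and unlike fingers on keys, there's no way to rest your eyes mid-sentence.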
Ultimately I don't think a platform like the Vision Pro is suitable for ALS patients, especially late-stage. They cannot support the weight of the headset, and/or fatigue will set in rapidly. Many (including my father) also require a ventilator, along with a mask that can seal well enough to maintain the positive pressure needed to inflate their lungs. Unless the form factor for HMDs shrinks significantly, it will likely interfere with the respirator's efficacy.
The supposed security of black-boxing the raw eye data is illusory and functionally just marketing: apps still learn where you were looking every time you confirm a selection, so attention data leaks out through ordinary interaction anyway.
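As a sketch of that leak (the handler name and event shape are hypothetical, not a real visionOS API):

```python
# Even if raw eye images never leave the secure enclave, a gaze-driven OS
# delivers every confirmed selection at the point the user was looking at,
# so an app that simply logs selection coordinates reconstructs a coarse
# gaze trail over time.
from typing import List, Tuple

gaze_trail: List[Tuple[float, float]] = []

def on_item_selected(x: float, y: float) -> None:
    """App-side handler invoked when the user gaze-selects a UI element."""
    gaze_trail.append((x, y))  # one attention sample per interaction
```

One point per pinch is far coarser than a raw gaze stream, but over a session it still profiles what held your attention.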
Is the step change in eye-tracking performance/accuracy on Apple's headset purely a software/algorithm change? Or is it actually using a new principle/apparatus for eye tracking?
- Multiple cameras per eye, mounted very close to it
- The screen is fixed relative to the cameras on every device, so there's no worry about that half of the equation drifting out of calibration or differing from customer to customer (see the sketch after this list)
- The OS is built around eye tracking, which means no action ends up unnaturally hard to perform with it
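To make the calibration point concrete, here's a minimal sketch of per-user gaze calibration as a fitted pupil-to-screen mapping. The quadratic model, function names, and data layout are my own assumptions for illustration; I don't know anything about Apple's actual pipeline.

```python
import numpy as np

def features(pupil_xy: np.ndarray) -> np.ndarray:
    """Quadratic feature expansion of pupil centers: [1, x, y, xy, x^2, y^2]."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def calibrate(pupil_xy: np.ndarray, screen_xy: np.ndarray) -> np.ndarray:
    """Least-squares fit of the pupil->screen mapping from calibration targets."""
    coeffs, *_ = np.linalg.lstsq(features(pupil_xy), screen_xy, rcond=None)
    return coeffs  # shape (6, 2): one column of coefficients per screen axis

def gaze_to_screen(coeffs: np.ndarray, pupil_xy: np.ndarray) -> np.ndarray:
    """Map new pupil observations to estimated on-screen gaze points."""
    return features(pupil_xy) @ coeffs

# Calibration: the user fixates a grid of known on-screen targets while the
# tracker records pupil centers (toy data here).
rng = np.random.default_rng(0)
pupils = rng.random((9, 2))          # observed pupil centers (camera coords)
targets = rng.random((9, 2)) * 1000  # known target positions (screen px)
coeffs = calibrate(pupils, targets)
```

The fitted coefficients are only valid while the camera/screen geometry holds. On a desktop tracker, any head or monitor movement perturbs that geometry and degrades the fit; in a headset, the screen is rigidly fixed to the cameras, so the mapping survives once calibrated.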
Also, I'm not an ALS expert, but if the eyes are the only muscles a patient can still control, the lack of head/neck control probably breaks some assumptions about how the Vision Pro works (just a guess though).