I wish them the best of luck but it seems to me like they're really overselling themselves.
They should work on a better testimonial video.
It is strange that they haven't done any real demos (that I've come across), but an awesome idea nonetheless. Hopefully they release something more substantive soon.
Glass will be held back because most people don't want to look like geeks. Meta's next design looks like it's from an '80s sci-fi movie. I'm not trying to be an asshole, but there is no way I could take someone seriously wearing those things.
Don't think about Google Glass / Meta as they are now, think about what happens when they fit on a contact lens...
These devices are going to keep getting smaller and in a few generations will be fairly indistinguishable from a pair of glasses (which many people wear now).
1 - Seriously, look at this guy: http://vni.s3.amazonaws.com/120802142609275.jpg
Like you, I'd never really consider wearing these things in public (or more specifically, in a casual setting when socialising), but I'd definitely consider using them at work/home if I could make them do neat time-saving things that other I/O devices couldn't.
Ultimately, I'm happy to be an early adopter, but for me to use them in the public context shown in the video, they'd have to be somehow integrated into something much, much smaller, e.g. contact lenses (but that is seriously way off).
Between this, Google Glass and Oculus, we will hopefully see some serious progress in the VR/AR industries.
I'd even go as far as saying the current efforts might be a bad thing for that possibility (if it's even possible), since we might just end up with a patent-encumbered wasteland by the time the technology's there.
The 2 things that Glass will have to overcome:
1) The not-so-fashionable look. I'm sure this'll be corrected in the future.
2) The way people feel about a device that may or may not be recording them at any given moment. Let's not even mention a red light glowing during recording, because we know that'll be hacked out. Google Glass will be able to record you without you knowing, period. People will simply have to accept that, or the product will fail or get banned in so many places that it'll almost not be worth owning at all unless you're a hardcore geek. Then, of course, someone does some super-slick mod where Glass looks like any ordinary pair of glasses; mass paranoia breaks out, and either people get over it or all glasses are banned. ;)
Mind you, this paranoia will happen despite the fact that we've had wearable hidden hi-res cameras smaller-ish than a penny for over a decade already... http://www.brickhousesecurity.com/product/b-w+indoor+high+re...
Which makes me wish there was an anti-Glass product out there. Something that makes you disappear, or at least masked as a see-through hologram. It's perfectly acceptable if anti-Glass does not hide your feet for regulatory reasons.
People who are not comfortable being recorded might welcome invisibility as protection.
ps: It's funny how the public reacts to obvious recording devices when they're already surrounded by them.
It's pretty difficult to go out in public without being recorded. Security cameras are ubiquitous.
Gribetz and his band of less than 25 employees are ensconced in the Los Altos mansion, filled with mattresses, cables, and aluminum bins of takeout food ... "We are hacking 24-7," Gribetz said, "and making less than McDonald's wages."
Anyone (from Meta maybe?) have any details on the SDK? I see "write code in Unity3D on a Windows PC" from their Kickstarter, but curious if that's the latest word...
We make the real world (surfaces/objects/hands) appear as 3D objects inside Unity. We do the heavy lifting with computer vision and math so you can code the game as you would any other--the cool bit is the 3D objects correspond to stuff in the real world. Our number one goal is to be the easiest environment to dev on.
Depending on how the Unity integration is implemented, it probably wouldn't be very hard to add support for other code bases, and if you want a lot of developers making applications for Meta, it seems like offering more options would help.
Most of what I know about computer vision comes from deep learning approaches, but tracking a white object doesn't seem like it should be too difficult. Is tracking a large white object actually "one of the hardest computer vision challenges", or is this just a garbage quote?
However, tracking a white object such as a piece of paper sitting on a contrasting desk is relatively easy. Especially if your algorithm is designed to handle such a case. You have the easily detectable corners and edges of the paper, and from that you can infer its transformation. You can also detect its soft deformation (such as bending or crumpling the paper) if your system is assuming a piece of paper as the model.
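As a toy illustration of how tractable that case is (a hypothetical NumPy sketch, not anything from Meta's SDK): threshold the bright sheet against the darker desk and take its corners from the bounding box.

```python
import numpy as np

def paper_corners(gray, thresh=128):
    """Find a bright, axis-aligned 'sheet of paper' on a darker desk by
    thresholding, then read its four corners off the bounding box.
    (A real system would fit a quadrilateral to handle perspective and
    rotation; this is the axis-aligned toy version.)"""
    ys, xs = np.nonzero(gray > thresh)
    if len(ys) == 0:
        return None
    top, bottom = int(ys.min()), int(ys.max())
    left, right = int(xs.min()), int(xs.max())
    return [(top, left), (top, right), (bottom, left), (bottom, right)]

# Dark desk (value 30) with a bright rectangle at rows 2-5, cols 3-8.
desk = np.full((10, 12), 30, dtype=np.uint8)
desk[2:6, 3:9] = 230
print(paper_corners(desk))  # -> [(2, 3), (2, 8), (5, 3), (5, 8)]
```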
The way some tracking works is to use a corner detector to find "interesting" features. A naive tracking algorithm will then examine the spatial neighbourhood of each feature in the next frame in order to find out where it has moved to.
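That naive neighbourhood search can be sketched in a few lines of NumPy (a hypothetical toy, not production tracking code): compare the patch around the feature against every nearby patch in the next frame and keep the best match.

```python
import numpy as np

def track_feature(prev_frame, next_frame, pos, patch=3, search=5):
    """Naive tracker: take the pixel patch around `pos` in prev_frame and
    find the lowest sum-of-squared-differences match for it inside a small
    search window in next_frame."""
    y, x = pos
    tmpl = prev_frame[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best_ssd, best_pos = np.inf, pos
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if ny < patch or nx < patch:
                continue  # window would fall off the top/left edge
            cand = next_frame[ny - patch:ny + patch + 1, nx - patch:nx + patch + 1]
            if cand.shape != tmpl.shape:
                continue  # window fell off the bottom/right edge
            ssd = np.sum((cand.astype(float) - tmpl) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (ny, nx)
    return best_pos

# Toy demo: a bright "corner" moves 2 pixels to the right between frames.
f0 = np.zeros((20, 20)); f0[10, 10] = 255
f1 = np.zeros((20, 20)); f1[10, 12] = 255
print(track_feature(f0, f1, (10, 10)))  # -> (10, 12)
```

The brute-force window search is why this approach breaks down for large motions or repetitive textures, which is what motivates the invariant feature representations mentioned next.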
There are better feature representations (such as SIFT) which define a "feature" in an image in such a way as to be scale and rotation invariant (you can match the feature against scaled and transformed versions of itself). There are also much better ways to track across frames of video data.
Given that Meta has infrared and RGB stereo cameras it has a lot more information to work with. I hope they can make it work well under all situations, but I am skeptical.
I can see how tracking the scale and orientation of field-of-view-filling, single-color objects would be difficult/impossible.
It doesn't seem like these worst-case scenarios would come up much in real-world use. It's fairly rare to encounter situations where one's entire field of view is filled with one (featureless) color. I would imagine that a wide field of view for the cameras would help greatly with this problem.
A way to get around that is to use an infrared setup like the Kinect's to project a pattern onto the object, but I'm pretty sure that wouldn't work if both the projector and the object are moving.
http://www.youtube.com/watch?v=CSBDY0RuhS4 http://www.youtube.com/watch?v=Sw4RvwhQ73E
Since you could assume that the projector is standing still and everything else is moving, the same should be true for the inverse and all points in between.
But all in all, I wouldn't say it is. In undergrad the final project of my computer vision class was to track a soccer ball over video frames. White circular object against mostly green backdrop- fairly straightforward.
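A minimal sketch of that kind of tracker, assuming a simple colour threshold (hypothetical NumPy code, not the actual class project): white pixels are bright in all three channels, the green field is not, so the ball's position falls out of the mask's centroid.

```python
import numpy as np

def find_ball(frame):
    """Locate a white ball against a mostly-green backdrop: 'white' pixels
    are bright in all three channels, the green field is not. Returns the
    centroid (row, col) of the white region, or None if nothing matches."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    mask = (r > 200) & (g > 200) & (b > 200)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return (int(ys.mean()), int(xs.mean()))

# Toy frame: green field with a 3x3 white "ball" centred at (6, 6).
frame = np.zeros((12, 12, 3), dtype=np.uint8)
frame[..., 1] = 128          # green background
frame[5:8, 5:8] = 255        # white ball
print(find_ball(frame))      # -> (6, 6)
```

Running this per frame and connecting the centroids gives a trajectory, which is roughly what the assignment amounts to.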
Looking forward to seeing their future.
Some of the people here saying Meta Glasses look geeky are missing the point. They can be used at work in myriad different ways:
* previewing 3D prints, with Tony Stark-type visualizations more generally
* collaborative games in offices around the world after work, where you can do things like fire projectiles or see the same objects or stats only if you have the glasses on
* metainformation overlaid for visitors to museums etc.
The top somewhat reminds me of the Kinect, or are you guys using stereo vision? If the latter, does it work outside?
I think I'm way too excited about this, and having to wait for a teardown to find out what tech powers this beast is making me giddy like a five-year-old in a candy store.
When looking at new technologies there are always two questions: is it worth doing and can it be done. The answer to the former is obvious here. I don't know nearly enough about the state of hardware to make a call about the latter, but kudos to the team for unabashedly attacking such a huge problem and trying to make the future happen faster.
Wouldn't it be great if Google Glass had an app that showed me information about the flora and fauna I'm seeing? Oh wait, it can't yet.
I think Glass could do this, but not as an overlay. I would appreciate something like a small beamer a lot more, where you can project a UI onto a suitable surface (holograms seem to be taking longer ;) and control it with either a pointing device or gesture control.
http://www.vuzix.com/augmented-reality/
PS: And yes, I understand the advertising potential after GG_AR. Maybe Vuzix should do the same, like Sony did against Microsoft at E3 (PS4 vs. Xbox One).
Singularity shades.
Here, have another: Overlay Optics/Occulars.
I'd love an AR display, but I'm skeptical of one that forces me to take my focus off of anything else.
F'k you ageists... Notice how they need the old guy who invented the AR concept (Steve Feiner) to give them any credibility.
I watched the demo on Kickstarter. I couldn't put my finger on it, but something seemed off about the object occlusion. Was that FX or real tech?
I'd love to be wrong, but this is usually the case with entirely CGI promo videos...
And I'm not even a gamer anymore.
Where are our flying cars? Who cares. We have Meta glasses.