Image segmentation wouldn't be used on the Metaverse side of AR. By definition, the Metaverse is a "layer" on top of either a VR or AR base. Every object superimposed in your view as part of the "Metaverse" is generated. It doesn't need to be segmented; it already is. It doesn't need to be described; its description is already part of its properties.
The only use for segmentation is identifying things in the real world. That has nothing to do with the Metaverse and everything to do, as I said, "with AR/VR in general". In fact, all of the examples on the page showed segmentation used in standard AR scenarios, things like helping you execute a recipe.
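To make the distinction concrete, here is a toy sketch (all names are hypothetical, chosen only for illustration). Many renderers keep a per-pixel object-ID buffer alongside the color buffer, so for a generated scene, a per-pixel "segmentation" is a free lookup. A camera frame, by contrast, is raw RGB with no identities attached, which is precisely what a segmentation model has to infer.

```python
import numpy as np

H, W = 4, 6

# Generated (Metaverse-style) scene: the engine already knows which object
# owns each pixel, stored here as an object-ID buffer.
object_ids = np.zeros((H, W), dtype=np.int32)   # 0 = background
object_ids[1:3, 1:4] = 7                        # a virtual "teapot", id 7
labels = {0: "background", 7: "virtual_teapot"}

# "Segmenting" the rendered scene is just a lookup, no inference needed:
mask = object_ids == 7
print(labels[7], "covers", int(mask.sum()), "pixels")  # prints: virtual_teapot covers 6 pixels

# A real camera frame is only raw RGB values; recovering a mask like the
# one above is exactly the job a segmentation model exists to do.
camera_frame = np.random.randint(0, 256, size=(H, W, 3), dtype=np.uint8)
assert camera_frame.ndim == 3  # no object IDs anywhere in this data
```

The point of the contrast: segmentation is only meaningful on the camera side, where identity information is absent; on the generated side it is already present by construction.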
How many times do I have to say the same thing in different ways? You two are describing basic features of AR and VR systems and pretending it's the next big innovation that only the Metaverse could possibly have come up with.
Did you even bother to read my comment? The other guy plainly didn't. Apparently it's easier to defend this strawman the two of you keep bringing up, which has nothing to do with what I said, than to actually read my comment and respond to it properly.