It's weird. I've personally seen cases where the highlighted instant answer is obviously wrong, and the full source actually claims the opposite of what the excerpt says. Examples like this circulate around the web fairly often, and anyone who has asked ChatGPT or a similar system tricky questions knows how readily these models invent answers when they don't know the real one.
So why do companies like Microsoft and Google keep pushing in this direction? Why are they making results more and more opaque? You'd hope they would care enough to be good stewards of the power their information monopoly grants them; barring that, you'd hope they'd at least recognize that people want results they can verify, not just arbitrary answers.
Or maybe they're betting that people don't care about verifying results, that most users just want an answer, whether or not it's the right one. That seems like a dangerous gamble.