LLMs don't retrieve anything unless there's infrastructure in place that lets them do it. That infrastructure is called Retrieval-Augmented Generation (RAG), and it's still new enough that it hasn't been integrated into every workflow. There was a lot of marketing copy early on claiming LLMs "read the internet and find you results," but that's weasel wording, and until remarkably recently it was complete BS.
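To make the point concrete, here's a toy sketch of the RAG loop: retrieve relevant text first, then stuff it into the prompt before the model ever sees the question. Real systems use an embedding model plus a vector database; this stand-alone version fakes retrieval with bag-of-words cosine similarity, and the document list is made up for illustration.

```python
# Minimal RAG sketch: the model "retrieves" only because we fetch
# documents ourselves and prepend them to the prompt.
from collections import Counter
import math

DOCS = [
    "RAG pipelines fetch documents before the model answers.",
    "Vector databases store embeddings for similarity search.",
    "Bread recipes usually call for flour, water, and yeast.",
]

def vectorize(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; a vector DB does
    # this at scale with approximate nearest-neighbor search.
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The retrieved text goes into the prompt; the LLM itself never
    # reaches out to the internet.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do vector databases work?"))
```

Without that retrieval step, the model can only answer from whatever was baked in at training time.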
Bard has had access to Google caches since launch. I asked it about my 8-follower Twitter account and it quoted my first Tweet and the registration month correctly. It also hallucinated my gender and described me as "a popular figure in the French-speaking Twitter community" (because my name is a French word, I guess).
Thanks, I know. I've built RAG systems with vector databases and the related infra. And yes, retrieval is exactly what it claims to do: the interface even says "results verified by Google".