I find it surprising how many non-technical friends and family constantly anthropomorphize LLMs, regularly bringing up instances where they "asked AI" about this or that and it "told them" whatever. I'm tired of trying to explain that these models are merely statistical sequence generators: they don't have a mind, they're occasionally completely out to lunch, and ultimately they cannot be trusted. It's usually a losing battle. The sheer bullshit that "AI tells them" is often astonishing or ridiculous, yet much of the time it's given undue weight and trusted anyway. The future is bleak.