In a way it's worse: Even the "talking to" part is an illusion, and unfortunately a lot of technical people have trouble remembering it too.
In truth, the LLM is an idiot-savant that dreams up "fitting" continuations of a given document. Some humans have prepared a document in the form of a theater-play or a turn-based chat transcript, with a pre-written character that is often described as a helpful robot. Then the humans launch some code that "acts out" any text that looks like it came from that fictional character, and inserts whatever the real human user types as dialogue for the document's human character.
There's zero reason to believe that the LLM is "recognizing itself" in the story, or that it is choosing to self-insert as one of the characters. It's not having a conversation. It's not interacting with the world. It's just coded to Make Document Bigger Somehow.
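To make that concrete, here's a rough sketch of the wrapper code, with a stub `complete()` standing in for the actual model (a real one returns a statistically plausible continuation; the stub just returns a canned line, but the wrapper logic is the same either way):

```python
def complete(document: str) -> str:
    """Stub for the LLM: given a document, dream up a plausible continuation.
    A real model would return whatever text 'fits' the document so far."""
    return "Assistant: Sure, I'd be happy to help with that!\n"

def chat_turn(transcript: str, user_input: str) -> str:
    # Insert the human's text as dialogue for the document's human character...
    transcript += f"User: {user_input}\n"
    # ...then ask the model to Make Document Bigger Somehow.
    return transcript + complete(transcript)

# The pre-written framing that casts one character as a helpful robot:
transcript = "The following is a chat with a helpful AI assistant.\n"
transcript = chat_turn(transcript, "Can you fix my code?")
print(transcript)
```

The "chat" experience is entirely in this wrapper: it's the loop, not the model, that decides which lines get shown as the robot "speaking."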
> they think that it's got human capabilities too
Yeah, we easily confuse the character with the author. If I write an obviously-dumb algorithm which slaps together a story, it's still a dumb algorithm no matter how smart the robot in the story is.