No: an LLM that doesn't confabulate will certainly get things wrong in some of the same ways that honest humans do - being misinformed, confusing similar things, or suffering "brain damage" from bad programming or hardware errors. But confabulations like the one we're discussing occur in humans only when they're being sociopathically dishonest. A lawyer who makes up a court case is not a "human being wrong"; it's a human lying, deliberately trying to deceive. When an LLM does it, it's because the model is not capable of understanding that court cases are real events that actually happened.
Cursor's AI agent simply autocompleted a plausible-looking TOU explanation, presumably drawing on the thousands of such agreements in its training data. It is not actually capable of recognizing that it made a mistake, though I'm sure that if you pointed the error out directly it would reply, "You're right, I made a mistake." If a human did this - inventing TOU terms without bothering to check the actual agreement - the only explanation would be that they were unbelievably cynical and lazy.
It is very depressing that ChatGPT has been out for nearly three years and we're still having this discussion.