I was using an oversimplified example to illustrate. In practice, it shows up more in long contexts, in confident statements the human makes about that context. If the human is wrong, there's a good chance the AI will cheerfully agree and then be wrong too.
I was reminded of this just this morning when using Claude Code (which I love) while being confidently incorrect about a feature of my app. Claude proposed a plan, and I said "great, but don't build part 3, just use the existing ModuletExist". Claude tied itself in knots because it believed me.
(The module does exist, but in another project I'm working on.)