The real issue was that something was causing the container with the gradient to repaint on every scroll.
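For illustration, a common mitigation for this kind of repaint-on-scroll problem is to promote the element to its own compositing layer so the browser composites it during scroll instead of repainting it every frame. This is just a hedged sketch of that general technique, not necessarily the fix used here, and the `.gradient-container` selector is a placeholder:

```ts
// Sketch: hint that the gradient container should get its own compositing layer,
// so scrolling composites the existing layer rather than repainting it each frame.
// ".gradient-container" is an assumed placeholder selector, not from the original post.
const container = document.querySelector<HTMLElement>(".gradient-container");
if (container) {
  // "will-change: transform" asks the browser to prepare a separate layer for this element.
  container.style.willChange = "transform";
}
```

Whether that helps depends on what was actually triggering the repaints, so profiling in the browser's rendering tools first is the safer move.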
As for why people mind an LLM being wrong more than a human being wrong, I think it's twofold:
1. LLMs have a nasty penchant for sounding overly confident and "bullshitting" their way to an answer in a way that most humans don't. Where we'd say "I'm not sure," an LLM will say "It's obviously this."
2. This is speculation, but at least when a human is wrong you can say "hey you're wrong because of [fact]," and they'll usually learn from that. We can't do that with an LLM because they don't learn (in the way humans do), and in this situation they're a degree removed from the conversation anyway.