LLMs will tell you 1 or 2 lies for every 20 facts. It's a hard way to learn. They can't even get their URLs right...
That was my experience growing up in school too, except there you got punished one way or another for speaking up and trying to correct the teacher. If I speak up with the LLM, it either explains why what it said is true, or it corrects itself, zero emotions involved.
> They can't even get their URLs right...
Famously never happens with humans.
If you are in class and you incorrectly argue that there is a mistake in an explanation of derivatives or physics, when you are the one in error, your teacher will hopefully not say: "Oh, I am sorry, you are absolutely correct. Thank you for your advice."
- Confident synthesis of incompatible sources: LLM: “Einstein won the 1921 Nobel Prize for his theory of relativity, which he presented at the 1915 Solvay Conference.”
Or
- Fabricated but plausible citations: LLM: “According to Smith et al., 2022, Nature Neuroscience, dolphins recognise themselves in mirrors.” There is no such paper...the model invents both the authors and the journal reference.
And this is the danger of coding with LLMs....
Me: “explain why radioactive half-life changes with temperature”
ChatGPT 4o: “Short answer: It doesn’t—at least not significantly. Radioactive half-life is (almost always) temperature-independent”
…and then it goes on to give a few edge cases where there’s a tiny effect.