We don't yet have the luxury of several thousand years of work trying to get LLMs to be less fallible.
Not fundamentally; only until they're compelled to learn from their mistakes. The current crop of AI understands neither compulsion nor learning.