The more problematic issue is correctness: how can an LLM differentiate between answers that sound plausible, answers that are factually true, and questions where it should answer "I don't know"?
The issue might not be resolvable at all. LLMs are already decent at solving unseen problems in domains that are well described and where the description language fits the technology. But there are other domains where they are catastrophically wrong. For example, I had students come to me with an electronics proposal where the LLM presented the relationship between cable gauge, resistance and heat exactly backwards from what is true. Had the students followed that advice, they would likely have burned down the building. Everything sounded plausible and could have come straight from an electronics textbook, yet the mathematical relation was carried to the wrong conclusion. This isn't a matter of character; it is a matter of treating mathematical language the same as poetry.
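For reference, the actual relationship is easy to check numerically. The sketch below is my own illustration (not from the students' proposal), assuming standard copper resistivity and the usual AWG diameter formula: thinner wire (higher gauge number) has a smaller cross-section, hence higher resistance, hence more heat at the same current.

```python
import math

RHO_COPPER = 1.68e-8  # resistivity of copper in ohm-meters (standard value)

def awg_diameter_m(awg: int) -> float:
    """Diameter of an AWG wire in meters (standard AWG formula)."""
    return 0.127e-3 * 92 ** ((36 - awg) / 39)

def resistance_per_meter(awg: int) -> float:
    """Resistance per meter of copper wire: R = rho * L / A."""
    area = math.pi * awg_diameter_m(awg) ** 2 / 4
    return RHO_COPPER / area

def heat_per_meter(awg: int, current_a: float) -> float:
    """Dissipated power per meter of wire: P = I^2 * R."""
    return current_a ** 2 * resistance_per_meter(awg)

# At 10 A, the thinner AWG 20 wire dissipates roughly ten times more
# heat per meter than the thicker AWG 10 wire -- the opposite of what
# the LLM's plausible-sounding answer implied.
print(heat_per_meter(10, 10.0))  # ~0.32 W/m
print(heat_per_meter(20, 10.0))  # ~3.2 W/m
```

The numbers themselves matter less than the direction of the relationship, which is exactly what the LLM inverted.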