Because LLMs are not competent professionals to whom you might outsource tasks in your life. LLMs are statistical engines that make up answers all the time, even when the LLM “knows” the correct answer (i.e., has the correct answer hidden away in its weights).
I don’t know about you, but I’m able to validate that something is true much more quickly and efficiently when it’s a subject I know well.