>On any topic that I understand well, LLM output is garbage: it requires more energy to fix it than to solve the original problem to begin with.
Is that generally because the LLM wasn't trained on the relevant data, and therefore has no knowledge of it, or because it can't reason well enough?