LLMs are good at one thing, which happens to be exactly what they were designed to be: word probability generators. If you constrain your usage around that, they are great tools. But people who think they can reason or know some kind of truth are deluding themselves
It's obvious from the mistakes they make that they are not reasoning but producing the most probable answer according to their training data. That's still very impressive, because the dataset is enormous by human standards, but there is no reasoning
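To make "word probability generator" concrete, here is a minimal sketch of next-token sampling. The probability table is completely made up for illustration; a real model computes these distributions from billions of parameters, but the output step is the same idea: pick the next token by probability, not by reasoning.

```python
import random

# Hypothetical next-token distribution for one context string.
# These numbers are invented for illustration, not from any real model.
next_token_probs = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.3, "moon": 0.1},
}

def sample_next_token(context, rng=None):
    # Choose the next token by drawing from the probability distribution
    # the "model" assigns to this context. No world knowledge, no logic:
    # just weighted random selection.
    rng = rng or random.Random()
    dist = next_token_probs[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token("the cat sat on the"))
```

Run it a few times and you get "mat" most often, occasionally "floor" or "moon": plausible-sounding output with no understanding behind it.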