By your definition of reasoning, which requires certainty of arriving at the right answer, no human actually reasons. Probably less than 1% of published mathematics meets that definition.
> Important problems are those which require correct responses.
There are many important problems where formal reasoning is not possible, yet a decision is required, and both humans and LLMs can provide answers: "Should I accept this proposed business deal / should I declare war / what diagnostic test should I order?" We would like correct responses to these problems, but it is not possible, even in principle, to guarantee correctness, so we fall back on heuristics and approximate reasoning. Is an LLM "unreliable" or "dangerous" on such problems? Maybe, and perhaps more so than a human, but maybe not; it depends on the case. To keep the point of the thread in focus: an LLM should probably not try to solve such problems by writing code.