I'd propose that your claim that LLMs don't understand maths is very similar to the claim that Newton didn't understand the Laws of Motion.
Yes, Newton's laws are wrong, but they're also practically correct for 99.999% of applications. If correctness is viewed as a binary, Newton is 100% wrong; viewed as a scalar, Newton is basically right.
Neural networks are inherently bad at finding exact rules, but they're excellent at approximating them to an accuracy that is acceptably good. This is the bit people miss when they say LLMs can't do maths.
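As a rough illustration of what I mean (a minimal sketch, assuming PyTorch; the network size and training setup are arbitrary choices of mine, not anything specific to LLMs): a tiny network trained on addition never recovers the exact symbolic rule, but it lands acceptably close on inputs like the ones it saw.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data: random pairs (a, b) in [0, 100) with target a + b.
x = torch.rand(10_000, 2) * 100
y = x.sum(dim=1, keepdim=True)  # exact sums

# A deliberately small network -- it can only ever approximate the rule.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2_000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# The learned function is close on in-distribution inputs, but it is an
# approximation: near 55.8 here, never symbolically exactly 55.8.
test = torch.tensor([[23.4, 32.4]])
print(model(test).item())
```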
When you claim they don't understand the rules of maths, I agree that they don't understand the explicit rules, but with the caveat that they've probably learned something that lets them approximate those rules "well enough".
This is why, if you ask ChatGPT a question like 23435234 + 3243423, it's not going to say -33.1. It might not give the right answer, but it will almost always give you something close and very plausible. So while it might not understand the exact rules, it basically understands what happens when you add two numbers, and 99% of the time it will give you an answer that is basically correct.
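To put a number on "close": the true sum is 26678657, and a hypothetical off-by-a-few-hundred answer (the 26678000 below is made up by me for illustration, not a real ChatGPT output) is wrong as a string of digits but nearly perfect as a quantity.

```python
true_sum = 23435234 + 3243423  # = 26678657, the exact answer
llm_answer = 26678000          # hypothetical plausible-but-wrong output
rel_error = abs(llm_answer - true_sum) / true_sum
print(f"{rel_error:.4%}")      # ~0.0025% -- wrong, but barely
```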
The larger point I was trying to make here is that we humans are kinda biased when it comes to maths: we care about character-level precision, and I think that's the bias your reasoning rests on. We believe precision is extremely important in the context of maths, unlike other textual content. But an LLM isn't operating with that bias. It's just trying to approximate maths in a way that is correct enough, the same way it tries to approximate the likely next character (or, more correctly, token) of any other text.
I don't think approximations are 100% wrong, and perhaps the fact that we humans are bothered by LLMs giving answers to maths questions that are 0.1% wrong actually says more about our values and how we view maths than it says about an LLM's mathematical abilities.