Well, I admit I used to hold a similar opinion, but watching this explosion unfold over the past few months has convinced me that it can't possibly be right, at least not to any degree that objectively matters.
Perhaps language is the wrong term to use, since it's not what LLMs are really about. They're about text. There are very few things that cannot be expressed as text, even if only in unconventional ways like base64. Being opaque to humans doesn't mean that, with enough data, a neural net can't be taught to "see" images or "hear" sound files represented that way. If the original assertion is true, there must be some kind of universal barrier around skills that cannot be expressed in text. That sounds completely crazy to me, since we humans are also likely just organic data that could be expressed as text with some encoding. The main problem, and the extremely hard part, is interfacing with it in a way that's actually useful.
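A minimal sketch of that point, assuming some local binary file (the path is just a placeholder): any blob round-trips through plain text without losing information.

```python
import base64

# Any binary blob -- an image, an audio file -- can be carried as text.
# "photo.png" is a hypothetical path, purely for illustration.
with open("photo.png", "rb") as f:
    raw = f.read()

text = base64.b64encode(raw).decode("ascii")  # opaque to humans, but still text
restored = base64.b64decode(text)

assert restored == raw  # nothing was lost in the text representation
```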
Another thing to consider is that with a sufficiently formalized language (e.g. a programming language) one can explain things far more exactly than in any natural language, with its cultural specifics and inferred nonsense. That's probably why LLMs designed first and foremost as coding models usually outperform those that aren't, even on arbitrary, unrelated problems.
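To make the ambiguity point concrete, here's a toy illustration (both the English instruction and the code are mine, purely hypothetical):

```python
# English: "take every other number from the list" -- ambiguous:
# starting from the first element, or from the second?
nums = [10, 20, 30, 40, 50]

# The formal version forces the speaker to commit to one exact meaning:
from_first = nums[0::2]   # [10, 30, 50]
from_second = nums[1::2]  # [20, 40]
```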
> Note that humans are animals too, btw. And conversely, I would consider nonverbal people as humans as well.
Humans are animals in the biological sense, yes. But very much not in the societal and skill-transferring sense.