we understand everything a transformer does at a computational/mechanistic level. you could print out an enormous book of its weights/biases, and somebody could sit down with pen and paper (and near-infinite time/patience) and arrive at the exact same output that any of these models produces. the transformer is just a math problem.
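to make that concrete, here's a minimal sketch of one attention step as nothing but arithmetic. the numbers are made up (not from any real model), and it's a toy single-query case, but every operation is something you could carry out by hand:

```python
import math

# a toy single-query attention step: dot products, a softmax, a weighted sum.
# the "weights" here are made-up illustrative numbers, not from a real model.

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    # dot-product score between the query and each key
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    # weighted sum of the value vectors
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]

out = attention(query, keys, values)
# run it twice with the same inputs and you get the identical answer --
# there is no hidden state, just deterministic arithmetic on fixed numbers.
```

the point isn't the specific numbers; it's that the whole forward pass is a fixed, deterministic function of the inputs and the printed-out weights.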
but the counterargument you're getting is: "if you knew the position/momentum of every atom in the universe and applied physical laws to them, you could claim that everything is 'just a math problem'". and... yeah, I guess that's right. if everything is just math, then "it's just math" can't be what disqualifies something from mattering, so the line that makes animal intelligence special or worthy of care has to be drawn somewhere else.
I don't know where that line is, but I think it's pretty clear that LLMs fall on the "this is just math, so don't worry about it" side of it.