I'd say this is indeed the case, in view of the lessons we have learnt from LLMs along the way. Most importantly, Wittgenstein argued that in order to model language, you first need to model arbitrary discourses, and this turned out to be true: in the symbolic-vs-probabilistic debate, LLMs have been shown to perform symbolic computation from learned representations, whilst the inverse has not been shown _ever_. A layman's way of formulating this would be along the lines of "word definitions do not matter; application of words alone is what matters." IMHO, the language-game framework is far more valuable, in terms of intuition, than anything outside the philosophy of language, & pretty much all of linguistics in the first place: think Chomsky et al.
The Wittgensteinians won, and we should hope the philosophy-department freaks eventually catch on to this reality.