> Changes in how we use language with AI will change even faster when AI starts learning continuously during inference.
This is a stronger claim, and it shows that the weakest interpretation of your argument does not apply. I take "not even wrong" back: this is a testable hypothesis that offers a solid prediction, so I can in fact be wrong.
That said, I remain skeptical of your claim for the reason stated above. People don’t interact with LLMs nearly as much as they do with their dogs, and I am not aware of any research showing that people who interact with a lot of dogs simplify their language in human-to-human communication. On the contrary, there is ample research showing that humans are quite good at context switching. You can speak haltingly in a second language you are currently learning, and then in the next sentence speak fluently, without hesitation, in your native language.