> they've parasited, sorry, trained on the entirety of accessible human knowledge
I see this as a new development in language. It used to be restricted to meat neural nets and books; now it can also be consumed and created by LLMs. A new self-replication path has opened for language. Language is an evolutionary system: it's alive. Without language, humans are mere shadows of what they could be. Language turns a baby into a modern adult, and a randomly initialised neural net into ChatGPT.
The magic was always in the language, not in the neural network. We should care more about the size and quality of the training dataset than about the model. Almost any model would do; most architecture tweaks amount to roughly the same thing. The data is the origin of all the abilities. And abilities cannot be owned: it should be fair game to learn skills and facts even from copyrighted data. Novel, creative training examples should not be reproduced by LLMs, but mere facts and skills are general enough that no one should own them.