Part of the problem with LLMs is ambiguity, which is poisonous to a language model, and English is especially full of it. So one approach being explored is translating everything (with proper nuance) into a more precise language, or rewriting training data to eliminate ambiguities by using more exact English.
So there are ideas, and people are still at it. After all, it usually takes decades to fully exploit any new technology, and I don't expect models to be any different.