I personally don’t want that. In my opinion, this current iteration of LLMs is completely overblown, and I am perplexed at how quickly they are being adopted into everything.
Of course, LLMs are useful. For small tasks there is a significant productivity boost to be had, but they are not trustworthy — and that is the main issue.
If we see them as untrustworthy, then perhaps accelerating their exposure in consumer technology is necessary as a way to demonstrate that an LLM can cause harm, in whatever form that may take.
It’s very easy to overlook LLMs making things up, but they do (including GPT-4) — and if that can’t be solved, it’s safe to assume this hype will be short-lived.