In your analogy to the Apple II, is there a fundamental problem people claimed for early computers, which didn’t just boil down to “they need to be faster”?
There are fundamental limitations to what an LLM can achieve.
True. I mean, just look at all the examples of technologies that were projected to revolutionize computing and machine learning "..." eventually, here:
Just because they haven't happened yet doesn't mean they never will!
I've heard very good things about expert systems and IBM's Watson for Oncology.
The Harvard Business Review has a great article on all of the eventual "..." AGIs here:
Integrate it with Wolfram Alpha and Google, tack on a bullshit detector and a spank-dispenser network, and it looks like the limitations of LLMs are easily overcome by not running anywhere near them.