The rate of increase in capabilities is also unpredictable, which is what makes this both amazing and terrifying.
Overinflated hype about “where the puck is going” turning out to be wrong is not a new phenomenon. And the non-tech media perspective on this (traditional and social, and particularly the elite commentariat that spans both, more than the “hard” news side of traditional media, though that is influenced too) is driven quite disproportionately by the marketing message of the narrow set of people with the largest financial stake in promoting the hype. Even the cautionary notes being sounded there are exactly the ones the same people are using to promote narrow control.
The part that gets me is they aren't just aimlessly making posts, they're getting involved with webinars targeted at educators in their respective topics, speaking at regional events about the future with ChatGPT, etc.
One of them was a really excellent teacher, but this dude absolutely does not have any qualifications to be speaking on ChatGPT, and I hope it hasn't actually changed his teaching style too much, because I'm having trouble imagining how ChatGPT could have fit in well with the way he used to teach.
Don't get me wrong, hype often surrounds something legitimately good, but I think ChatGPT is being taken way out of context, both as the ML achievement it is and in what the tool is actually useful for. I guess it is easy for a layperson to make mistaken assumptions about where the puck is going when they see what ChatGPT can do.
[1] "The story took place in 1929: Joseph Patrick Kennedy Sr., JFK's father, claimed that he knew it was time to get out of the stock market when he got investment tips from a shoeshine boy."
When I grew up, using a pocket calculator was verboten. You just did not do it. You had better learn and memorize all of that stuff. Fast-forward 10 years after me and all kids have them now. With the right app on your phone, the thing will OCR the problem, auto-solve it, and show you the steps. ChatGPT and its ilk are here, now. How we learn and create things has dramatically changed in under a year. It is not yet clear how much, though.
Teachers are trying to get a handle on what it means to teach when you can just ask some device to summarize something as complex as the interactions of the six major countries in WWII and what caused it. The thing thinks for 2 seconds and spits out a 1000-word essay on exactly that. And depending on which one you use, it will footnote it all and everything.
This style of learning is going to take some getting used to. This tool is in their classrooms right now. Right or wrong, it will be there. Teachers are going to have to figure out what this means for their class planning. Not 5 years from now, today.
Additionally, "the rate of increase in capabilities" is very much a red herring. Past performance (especially for second-order things like 'rate of improvement') is an absolutely batshit insane metric for predicting future success.
https://russellinvestments.com/us/blog/past-performance-no-g...
The advancements have eaten the low-hanging fruit. Once all the low-hanging fruit is gone, we'll all realize GPT will never be tall enough to reach the good stuff.
But the history of technology says that demand is recursive; the more tech produced, the more demand there is for producers.
There may be a time when we hit the old “the whole world only needs 5 computers”[0] limit, but I don’t think we’re anywhere close. AI is providing leverage to create net more programming; it is not replacing programmers in a zero sum game.
0. https://www.theguardian.com/technology/2008/feb/21/computing...
> In economics, the lump of labour fallacy is the misconception that there is a fixed amount of work—a lump of labour—to be done within an economy which can be distributed to create more or fewer jobs.
And seriously, how many people were actually saying “WolframAlpha will destroy a huge amount of programming jobs”?
But it looks like an exponentially steep climb from here, indefinitely.
This is a very bold claim IMO.
Modeling/understanding interactions in a complex system of potential black boxes is a much, much more computationally difficult problem than source-code-to-source-code operations.