Secondly, nothing you said here changed with this announcement. Nothing here makes it any more or less likely that LLMs will put software engineering jobs at risk.
Thirdly, you can take what Sam Altman says with as many grains of salt as you like, but if there really were no innovation at all, as you claim, then a limit will eventually be hit on computing capability and cost.
We'll just have to agree to disagree. GPT-3 was a signal of things to come, but it was ultimately a bit of a toy, a research curiosity. Utility-wise, they are worlds apart.
>if there really was no innovation at all as you claim, then there will be a limit hit at computing capability and cost.
Computing capability and cost are just about the one thing you can bank on improving. Training GPT-4 today would already cost a fraction of what it cost when OpenAI did it, and that was just over a year ago.
Today's GPUs take ML into account to some degree, but they are nowhere near as calibrated for it as they could be. That work has only just begun.
Of all the possible barriers, compute is exactly the kind you want. It will fall.
And it is not true that compute costs will keep falling; Moore's Law has been dead for some time now. If incremental gains in LLMs require exponential growth in computing power, the marginal difference won't matter: you would need a matching exponential growth in processing capability, which is most certainly not occurring. So compute will not get cheaper at the rate you would need for LLMs to actually compete in any meaningful way with human software engineers.
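The "incremental gains need exponential compute" claim can be made concrete. A minimal sketch, assuming a Chinchilla-style power law L(C) = A · C^(−α) relates loss to training compute (the exponent value here is an illustrative assumption, not a measured figure):

```python
# Illustrative only: assumes loss follows a power law L(C) = A * C**-alpha.
# alpha = 0.05 is an assumed, roughly plausible exponent, not a real fit.
alpha = 0.05

# Under that law, halving the loss requires multiplying compute by 2**(1/alpha):
# L(k*C) = L(C)/2  =>  k**-alpha = 1/2  =>  k = 2**(1/alpha).
factor_to_halve_loss = 2 ** (1 / alpha)

# With alpha = 0.05 that is 2**20, i.e. about a million times more compute
# for each successive halving of the loss.
print(f"Compute multiplier per loss halving: {factor_to_halve_loss:.0f}x")
```

If hardware only improves by a constant factor every few years while each fixed-size quality gain demands a constant *multiple* of compute, the wall-clock and dollar cost of successive gains grows exponentially, which is the crux of the argument above.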
We are not guaranteed to continue to progress in anything just because we have in the past.
This is a lot of unfounded assumptions.
You don't need Moore's Law. GPUs are not really made with ML training in mind. You don't need exponential growth for anything. The money OpenAI spent on GPT-4 a year ago could train a model twice as large today, and that amount is a drop in the bucket for the R&D budgets of large corporations. Microsoft gave OpenAI $10B; Amazon gave Anthropic $4B.
>So compute will not fall at the rate you would need it to for LLMs to actually compete in any meaningful way with human software engineers.
I don't think the compute required is anywhere near as much as you think it is.
https://arxiv.org/abs/2309.12499
>We are not guaranteed to continue to progress in anything just because we have in the past.
Nothing is guaranteed. But the scaling plots show no indication of a slowdown, so it's up to you to provide a concrete reason this object in motion is going to stop immediately and conveniently right now. If all you have is "well, it just can't keep getting better, right?" then visit the GPT-2 and GPT-3 threads to see how meaningless such unfounded assertions are.
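Worth noting what "no indication of a slowdown" means on those plots: a pure power law is a straight line in log-log space, so it has no built-in knee. A short sketch with made-up numbers (the compute values and exponent are hypothetical, not real training runs):

```python
import numpy as np

# Hypothetical training-compute budgets (FLOPs) and an assumed power law
# L(C) = A * C**-alpha; none of these values come from real GPT runs.
compute = np.array([1e20, 1e21, 1e22, 1e23])
A, alpha = 50.0, 0.05
loss = A * compute ** -alpha

# Local slope of the curve in log-log space; for a power law it is the
# constant -alpha everywhere, i.e. the trend never bends on its own.
slopes = np.diff(np.log(loss)) / np.diff(np.log(compute))
print(slopes)  # each slope is approximately -alpha = -0.05
```

That constancy is exactly why extrapolating the plots predicts continued gains; any slowdown would have to come from something outside the fitted trend (data limits, cost, architecture), which is what the two sides here are really arguing about.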
I think the stronger argument here won't necessarily be Moore's Law related but a change in architecture: things like Apple's Neural Engine, Google's TPUs, or geohot's tinybox. In Intel's tick-tock model, this is the tock for the previous tick of larger datasets, so to speak.
(Note: I don't necessarily agree, just trying to make a stronger argument than just invoking Moore's Law.)