Is voice and image integration with ChatGPT a whole new capability of LLMs, or is the "product" here a clean and intuitive interface for already-existing technology?
The difference between GPT 3, 3.5, and 4 is substantially smaller than the difference between GPT 2 and GPT 3, and Sam Altman has directly said there are no plans for a GPT 5.
I don't think progress is linear here. Rather, it seems more likely that we made the leap about a year or so ago, and are currently in the process of applying that leap in many different ways. But the leap happened, and there isn't seemingly another one coming.
Past the introduction of the transformer in 2017, there is no big "innovation". It is just scale. Bigger models are better. The last 4 years can be summed up that simply.
>Is voice and image integration with ChatGPT a whole new capability of LLMs or is the "product" here a clean and intuitive interface through which to use the already existent technology?
What is "existing technology" here? OpenAI aren't doing anything so alien you couldn't guess at it if you knew what you were doing, but image training at the scale of GPT-4 is new, and it's not even the cleanest way to do it. We still don't have a "trained from scratch" large-scale multimodal LLM yet.
>The difference between GPT 3, 3.5, and 4 is substantially smaller than the difference between GPT 2 and GPT 3
Definitely not lol. The OG GPT-3 was pulling sub-50 on MMLU. Even benchmarks aside, there is a massive gap in utility between 3.5 and 4, never mind 3. 4 finished training in August 2022. That's only 2 years apart from 3.
>I don't think progress is linear here. Rather, it seems more likely that we made the leap about a year or so ago, and are currently in the process of applying that leap in many different ways. But the leap happened, and there isn't seemingly another one coming.
There was no special leap (in terms of theory and engineering). This is scale plainly laid out and there's more of it to go.
>and Sam Altman has directly said there are no plans for a GPT 5.
The same Altman that sat on 4 for 8 months and said absolutely nothing about it? Take anything Altman says about new iterations with a grain of salt.
Secondly, nothing you said here changed as of this announcement. Nothing here makes it any more or less likely that LLMs will put software engineering jobs at risk.
Thirdly, you can take what Sam Altman says with as many grains of salt as you like, but if there really was no innovation at all, as you claim, then a limit will be hit on computing capability and cost.
Think about your daily job and break down all the tasks, and you'll quickly realize that replacing all of it is a monstrous undertaking.
I think there are merits to both arguments, and I think it's possible that things move in either direction over the next 1/5/10 years.
My point is, I don’t think we can rule out the possibility of some jobs being at risk within the next 1/5/10 years.