Basically, what we have done over the last few years is notice the neural scaling laws and drive them to their logical conclusion. Those laws are power laws, which are not quite as bad as logarithmic laws, but you would still expect most of the big gains to come early, followed by diminishing returns.
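To make that concrete, here is a minimal sketch of what a power law implies. The constants `a` and `b` below are hypothetical, chosen only for illustration (not fitted to any published scaling result): each 10x increase in compute reduces the loss by a smaller absolute amount than the previous one.

```python
# Illustrative only: assume loss follows a power law L(C) = a * C**(-b)
# in compute C. The constants are made up for this example.
a, b = 10.0, 0.05

def loss(c: float) -> float:
    """Loss under the assumed power law."""
    return a * c ** (-b)

# Each order of magnitude of extra compute buys a smaller improvement.
prev = loss(10.0 ** 18)
for exp in range(19, 25):
    cur = loss(10.0 ** exp)
    print(f"compute 1e{exp}: loss {cur:.3f} (gain {prev - cur:.3f})")
    prev = cur
```

Running this, the per-decade gain shrinks steadily, which is the "most of the big gains early on" pattern in prose form.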
Barring a kind of grey-swan event of groundbreaking algorithmic innovation, I don't see how we get out of this. I suppose some of those diminishing returns could still be big enough to bridge the gap to an AI that can meaningfully improve itself recursively, but I personally don't see it.
At the moment, I would say everything is progressing exactly as expected and will continue to do so until it doesn't. Whether or when that happens is not predictable.