I guess we're speculating that the cost of training Llama 2 will drop by ~1000x, so that anyone could train their own Llama 2 from scratch for about $2k.
I don't think compute cost has dropped by 1000x over the last 20 years. Maybe by 10 to 50x. And if you factor in the demand for higher quality, the cost has probably gone up. Encoding a video for streaming, for instance, probably costs roughly as much today as it did 20 years ago, or more, once you account for the increases in resolution and quality.
My prediction is that training the latest model will continue to cost millions to tens of millions of dollars for a long time, and these costs may even increase dramatically if significantly more powerful models require a proportional increase in training compute.
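To see why "millions" is a reasonable floor, here's a back-of-the-envelope sketch using the common ~6·N·D FLOPs approximation for dense transformer training (N = parameters, D = tokens). The GPU throughput, utilization, and hourly price below are rough assumptions I'm plugging in for illustration, not quoted figures:

```python
# Rough training-cost estimate via the ~6 * N * D FLOPs rule of thumb.
N = 70e9   # Llama-2-70B parameter count
D = 2e12   # ~2T training tokens

total_flops = 6 * N * D              # ~8.4e23 FLOPs

peak_flops = 312e12                  # assumed A100-class bf16 peak, FLOP/s
utilization = 0.5                    # assumed hardware utilization
gpu_hours = total_flops / (peak_flops * utilization) / 3600

price_per_hour = 2.0                 # assumed $/GPU-hour at bulk rates
cost = gpu_hours * price_per_hour
print(f"~{gpu_hours/1e6:.1f}M GPU-hours, roughly ${cost/1e6:.1f}M")
```

Under these assumptions the estimate lands in the low single-digit millions of dollars for one training run, and that's before data, experiments, and failed runs. Even a 10x drop in $/FLOP keeps frontier-scale training well out of hobbyist range if model and data sizes keep growing.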
Unless, of course, there's some insane algorithmic breakthrough: an algorithm that blows Llama 2 out of the water for a small fraction of the compute.