Major hardware companies have to maintain fairly long roadmaps, because they have so much inertia, and they have to reveal to their investors what they're up to. I certainly don't believe these roadmaps promise another couple of decades of die shrinks and IPC gains. They reveal a real shift of strategy, across the board. Consumers simply won't be able to afford the "CPUs" of the future.
For a scalable application, better performance basically means reducing expenses. If your cloud computing bills aren't high to begin with, it may not be worth the rewrite. Of course, there may also be new companies or new projects that can put TPUs to use for machine learning and the like.
But Moore's law is not the only way to scale better. Today's machine learning algorithms are ridiculously inefficient, and that seems unlikely to remain true forever, given the amount of research being done. A series of algorithmic improvements might yield a 10-100x reduction in cost, or might even eliminate the need for TPUs altogether.
Who knows what the future will bring, but making straight-line extrapolations in a fast-moving field like machine learning seems unlikely to work out.