(NOTE: I don't disagree with you, I am more like paraphrasing you and adding my take.)
In practice, most software is light years away from the theoretical limit of "can't be parallelised any further". And I fully agree that throwing hardware at a problem has limits, although IMO those limits are financial, not technical.
As mentioned in another comment down this tree, my 10-core Xeon workstation almost never has its cores saturated, yet I have to sit through 5 seconds to 2 minutes of scripted tasks that could relatively easily be parallelised -- but aren't.
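For what it's worth, fanning those independent scripted steps out over the idle cores is often a few lines of stdlib Python. A minimal sketch, assuming the tasks are independent of each other (`run_task` and `inputs` here are made-up placeholders, not anything from an actual build script):

```python
# Hypothetical sketch: assumes each scripted step is independent.
# `run_task` stands in for one step; real scripts would shell out or do I/O.
from concurrent.futures import ProcessPoolExecutor

def run_task(n: int) -> int:
    # Placeholder workload for one independent step
    return n * n

if __name__ == "__main__":
    inputs = range(8)
    # Spread the independent steps across all available cores
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_task, inputs))
    print(results)
```

For pure shell pipelines, backgrounding jobs with `&` and a final `wait` (or GNU parallel) achieves the same thing with even less ceremony.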
And let's not even mention that my NVMe SSD has, over its lifetime, peaked at only 50% of its rated read/write throughput...
There's a lot that can still be improved before we have to worry about how much further we can parallelise things. That's like worrying about when the Star Trek future will arrive.