0: https://www.nbcnews.com/news/us-news/dr-deborah-birx-predict...
If you kept the model exactly the same, you'd nevertheless get tighter and tighter estimates (i.e., reduced uncertainty and a lower upper bound) as more data comes in. This is just how statistics works.
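Here's a minimal sketch of that effect (the proportion, the sample sizes, and the variable names are all made up for illustration): a 95% Wald interval around a fixed observed fatality proportion narrows roughly like 1/sqrt(n), so the upper bound keeps dropping even though the point estimate never moves.

```python
import math

p_hat = 0.05  # hypothetical observed fatality proportion (invented)
z = 1.96      # z multiplier for a 95% confidence interval

for n in [100, 1_000, 10_000, 100_000]:
    # Wald half-width shrinks like 1/sqrt(n) as more data comes in
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"n={n:>7,}: estimate {p_hat:.3f}, upper bound {p_hat + half_width:.4f}")
```

Same model, same point estimate; only the sample size changed, and the upper bound falls anyway.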
Moreover, we're presumably learning stuff as we go (e.g., putting patients prone seems to work better than on their backs), so the survival rate itself is (hopefully) not stationary.
Has anyone seriously suggested that the US’s measures are being carried out anywhere near “almost perfectly”? Quite the opposite, there’s been lots of concerns voiced that people aren’t taking this seriously.
> The revised model predicts up to ~127k deaths, which is certainly less, but not egregiously so
It’s a nearly 40% reduction!
2) Biological data is often a nightmare to work with, and estimates about human behavior are no better. Getting something within an order of magnitude is often not too shabby.
3) Upper-bound figures ('up to' numbers) are extremely sensitive to the uncertainty in the inputs.
Here's a toy example. Suppose you think two numbers are each around 5, but the data are consistent with anywhere between 0-10. The sum of these numbers must be between 0-20 (low case: 0 + 0 = 0, high: 10 + 10 = 20), and their product between 0-100 (0 x 0 = 0; 10 x 10 = 100).
More data comes in and you can estimate each value more precisely: now you know they're somewhere between 4-6. You know the sum is actually between 8-12, and the product between 16-36. That's a massive decrease in the upper bound (64 percent for the product!) but literally nothing has changed except for the increased precision.
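If you want to check the arithmetic, here it is as a few lines of Python (the helper names `sum_bounds` and `product_bounds` are just illustrative):

```python
def sum_bounds(a, b):
    # The sum's extremes come from adding the matching endpoints
    return (a[0] + b[0], a[1] + b[1])

def product_bounds(a, b):
    # Endpoint products suffice here because both intervals are non-negative
    return (a[0] * b[0], a[1] * b[1])

before = (0, 10)  # each number known only to within 0-10
after = (4, 6)    # more data: each now known to within 4-6

print(sum_bounds(before, before))      # (0, 20)
print(product_bounds(before, before))  # (0, 100)
print(sum_bounds(after, after))        # (8, 12)
print(product_bounds(after, after))    # (16, 36): upper bound down 64%
```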
The COVID models have exactly this problem--none of the parameters are known exactly--and the outcome is some function of combining them. Moreover, we're learning more about what factors matter AND how to fight the virus.
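To make that concrete, here's a hedged sketch (this is not any actual epidemiological model, and every number is invented) of how combining uncertain parameters produces an 'up to' figure, and how tightening each parameter's range shrinks it:

```python
import random

def upper_bound(infections_range, ifr_range, trials=100_000):
    # Draw each uncertain parameter independently, combine them,
    # and report a 95th-percentile "up to" figure.
    outcomes = sorted(
        random.uniform(*infections_range) * random.uniform(*ifr_range)
        for _ in range(trials)
    )
    return outcomes[int(0.95 * trials)]

# Wide early ranges vs. narrower later ranges (all numbers invented).
print(f"{upper_bound((1e6, 20e6), (0.002, 0.02)):,.0f}")  # early: huge upper bound
print(f"{upper_bound((4e6, 8e6), (0.005, 0.01)):,.0f}")   # later: much smaller
```

Nothing about the underlying reality has to change for the headline 'up to' number to fall sharply; tighter inputs alone do it.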