But yeah, the idea that anyone has magically solved accurately predicting the future is completely bonkers.
The gist being: estimates are given with zero as a lower bound, and some upper bound. In reality, there is no upper bound. Software engineering can and does involve unexpected problems that would take infinite time to solve.
Ergo, project timelines are more accurately averaged between {min_time} and {infinite}, and, given that, remaining risk is the only true metric that should be reported. I.e., how far along are we in terms of project completed, such that we can say only x% of the project remains, and therefore that the finished (1-x)% did not explode.
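To make the "report remaining risk, not a date" idea concrete, here is a toy sketch (all item names and numbers are hypothetical): track the scoped work items and report the share still open, i.e. the share that could still blow up.

```python
# Toy sketch of remaining-risk reporting: the only claim made is
# "x% of the project remains", not "it ships on date D".

def remaining_risk(items):
    """Fraction of scoped work not yet done (and so still able to explode)."""
    done = sum(1 for item in items if item["done"])
    return 1 - done / len(items)

# Hypothetical backlog.
backlog = [
    {"name": "auth flow", "done": True},
    {"name": "billing", "done": True},
    {"name": "migration", "done": False},
    {"name": "reporting", "done": False},
]

risk = remaining_risk(backlog)
print(f"{1 - risk:.0%} complete, {risk:.0%} of the project can still explode")
```

Of course this only counts known items; the whole point of the thread is that unknown items exist, so the reported risk is itself a lower bound.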
So, why can't we express the upper bound in combination with uncertainty?
Example: "There is a 50% chance that the feature will be delivered within 1 month."
As time progresses, confidence increases and we get closer and closer to the time it actually takes (be that 5 days, 5 months, or 5 years). On the day the feature is completed, confidence is 100%, because the real delivery date has become known.
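This can be sketched with a simple probabilistic model. A minimal example, assuming (my choice, not anything established in the thread) a log-normal distribution for delivery time: it has a finite median, so "50% chance within 1 month" is expressible, but an unbounded tail, matching the "no upper bound" point above. The estimate is then updated by conditioning on the feature still being in progress.

```python
from math import log
from statistics import NormalDist

N = NormalDist()

def p_done_within(t_days, mu, sigma):
    """Prior P(delivery time <= t_days) under a log-normal(mu, sigma) model."""
    return N.cdf((log(t_days) - mu) / sigma)

def p_done_within_given_elapsed(t_days, elapsed, mu, sigma):
    """P(done by t_days | still not done after `elapsed` days)."""
    survived = 1 - p_done_within(elapsed, mu, sigma)
    if survived <= 0:
        return 1.0
    window = p_done_within(t_days, mu, sigma) - p_done_within(elapsed, mu, sigma)
    return max(0.0, window / survived)

# "50% chance the feature ships within 1 month": choose mu so the
# median is 30 days; sigma = 0.8 is an assumed spread.
mu, sigma = log(30), 0.8
print(p_done_within(30, mu, sigma))                    # 0.5 by construction
print(p_done_within_given_elapsed(30, 20, mu, sigma))  # revised after 20 days in progress
```

Note the heavy tail cuts both ways: conditioning on 20 days elapsed with no delivery can *lower* the chance of hitting the 1-month mark, even as the forecast of the eventual date sharpens toward the truth.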
Therefore, trying to estimate a software project is a bit like trying to estimate how long it will take to prove a new theorem in mathematics, assuming you even knew exactly which theorem you wanted to prove.
I tend to find this isn't the main consumer of my software budget :-)
the other thing about the plan is that it attempts to capture what completion might look like. where i come from, the things software developers do right before releasing code look very different than the things they do when they are starting a new project. the plan can allow you to try to do some orchestration, ensure minimum quality bars are met, etc.
having a fuzzy map of the future that you're constantly having to correct sucks. having no map at all leads to the kind of carpe diem development organizations that never seem to finish anything.