The reason this does well is that, oftentimes, (1) overestimates the true answer by roughly the same multiplicative factor as (2) underestimates it by. So the geometric mean cancels out the over- and under-estimates and gives you an estimate that does pretty well.
I find that this works remarkably well for estimating the dimensions of buildings, trees, etc.
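A toy illustration of the cancellation, with made-up numbers (the 120-foot building and the factor of 3 are hypothetical):

```python
# If guess (1) is k times too high and guess (2) is k times too low,
# the geometric mean recovers the true value exactly.
import math

true_height = 120             # say, a building height in feet (made up)
high_guess = true_height * 3  # overestimate by a factor of 3
low_guess = true_height / 3   # underestimate by a factor of 3

print(math.sqrt(high_guess * low_guess))  # 120.0 -- the factors cancel
```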
Most project managers I've worked with either have a desired estimate already in mind or they don't care about any of the extenuating circumstances.
On one hand, the desired estimate is often based on the knowledge that projects estimated to take more than a quarter aren't going to get a green light.
On the other hand, it's ridiculous how many projects blow through estimates when external dependencies are ignored, newly-hired engineers create a burden on the project, and de-scoped work turns out to be necessary.
Those project managers also pursue the same estimation agenda even after several projects turn out the same way.
Devs over-engineer, add way more padding for refactoring and cleaning out tech debt than is necessary, devs engineer solutions with resume padding, devs like playing with cool tech or trying new tech instead of just using "the boring old thing", they over-engineer (saying this one twice), they get it wrong, devs over-compensate because they got burned previously, they over-compensate because they got negotiated down and then it went bad, they want to impress their peers or whoever they report to, they get bullied by end users that somehow get access to them, etc. etc. Yes, a lot of those are avoidable, but we don't live in an ideal world.
This is an aspect of the https://en.wikipedia.org/wiki/Planning_fallacy.
I've worked for guys that shop around and give the work to the lowest estimator, even when they have a track record of low-balling and then running 5x over their estimates.
In other scenarios, to your point, optimistically low estimates are used as a political tool by product/management to wrestle some task/responsibility from some other team in the org.
Inevitably what I see again and again is that everyone takes (and fights devs for) low-ball estimates, which assume the happy path where "nothing can go wrong". They are then happy to hear, and communicate to clients, the various excuses as each "downside surprise" is discovered during development. Of course the estimate high-baller has built in time for these, since there are rarely positive surprises that make tasks faster, and few tasks are surprise-free.
(Best + Worst + 4 * Average) / 6
One nice property is that it imposes a distribution that adjusts for longer-tailed risks.
At the same time, there are occasions where it can be useful to collapse a distribution for some types of reports, or for quickly looking across estimates.
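For what it's worth, that's the classic three-point (PERT-style) formula, where the middle term is usually the "most likely" guess, weighted 4x. A minimal sketch with made-up numbers (the helper name is mine):

```python
# Three-point estimate: weighted mean that pulls the single number
# toward the long tail.
def three_point(best: float, likely: float, worst: float) -> float:
    return (best + 4 * likely + worst) / 6

# A long-tailed task: best case 2 days, likely 5, worst 20.
print(three_point(2, 5, 20))  # 7.0 -- pulled above the "likely" 5 days
```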
1. Starting with an end date and creating a plan backwards from there. Some top leads want to get promoted and want to achieve X by this quarter or the next.
2. With so many people leaving, there is not enough time or resources to onboard new folks, who are nonetheless counted in delivery plans.
3. Too many parallel initiatives
4. Unstable production taking daily attention
5. Not able to prioritise tech debt over business features.
6. Decision makers at the top don't have grass-roots visibility, or don't want to have it.
7. Difficulty in getting clearer specs when it's discovered the original specs are not detailed enough.
8. Decision makers at the top (or in the middle?) having too much grass-roots visibility and micromanaging the project.
Then multiply by 4.14.
This provides room for:
Phase 1: First version of the deliverable: ×π/2
Phase 2: Trying to work around all the cases where the initial design was bad: ×π/2
Phase 3: Refactor from scratch (with the same team): ×1
Sum of phases 1-3: ×(π+1)
This is for facing the customers/stakeholders. When facing the team, only present them with Phase 1, with the estimate of ×π/2.
Otherwise, Phase 1 alone will take ×(π+1).
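Where the 4.14 comes from, for anyone checking the arithmetic:

```python
# pi/2 + pi/2 + 1 = pi + 1
import math
print(math.pi / 2 + math.pi / 2 + 1)  # 4.141592653589793
```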
R = Realistic estimate. Based on work being typical, reasonable, plausible, and usual.
O = Optimistic estimate. Based on work turning out to be notably easy, or fast, or lucky.
P = Pessimistic estimate. Based on work turning out to be notably hard, or slow, or unlucky.
E = Equilibristic estimate. Based on 50% probability suitable for critical chains and simulations.
The issue I've had is that most "stakeholders" just want it boiled down to one number and don't care about any nuance. If I say it will take 2-4 weeks, they will assume 2 weeks if it's convenient. If there's a list of tasks, they will add up the lower bound of each task and discard anything they don't deem necessary.
It seems to me that any estimation tactic works well if you assume fair intent.
So "oh, an hour or so" becomes 2 days. A week turns into two months.
I don't usually express those estimates, but it gives a good check on an initial, usually optimistic guess.
The main feature is that you can report more work than there are hours of work, e.g. complete 8 days of work in two days by doing 4 2-day tasks.
I find it way easier to estimate when something is done rather than how much time it will take to do.
P10 = 10% chance of occurring, P90 = 90% chance of occurring.
How it's typically done is that standard scheduling packages, like Primavera, allow you to specify a band or range of duration/effort for individual tasks/activities.
This, when combined with task dependency information (which you must give it in the form of a PERT chart or similar; it accepts a few different data formats), means it can calculate the critical path across the whole range of activities for an overall outcome and yield the project-level P10/P90.
Then you can run sensitivity analysis, identify key pivot points, look at assigning more resources to certain efforts, etc., and optimise the schedule, plus track actual progress as you go and make forecasts.
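If you want the flavour of this without a scheduling package, here's a toy Monte Carlo sketch of the P10/P90 idea. This is not how Primavera works internally; the tasks, durations, and dependencies are all made up for illustration:

```python
import random

# task -> (optimistic, most likely, pessimistic) duration in days
durations = {
    "design":   (3, 5, 10),
    "backend":  (5, 8, 20),
    "frontend": (4, 6, 15),
    "testing":  (2, 4, 9),
}
# task -> tasks that must finish before it can start
deps = {
    "design":   [],
    "backend":  ["design"],
    "frontend": ["design"],
    "testing":  ["backend", "frontend"],
}

def simulate_once():
    """Sample every task duration, then walk the dependency graph
    (listed here in topological order) to get the project finish time."""
    finish = {}
    for task in ("design", "backend", "frontend", "testing"):
        start = max((finish[d] for d in deps[task]), default=0.0)
        low, mode, high = durations[task]
        finish[task] = start + random.triangular(low, high, mode)
    return max(finish.values())

runs = sorted(simulate_once() for _ in range(10_000))
p10, p90 = runs[len(runs) // 10], runs[(9 * len(runs)) // 10]
print(f"project P10 ~ {p10:.1f} days, P90 ~ {p90:.1f} days")
```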
But this is all based on the premise of doing the kind of engineering where you have some reasonable idea of what your actual goals and methods are before you start, so if you are running under agile you are probably screwed, because even if you tried this kind of planning, your planners could probably never keep up with the actuals.
To understand the difference between an engineered project and an agile one see my comment https://news.ycombinator.com/item?id=31299834#31301616
Having multiple people independently estimate a given problem is a more robust way to estimate, in general. Other than a lead and a dev each sizing something up, I don't think I've seen that in practice. It feels a little weird for developers to be estimating each other's work. I think it's interesting if and only if commitment is truly separated from estimation.
Partially, this is on engineers. They'll say "That'll take 36 days" and not realize that they're implying a higher level of specificity than they intend.
Partially, this is on consumers of estimates (managers, etc). They'll hear "It'll be about 36 days, but that's just a super rough estimate, we haven't planned it out yet, it could be way more..." but they stopped listening and wrote down "estimate: 36 days."
My current eng team has fixated on two distinct levels of eng estimates. The first is super high level: "minutes to hours", "hours to days," "days to weeks", "weeks to months," or "months to quarters." The second is a number of hours. We give out the high-level estimates freely - they're super helpful for project planning. We give out the second number only when we have a pretty solid plan with estimated tickets.
It's worked pretty well because engineers can always be clear on which estimation type is called for. It's also helpful because non-engineers can get used to hearing the high-level estimates pretty quickly and know to treat them as super vague.
* We actually deliver all hour estimates as 30/60/90 estimates: "we're 30% sure it'll be done in 36 hours, 60% sure it'll be done in 50 hours, and 90% sure it'll be done in 80 hours". There's still a tendency of people to just use the 60% estimate as "The Estimate," but it's better than nothing.
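A hedged sketch (not our actual tooling): 30/60/90 numbers like those above are roughly what you'd get from assuming a lognormal duration distribution. Fitting it to the 60% and 90% points and reading back the implied 30% point lands close to the quoted figure:

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf  # percentile -> z-score for a standard normal

# "60% sure it'll be done in 50 hours, 90% sure it'll be done in 80 hours"
sigma = (math.log(80) - math.log(50)) / (z(0.90) - z(0.60))
mu = math.log(50) - z(0.60) * sigma

p30 = math.exp(mu + z(0.30) * sigma)
print(f"implied 30% estimate: {p30:.0f} hours")  # ~35, close to the 36 quoted
```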
I've never been in a position to actually use this approach well, but I like the idea of it a lot.
Thus, if your goal is to lie to management, stating that "we met 98% of our estimates last year" and implying that because of this there is no problem, then yes, go work on improving your estimates.
If your goal is to get things done so you can make some real progress, go set realistic targets on time or ROI and learn to throw away tasks that fail them.
And if your goal is to never get an estimate wrong, because they are commitments you can never get free of after you've made them, go practice your interview skills and move to a better environment.
often folks don’t know if what is being estimated is actually knowable.
building something new pre product market fit is a crapshoot until it’s not.
we try to tightly manage estimates & timelines on small iterative tasks, breaking up any larger work items into smaller and smaller stories, in beautiful hierarchies of tickets...
the only innovative work happens out of sprint / nights & weekends as unapproved stuff that would have been nickel&dimed into 27 stories across 10 sprints if it went through the agile process
in a greenfield project these sorts of estimate-driven iterative work methods are innovation killers
It seems incredibly ironic that one would be shocked by this.