And also fundamental. When I was directly estimating big software projects the key, for me, was to trust developers' recommendations but apply a different multiplier for each developer. Multipliers ranged from x1 to x3. Those rare devs with x1 were, of course, a blessing. And those with x3 were not necessarily bad; they were often the ones working on the really hard problems. Of course, it meant getting to know those developers, and having a prior (which we set at x2 for new starters).
Individuals were remarkably consistent in terms of their actual performance; so a x1 developer would almost always be a x1; a x1.5 would almost always be x1.5.
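A minimal sketch of such a multiplier table in Python (the developer names and values here are invented for illustration, not taken from the comment above):

```python
# Per-developer estimate multipliers, with a default prior for new starters.
DEFAULT_MULTIPLIER = 2.0  # prior applied to developers we don't know yet

multipliers = {
    "alice": 1.0,   # rare: estimates are accurate as given
    "bob": 1.5,
    "carol": 3.0,   # often working on the genuinely hard problems
}

def adjusted_estimate(dev: str, raw_estimate_days: float) -> float:
    """Scale a developer's own estimate by their historical multiplier."""
    return raw_estimate_days * multipliers.get(dev, DEFAULT_MULTIPLIER)

print(adjusted_estimate("alice", 10))  # 10.0
print(adjusted_estimate("dave", 10))   # 20.0 (unknown dev -> x2 prior)
```

The consistency observation above is what makes this workable: the table only needs updating occasionally, as you learn each person's track record.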
If that’s how long it’s going to take then that’s how long it’s going to take. I think there’s a group of people who are used to getting lots of pushback on their estimates and so they estimate low to avoid that conflict. The future conflict of things being late is not something they have to deal with right now and maybe they actually will get it done.
The key is backing it up continually and really not getting mad about estimates that are longer than I’d like. And, for people who I do think are sandbagging, asking more specific questions about the details of the estimate in a non-combative way.
And of course some people really are bad at estimating. But I find if they don't improve with coaching then it's a symptom of a bigger problem (not thinking things through all the way) which manifests in other ways beyond estimation (e.g. poor design).
So... my experience with this mindset is that you won't be mad, but you'll just say "that's too long/expensive" and cancel the project entirely. Then the same project will come up again in two months with different wording, again and again, until I give you the estimate you think you can afford. And then the thing will end up taking longer than the original "too expensive" estimate (in part because too many things were rushed in the beginning to try to meet "the date"), but nobody will ever compare the final outcome against the original estimate anyway... because estimates are never meaningful.
The biggest issue I have is developers making business decisions without realizing it or without saying it — saying no to something because it would take too long or be too expensive, without first asking how much time/cost can be consumed.
The same occurs with estimating — they'll give a risky estimate instead of a safe one, because they'll unilaterally decide the safe one is unacceptable; and they won't mention this alongside their estimate, or check what the rest of the timelines look like.
The other half of the problem is that people tend to only want to give one number (and people typically ask for a single number), but what you really want for project planning is the median +- margin of error timelines. The 1x,2x,3x rule is just a hack to work around that unwillingness.
Let's say the project was first proposed in January 2021. Got estimated as taking a year. Didn't get scheduled, since that was seen as too long, and there was other stuff people wanted done in 2021.
It's pitched again in June 2021. Same story.
Come February 2022, it's pitched again. Now it's estimated at 15 months - there's more to do since more code has been written in the meantime! - and gets started, and ends up taking 19 months (estimates are never perfect!).
You might say "this was stupid, we should've just done it in January 2021" but in the cases I've seen, pushing it off a few times made perfect sense. The payoff wasn't seen as the most valuable thing that could be done, and the effort was high. The effort became higher after it was postponed, but by that point so was the relative value compared to other proposed projects.
On the other hand, if you hadn't come up with that first estimate, maybe the assumption is "ok this will take several months but we'll still be able to do this other stuff in 2021", but instead you work on it throughout 2021 and ship it in March 2022 or so (so ~15 months, still faster than doing it later), but that causes you to not do 80% of those other things you thought you could do.
Postponement or cancellation of a project because it's just too expensive to be worth it right now is a perfectly valid use of an estimate!
As an engineer, it's my job to give an estimate. It's my boss' job to figure out whether that can work within his budget or if the task can be re-scoped. But I have to give him a number he knows is legitimate, and not some half assed BS I pulled out of my ass.
At a recent job, I witnessed an engineering team use some kind of agile "planning poker" game to arrive at estimates. Everyone on the team thought this was a great method. Yet they consistently failed to hit their targets, and it had a very profound impact on the performance of the ENTIRE company (where various teams had serious backlog-coupling issues).
Because I'm a devops engineer (despite previously having been a development lead at a different company - which was a Fortune 500 by the way) and therefore, not considered a "real programmer" - of course I wasn't taken seriously when I tried to advocate a more data-driven, disciplined approach. But what do I know.
Well, as far as this stuff goes, I kind of have it on easymode because I'm a VP Eng who also does a bunch of product management work. When I'm doing product design, I have the benefit of knowing approximate engineering LoE/order of magnitude, so I can make sure that I design products where the engineering effort fits into the product development cycle.
But it's not unusual for something to be harder than I thought because of some detail that I'm just too far from to realize, and in those cases it's clear to the team that they should be honest with challenges and we'll work around them together, vs. some bullshit because they're afraid I won't like their estimate. The last part is built with trust though and basically never "shooting the messenger" when someone tells me that there's a challenge.
If you have a manager that is unable to deal with this kind of stuff, then your problem is the manager, not the estimates. Estimates are extremely useful, and you’re doing yourself a disfavor if you think that they are never meaningful.
I hate multipliers. It's not poor estimates that need to be doubled. It's the lack of experience in identifying the complexity or likely risks in a project.
So rather than multiply, ask for risks. You'll find that most engineers know the risks (or at least they may know it is without risk). As you explore risks, you'll find the estimates will lengthen automatically.
So rather than do the multiplier, explore the risks. They are what makes projects late, not intrinsic engineer productivity. You'll end up with better engineers who know themselves and their problem spaces better. You'll also get a clearer view of what could screw up your project.
I led a small team to deliver software over a multi-year project. The software was used to control a machine. The machine was contracted as part of a multi-million dollar deal. We delivered software on time and with good quality. It was very far from trivial. The key to this is that the team had experience in doing similar projects, so we generally knew how long things take. We knew what needed to get done. We knew we would be debugging the machines. We knew we would need to work around issues. We knew we'd be limited until the first prototypes were assembled. We knew what existing tools and libraries we could reuse. And we also knew what the minimum viable product was and what the nice-to-haves were. For me personally this was the third consecutive project of building very similar machine control software, for very similar applications, using very similar technologies. This wasn't exactly the same but I went through this process and I knew how it works. The first of those actually failed because the software wasn't ready on time (and other reasons, but that was the main one).
A team with no experience, building something new, has zero chance of correctly estimating anything. Not only will they likely not deliver on time, they might not deliver at all, ever. Or they'll deliver something that doesn't work. Someone external who has experience with these teams might be able to provide the right estimate (e.g. they'll say it'll take them 3 years and they'll deliver nothing). In my current job I asked a junior engineer for an estimate, he broke down stuff to tasks, and came back with something like "a month". Ten months later it wasn't done. Much simpler technically than the other project but the engineer didn't really see all the details, didn't have experience with similar projects, so he basically just made stuff up.
You always have to plan for surprises and you need to have contingencies. An estimate is always some sort of distribution but for a large enough project these distributions do form some sort of coherent picture. Sure, you might have a surprise, but you do the risky things first to reduce those risks.
We usually came relatively close to our estimates, but would sometimes encounter anomalous situations that would force us to re-evaluate. I found that my managers were pretty good at accepting these, as long as they were not too frequent, and as long as I didn't do the "Lucy in the Chocolate Factory" thing. They set hard plans, and we were not to deviate from them.
But our customers often did not like the software that we delivered on-time, and to-spec. Since we were a hardware company, it was considered OK, but we hated it.
So we delivered yesterday's technology; tomorrow. :(
But I also ran a small team of really high-functioning C++ engineers. They stayed with me for decades, and I got to know them quite well.
I can't even imagine running one of these shops with half a million inexperienced engineers that flow in and out like guppies in a pond. My management experience would be worthless for that environment.
Since leaving, I have taken a very different tack with the software that I write.
I assume that my estimates are basically fiction. I am going to make big changes, to suit changes in the deployment environment, customer expectations, competitive landscape, platform tech, etc.
I tend to write very flexible software, in layers and modules. It allows me to react to these changes fairly quickly, and to maintain my very high quality standards, throughout.
I often toss out huge gobs of code when I hit one of these challenges. That actually makes me happy. The code I don't write is the best code of all.
Flexible software is a very double-edged sword. Most quality processes advise against it; for good reason.
But I tend to know what I'm doing. I've been at this a long time, and have been very humbled by many, many mistakes. I would probably have a difficult time trusting developers with lesser experience to take the kinds of chances that I take.
So my personal process depends heavily on me being me. I know myself, fairly well; warts and all. This means that I can trust the Principal engineer on my projects.
I manage scope. My projects are small-batch, artisanal, projects. I don't bite off more than I can chew. That said, my "small batches" are bigger than a lot of folks' personal projects. I often spend months, writing my code, so change comes almost invariably, during my projects.
Works for me. YMMV.
I know a bunch of very self-aware devs, but they almost all have a very double-edged relationship with teamwork: they love to learn and share, yet see other people as impediments or risks on their way to delivery.
How to improve on these situations?
I lasted for almost 27 years.
I’m working on a team, now. I am doing the native app development, and I was the original author of a couple of the servers, but there’s a relatively young chap, working on another server (we’re up to 3 servers). He has some experience, but nowhere near mine. I’ve learned to work with a light touch, in these circumstances. I demand a lot from myself, and expect his work to meet my bar, but only at the API level. I stay out of his kitchen. I make sure that he has no problem asking me any questions, or for help.
When he asks for help, I immediately provide it, with no judgment. In turn, he has introduced me to a couple of new tools and techniques that I have adopted.
Here's an example:
Before this project, I'd never used Postman. I used much simpler REST explorers. You'd probably laugh at the primitive tools I used, when developing my servers.
I suspect that you wouldn't laugh at the results, though.
He asked if I could use Postman to give him examples of using the API for one of the servers that I wrote.
I could have been a dick, and said "RTFM" (It's a very well-documented API). Instead, I learned Postman. Didn't take long. He's thrilled at the results. Our exchanges seldom take more than a couple of Slack messages and a Postman query example.
I now have experience using Postman, and will have a new tool at my disposal, for working with others. It probably won't be a regular part of my solo work (Insisting on using team tools for solo work can be a bit problematic), but it's nice for teams.
I am encouraging him to learn Charles Proxy. He's not really following up on that. It's not the end of the world, but he's missing out on some really awesome inspection capabilities by ignoring the tool (and it means that I have to be a bit more creative with my Postman examples). It may cause problems down the road, and I'll deal with them, if they crop up. I will do so in a non-judgmental way. Our relationship is valuable, and needs to be treated with respect. I treat him with respect, and he does the same for me.
In my world, that’s how we team.
I don’t disagree that it is important, but I probably wouldn’t label it as fundamental, at least in the context of a company developing its own software internally.
The most important question for your internal software team is still "what should we work on" and estimates are a fundamental part of answering that.
It's someone's job to figure out what things that team could do to provide the highest value. Usually some combo of product/sales/marketing/tech.
But that's only half the equation! You also want to know how much it's going to cost, in terms of those scarce dev resources. If one project would add X value but take a year, and another set of projects would add Y, Z, and A value, but each only take four months, you want to compare Y+Z+A vs X to see what gets you the most value over the next year. So the better the estimates, the more rational you can be about what work gets done. If you have no estimates, you might pick a project that will go on for way too long, and not get the value out of it that you expected.
(However, I doubt anywhere truly uses NO estimates. I've never seen a place that didn't at least have an informal level like "oh that'll be really hard" or "that'll take us a long time.")
As soon as an estimate is really a commitment it totally changes the dynamics around it and actively impacts autonomy, so my impression is that what people mean by "no estimates" is more like "no timeline commitments".
All teams have far more work than they can possibly do. A handful of things win; most things lose. Reasonable cost tradeoffs to pick which two or three things per quarter get worked on require estimates. And even in small startups, millions of dollars are spent on salaries per those decisions.
If the answer is never or rarely then that lends a reasonable amount of credence to the idea of just multiplying your estimates. Because you most likely should and the chance of overestimating is low.
Breaking down a long task to be as granular as possible then rating each component against a scale of how much experience/confidence you have is the best approach to managing risk. As a manager, if you aren't asking people to explain this, you should be.
I've never seen anyone do it (I also haven't worked anywhere massive), but Monte Carlo analysis with a tornado chart etc. would be the best way of managing this kind of risk at a complex-systems scale. It is a system used by the top real-estate developers.
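As a rough sketch of the Monte Carlo idea: model each task as a right-skewed (lognormal) duration and simulate the project total many times, then read off percentiles. The task names, medians, and sigmas below are entirely made up:

```python
import math
import random

# Hypothetical tasks: (name, median_days, sigma of a lognormal duration).
tasks = [("design", 10, 0.3), ("backend", 30, 0.6), ("integration", 15, 0.8)]

def simulate_once() -> float:
    # One random draw per task; a lognormal keeps durations positive and
    # right-skewed (a task can be very late, but only a little early).
    return sum(random.lognormvariate(math.log(median), sigma)
               for _, median, sigma in tasks)

runs = sorted(simulate_once() for _ in range(10_000))
p50 = runs[len(runs) // 2]        # the "likely" project duration
p90 = runs[int(len(runs) * 0.9)]  # the commit-to-stakeholders duration
print(f"median ~{p50:.0f} days, 90% confidence ~{p90:.0f} days")
```

The gap between the 50th and 90th percentile is the interesting output: it tells you how much schedule risk the plan carries, which a single-number estimate hides.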
The problem is that everybody schedules to the estimate assuming it's the 90% point (project done) when it's actually a pretty accurate mean (50% point). In reality, the number is probably closer to the 33% probability because software can be an unbounded amount of time late but only a bounded amount of time early.
The primary problem is that when you feed people the 90% number up front they freak out. The secondary problem is that nobody goes back after the project was done and checks whether the 90% estimate was accurate (it generally is pretty close).
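A quick simulation illustrates the percentile-mismatch point above, under the assumption (mine, not the commenter's) that durations are lognormal and that people quote the "most likely" outcome, i.e. the mode. The parameters are invented:

```python
import math
import random

# If an estimate is the mode (most likely outcome) of a right-skewed
# lognormal duration, how often is the work actually done by that date?
mu, sigma = math.log(30), 0.45      # median 30 days; sigma is hypothetical
mode = math.exp(mu - sigma ** 2)    # single most likely outcome, < median

samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
hit_rate = sum(s <= mode for s in samples) / len(samples)
print(f"mode estimate of {mode:.1f} days is met ~{hit_rate:.0%} of the time")
```

With these numbers the mode is hit only roughly a third of the time, which matches the "closer to the 33% probability" intuition: the skew pulls the most-likely outcome well below the median and mean.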
It's a little harder when the work is more nebulous and I don't have to do it for others but it's all based on personal estimations of productivity as well as understanding of the problem. It's not easy to do.
Of course, this is mostly a toy idea that's fun to think about, but seems pretty untenable given that it requires keeping track of time, which most developers hate. That, and there'd no doubt be incentive to game the system by padding estimates or other shenanigans.
Asking b/c I've taken two courses in dev process or project management in my academic career, and neither provided substantial value or benefit to how I've led projects professionally.
However, there were techniques (like critical path analysis) that as I've got more senior, started working at larger companies, and started stepping down the senior leadership path, I've started to see some applicability to. Not direct applications - they still need taking with a massive pinch of salt, and modifying for modern learnings in industry, but they do start to provide some value, even if it's just learning what the grey-hairs in the exec are used to seeing :)
Lots of other old school stuff that we "just turning grey" people need to translate for the new kids.
Realistic estimates aren't padded, but they still have significant probability of being inaccurate. After all, they're estimates, not information from the future transmitted to the past.
I can't talk in too much detail, but in general, the deadline date was fixed through commercial contracts signed at a high enough level that engineering didn't have sight of them. The concept and commercial case was sound, but the implementation hadn't been worked out yet, when a date was set.
My strong preference would be for estimation to come first, of course, before a deadline is picked (and even then, only picked if it is really a necessary deadline), which is then based on reality, and also include some slack for unintended discoveries.
I delivered a spreadsheet with detailed information of how long it would take to develop each feature of the new module they wanted. It totalled around 3 months of development.
Fancy suit folks told me 3 months wouldn't do because sales promised it would be ready in 2 months. Then I was asked where could shortcuts be taken.
In the end we had to cut features that will certainly upset our client given their initial expectation.
How do you incentivise devs to hit those timeframes?
Hope you're doing well too!
Once the project is ongoing and you'll need to account for changes, I've either done it manually (which takes an age) or handed it off to PM's to oversee using either MS Project, airtable with a custom-authored set of actions/etc, or PrimaVera.
In a perfect world, I agree.
In the real world, which has a remarkable knack for failing to live up to expectations, what I find is that companies are rarely willing to allow the development team adequate time to do their due diligence. Answers, in and of themselves, are cheap. I can give you those all day. If you want to be able to hold me accountable for their accuracy, though, then you need to be looking at my correct answer rate sheet.
For me, the magic of #noestimates is the magic of open, honest cynicism. If my realistic options are silence and blowing smoke up my boss's ass, I'd really prefer it if they would allow me to choose silence. That way we can both keep our dignity.
In my view, traditional software project management is ineffective. I would put it somewhere between the Myers-Briggs personality test and modern day astrology.
I'm just not sure why software projects are "special" -- if you can avoid it being a project and instead make it ongoing OpEx like, for example, GDS managed for the UK in 2016, then great, you've sidestepped that, but until the entire PM industry discovers how to improve overall project management techniques, I don't see why we'd consider our industry "above" them.
Every non-tech startup founder who's approached me with "how long will it take to build an MVP for my startup idea?", I've answered with this. Development is an ongoing cost, a process, not a once-off capex cost.
I recommend to them going the other way. Start with "how much can you afford to pay a dev team sustainably?" then work out how many devs that works out to, then work out how long your MVP will take to build based on their estimates (and estimates are not deadlines).
Not quite the same as #no_estimates (which I also try to argue for whenever possible), but close.
How long until we cure retinoblastoma? Everyone understands there is no way to produce a meaningful timeline. There are too many inter-related unknowns - various causes, various treatment modalities, varying funding on various fundamental and applied research, no real idea if the final answer is gene therapy, nano-something, chemo, etc.
I used to develop software for a defense contractor, and it was pretty waterfall-y. But we built risks in, to an extent. Not by multiplying Sally by 1.3x, Joe by 2.7x or whatever, but you'd chart it all out, showing interconnections (x depends on Y, which depends on Z and Q, which...). And then roughly figure out risks of each of those sub-tasks going long.
The idea NOT being you then just multiply by a weight, and ta-da, you have an accurate schedule. The idea is that you have now identified particularly risky chain of events, and now you at least have a chance of managing risk. Every day/week you'd have a risk assessment meeting. Where are we on X, Y, Z. What can we do to get X back on track? Can't, okay, can we we-jigger the dependencies, or is this a hard slip? And so on. I've never seen this done on the commercial side, and it just seems like people are flying blind as a result.
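The dependency chart described above is essentially a longest-path computation over a task DAG. A minimal sketch (task names echo the X/Y/Z/Q example; the durations are invented):

```python
from functools import lru_cache

# Hypothetical dependency graph: task -> (duration_days, prerequisites).
# "X depends on Y, which depends on Z and Q", per the comment above.
tasks = {
    "Z": (5, []),
    "Q": (8, []),
    "Y": (10, ["Z", "Q"]),
    "X": (4, ["Y"]),
}

@lru_cache(maxsize=None)
def finish(task: str) -> int:
    """Earliest finish time: own duration after all prerequisites finish."""
    duration, deps = tasks[task]
    return duration + max((finish(d) for d in deps), default=0)

critical = max(tasks, key=finish)
print(critical, finish(critical))  # X finishes last: 8 + 10 + 4 = 22
```

Here Q, not Z, is on the critical chain into Y, so a slip on Z has slack while a slip on Q pushes the whole schedule — exactly the kind of thing the weekly risk meeting is for.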
"Waterfall is terrible" you reply. Sure. But when you are building an airplane, ya kinda need the 1553B/ARINC bus installed before you install and test the avionics. You can't attach the engines if the wings haven't shown up. You can't redesign the fuselage after the wing builder started building the wings (in general). These are hard, unavoidable dependencies, and changes are often extremely, even destructively, expensive (hence the endless mind numbing meetings arguing about change control).
It is just (IMO) not an unsolved problem, but an unsolvable one. Too many unknowns results in unpredictability. Your only bet is to manage the risks, adjust as necessary, and accept some things are just unknowable. Agile does that in one way, sophisticated waterfall in another.
Obviously as you progress through your project, your predicted end date should become more and more accurate.
A project plan, though, is a prediction of the future. I've not seen anyone that can predict the future perfectly. That's not to say that you shouldn't do it, but defective project management creates a project plan on Day 0 and then tries to bend reality to meet the plan.
*Obviously all the above is caveated with reasonableness - you do try and bend reality a certain amount to meet your plan, and you try and keep to your plan as much as possible.
- at this point all project management is pretending it takes 100 managers to land something one girl / guy got flying.
- stop project managing, stop estimating, and just start treating companies as VC firms. Hire good devs, make them care about your mission, invest in those that take off. Don't take the control away from the original devs
What I find frustrating about the whole situation is that no matter what process you use for making these estimates you have a roughly 30-some-odd percent chance of being right. It almost never has anything to do with the process you used when it does go well. If it did, estimating software projects would be trivial, wouldn't it? Everyone would use this process and we wouldn't have 60-some-odd percent of large enterprise software projects going over time and budget.
In reality people have used this very process, I'm sure, and have been in the 60-some-odd percent. People have been studying this phenomenon since before I was a nerdy kid hacking on my Amiga.
Having a roadmap or a plan to get from A to B is good. It will need to be readjusted as you explore the problem space and navigate the waters so to speak. But the only real guarantee we can make as engineers is that we'll make progress in my experience. I'm only giving really rough estimates in the beginning and those estimates improve as we get closer to our end goal. I only start talking about actual release dates when we get close to being finished and are mostly polishing out the rough corners and have already done a few iterations internally.
If someone makes a promise they can't keep or have no business making -- in my books -- that's their mistake and they've made it a problem for everyone else.
Anything else is just enabling (or even rewarding) bad behavior. If you do, expect to get more of it.
Note well: I have never been a manager, especially not an upper-level manager over both sales and engineering. I don't know how well my recommendation will fly in the real world. (Hey, I guess that makes me the guy who just sold something without knowing if it can work...)
Clear procedures to remove impediments, decision makers in the loop to ok scope/feature changes, resourcing agreed upfront, user engagement for testing locked-down, senior leadership aligned and kept informed regularly. Someone running the administrative side of the project, keeping people on target, etc. (could be a double hat, of course).
Sounds all rather "menial", but high degrees of organization really make a difference in delivery on larger projects.
At my current company, it's reached a point where we flat out reject product proposals for features or changes that would need to be hacked together for an MVP without a time commitment from all necessary stakeholders on how it will properly be implemented for phase two (iff phase one is a success). It's amazing how quickly "critical" features become irrelevant to product when they understand even half the amount of work required to properly implement them.
I get the same response sometimes when I talk about the long tail of maintenance at work.
> Now that I know what software estimation is, I no longer need you.
I would be wary of just "adding all the estimates together". That's because we tend to estimate the median or the mode of the task duration, not the average. Means can be added together, but not medians.
Is the error distribution of task size estimations normally distributed? Because I do really expect it to have a fat tail, and if it does, you can't add means either.
You do see truncated log-normals, though, when the estimates are padded.
Some fat tail distributions also break the law of large numbers, but I don't think task size estimation is this flawed.
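The sum-of-medians pitfall is easy to demonstrate empirically, again assuming (my assumption) lognormal task durations with invented parameters:

```python
import math
import random

# Sum of per-task medians understates both the median and the mean of the
# total when task durations are right-skewed.
medians = [5, 8, 13]
sigma = 0.7          # hypothetical per-task spread
n = 50_000

totals = sorted(
    sum(random.lognormvariate(math.log(m), sigma) for m in medians)
    for _ in range(n)
)
sum_of_medians = sum(medians)      # 26: the naive "add the estimates" plan
median_of_sum = totals[n // 2]
mean_of_sum = sum(totals) / n
print(sum_of_medians, round(median_of_sum, 1), round(mean_of_sum, 1))
```

With these numbers the median of the total comes out several days above 26, and the mean higher still: each task's fat right tail occasionally blows out, and the sum inherits those blowouts even though no single median does.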
There was no feedback loop that rewarded developers to meet the estimates. Stock options weren't an option, and I didn't want them to do a sloppy job just to hit the 'deadline'.
If we're talking about a long-term estimate, as in "this project will be finished in 6 months", it's your job in management to find a way to do this. You've got to break the goal down into achievable sub-goals, and monitor progress along the way.
Long term estimates will be wrong. Software projects generally take longer than expected, so it's up to you as a manager to anticipate this and communicate to stakeholders with the correct degree of uncertainty.
If it's an external deadline which must be met, firstly you should engineer enough extra time into the timeline to handle inevitable delays. And if at any point you feel like the timeline is unachievable, it's up to you to renegotiate with stakeholders, or adjust the scope to make it achievable.
And if you have the feeling your team is slacking off and not getting work done, honestly this sounds like a lack of leadership skills. It's up to you to have the kind of relationship with your developers so that they are motivated to meet the team's collective goals and take responsibility. That's basically all that being a manager is.
As a developer, I was into making sure I hit my goals, and, at work, to work. As a manager I do struggle with how to emphasise that ownership of product, quality, and time. Why should developers care about hitting Monday with effort, instead of coasting to Friday?
What can change the game is spending as lavishly as possible on the factors which make a project take extra days. Bad interfaces. Missing documentation. Sharp edges. De-prioritized bugs. Deferred refactors. Commit the greatest number of the most expensive people to work which is not connected to customers, deadlines, or business metrics, but to mitigating their own frustrations and sensibilities. Of course that is anathema to business culture so it’s rarely done.
In the case of the author's payment system, say they go with a vendor. At that scale, there may be weeks of contract negotiations for fees, rates, minimums, etc. What if some piece of documentation is flawed and you need to have a back-and-forth with their support? What if they need time to onboard you into their systems?
It makes me appreciate Apple's supply chain mastery that allows them to deliver exactly on time because they own their whole process inside and out and demand similar rigor from any vendors that supply them. If we could imitate that in software, we could eliminate a huge source of uncertainty in many projects.
* Steps 0-2: determining "what" the project "is" (design, architecture, ontology)
* Steps 3-6: procuring and allocating resources to complete it (economics, management, politics)
It is tempting to decouple the two phases, and as a technologist focusing solely on architecture while leaving the economics up to leadership. However, social factors (the real people involved with the project) are an integral part of actually getting anything done, so I agree with the author's premise that the whole process should be viewed holistically (and ideally run by one technologist).
Answer: everything else. Anything that the planners and executives didn't consider ahead of time. Other aspects of the work: code quality, product design, accessibility, performance, robustness, edge-case-handling... Team learning, culture and satisfaction. At the limit, the team will compromise on anything that you don't need to claim a feature is "done" with a reasonably straight face.
It's simply not a good system even when the estimates work out reasonably well—and, empirically, they usually don't.
To be fair, this isn't entirely the fault of estimates in general or this estimation approach in particular; I believe those contribute, but it's primarily a reflection of how the company and culture are structured. If you're already in a system like that and you can't change it, trying to do estimates well might be the best option forwards, but only because you're already in a corner.
If the relevant data aren't tracked or stored anywhere, that kind of tells you how serious the org is about making accurate estimates.
"Based on first principles in software engineering and a comprehensive set of matching tools and techniques, Löwy’s methodology integrates system design and project design. First, he describes the primary area where many software architects fail and shows how to decompose a system into smaller building blocks or services, based on volatility. Next, he shows how to flow an effective project design from the system design; how to accurately calculate the project duration, cost, and risk; and how to devise multiple execution options."
Stressful, but I try to have fun. And extremely satisfying when things work out!
If you have a very good, detailed spec, you have already done much of the work to make the implementation easy.
If the spec is high-level and "fuzzy" it leaves the work of "resolving the spec" to the programmer.
So estimating the time it takes to code a system depends on the quality of the spec, and is therefore difficult if there is no standard for how detailed the spec should be.
I'd say leaving the overall roadmap (which is all this produces, at the end of the day, if you ignore the estimation piece) fuzzy and allowing the team to work that out with users/subject matter experts is the right approach, imo.
Software Project Estimumption (or Assumptimation, the professional community is split)
Software Requirements GatherWhims
Software Requirements Analysthetics
Unit Test Coveroverage
Indeed. And they’ll have been built on lies, damned lies and wishes.
Which is a different motivator to actually getting it done.
They dive straight into debunking #noestimates but never get to the fact that no one ever really knows what they want.
Other people in other fields are held accountable for deadlines because their work does not completely change and is not severely under-specified. If it is, then they are also just guessing.
I've had either the luck or the misfortune to "flip a switch" from two decades of being a techie/architect to being a mid-manager, on basically a specific date rather than gradually over the years, due to a project's needs; and it's like that B&W picture of two faces and a vase in the middle. Both perspectives are true, even if opposing and contradictory.
A good business lead will understand the software engineer's perspective - even if it's not their primary view of the world, they can squint and catch a glimpse as needed. Likewise for good software leads.
Obstinate unwillingness to see or ascribe merit to other perspectives puts a ceiling on everybody's progress.
Of course it's a guess. The question is how to make more accurate guesses.
I think the question should be about how to maximize the net return on time dedicated to development, which may mean spending less time on (and investing less reliance on) estimates rather than expending unbounded effort improving the quality of estimates.
I understand the numbers are given as an example, but I think the problem is that the scale of the task hardly justifies the complexity of the planning, or using "teams" and "weeks" as units instead of "developers" and "hours/days".
The article clearly points out discount codes as an example for a feature that could be moved out of the initial spec. And the spec itself is clearly missing various features necessary for the app to be even remotely usable.
This is why I don't bother with estimates any more - not because I think it's impossible, or even necessarily too hard, but because I've observed (consistently over a 30 year career) that it's pointless. Even if you could estimate with perfect precision exactly how long a software task was going to take, they would just push back, say, "that's too long" and argue with you until you told them what they wanted to hear.
Around this point the penny finally dropped that software estimation is a political process, not a technical process.
I was lucky in that I was dealing (in both cases where I've run similar flows to this) with above-board exec teams who wanted the best quality information I could give them - even assuming that estimates are just assumptions - even if it meant having some tough conversations about scope or headcount.
Good managers / business leads / execs / champions CAN be reasoned with, as long as you find a common language, think about and understand their priorities, and provide alternatives that meet their underlying goals (all of which frequently falls on the presenter). E.g. in your situation, the unspoken expectation may have been to cut scope, increase resourcing, or find another way to meet the deadline, rather than just changing the estimates; or something completely different.
Occasionally, though, you're stuck, as you say, between other people's indecipherable politics. In such situations I find I'm most comfortable speaking the plainest truth and working hard, openly and explicitly, to understand, ask about, and bring to the surface everybody's actual critical goals.
I can only speak from my own experience, but every single web application I've worked on has had a wildly different structure than the others and the only consistent thing between them has been endpoint routing mechanisms.
If the business is a relatively generic e-commerce store, it should usually not be building bespoke e-commerce software. Unless, of course, there is some technology feature that will be your competitive advantage/differentiator. But let's be honest, that's pretty damn rare in the space of e-commerce stores.
I'm mostly just saying if the author is able to sell what amounts to half a broken Woocommerce installation for half a million dollars (assuming it's at least a team of two billed at $20/hr), I must be in the wrong market.
I tried to think up an accessible example that didn't require too much context on the part of the reader, so obviously, as you correctly point out, all the numbers are made up, and I'm trying to use it solely to demonstrate the workflow :)
Well... have you actually applied this process successfully? If so, wouldn't you have some actual numbers to point to from a past project? Names and details changed a bit to protect the innocent, of course.
The problem with the real-world examples is the business domain, which was hyper-complex, and the specific "pieces" of work I described wouldn't have been easy to grasp for anyone not familiar with the esoteric side of fintech that the project took place in.
So I went with a simpler, albeit contrived and more accessible example.