I don’t think this is anywhere close to an accurate account of history.
If you look at the history of the “waterfall model,” you find Royce (1970), and if you dig back farther you find Benington (1956). From their writings, it sounds like people understood how bad the waterfall model was, even back then. The waterfall model primarily shows up as an example of what to avoid.
> Developers must ship, ship, and ship, but, sir, please don’t bother them; let them cook!
My explanation for this is that corporations are just really bad at building incentives for long-term thinking. The developers who ship, ship, and ship get promoted and move up, and now they’re part of the leadership culture at the company. The right incentives are not in place because the right incentives are too difficult—we want nice, easy-to-measure metrics to judge employee performance. Shipping features is a nice metric, and if your features move the needle on other metrics (engagement), then so much the better. You get retained, you get promoted, because you gave management a nice little present full of data on why you’re a good employee, wrapped up with a bow.
The reason that managers want nice metrics is because they want to avoid being blamed. Managers want to avoid being blamed for the wrong decision more than they want to make the right decision.
The way to counteract it is to cultivate trust. With trust, you can work on other things besides avoiding blame. When you’re working on other things besides avoiding blame, you can take the long-term view. When you take the long-term view, you can advocate for employees that fix problems and give them resources.
This kind of planning is _necessary_ for traditional engineering projects where certain steps of the process can take months or years and require scheduling of many resources.
Purely software projects usually don't need that because if your tools are good you can ship a completely new version in minutes to hours instead of months to years. You also get very easy do-overs if there are mistakes which you don't get if you're building a bridge or a factory.
I'm not saying agile has no place, just that many projects would benefit from a little bit more foresight.
Also, I think successful agile projects are often those where the long term planning is there, even if it is not obvious or talked about a lot.
"If the physical properties of concrete changed every 6 months, then structural engineering practices would look very different."
But that's the world most software is developed and lives in.
Waterfall in software meant those fundamental parts were continuously changing. You'd be fine-tuning the irrigation and then need to tear down the whole bridge because some new requirements came up. That's where the anti-Gantt sentiment and the "bad waterfall" reputation come from: the software world as a whole moves way quicker, and requirements are far more malleable and subjective than those for building a bridge.
People keep repeating that, without any empirical evidence at all. If it were only software developers justifying a body of knowledge, it would be ok, but all kinds of people keep repeating that.
Yet, that kind of planning predictably fails every single time in every single engineering practice.
I can guarantee you: if an engineer designs the structure of a bridge, sends it to a crew to construct, and goes away to test only after it's done, the bridge will fall down before it's even built.
Not all that long ago, software was distributed on physical media such as floppy discs and CD-ROMs.
This was my thought also. I've been building software systems since the early '80s and I've never been part of a project that looked like waterfall, but I've also been in small to midsize companies, not large ones.
The approach on projects has been somewhat consistent for my entire career which is just a pragmatic look at the project and approach the problem based on the nature of the specific project. There are some projects or parts of projects that really require up front design and planning (e.g. when coordinating changes in multiple external systems that must all play nicely together) and some other parts of projects where the best approach is to iterate (e.g. we've never done something like this before and can't use our experience to guide us very much so we need to try some stuff). Projects are typically a combination of all of the above and more.
When I originally read the agile manifesto my thought was "ugh, ya, this is all pretty normal common sense, sounds like these guys must be working in some large corporations where the bureaucracy has taken over and they are trying to push back."
"Long-term thinking"--oftentimes, you ain't gonna need it. "Engineering" also rhymes with "bike shedding".
Most corporations are really bad at long-term anything. CEOs want every quarter to be more profitable than the last, mostly everything else is irrelevant.
Hordes of $300k developers trying to become $500k developers who are trying to become $1M developers.
Even when I joined the team, we never discussed trust and what we can do to be a more trustworthy team. It was always centered around how can we be more efficient in our deliverables.
This is the video I usually reference: https://www.youtube.com/watch?v=kJdXjtSnZTI
I don’t know; it feels like customers buy our product because it has features, but who doesn’t like a good K8s stack.
The market never stops producing those developers who spend their lives refactoring and never shipping anything, nor taking responsibility for the fact that their pile of technologies SLOWS DOWN the other developers while never solving the problems it was implemented for.
It’s not a manager problem. Developers don’t like taking responsibility for their mistakes.
How much risk (to delivery) is left out there? What can we do to reduce that risk?
It's a known thing, but seems rarely used in software shops. https://en.m.wikipedia.org/wiki/Spiral_model
Who cares if 97% of the code is complete, but the remaining 3% is a critical feature, with unclear requirements, that's never been prototyped, and has no test data?
I’m glad you agree on both of these points. Like I said, it’s a problem with building the right incentives, and a problem with people who are trying to avoid blame.
A developer with a good, long-term view has a light touch. On one side, you have reckless developers who ship out buggy products that cause problems for operations and customer trust. On the other side, you have fastidious developers who just keep refactoring everything and never ship features.
Sometimes, you find a team that can do both. When I’ve seen this, the team has been made of both types of developers. A mix of skill sets, and a high level of trust that they won’t be blamed for failures (outages, missed deadlines). Your developers who move fast and break things will take the time to write a few tests and break things less. Your fastidious developers will relax their standards a little bit and push out code faster, knowing that their contributions are still valued.
The market absolutely produces those developers, it's just that their salaries start at 7 figures and go up from there.
For example, I remember when I was told patterns were important, the time I put in memorizing, implementing and reviewing all these patterns, reading GoF, etc... Now, I'd just like something that works. Patterns are great for recognizing how code is intended to function, but tests are still the only way to verify the implementation of nontrivial code. And the smaller each contained unit of logic is, the more testable. So modularity and sensible, predictable interfaces are all you need.
I used to overengineer a lot more in my younger days. Now I'm just smarter about writing extensible code up front and improving it as I go.
That said, I think the GoF stuff is mostly “stuff that works”, but…
1. A lot of it is specific to the language, or certain languages, and
2. Most of the patterns are useful only rarely.
But there are exceptions. We’re just blind to them. We use the Command pattern for UI programs because it’s a sane way to implement undo/redo. We use the Factory pattern all over the place. YMMV.
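A minimal sketch of the undo/redo use of Command mentioned above (hypothetical names; a simple character buffer stands in for a real document):

```python
# Command pattern sketch for undo/redo (illustrative, not a real editor).
class InsertText:
    """A command that knows both how to perform and how to undo one edit."""
    def __init__(self, buffer, pos, text):
        self.buffer, self.pos, self.text = buffer, pos, text

    def execute(self):
        # Insert the text at the given position.
        self.buffer[self.pos:self.pos] = self.text

    def undo(self):
        # Remove exactly what execute() inserted.
        del self.buffer[self.pos:self.pos + len(self.text)]


class History:
    """Tracks executed commands so they can be undone in LIFO order."""
    def __init__(self):
        self.done = []

    def run(self, cmd):
        cmd.execute()
        self.done.append(cmd)

    def undo(self):
        if self.done:
            self.done.pop().undo()


buf = list("world")
hist = History()
hist.run(InsertText(buf, 0, "hello "))
print("".join(buf))  # hello world
hist.undo()
print("".join(buf))  # world
```

The point is that each edit carries its own inverse, so undo/redo falls out of keeping a stack of command objects rather than diffing document states.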
The mistake these young developers make is that they use “does this use patterns?” as a proxy for “is this good code?”
But who tests your tests?
Utility shafts and traffic lights are not particularly complex either, and are built using off-the-shelf parts. Nonetheless they make up the daily work of tens of thousands of civil engineers, and what they are doing is engineering.
It's engineering because at its core it is a practice of tradeoffs. Building the strongest possible bridge for the sake of it is research or maybe art, building a strong enough bridge within budget is engineering.
Applying known solutions to familiar problems is engineering, whether the project is a retention basin or a payment system.
The problem is the link between plan and work. As you work, you learn. That is the primary activity of software development. Learning is a menace to planning. As you learn, you have to replan, but your budget was based on the original project plan.
You can talk about engineering and culture and whatever you want, but if you're working for money, the problem remains of connecting the work to money and the money to the work.
I'm reminded of the Oxygen Catastrophe - https://en.wikipedia.org/wiki/Great_Oxidation_Event - we need oxygen to live, but it also kills.
However, they generally rely on the practitioner being both skilled, and experienced.
Since the tech industry is obsessed with hiring inexperienced, minimally-skilled devs, it's unlikely to end well.
At its best, it's not only an operational but also an architectural older sibling to immature software devs.
Devs get to go fast; SRE plays the adult and keeps everything on the rails.
For example, with startups the time to market, pivots, and not owning your decisions long term (which often happens) leads people to move fast and not consider consequences.
It's about goals and following the money. If a bridge fails, there's significant legal liability, guilt over lost lives, and more. If software doesn't scale, it can be rewritten; if a company is hacked and customer info gets out, there's a marketing black eye. It's different.
I say this as a classically trained engineer who thinks more engineering needs to be layered into software development. We need to justify it to the business.
So even if some decisions cause a business to need to hire additional people, just to keep that bad decision alive, the business usually does not stop and take a step back, to think "Wait a moment, if we hadn't done that, could we have avoided all this time spent on XYZ?" and then correct itself.
I do not understand why the CrowdStrike change was not tested appropriately, or why this problem was not found in testing. My company has an automated test suite that takes some time (several hours) to run, along with manual tests, before any software is released. If it's a risky change, it needs to be reviewed by another developer. If it is an emergency production change, the testing is much lighter; however, the change is still reviewed by an experienced developer and still manually tested by a tester. The regression tests are not run...
After reading I had to scroll back to the title to see if maybe I clicked on a different article.
I'm not sure how companies battle this kind of fear. It looks like Amazon's Working Backwards, Netflix's Freedom and Responsibility, and Uber's Let Builders Build can more or less counter such fear, but ultimately it's about finding the right people who have a good sense of product and who strive to make progress.
For example, release the change to some group of customers, say 5,000; if that's OK, release it to another, larger group of customers.
There was no planning for a failed update. There should have been a mechanism for automatic rollback if problems were encountered. To me this is 101, very basic.
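The staged-release-with-rollback idea above can be sketched in a few lines. Everything here is hypothetical: `deploy`, `rollback`, and `check_health` are stand-ins for whatever real deployment tooling and telemetry a shop actually has.

```python
# Hypothetical sketch of a staged rollout with automatic rollback.
# deploy/rollback/check_health are injected stand-ins for real tooling.

def staged_rollout(customers, wave_sizes, deploy, rollback, check_health):
    """Release to progressively larger waves; roll everything back on failure.

    Returns True if all waves deployed healthily, False after a rollback.
    """
    deployed = []
    start = 0
    for size in wave_sizes:
        wave = customers[start:start + size]
        for c in wave:
            deploy(c)
            deployed.append(c)
        if not check_health(wave):
            # Something broke: undo every deploy so far and stop.
            for c in reversed(deployed):
                rollback(c)
            return False
        start += size
    return True
```

With `wave_sizes = [5_000, 50_000, ...]` this matches the comment above: a small canary wave first, and an automatic path back to the last good state if health checks fail.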
In my experience, a lot of ills in software come from the working set of facts around a particular problem becoming too large for one person to hold.
Then you get Healthcare.gov v1 -- everyone proceeds on incorrect assumptions about what their partners are doing/building, and the resulting system is broken.
As a salve to that problem, napkin-math upper/lower bounds estimation can be incredibly useful.
Especially because in system design "the exact number" is usually less important than "the largest/smallest likely number" and "rate of growth/reduction".
Simplifying things that aren't useful to know in detail (e.g. the exact numbers for author's users) leaves time/mental space for things that are (e.g. if it makes sense to outsource a particular component to SaaS).
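As a concrete illustration of bounds over exact numbers, here is a tiny helper; all the figures are made-up assumptions chosen only to show the shape of the estimate:

```python
# Napkin-math sketch: bound an answer instead of computing "the exact number".
# All inputs below are invented, illustrative assumptions.

def bounds(users, events_per_user, bytes_per_event):
    """Each argument is a (low, high) pair; returns (low, high) total bytes/day."""
    lo = users[0] * events_per_user[0] * bytes_per_event[0]
    hi = users[1] * events_per_user[1] * bytes_per_event[1]
    return lo, hi

# Say 10k-100k users, 50-500 events/day each, 200-2000 bytes per event.
lo, hi = bounds((10_000, 100_000), (50, 500), (200, 2_000))
print(f"{lo / 1e9:.3f} GB to {hi / 1e9:.1f} GB per day")  # 0.100 GB to 100.0 GB per day
```

Even a three-orders-of-magnitude spread like this answers the design question: both ends fit on one machine's disk for a while, so this alone doesn't justify a distributed storage tier yet.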
It's only in the '00s that any sort of methodology (even if it's agile) starts to get wider recognition, and academic languages like Haskell spark interest. The '00s were also the peak era for architecture astronauts; for example, Java EE and C++ Boost were almost entirely '00s products.
The counterreaction to that was the rise of low-ceremony stuff like Ruby on Rails, or HTML5 toning down the W3C's overwrought stuff (XHTML), and now the pendulum has been swinging back, with TypeScript and Rust as examples.
I've seen a lot of projects that try to 'make it simple', with engineers going around saying "KISS", hyper-focused on simplifying everything. But they failed.
They only realize later that by simplifying, they have just shoved the complexity into some corner, and never dealt with it head on, and it just corrupts everything.
It's like cleaning house, and you just shove it all in the closet. Does it mean you are really neat? Is your life really simple?
It's like squeezing a water balloon. The complexity is going to bubble out and break somewhere. But you aren't in control of how it breaks.
So, just acknowledge that not everything can be 'simple' and deal with complexity.
As a result, no matter your skills, if you do not know the world you're working in, it's only by chance that you design something good for that world. Developers MUST KNOW the big picture.
The dominance of continuous deployment and auto-update culture changed it from ship-building into something more like an exquisite corpse or papier mache project, where people just constantly tack new things on, pull old things off, and rush to make repairs as haphazardly as can be gotten away with.
In a giant software market, they both have their place and both bring upsides/downsides, but they're radically different ways of making software and some of us strongly prefer one approach to the other.
The difference between then and now is the MBA-ization of tech. The MBA cancer infests and does the only thing it can do: create spreadsheets to force people to track time (points) and other stupid metrics. These can then be boiled down to near meaninglessness but make the non-technicals feel like they have control.
The result is people like me who want to do good engineering can't. You budgeted 40 points for a 70 point project. Congratulations, I have a gun to my head where I can't write good code so I end up fixing it when it eventually causes some level of SEV. That is, if I'm lucky. If I'm not lucky I have to bandaid the bandaid and hope to god I can run out my tenure so I either get promoted out of dealing with it or quit. Only to do it once more at another company.
If tech companies were run by engineering, like they used to be, things would not be this way. Non-technical, garbage, McKinsey-level MBA consultants are the problem. Second to them is completely incompetent project management. Typically these two groups intersect on more than 80% of traits.
> If tech companies were run by engineering, like they used to be, things would not be this way. Non-technical, garbage, McKinsey-level MBA consultants are the problem. Second to them is completely incompetent project management. Typically these two groups intersect on more than 80% of traits.
I'm not altogether convinced this is the solution either. Google, famously, is an incredibly engineering-driven company, and these days it can't keep a product around to save its life. Engineers aren't necessarily great at product management. I'm not sure if it's MBAs at Google sending Reader to the graveyard, or perhaps engineering management reading the usage tea leaves and making a cold, calculated numerical decision rather than considering other customer factors.
This title should be:
"Fear of over-engineering has killed software engineering altogether"
Though I've understood this to be true, it's not a problem unique to that era.
What was perhaps more unique to that era was that there was less room for bad software, and software businesses were more directly impacted by bad software.
I would argue that there's little to no objective evidence that the industry was actually made better by Agile-inspired methodologies. If anything, methodologies served as a means to distribute blame and, incidentally, allow bad software to continue to be written.
This phenomenon probably wouldn't have ended well if it weren't for hardware picking up the slack and the ever decreasing standards users have for their software. Today, everyone I know expects the apps and websites they use to be broken in some way. I know that every single effing day I run into bugs I consider totally unacceptable and baffling. No, I'm not making that up. I'm serious when I say that I run into bad software every day. Yet we've normalized bad software, which begs the question of what these artificial methodologies like SCrUM are actually for.
> To make things worse, engineers took Donald Knuth’s quote “Premature optimization is the root of all evil” and conveniently reinterpreted it as “just ship whatever and fix it later… or not."
People should stop listening to people like Knuth and "Uncle" Bob Martin as gods of programming bestowing commandments unto us.
> I do think the pendulum has gone too far, and there is a sweet spot of engineering practices that are actually very useful. This is the realm of Napkin Math and Fermi Problems.
> Developers must ship, ship, and ship, but, sir, please don’t bother them; let them cook!
I don't think it's a pendulum. This phenomenon is real, but I've just as often seen teams of developers ruled by inner circles of "geniuses" who either never ship anything that valuable themselves or only ship horribly convoluted code meant to "support" the rest of the peon developers.
These issues are less a reaction to something someone like Knuth said and more to do with businesses and teams that make software failing to understand what competence in software engineering actually means. Sure, there's subjectivity to how competence is defined in that domain, but I'll just say that I don't consider either YOLO or geniuses to be a part of that.
> Fermi problems and Napkin Math [...]
I honestly don't get what the author is trying to achieve with the rest of the article. Perhaps that engineers trying to do actual engineering should use math to approach problems? I guess that makes sense as a response to YOLO programming, but effectively just telling people to not YOLO it really doesn't address the organizational problems that prevent actual competent engineering from taking place. People didn't forget to use math; they're disincentivized from doing so because most companies reward "shipping" and big egos.
My take as an engineer (not a PE, but have the degree) is that engineering mindset is quite a bit different than computer science mindset, which is quite a bit different than technician mindset.
Each has their strengths and weaknesses. Engineering is pragmatically applying science. Computer science has more of a theoretical bent to it - less pragmatic. Technicians tend to jump in and get stuff done.
Especially for major work, I'll do paper designs and models. The engineers tend to get it, but the computer scientists tend to argue about optimal or theoretical cases, while the technicians are already banging something out that may or may not solve the problem.
More recently (past 5-10 years), I've seen a notable lack of understanding from new programmers about how to do a design. I'm currently watching a sibling team code themselves into a corner due to lack of adequate design. They'll figure it out in a few months, but they're "making progress" now and have no time to waste.