The things that make a good software developer are the things that make any professional good: learning is good, creativity (the hard-to-measure stuff) is good, lack of consensus and uniqueness of thinking are standard human problems, and requirements are always hard and always suck.
Software development isn't different. It is just more loudly voiced on the internet because programmers have the most voice on the internet (by virtue of building the damn thing).
I'll add a bit more to your last statement about the internet and programmers: these bloggers have usually experienced the pain of software development in general, but they rarely offer constructive criticism (like this article does); instead they treat the symptoms (e.g., coming up with a new process, new tools, or new whatever).
The basic problem is still there: people (including themselves) are the problem.
Sort of the blind leading the blind.
Software Development is not too much different than gadget/hardware development. Just ask Nokia...
Most people who call themselves experts at something usually just read a couple of articles more than the next guy.
These days, you just have to write a blog entry with a slightly more authoritative tone than the next guy.
One of your most vocal developers reads a blog or an article, and tomorrow he'll implement the new core threading part of the system because the idea was just damn good, and suddenly y'all lose customers :) and he'll get the pat on the back.
That particular bullet point is perhaps the most disturbing flaw in the article, I might add. All the projects that I've seen go off the rails are the results of -poor- requirements gathering.
Granted, poor requirements gathering seems to be the rule more than the exception, but I have worked on plenty of projects with excellent requirements that went extremely well, even with the usual set of unexpected scope changes.
My experience is that if you've got multidisciplinary teams populated with reasonable people who have a good grasp on real world constraints (usually a delicate balance of compromises between scope, quality, time and money), all of his four points break down.
On the measurement point: while I'm sure some of the more clueless companies use arcane and pointless measurement systems, I haven't seen any half-decent dev shop use the measurement practices he talks about.
Really, though, we need some kind of measurement or metrics. Otherwise, how do we know we're progressing, improving, or at least not sucking more?
I agree that metrics are hard to do well and easy to game. That's why you shouldn't set specific numbers or targets, and shouldn't tie metrics to individual employee performance. Tie them to the team's performance instead (still not perfect, but I'm sure it's better).
One metric that might make sense, if your shop is doing Agile (code review, code coverage, unit tests, # of bugs, etc.), is to make sure those numbers don't go down from iteration to iteration. Make sure they either stay the same or go up (in the case of improvements or new development). If the numbers go down for two consecutive iterations, someone or some team doesn't care about quality. Track improvements, not specific numbers.
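The trend check described above can be sketched in a few lines. This is a minimal illustration, not anyone's real tooling; the metric name and sample values are made up for the example:

```python
# Sketch of the "no two consecutive declines" rule discussed above.
# The metric (test coverage %) and the numbers are hypothetical.

def declining_streak(values, streak=2):
    """Return True if the metric dropped for `streak` consecutive iterations."""
    drops = 0
    for prev, curr in zip(values, values[1:]):
        drops = drops + 1 if curr < prev else 0
        if drops >= streak:
            return True
    return False

# Per-iteration coverage readings: two drops in a row should raise a flag.
coverage = [78.0, 80.5, 79.2, 77.8]
print(declining_streak(coverage))  # True
```

The point isn't the specific threshold; it's that the check compares each iteration to the last, so teams are judged on direction of travel rather than on hitting an absolute number they can game.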
Side note: I would suggest people read the Standish Group Chaos Report to understand how bad our industry was and how far we have improved over the years.
The "one programmer can be 1000 times more productive than another programmer" thing is true... but mostly not true.
The reason for that is that some programmers are active liabilities; there was, for instance, the guy who was "working at home" on two different projects. Whenever I asked him how my project was doing, he'd say he was too busy doing the other one. I'm sure he said the same thing to the manager of the other project.
Some guys spend a month developing a system that goes into production and then takes somebody else six months to make correct (work that your firm has to do ~unpaid~).
Once you get to the range of programmers who actually make a contribution, the differences aren't so clear. I mean, what is "productivity?"
If I spend two years developing an application that is well implemented but unwanted by the marketplace, the value of my labor is $0. On the other hand, some kid might write a 2500-line Perl script in a month that's critical to a $50G hedge fund, and he might claim his code is worth millions.
I'll agree with him that consensus-building can be a real challenge, but "requirements gathering" can't be dismissed -- projects frequently 'fail' in the requirements gathering phase, but tragically, the failure is often detected much later, after many many man-months have been wasted.
I think most developers who feel "highly productive" are in a place where requirements gathering is easy, sometimes so easy as to be almost imperceptible; if you're developing, say, an implementation of "MapReduce," your programmer's view of the product matches the view that the end users, other programmers, would have.
On the other hand, if you're making a product aimed at, say, salespeople, you need to work with a marketing team that sees things the way end users do. As he points out, it's a challenge to get consensus in software projects -- especially when dealing with stakeholders who don't understand anything about how the system works: who can't intuitively tell "this ticket can be resolved in 15 minutes" from "this ticket will take 2-3 years."
btw, it's Buffett. A buffet is the all-you-can-eat luncheon etc., pronounced à la française.
I.e., if I say "I'm really, really happy right now," a trained psychologist, or anyone looking at it objectively, might note that there's some issue with happiness I'm going through. Similarly, all of the points in this blog post are usually introduced via the very opposite stance (especially the latter two -- i.e., requirements are the most important thing, except maybe for measurement). But you can actually learn things from people if you look very objectively at what they're talking about, and less at the direction of their conclusions.
In the real world, average developers tend to more successfully sell themselves into roles developing new systems using the latest programming technologies, while good developers are most useful in roles maintaining existing systems, i.e. cleaning up the crap left by the "average" developers who've moved on to another job.
So the measured order of magnitude tends to be low in practice.