Two years ago our team wanted to buy a small cluster (~300 cores, ~$50K). We talked directly to two good vendors (good recommendations from university partners) and came up with a fine machine and 2 bids for it. Sent recommendations to procurement.
Procurement put it out for bid, and a fly-by-night company undercut the bid by $10K... by noticing that procurement had not specified details of service level (that were in the bids we'd gotten and forwarded). Procurement, once it goes there, is a true black box. No communication, no understanding.
Five months later, we were basically delivered 2 pallets of unassembled parts and no instructions. Believe me, we spent 3-4x as much in labor as the $10K savings to get it working, and it's been plagued with issues that would have been under the onsite service warranties for the better companies.
The biggest irony is: I firmly believe that procurement acts this way not because the government is fundamentally incompetent, but because the Public, and thus Congress, BELIEVES we are incompetent, so puts so many levels of "check" bureaucracy in place that the people who know what they want can't participate directly in getting it.
sounds like a loophole one can procure a truck through :)
There are almost 3 million federal workers - many more if you include people who work on government contracts. The Federal Government is by far the biggest enterprise in the US by both employees and revenue.
With such a large organization, there are undoubtedly large swaths of both incompetence and competence.
The challenge with any large organization is that the rules are there to rein in the bad people, but are equally enforced on the good. (For example, most people won't abuse their company's T&E policy, but some will, so everyone has to be treated as suspect.)
Honestly I'm not sure what a good solution would look like, but I don't think it's as simple as "trust us."
Didn't mean to imply that any sufficiently large organization shouldn't have an audit trail and reasonable accountability!
Maybe not, but "treat me as a liar" doesn't seem viable either.
Any time you take an extra measurement you introduce a chance for that measurement to be in error. If you make it so that any single measurement is a show-stopper, then every time you add an extra check you make things a little bit worse, right up to the point where your false positive rate overwhelms your data.
Even with an extremely low error rate, the number of best deals in the world for a given thing is 1. If the goal of your system is to get that, then making any measuring system a single point of failure that tests once is a death sentence. If the number of good and honest software houses that will respond to your call is low, then your false positive rate is going to be extremely high... and again, having such a system is going to be ill-advised.
Especially if the system itself suffers from not having the people who will actually use it, and the people who understand how such systems should be created and run, making at least part of the decisions.
If there were a greater degree of feedback between procurement, requesters, and providers, with the ability to modify the plan, then you could potentially check your work - reducing the consequences of such failures. Not "absolute trust," but at least "hear my side of the story, maybe you've just misunderstood something."
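To put rough numbers on the compounding-checks point (a toy sketch, not a model of any real process - the 5% per-check false-positive rate and the check counts are made-up assumptions):

    # Toy model: a good, honest bid must clear every check in the chain.
    # Each check independently rejects a good bid with probability fp
    # (a false positive), so adding checks compounds the error.

    def p_good_bid_survives(fp, n_checks):
        """Probability an honest, well-specified bid clears all n checks."""
        return (1 - fp) ** n_checks

    for n in (1, 5, 10, 20):
        print(f"{n:2d} checks: {p_good_bid_survives(0.05, n):.0%} of good bids survive")

    # 1 check: 95%, 5 checks: 77%, 10 checks: 60%, 20 checks: 36%.

If only a handful of good vendors respond in the first place, a long chain of single-point-of-failure checks can plausibly reject every one of them.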
I agree -- this is not a simple challenge. But I don't think stultifying bureaucracy is the answer, either. There must be some government out there, somewhere in the world, that has sorted out an efficient, effective procurement process.
This might be a good starting point:
http://www.amazon.com/Liars-Outliers-Enabling-Society-Thrive...
Without them, you could reward your friends with fat government contracts, regardless of what's in the public's interest.
Former federal IT contractor here.
The procurement rules were designed for that, yes. But in real world scenarios, those rules effectively do exactly the opposite. Since there are so many hoops for potential vendors to jump through, only the most established players get to bid on most contracts. And in my experience corruption and cronyism is still alive and well in federal IT contracting.
It's really all about optics rather than reasonable checks - no department wants to be the one that Congress targets, so especially in poisonous political environments the "checks" are significantly more expensive IMO than the actual "waste, fraud, and abuse" they guard against.
Except this happens anyway. Much like patents, people just become better at drafting. What is currently done is not an effective mechanism for stopping cronyism and corruption at all. In fact, it makes it easier in a lot of cases, because it provides plausible deniability (it's not that we gave it to our friend, it's that you didn't meet the requirements!).
Those rules are there because a malicious worker can cause a huge amount of damage. They are a pain: one entity I worked with once spent about $15K in people time contracting for $100 worth of SSL certificates, in a process that took more than a year (so, no certificates for the site during that period), and we were forbidden from contracting the service for more than a year... and the contractor was another governmental entity. The rules are maddening, but they are necessary for a democracy.
The problem is that governmental IT is out of place. The government will never be competent in contracting software development - the only known tool that works in keeping government contracts honest is auctioning, and agile is simply not compatible with auctioning. The only possible way out is by doing IT in-house.
I've been part of the procurement process a few times from the vendor side, and the layers of nested black boxes make solving procurement issues virtually impossible; once the procurement is made, massive overruns are almost inevitable.
While I think that's likely, I'm a little bit hesitant to claim things are clear when we can't observe the alternative.
Here is the actual, original source for the Waterfall approach, first published in 1970:
http://leadinganswers.typepad.com/leading_answers/files/orig...
If people would just bother to scroll past the first couple of pages, they would notice that the approach already includes iteration cycles between steps.
In other words, this whole "agile vs. waterfall" debate which has wasted countless hours of human effort is based on a complete misunderstanding of what "waterfall" is in the first place. No one ever seriously proposed a model without iteration. It simply never existed in the first place!
Isn't this the essence of the critique, though? It's not the lack of iteration, but the logic of the spec-formation.
To put this another way, you are right that it has nothing to do with the waterfall model per se. But it has everything to do with accepting the same set of behavioural assumptions. Namely, that the people who spec the model have perfect foresight regarding the spec itself.
What needs to be accepted is the "incomplete spec". The problem with this is then the hold-up problem: you get held to ransom to fill in the incompletions. So what is really needed is the capability to execute this more in-house. This would prevent the re-negotiation of the economics (the hold-up), because the project manager would just execute properly, directing resources (already paid for) by fiat rather than by re-negotiating against a modified spec.
One problem with this is accountability. There would need to be more accountability, because execution of the incomplete spec will not be the same as farming responsibility for the spec out to a third party vendor.
Project managers need to get used to the idea of working intensely with a development team rather than asking for a specific thing and then walking away.
It would be lovely if vendors would bid based on their experience, and be compensated for it, on a time-and-materials basis. The more experienced and better you prove to be, the more we're willing to pay for a better outcome, sooner.
Except that world is rife with bait and switch. And writing the code that delivers the spec is just a small part of the picture. Companies are terrible at specifying the non-functionals, delivery process expectations, operational requirements, supportability requirements, etc. When they do get it right, the costs go up, because many places quote without any concept of these things.
The sad reality is that the people who get asked to quote for these things in the software world are generally clueless about the actual domain, out of touch, and wildly wrong most of the time. And the people who specify these things are often barely any better. And let's face it, software developers are also terrible at quoting times to do a task, though that can be mitigated with a lot of experience of that task and the code base to do it on.
In my jaundiced view, none of this shit works right, and we'll be much the better off the sooner we all stop pretending that buying bags of magic beans will do anything other than make a bunch of cynical grifters rich and their naïve dupes miserable.
Quoting Dr. Royce:
> Management of software is simply impossible without a very high degree of documentation.
How much documentation?
> In order to procure a 5 million dollar hardware device, I would expect that a 30 page specification would provide adequate detail to control procurement. In order to procure 5 million dollars of software I would estimate a 1500 page specification is about right in order to achieve comparable control.
5 million 1970 dollars is equivalent to about 30 million 2013 dollars, so for a $30 million project, his advice is:
1. Write a huge stack of documentation. Literally stop everything until that documentation is written.
2. Get feedback from reality and from your stakeholders exactly once.
Sure, having one opportunity to act on feedback is better than never having any, but it doesn't fundamentally change the process or the risks involved.
What the Agile folks realized is that reliable software development is an experimental process. If you only do one experiment, you're still missing that point.
The answer was any time you are developing software for the government! The professor specifically mentioned it in lecture once, so that alone was enough for full credit on the question (other reasonable answers were fine too).
Later I TA'ed the class twice and made sure to eliminate these pure lecture-attendance-check questions.
My theory - agile/iterative development rarely gets sold because we continue to believe we aren't susceptible to the planning fallacy.
while (true) {
    // Contractor working for a year
    Government: This isn't what we wanted!
    Contractor: We met all of the requirements... See all of the boxes are checked.
    Government: Well we want to change 1,2...n things.
    Contractor: Okay, Let's do a follow-on contract.
    Government: Okay, Here is the money; Go.
}

One of my requirements was that it was able to send text messages.
What I got was a ticked box, and an application that could also send text messages - to one number. Not a number I could choose. One number as in: one cell phone.
And it wasn't even my own cell phone.
Any market-based solution website has to be very agile and responsive [edit: to succeed at its goal], but the government can't be super responsive, and in many ways shouldn't be. The state spends all of our money and enforces mandatory decisions concerning our lives. The state shouldn't have the agile qualities needed to produce the beautifully flexible websites created by the private sector.
In general, I'd claim the state should certainly be smaller, but that it shouldn't be less bureaucratic, shouldn't be more like a corporation. Civil service is boring and bureaucratic by design; specifically, it was created to combat the "spoils system" that plagued the early American state [1] (though the prizes of the modern state eclipse what Tammany Hall etc. could have imagined). Modern corporations are agile by having a command structure that lets them quickly maximize profits - which is great if we believe the market system benefits everyone when operating properly. But a state with the ability to trample the fences of the ordinary market shouldn't also be given the ability to move quickly and agilely while doing so. Corporations have no internal limits to their "greed," but we citizens of democratic market capitalism assume that's OK, indeed desirable, as long as corporations face the strong external limits of markets and individual choice.
The current fashion for what could be called "state-enforced private consumption" is sold as giving us the best of all possible worlds, but in reality gives us the worst (i.e., the wealth of this society is being vacuumed out by a kind of public-private rent seeker limited by neither the traditional bounds of the democratic state nor the traditional bounds of the market).
Note: I'm not a conservative rooting against Obamacare. It seems like it was a terrible approach for achieving affordable healthcare, but I would still prefer that it succeeded rather than failed because, well, I and many friends need it.
This is provably false by visiting any number of old, large company websites, especially in healthcare or banking. In addition, the government isn't in this kind of business; it outsources almost everything to "free market" companies, who extract as much margin as they and their lobbyists can get away with.
Complexity has much more to do with size of and number of evolved entities than whether they are profit motivated or not.
http://digital.cabinetoffice.gov.uk/
Aside from creating https://www.gov.uk/ which laid down a lot of principles on how to fulfil a government contract (as well as the foundations of what government websites should look like and how they should be developed), GDS is also looking at the problem of procurement.
The GDS team essentially are wrestling back from the big contractors the major contracts, breaking the work down into a large number of bitesize contracts and then farming them out to a wide variety of smaller vendors.
So instead of finding a Fujitsu/Siemens JV team, or an IBM Professional Services team, operating a £50m project, the plan is to offer 100 x £250k projects to a large number of smaller suppliers instead. Each project having a clearer purpose that is more able to be fulfilled.
Of course there are obvious overheads in managing so many projects, and of course some of these projects will fail. But... overall the savings will be such that the overheads are cheap, and a failed project will have a far smaller impact on a major programme than a failure does today.
http://www.ft.com/cms/s/2/794bbb56-1f8e-11e3-8861-00144feab7...
GDS is trying to in-source a lot of work that should be controlled by a central publishing and transaction design group, and contracting out the relatively boring task of following their rules. Good idea, as long as their architecture/design/program management review boards are well staffed and motivated.
I don't think they will stop these large failures. And many of these large failures do have clear up-front requirements that cannot be changed. Iterative waterfall is not so terrible.
I work on projects that are probably on par in terms of complexity. We typically only involve a handful of firms. And even then, coordinating them all is a challenge. I can't fathom making the process work with 50+ firms.
Maybe that number was hyperbole. I don't know. But if it's true, I shudder at the thought.
The public reason for this crap is that the system is supposed to be structured to make sure that accountability exists, favoritism is minimized, and employment is boosted (by spreading out the work). The reality is that the system is designed to maximize the enrichment of the lobbyist-connected owners of the contractors.
Obviously, when you have so many targets, you don't hit many. Maybe the Obamacare website was perfect in some secondary objectives.
But it also arises from a perfectly true premise: that big spending is a force on society, that it has secondary effects, and that maybe we can bend those effects in a political manner. The whole of Silicon Valley is living on federal actions.
It's all about organization, and trying to have just enough people to get the work done and nobody more, and the right people, and this has nothing to do with open or closed source.
It was a massive success -- everyone put in the part they did well -- kernel + userland + distribution = WIN.
The government, that is, HHS's CMS, took on the role of integrator, including integration testing (perhaps "Prime Contractor" as mentioned elsewhere). They aren't known for expertise in this (the Pentagon can do this with medium-sized weapons projects, which are a rather different field anyway), and ... really screwed up:
They and those above were late with specifications and requirements, kept changing them (7 major ones in the last 10 months per the NYT), were making changes in the week before launch, and when they did a simulation test of 200 simultaneous logins just before launch the modules locked up. As did the site shortly after its midnight launch.
Oh, yeah, three days after the launch CMS panicked and proposed to fire Quality Software Services Inc. (QSSI, a unit of United Health Group) and punt their identity backend system (based on an Oracle package that's known to work), but eventually decided that would take longer than QSSI getting it to work. Who knows, but that's another sign of CMS as the integrator failing hard while distracting both QSSI and CGI Federal.
Now, maybe it ended up being too many cooks because CMS didn't provide strong oversight and coordination, but....
I work as a contractor for the Australian government. I personally know of multiple project failures in the 10s of millions of AUD range and a few in the hundreds of millions. These stories don't even make the news.
I've worked at small companies, research institutes, universities, and now in government. I've not worked in big corporate, but I have heard that it is similar, although more efficient than government. Size means you get less feedback on what is really useful.
Government fundamentally lacks feedback on what really matters. In the US the department of health cannot be driven out of business by another department that does what is important 10% better. In private industry that discipline and feedback makes things work.
If you build a widget X and it isn't something that people want, you go under. That doesn't happen in government. If you build a donkey, but it's the donkey they paid for, it could be in service for 20 years.
It's hard to see how to make it all better. Perhaps trying to keep components small and having multiple groups build them and select the best might help. Then at least 2+ groups would have to compete to build a better system.
Thinking a little more about the situation, consider that the majority of startups fail. I would not be surprised, if one were to take a look at the success rates of internal projects in large companies, to learn that those are fairly low as well. So if the private sector is more efficient than the public sector in IT, the differences are probably more subtle than one would at first think. In both cases you have lots of capital being spent on projects that won't come to fruition. Perhaps the incentives in the private sector are a little better aligned towards a successful outcome.
For some time I've thought this was one of the primary attractions of offshoring: if you must maintain the pretense of developing new programs and systems, it's a cheaper way to inevitably fail....
I'm sure that system would blow up in some other way, however.
This is what the current government contractor system looks like.
> and citizens could choose which provider to use.
And how would we receive these choices? Balloting is a government function... Should we ask Diebold?
Also, I didn't explicitly say so but small companies are more efficient. If they are not, they tend to go insolvent. That's what is so good about them.
The way they work is pure waterfall project management. Project managers are gods and spend insane amounts of time in MS Office calculating hours for each task two years out. Then they bring on sub-par CGI programmers from India on L-1 visas to save on costs. Technology is the least of their concerns, since it's all about shipping code. The blame shouldn't fall on CGI alone, though; the government is at fault as well. A simple request for information would take about 4 business days to fulfill. Everything is slow, and the gov IT staff has no clue how to scale. Anyway, when I heard CGI won this project, I knew it would fail.
And even the software for info systems can have dangerous consequences. Does anyone remember the underwear bomber, who almost brought down a plane and caused a nice surge of invasive security measures afterwards? His own father exposed him, but the State Dept's visa system failed to find the terrorist because someone misspelled his name when entering it into the system:
http://www.cnn.com/2010/POLITICS/01/08/terror.suspect.visa/i...
Think about it... the State Dept has been dealing with foreign terrorists since well before 9/11, and their names are easily misspelled by Westerners - there's not even a consistent way to spell Osama bin Laden, depending on how you interpret the phonetics. And yet no one thought that a fuzzy spellcheck would be useful, apparently. And a whole bunch of people almost died because of it (and the security apparatus greatly increased).
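Even a naive fuzzy fallback would have helped. A sketch (the watchlist entries and similarity cutoff are illustrative assumptions, not how the State Dept's system actually works):

    import difflib

    # Hypothetical watchlist; a real system would also use phonetic
    # matching (Soundex/Metaphone) tuned for transliterated names.
    watchlist = ["umar farouk abdulmutallab", "osama bin laden"]

    def lookup(name):
        query = name.lower().strip()
        if query in watchlist:      # the exact-match check that failed
            return [query]
        # Fuzzy fallback: closest entries above a similarity cutoff.
        return difflib.get_close_matches(query, watchlist, n=3, cutoff=0.75)

    print(lookup("Umar Farouk Abdul Mutallab"))  # spacing variant: still found
    print(lookup("Usama bin Ladin"))             # transliteration variant: found

Ten lines of standard-library code, give or take, versus a missed bombing attempt.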
Bureaucracy reduces efficiency.
The more organizationally significant the software, the worse the risk.
Processes and procedures are the ways institutions manage risk.
So the more significant the software is, the less efficient its production will be.
We built a platform, and had a consulting wing that built custom "apps" on top of it.
We did Waterfall and CMMI. Waterfall was most intense for the consulting projects. I remember being assigned to build feature 3.2.2.1 in the spec.
You have to treat prototyping as part of the requirements-gathering process. Then, when the requirements phase is done, you have to treat "development" as really "testing". Because, for the types of clients that are going to insist on a waterfall project, the final testing is really only a cursory user acceptance test, and they really don't have the skills necessary to determine whether you've met their requirements or not.
[1] People who work for companies like MITRE that are basically privatized extensions of the government.
The campaign's site is fluff.
"Someone counted nearly 10 distinct DBMS/NoSQL systems, and we wrote something like 200 apps in Python, Ruby, PHP, Java, and Node.js."
http://arstechnica.com/information-technology/2012/11/how-te...
Governments are huge entities, with laws that look like they're there to protect the public trust, but are often really written to reward key players (for this area of government).
Pretty much the opposite of a government project.
https://www.gov.uk/transformation (fully responsive design pages together with new service backends delivered with Agile / Scrum)
and the teams doing it: http://digital.cabinetoffice.gov.uk/ (they are hiring more than a dozen people at the moment)
I will point out, however, that there's a huge assumption lurking in there that wasn't explicitly stated: somebody on the government side has to know what they want and be willing to take the heat if they get it wrong. _This_ is the reason so many agencies prefer waterfall -- there's enough obfuscation and paperwork involved that when somebody complains, and in high-risk projects there'll always be complainers, nobody is really at fault. The coder guys can point back to the designer guys. The designer guys can point back to the requirements guys. You'd think that the requirements guys, the guys at the front of the waterfall, would catch all the blame, and they do. But they just write bug tickets because some aspect of the process wasn't followed well enough.
You can spend hours or days trying to figure out what went wrong and not know anything more than you did before you started. Which is exactly why the system has evolved the way it has.
I hear a lot more government projects are going to be Agile. Here's wishing them luck. If done correctly, Agile will 'debug' the organizational problems that lead to this bad performance over and over again. If they just sprinkle a little Agile nomenclature on top of things, it won't do anything at all.
Get that down, and then get several neighboring communities--again, HOAs, neighborhoods, or towns--and get them to adopt your ideas as well. With that amount of variation, you've got a strong base from which you can convince a major city, or a county, to adopt your ideas: after all, many of their constituents are already on it and can endorse it.
This isn't meaningfully different from founding a startup taking on governments as clients.
I've led or worked on tech contracts and grants for ED, HHS, NSF, CDC, and others. Several people have pointed out some important points that are not getting enough attention in my opinion:
by @mcone: "The procurement rules were designed for that, yes. But in real world scenarios, those rules effectively do exactly the opposite. Since there are so many hoops for potential vendors to jump through, only the most established players get to bid on most contracts. And in my experience corruption and cronyism is still alive and well in federal IT contracting."
It's true: incest between government and industry is rampant and has led to widespread cronyism despite the system's best efforts to limit the effects. People who once worked for company X now serve on the proposal review panels when company X competes for work. No, they don't receive direct compensation, so there is no immediate conflict of interest, but the reality is that humans are drawn to (or don't want to disappoint) the people they know (former colleagues) and thus pick their old companies. In addition, they know there is a chance they may once again return to said company (so there is long-term conflict-of-interest potential).
Another point that hasn't been discussed is how the government's procurement process provides next to no incentive for companies to efficiently produce good products. If our industry loves the DRY concept, everything about the gov procurement process points to !DRY (do repeat yourself). We built and rebuilt the same database for offices of the government that shared a building with one another. But because everyone is in a silo, they don't collaborate well and don't realize that they could pool their needs to develop more universal products. (And on the industry side, as long as gov continues to work this way, they don't even have an incentive to re-use their work or propose innovative, generalizable solutions.)
And for those of you that might say, 'but can't you win a gov contract by bidding lower by working off of existing work?' The truth is price has very little to do with who gets selected for a gov contract.
If government were competent, things like the capitalist economy and privatising public services would always be bad (provably - actual proof, not just overwhelming evidence - but there is overwhelming evidence too).
Leaders intuit things like agile and don't label them, instead of picking it up from a book and implementing it badly because they don't understand first principles.
I can't see any way CGI Federal et. al. could have won.
So, just like every client every developer ever had?
One of the basic philosophical differences between the agile and waterfall approach is that agile assumes that you cannot know all the requirements at the beginning of a project. You must start building things before all the little dirty edge cases become obvious. Additionally you don't actually know if you're going in the right direction until you have something concrete to work with, even if it's mock ups or wire frames.
What I think they should have done in this case is roll out the site in stages, probably starting with one state that had an easy backend to work with and gradually adding complexity to the site.
Geeze, such amazing ignorance. If you're vaguely interested in this sort of thing, and want to learn all the process and engineering reasons the Abrams M-1 became the King of the Killing Zone, get a copy of the book by that name: http://www.amazon.com/King-Killing-Zone-Orr-Kelly/dp/0393026...
The author initially expected to castigate it due to early (mis)reported teething problems (e.g., the whole "it throws tracks (more than other tanks)" claim was due to a proving ground's faulty tension meter), but he got completely sold on the tank, which has since totally proven its worth.
Lots of fun stuff, from their modeling everything with strong constraints like weight (i.e. what bridges can it cross), e.g. they didn't want to provide a heavy M2 .50 BMG but the tankers demanded it. To the successful development team's leader, a grizzled Chrysler car exec who drove them crazy with "that doesn't look good" sorts of complaints.
Which often turned out to be a boon (ignoring that weapons should look good so their users feel good about them, which the M-1 delivers on). He said it was too high in an ugly way, so they figured out how to shave a foot off, which is very important for the European theater (not so good for deserts). He didn't like how the armor skirts didn't extend all the way to the back. So they gave in (I'm sure the modeling said it was only a minor net loss) ... and found that made a critical difference in keeping cruft thrown up by the tracks out of its turbine engine.
Very much an iterative process, in a domain where you truly "bend metal" to get things done.
So take the author's words with a big grain of salt, she's woefully ignorant of a huge domain in which we've been building for a very long time the world's most sophisticated artifacts, and learning how to, and how not to do it ... with stakes no less than national survival. Digital computers used for IT are a very recent development as these things go.