That rings true to my experience, and TDD doesn't add much to that process.
I've advocated the same for early projects or new features: dump something bad, flawed, and inefficient, but which still accomplishes what you set out to do. Do it as fast as possible. This is your vomit draft.
I strongly believe that what a team could learn from this would be invaluable and would speed up the development process, even if every single line of code had to be scrapped and rebuilt from scratch.
An executing program is a lot less forgiving, for obvious and unavoidable reasons.
What TDD brings to the table when you are building a throwaway version is that it helps you identify and deal with pure implementation errors (failure to handle missing inputs, format problems, regressions, and incompatibilities between different functional requirements). In some cases it can speed up delivery of a working prototype, or at least reduce the chance that the very first thing a non-developer user of your system does crashes the whole application.
Genuine usability failures, performance problems, and failure to do what the user actually wanted will not be caught by TDD, but putting automated tests in early means the prototype's value as a way of revealing unavoidable bugs is not undercut by the presence of perfectly avoidable ones. It may also make it easier to iterate on the prototype's functionality by catching regressions during the change cycle, although I'll admit that a large body of tests here may well be a double-edged sword. It very much depends on how much the prototype changes during the iterative phase, before the whole thing gets thrown away and rebuilt.
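To make the "pure implementation errors" point concrete, here is a minimal sketch in Python's built-in unittest. The `parse_quantity` function is entirely hypothetical; the tests target exactly the categories mentioned above (missing inputs, format problems, regressions), not usability or performance.

```python
import unittest

def parse_quantity(raw):
    """Hypothetical prototype function: parse a quantity field from user input."""
    if raw is None or raw.strip() == "":
        raise ValueError("quantity is required")
    value = int(raw.strip())  # raises ValueError on format problems like "lots"
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class ParseQuantityTests(unittest.TestCase):
    # These tests catch implementation errors, the kind that would otherwise
    # crash the prototype on a non-developer's very first input.
    def test_missing_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity(None)
        with self.assertRaises(ValueError):
            parse_quantity("   ")

    def test_format_problem_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("lots")

    def test_valid_input_parses(self):
        self.assertEqual(parse_quantity(" 3 "), 3)
```

Run with `python -m unittest`; the suite doubles as the checklist for the rebuilt version later.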
And when you come to build your non-throwaway version, the suite of tests you wrote for your prototype gives you a checklist of things you want to test in the next iteration, even if you can't use the code directly. And it seems likely that at least some of your old test code can be repurposed more easily than writing it all from scratch.
Isn't there some law as well that states that successful software projects start with a working prototype, and software designed from the ground up is destined to fail?
> In most projects, the first system built is barely usable. It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved. The discard and redesign may be done in one lump, or it may be done piece-by-piece. But all large-system experience shows that it will be done. Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time.
For more junior engineers, I think TDD can help, but once you "master" TDD, you can throw it out of the window: "clean code" will come out naturally without having to write tests first.
But it got me thinking about testing while writing the class instead of shoehorning a class into a test just before the PR went up. That’s what I think my takeaway was. To this day I think about not just how clean/maintainable the code is, but also how testable the code is while I am writing it. It really helps keep monkeypatching and mocking down.
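A hypothetical sketch of what "testable while writing it" can mean in practice (names invented): injecting a dependency as a parameter instead of baking it in, so tests need no monkeypatching or mocking at all.

```python
import time

# Hard to test: the clock is baked in, so a test must monkeypatch time.time.
def is_expired_hardcoded(created_at, ttl_seconds):
    return time.time() - created_at > ttl_seconds

# Testable by construction: the clock is a parameter with a sensible default,
# so production code calls it the same way, but a test passes a fake clock.
def is_expired(created_at, ttl_seconds, now=time.time):
    return now() - created_at > ttl_seconds

# In a test, no mocking library needed:
assert is_expired(created_at=0, ttl_seconds=10, now=lambda: 100.0)
assert not is_expired(created_at=0, ttl_seconds=10, now=lambda: 5.0)
```

Thinking about the test first tends to push the design this way on its own, which is roughly the habit being described.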
“Write it three times. The first time to understand the problem, the second time to understand the solution and the third time to ship it.”
The only other clue I have is that it was apparently someone who was super productive and wrote a ton of the early common Unix tools.
Doesn't matter how good you are: the v1 of a program in an unknown area is always complete crap, but it lets you write an amazing v2 if you paid attention while making v1.
(Only slightly joking)