Maybe it's because I'd been coding for years before I ever tried TDD, but when a test fails, I logically debug the code the same way I would if I wasn't using TDD.
As far as I'm concerned, having tests just flags possible errors much quicker, and also gives me more peace of mind that my code isn't gonna be riddled with hidden bugs.
The canonical example is the master of XP solving Sudoku in the TDD way: http://xprogramming.com/articles/oksudoku/ (part 1 out of 5) -vs- Peter Norvig: http://norvig.com/sudoku.html
TDD isn't the be-all and end-all; it's just one more tool in a developer's toolbox that helps us do our jobs better. If you rely solely on TDD or [insert newest popular development technique], you are going to have a bad time.
One place I've found TDD to be insanely helpful is exposing the interactions between pieces of the system before building them out in code. I tend to write my tests from the bottom up: I write the assertion first, and then build up the stuff I need to test that assertion. This makes it easier to see what's needed to test the functionality and whether the test looks good.
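A minimal sketch of that assertion-first, bottom-up style (the `apply_discount` function and its interface are hypothetical, not from the thread):

```python
# Sketch of writing a test "bottom up": the assertion was written first,
# then the setup above it was filled in to satisfy it.

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return price * (1 - percent / 100)

def test_discount_is_applied():
    # Step 2: build up just enough state for the assertion to run.
    price = apply_discount(200.0, 25)
    # Step 1: this line was written first; it pins down the behavior we want.
    assert price == 150.0

test_discount_is_applied()
print("ok")
```

Starting from the assertion makes the desired interface fall out of the test, rather than the other way around.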
Gary Bernhardt does a really good job at explaining his philosophy on TDD, which I really agree with, http://destroyallsoftware.com
If you say that you never randomly modify code in order to make it work (according to the tests), then that is great, and that is how every developer should work. However, my experience is that people who lack discipline (or face a looming deadline) tend to not take their time to properly reason about code and instead rely on the test suite to tell them whether code is right or not.
Also, it's not like we haven't seen this kind of behavior in the decades before the invention of TDD.
This is just another example of a craftsman blaming his tools. TDD is not a silver bullet, but no method or tool can serve as an excuse for mindlessly poking around until it works. This isn't limited to programming either.
I can't describe how shocked I was.
The reason for writing the article, however, is that I have seen the same mindless behaviour in other people as well.
More importantly, I have also seen this happen in professional environments. A very large test suite is very useful, but it is absolutely not a catch-all safety net.
Why some people always assume bait/trolling when no such thing appears even remotely possible (as in this case, which is a well-reasoned and argued post) is beyond me.
You might disagree with the author, but he tries to provide arguments for what he writes.
| writing code in a test-driven way bypasses
| your brain and makes you not think properly
| about what you are doing.
The title (Test Driven Development makes you not think properly and bypasses your brain) could be changed to:

| no matter which software development methods
| you use, do not forget to use your brain
"Just don't mindlessly program." Nonetheless, it's still useful! You can still write TD code and use your brain - it is only slightly easier to be lazy (and specifically, lazy in a way you're not supposed to care about, yet.)
In the end, crash reports from production use will reveal any bugs that matter in the system (if any), and you can write new tests for those extra cases and make the code pass again. Combined with the rest of Agile (sorry,) i.e. fast release cycles and so on, this isn't a road block.
TDD never promised that, and practitioners of TDD understand that 100% coverage doesn't mean you won't have bugs. This doesn't invalidate TDD or testing (as you are obviously aware =)).
Because for those of us who do TDD every day, the blowout in development time is at minimum 2-3x compared to working without it. Not to mention the detrimental impact on build times.
All of that aside: have you noticed how there are no decent metrics available for TDD's effectiveness?
[1]http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...
What do you think would be a good reference with regards to TDD practices, as opposed to "I saw some people do it and it looked seriously wrong?"
[1] http://programmers.stackexchange.com/questions/109990/how-ba...
(This isn't to say that unit tests are bad, but rather writing tests first may not benefit all people)
One of the problems here is language: TDD as a general concept can cover everything from high-level behavioral testing to a method-by-method way to design your program. There's a big difference between those two!
In general, of course, programming is balancing what the program is supposed to do with how the program is constructed. That's true whether you have TDD in the mix or not.
The result? Running the complete BDD test suite takes about 5 hours, and it must be run for every commit.
1. When I already know what I'm doing and it's just a matter of coding what's already in my mind
2. When I'm writing in a dynamically typed language, since it forces me not to be lazy and to have adequate test coverage, given that I don't have compile-time type safety
I do less of TDD when dealing with a statically typed language and/or when I'm working in an exploratory mode. TDD doesn't help me when I'm just trying out different things to get going.
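To illustrate the dynamic-typing point with a hypothetical sketch (the `total_cents` helper is invented for illustration): a type mix-up that a compiler would reject at build time only surfaces in Python when the code actually runs, so a test is the earliest place it can fail.

```python
# Hypothetical sketch: in a dynamically typed language, passing the wrong
# type "compiles" fine and only blows up at runtime, so a test is what
# catches it before production does.

def total_cents(prices):
    """Sum a list of prices in cents. Assumes integer inputs."""
    return sum(prices)

def test_total_cents_rejects_strings():
    # With static types this call would not compile; here only a test
    # (or a production crash) reveals the mistake.
    try:
        total_cents(["100", "250"])   # strings, not ints
    except TypeError:
        return "caught"
    return "missed"

print(test_total_cents_rejects_strings())
```

With a statically typed language the compiler already covers this class of mistake, which is one reason the test-coverage incentive is weaker there.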
The thing that pisses me off is when people don't realize that EVERY technique has caveats and try to promote it as a golden rule - a lot of "agile" consultants preach TDD as the holy grail for writing code without any bugs.
EDIT: grammar
> 1. When I already know what I'm doing and it's just a matter of coding what's already in my mind
A concept often used in TDD is spiking: if you don't know what you're doing, do a quick and dirty untested version until you do know what you're doing. Throw that code away and TDD it with your new-found knowledge.
If your goal is to fix this behavior, go for the root causes. TDD isn't a root cause of this particular problem.
Contrary to this article, one great reason is that TDD/BDD allows me to refactor and make major changes and know whether or not I broke something. I find the opinion of this article passé.
A perfect example for TDD/BDD is a complex REST API with dozens of endpoints, where you're refactoring a piece of the authentication system. How do I know if I broke something or introduced a bug?
My experience is that most developers do not test and this is exactly the kind of way complex bugs get introduced. You actually make the job more difficult on yourself because instead of knowing YOU broke something, a bug gets introduced and you spend more time tracing the cause. I have worked at many places that have this obnoxious cycle of deploying, breaking, deploying, breaking.
It is irritating to see articles like this pop up because it's not like it's a school of thought or a religion. It's a purposeful tool that can and will save you time and effort and probably impose a few good design practices along the way. I'm not saying shoot for 100% coverage, fuck, I'm happy just knowing a few complex pieces are working. And I don't always think it's a good idea to design APIs from the tests, especially when you are experimenting and researching.
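The authentication-refactor scenario could look roughly like this (a hedged sketch: the token format and `check_token` function are invented for illustration, not taken from any real API):

```python
# Hypothetical regression tests guarding an auth refactor. As long as
# these stay green, a rewrite of check_token can't silently break the
# contract that dozens of endpoints depend on.

VALID_TOKENS = {"alice-token": "alice", "bob-token": "bob"}

def check_token(header):
    """Return the user for a 'Bearer <token>' header, or None."""
    if not header or not header.startswith("Bearer "):
        return None
    return VALID_TOKENS.get(header[len("Bearer "):])

# The regression suite: a breaking change to the auth code fails here,
# fast and locally, instead of surfacing as a production bug.
assert check_token("Bearer alice-token") == "alice"
assert check_token("Bearer wrong-token") is None
assert check_token(None) is None
assert check_token("alice-token") is None  # missing 'Bearer ' scheme
print("auth regression tests passed")
```

That is the "knowing YOU broke something" payoff: the failure shows up at the commit that caused it, not two deploys later.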
[1] http://pragprog.com/the-pragmatic-programmer/extracts/coinci...
http://www.infoq.com/news/2009/03/TDD-Improves-Quality
http://research.microsoft.com/en-us/groups/ese/nagappan_tdd....
In fact, TDD is not simply "tests first". It is: write ONE test, make it pass with the MINIMUM amount of code, refactor, loop.
I'd rather write properly designed code and write the tests afterwards, before delivering the code.
If not, write a new test, make it pass. The naive implementation can be substituted with a different one easily since the tests guarantee correctness.
Generally though, since the "third leg" of TDD is refactoring, this ensures that the proper structures are going to be used in place as soon as they are actually needed.
Hence over time the codebase becomes this huge tangled mess of "formally correct solutions".
I would add to this that algorithms must be understood before being tested, something with which I suspect most TDD proponents would agree, and which would dispense with the need for the rest of the article.
(More specifically, read everything from "The Pragmatics: So when do I not practice TDD?" onwards.)
Because what has happened is that the obsession with code coverage has meant that developers create a whole raft of tests that serve no real purpose, which, due to TDD, then gets translated into an unworkable, unwieldy, spaghetti-like mess of code. Throw in IoC and UI testing (e.g. Cucumber) and very quickly the simplest feature takes 5x as long to develop and is borderline unmaintainable.
It just seems like there needs to be a better way to do this.
- Focus on code coverage
- Tests that test nothing
- Spaghetti code from doing TDD
- IoC + UI/Cucumber tests take long to write and run
I would have to say I agree, there is a better way to do this. My guess from your last statement is that you are relatively new to software. Don't mistake your team's poor practices for the practices not working. Try to promote better practices.
Tell your team code coverage only informs you on what isn't being tested. It doesn't help with quality.
Tests, like code, should be deleted if they don't do anything. Strictly adhere to YAGNI.
If TDD is producing spaghetti code, you are doing something very wrong in your tests. The tests should be short and focused, just like your code base. Those tests are hard to write on a messy code base, which forces you to refactor, which leads to clean code. Maybe read up on the SOLID principles and other code quality practices to see what you are missing. Refactoring techniques can be very helpful too. This takes years to get good at.
Cucumber is over used. Read about the testing triangle (http://jonkruger.com/blog/2010/02/08/the-automated-testing-t...). My guess is that your team is focusing on the top levels. Those tests provide little long term value, fail without good explanations and can be complicated to write and maintain.
What I do now is, well, I'm going to actually test stuff while I'm coding anyway, right? Regardless of whether I'm doing TDD or not. Unit tests give me a useful harness where I can write those tests, instead of hundreds of Console.WriteLines. It's basically not much more effort than Console.WriteLine()-style "testing", except you are left with some reusable artifacts at the end that may come in handy later on.
Also, don't test stuff that isn't going to break, and avoid writing system and UI tests unless you absolutely have to.
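The "harness instead of print statements" point might look like this (a hypothetical `parse_version` helper; shown in Python rather than C#, since the idea is language-independent):

```python
# Hypothetical sketch: the ad-hoc check one would otherwise print to the
# console and eyeball, captured as a reusable test instead.

def parse_version(s):
    """Parse 'major.minor.patch' into a tuple of ints."""
    return tuple(int(part) for part in s.split("."))

# Instead of:  print(parse_version("1.2.3"))   # eyeball the output once...
# ...the same poke at the code becomes a permanent, rerunnable artifact:
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0.1") == (10, 0, 1)

test_parse_version()
print("harness ok")
```

The effort is about the same as the throwaway print, but the check keeps paying off on every future run.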
> Also, don't test stuff that isn't going to break
Hah! =) But how will I prove that i++; is actually incrementing i!
TDD, Agile, Scrum, XP, etc. are a religion.
And a lot of people have managed to make their lives easier by making the teachings of this religion mandatory. So what I've been witnessing the last few years is that saying "no, I don't think we need a test for this" is a position that will get you nowhere. So instead everyone just puts up with longer and longer build times and spending more time each day fixing broken tests.