Have you ever stopped to think that one of the reasons people evangelize TDD is that it has worked for them and they want to share that with you?
Here are things TDD gets me in my everyday work:
* I don't have to refresh the web page and re-fill my forms to see if the record saves correctly this time.
* I don't have to log out and log in as a different user to see how things work.
* I can plan an API much more easily with tests that I write first.
* I know that my features are done when my tests pass.
* I can write new tests to prove that bugs my users encounter are really bugs. (You can't get everything tested the first time, but fixing bugs is much easier with tests.)
* I can upgrade to a new version of a framework or library, because when stuff breaks, I have a roadmap of what I need to change. I can then give a customer an accurate estimate of what the work will take, because I know exactly what I have to fix.
You're doing it wrong if you write ALL your tests first. You write one, and you write only enough of a test to cover the feature you're implementing. Then you implement the code to make it pass. This iteration takes no significant time at all for an experienced programmer, and automated tools can run in the background, monitoring your code for changes and running only the tests your code impacts. Because we're programmers, and we know how to write code that does that.
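To make that concrete, here is a minimal sketch of one such iteration, using Haskell's HUnit (the function slugify and its behavior are hypothetical, purely for illustration):

  import Data.Char (toLower)
  import Test.HUnit

  -- Step 1: write one failing test that covers only the feature at hand.
  -- (At this point slugify doesn't exist yet, so the suite fails.)
  testSlugify :: Test
  testSlugify = TestCase $
    assertEqual "turns a title into a URL slug"
                "hello-world"
                (slugify "Hello World")

  -- Step 2: write just enough code to make that one test pass. Repeat.
  slugify :: String -> String
  slugify = map (\c -> if c == ' ' then '-' else c) . map toLower

  main :: IO ()
  main = runTestTT testSlugify >>= print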
You think this is a joke? Something to be ignored? Go ask a CPA (accountant) how they do the books.
They sure don't just do the math once and say "trust me, I'm really just that good." They balance the books. Double-entry accounting. They do this because they are disciplined professionals, and people pay big money to accountants to get that right. Your tests shouldn't exist just to exist; they should prove your code does what you want. Checks and balances.
Of course, the path to this is the same path one takes to learn a new language. It will be slow to start. It was for me, but really, it's such a huge win for my long-term productivity. I am happier, and my clients are happier. Really, what's not to like?
I like spec-based testing because I think the tests you end up producing with it are seriously useful, and they can actually help guide your programming if you think of the test names (not the actual test code) before you start coding. But I've never been able to actually write tests in full before coding.
I try to think about how I'll test something while I'm coding it (so that it's easy to test), I try to consider what I'd actually be testing in a given piece of code (so the concerns are separated), and I try to put coherent pieces near each other (so they can share similar setup/teardown for a testing context as needed). But I don't write the actual test code up front.
I'll fight anyone who says automated tests aren't worthwhile, though.
I prefer development-driven testing, or DDT. Don't write tests until the refactoring churn has settled down a bit. Then write unit tests to document expected behavior and prevent anyone from breaking that code in the future.
I like to treat tests as extended compiler warnings. E.g., in Haskell the type system is so strong that once the compiler stops yelling at you, your code is more often than not correct. To catch the remaining cases, I like to write properties in QuickCheck.
QuickCheck is a Haskell framework. It allows you to formulate properties about your functions. For example: given any list of comparable items, `sort` should conserve the list's length, produce an ordered list, and conserve the occurrence (and non-occurrence) of each item. It should also be idempotent.
QuickCheck then generates a lot of random inputs and tests whether your properties hold. You (almost) never have to write test cases by hand. And since Haskell is pure, you never have to care about setting up the right state before invoking your tests.
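For concreteness, here is a sketch of those `sort` properties written out in QuickCheck (specialized to Int lists to keep the types simple):

  import Data.List (sort)
  import Test.QuickCheck

  -- sort conserves the length of the list
  prop_length :: [Int] -> Bool
  prop_length xs = length (sort xs) == length xs

  -- sort produces an ordered list
  prop_ordered :: [Int] -> Bool
  prop_ordered xs = and (zipWith (<=) ys (drop 1 ys))
    where ys = sort xs

  -- sort conserves how often each item occurs
  prop_occurrences :: [Int] -> Bool
  prop_occurrences xs = all (\x -> count x xs == count x (sort xs)) xs
    where count x = length . filter (== x)

  -- sort is idempotent
  prop_idempotent :: [Int] -> Bool
  prop_idempotent xs = sort (sort xs) == sort xs

  -- QuickCheck generates 100 random lists per property by default.
  main :: IO ()
  main = mapM_ quickCheck
    [prop_length, prop_ordered, prop_occurrences, prop_idempotent]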
This approach allows me to code successfully even when I am not intensely focused. (Of course, Haskell helps with this even without QuickCheck. Not having to worry about state, assignment, order of execution, loop indices, or recursion keeps the number of items I need to hold in my working memory low.)
Right. With Haskell and OCaml, a lot of testing can be pushed onto the type system. The language is better about letting you express semantic constraints via types, without making you declare every single instance of everything the way C and its descendants do.
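As a tiny illustration of a semantic constraint expressed via a type (this sketch has the same shape as GHC's Data.List.NonEmpty):

  -- A list that cannot be empty, by construction: the first element
  -- is stored separately from the (possibly empty) rest.
  data NonEmpty a = a :| [a]

  -- No empty-list case exists, so there is nothing to test for it.
  safeHead :: NonEmpty a -> a
  safeHead (x :| _) = x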
I wrote a QuickCheck-esque library for Lua, FWIW: http://silentbicycle.com/projects/lunatest/ (It's the first released version, and the documentation still needs work...) For table "objects" and userdata, it checks the metatable for how to generate an arbitrary instance.
> With Haskell and OCaml, a lot of testing can be pushed onto the type system.
Yes. There's even a paper about how to guarantee the balance invariants of a red-black tree with the type system. (Can't find it now; it's probably by Ralf Hinze.) You can see an implementation at http://www.cs.kent.ac.uk/people/staff/smk/redblack/rb.html
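To give a flavor of that kind of encoding, here is a sketch in modern GHC Haskell (not necessarily the paper's formulation): the constructors only admit trees where red nodes have black children and both subtrees of a node have the same black height, so an unbalanced tree simply doesn't type-check.

  {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

  data Color = Red | Black
  data Nat = Z | S Nat

  -- A tree indexed by its root color and its black height.
  data Tree (c :: Color) (h :: Nat) a where
    Leaf  :: Tree 'Black 'Z a
    -- Red nodes: both children black, black height unchanged.
    RNode :: Tree 'Black h a -> a -> Tree 'Black h a -> Tree 'Red h a
    -- Black nodes: children of any color, black height grows by one.
    BNode :: Tree cl h a -> a -> Tree cr h a -> Tree 'Black ('S h) a

  -- The user-facing type hides the height but keeps the invariants.
  data RBTree a where
    Root :: Tree 'Black h a -> RBTree a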
The problem with that is that unit tests are so useful during refactoring. If you have enough unit tests, you can pretty much refactor at will without worrying about your program silently breaking.
This isn't always (or even usually) true. Sure, the IDE will verify the compile time correctness of the code, but unit tests check for run time correctness. So if you refactor a method to add an additional parameter, unit tests will catch the null that Eclipse inserted into all method calls to make the code compile.
If you are changing the signature of a method that is part of the implementation in order to improve readability, clean up design, or remove dead code, it is refactoring. Refactoring isn't just as simple as renaming classes and methods; it often requires more significant changes than that.
Example:
/* old */
LoadWeapon(WeaponTypeEnum type);
UnloadWeapon(WeaponTypeEnum type);
LaunchWeapon(WeaponTypeEnum type);
/* new */
WeaponCommand(WeaponTypeEnum type, WeaponCommandEnum command);
Note: assume that the interface is a GUI, so changing these methods doesn't change any external interface.
I'm seriously getting tired of all the people using such inane ways to grab attention so they can rant about something. It's as if these people never learned how to be polite and kind in kindergarten. "Ooh, look at me, I'll use as many superlatives and curse words as I can fit on a page, and I'll jump up and down, and maybe you'll flock to my site."
Sam Hart, Zed Shaw, and anyone else who feels like acting out like a five-year-old: it's time to go to time-out. When you feel like talking like a grown-up and a professional, please send HN a link, and most of us will be happy to have a friendly discussion and constructive argument with you.
Why would you say this article is merely an attention-getting ploy? Seemed pretty sincere to me. And if you can't take "curse words", I can recommend some filtering software.
It certainly reads like a deliberately over-the-top rant, mixing in a little comic exaggeration as a spoonful of sugar to help the "TDD won't solve all your problems" medicine go down.
Alas, it's difficult to tell whether the strawmen being set up (e.g., tests preclude estimation) are part of a serious argument or just more dramatic effect to help underscore the general point.
I wish this post were separated into two articles, one of which discussed the genuine tradeoffs of TDD (such as the possibility of painting yourself into a corner), and the other of which contained all the spurious, unsupported assertions (tests cause scope creep, tests are blinders, etc.) One would make excellent discussion fodder, and the other, well, the Delete key is right over there.
Sure, he might have a sincere message in the end, but the way in which it is presented gives the impression of a five-year-old throwing a tantrum. It's not the curse words that bother me; it's the childishness and the unprofessional way he presents his arguments.
This guy's rant includes a bunch of things that aren't true. Just to pick the first two I noticed: initial proof-of-concept code doesn't usually get thrown away, and TDD doesn't say you shouldn't change tests when you find out they were wrong. In other words he's guilty of the same overzealous generalization that makes TDD cultism annoying in the first place. I get the feeling he had a bad experience on a project or two and turned that into adamant universal opposition.
Edit: I think the people who oversold TDD in the first place ought to have known better, though. Either they believed their own silver-bulletism or they cynically computed that it was the only way to push something in the corporate software world. I'm not sure which is worse.
Sam Hart, whoever you are, you rock. Like any number of other software development techniques, this one has some applicability in some situations, but the superlatives and the dogma with which it is foisted upon the world are unwarranted and insane. TDD (Test Driven Douchebags), you are emperors without clothes.
Hey Sutro, the first part of that is a fair enough comment. Testing's great for the problems it solves, and we should simply be aware of when it's applied in inappropriate ways.
But I did want to bring up an issue with the second part of your comment: "TDD (Test Driven Douchebags), you are emperors without clothes." That added very little positive contribution to the conversation and dragged your comment's overall value much lower than it should have been.
Just to point out a line from the guidelines...
> When disagreeing, please reply to the argument instead of calling names. E.g. "That is an idiotic thing to say; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Hey Sgrove, did you read the guideline that recommends flagging items you find inappropriate rather than complaining about them in the comments? Because your patronizing quotation of selectively applied guidelines added very little positive contribution to the conversation, and dragged your comment's overall value much lower than it should have been.
When I was interviewing for a development position, the interviewer quipped "it is interesting that you come from a testing background, but didn't even mention TDD as part of your approach." I reminded him that my approach involved adding unit tests concurrently with development, to which he replied "Right, but I said TDD." I came very, very close to saying "well I'm not going to get all dogmatic about the latest 'flavor of the month', if that's what you're looking for."
If I were interviewing you, telling me to take my dogma and stuff it would be a great way to win me over. Nothing says Smart And Gets Things Done(TM) like being confident enough to say that you don't do things the way I do things, and that's okay.
(Of course, if I were interviewing you, I wouldn't have asked you about TDD anyway)
I've given up on trying to convince others of the usefulness of automated unit testing. It works wonders for me in my own projects, but the hassle of trying to police my co-workers just isn't worth it at work.
Love it? Hate it? I don't care anymore. I know what works for me and that's enough.
Unfortunately, in the Rails world at least, the two have been conflated more often than not - the need for testing has been presented as the need for TDD and vice versa. For newcomers, this may well lead to confusion between the two.
I feel the same way.
I suspect that people are having a knee-jerk reaction because it sounds like more (and boring) work, just as if someone asked you to write a comment on every line of code. Hence the argument "sounds good on paper", which always seems to come up. To me, TDD doesn't sound good on paper at all. It just turned out to be more fun (to my surprise) because it is much easier for me to get into flow.
> I've given up on trying to convince others of the usefulness of automated unit testing. It works wonders for me in my own projects, but the hassle of trying to police my co-workers just isn't worth it at work.
The problem with this though is when you end up assigned to "their" project, and you curse how un-unit-testable "their" code is. At which point it's "your" code too.
I don't even bother with that struggle you described anymore.
Get assigned a ticket for a new feature. Write a test outlining how it would work, on the assumption that the existing code does what it says it does.
If you find - surprise! - the code doesn't do what it says it does, create a ticket and recurse.
Compared with trying to make the tiniest possible change and running into gotcha after gotcha, this is a far more sane and sanitary procedure. It takes longer, but in the end "your" code meets requirements and you can prove it.
I consider myself an X-Files TDD'er: I want to believe.
Having said that, articles that are titled "Cram your X up your ass" don't pass the conversational sniff test with me. That's not the way you talk to strangers. So either you have some kind of emotional problems, nobody taught you how to express yourself in writing without sounding like you're seven, or you're trying to manipulate me through hyperbole.
Shame. I bet there was some good material in there somewhere.
In my opinion, TDD is a double-edged sword. Not only does it depend on the person's preference but also on their experience. [1]
For experienced or good programmers, TDD is a hassle and often results in less efficient development and less code (read: less done in the same time). Tests are then written later to cover the code, which is of course arguable. [1]
For beginning or below-average programmers, TDD is great, as it often results in much better code quality and helps produce more robust code. [2][3]
Some people think the way to respond to fanatical zealotry and dogma is with fanatical zealotry and dogma they like.
The problem is usually with zealotry and dogma itself, though. If you approach the problem from the angle of, "Ok, let's try to figure out where TechniqueX is a good fit and where it's a poor fit," it gets rid of most of the arguing. It also tends to highlight the people who persist in arguing that TechniqueX will solve all problems for all people, all the time.
It doesn't drive blog traffic as well, though. (The middle path usually avoids all that drama...)
OK, first, a disclaimer: yes, I program test-first almost all of the time (acceptance tests first, unit tests later), and yes, I do consider this approach better than not writing the tests earlier. (About every nontrivial test tends to fail around 0-3 times during development. Each of those failures would have been a headache later on.)
So, what are his points?
- Extra, often useless up-front development
This paragraph just makes less and less sense the more I read it. I think he is ranting against testing spikes, because spikes get thrown away. Well, you know what? Spikes are not tested. Spikes are written to see what is possible and are thrown away afterwards.
- Development with blinders on
Well, his point is: you might develop the wrong thing with TDD. Well, duh. I write my tests in the wrong direction, and thus my code is written in the wrong direction. Good. Throw the tests away. What remains is: my code is written in the wrong direction. How are tests evil there?
- Tests are weighted more than the code
Here he shows that he has pretty much no clue about the second purpose tests serve (besides checking the code a bit): interface design. By using your unit in a unit test, you design its interface, and you usually arrive at a nicer, more usable interface because you are already using it. Thus, if you write the test first, you first think about the interface of a unit (or at least a small part of it) and implement it afterwards. So, is it bad to think about nice, clean interfaces first?
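A sketch of what test-first interface design looks like (hypothetical names, Haskell's HUnit again): the test is the interface's first caller, so awkward names and clumsy error handling show up before any implementation exists.

  import Test.HUnit

  -- Hypothetical unit under design: a tiny config parser.
  newtype Config = Config { portOf :: Int } deriving (Eq, Show)

  -- Writing this test first is an act of interface design: it forces
  -- decisions about the function's name, argument, and error type
  -- before a single line of parsing code exists.
  testParseConfig :: Test
  testParseConfig = TestCase $
    assertEqual "reads a port" (Right 8080) (portOf <$> parseConfig "port = 8080")

  -- A just-enough implementation to satisfy the caller we designed.
  parseConfig :: String -> Either String Config
  parseConfig s = case words s of
    ["port", "=", n] -> Right (Config (read n))
    _                -> Left "unrecognized config line"

  main :: IO ()
  main = runTestTT testParseConfig >>= print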
- Coding yourself into a corner
This is another of those points of the kind 'Your development practice is bad, because you might end up in a dead end'. I hate those points, because they are so general that they can be made about ALL development practices, unless you are god.
- Narrowly applicable uses
I can 'just' test-drive my libraries, data structures and such? Well, that is a very, very bad 'just'. I watched my brother go nuts while working on a compiler with several optimizations heavily based on binary decision diagrams. Eventually, I was able to convince him to write a bunch of automated tests for that data structure. He wrote them, found several bugs, and everything was done in a few days.
So, to reiterate: testing the foundation of your application can be of gigantic value, far, far more than his 'just' allows, because if the foundation is broken or shaky, things break down in unexpected and hard-to-debug ways.
- Tests solve problems that don't exist
His point: tests don't prevent all bugs in software. Again, a point I can only answer with 'well, duh.' The only thing that could prevent all bugs would be a correct formal verification of the program (which is surprisingly hard). So this point simply holds for ALL real-life programming practices again.
- Scope creep
Point: separation of concerns is bad? Requirements management is not well done in TDD? Well, if his point is the first one, he disqualifies himself completely. If his point is the second one, then he is saying that a code-creation strategy has no notion of managing requirements. Well, duh? Managing requirements is a management concern; TDD is a coding concern. They are entirely orthogonal.
- Inefficiency
Yes, I have to give him this point. If you have tests, reworking things can take longer if the product radically changes. It has to be noted, though, that such radical changes stem from faulty requirements engineering, which will cause major problems under any other development method as well.
- Impossible to develop realistic estimates of work
His point: TDD says nothing about estimation. Well, duh. Estimation, in the world of Kent Beck and Extreme Programming, is a management and planning problem, and a development strategy says nothing about planning and managing.
Don't get me wrong, I don't want to be one of those "Do TDD or I whack you with a flyswatter" guys. I am just a humble fellow programmer who has learned that writing the right tests for the right components early can be a blessing. Certainly, writing the wrong tests or writing too many senseless tests can be bad, I give you that. Tests cannot make all bugs disappear, I give you that (I have already had to track down some really nasty bugs which occurred due to subtle assumptions that were not obvious from the tests). I also give you that you might write more code, and I give you that some things cannot be tested.
But please, give me that my tests catch a lot of stupid errors, like off-by-one errors or mistypes. That my tests help me define the interface I want to use. That my tests tend to encourage modularity, because modularity increases testability. That user stories, and the acceptance tests for them, help guide me in where to develop (and yes, once you have the user story and the acceptance test, it is race-horse style ;) )
I don't think you can really give him the point on the inefficiency remark, though: if something radically changes and you don't have any automated tests, then you're going to have to manually test the changed code and anything related that it impacts. If the change is something core to the app, that means manually testing the whole thing.
If you had a bunch of tests you could just re-run, then re-testing the entire system becomes trivial (it's a given that you'd need to rewrite the tests for the changed parts, but if your tests are written well, then you won't need to rewrite them all).
Isn't that a "no true Scotsman" argument? Anyway, what's there to understand? TDD plainly means writing tests before writing the implementation, and he's railing against the whole concept.
One also wonders how, or by whom, you might wish his argument to be "backed up". A compelling argument should stand or fall on its own merits.
How about his mentions of evidence to the contrary being backed up by actual evidence to the contrary? Or the speculative, weaker points like "TDD wrecks time estimates", "inefficiency", and "scope creep"?
Heck, in "Tests solve problems that don't exist" he sidesteps right past the real problem that TDD solves: refactoring code without inadvertently breaking something else.
So yeah, "No true Scotsman" would ever rail against TDD without proper evidence.
TDD doesn't help prevent breaking things when refactoring; automated testing does.
In that section he's complaining specifically about the type of tests that TDD results in -- that they strongly tend to test for totally unrealistic bugs, structural crap that would only possibly come up when writing the code from scratch.
I think a lot of people do what zach did, though - because of hype, perhaps, they're loose with terminology and conflate TDD with automated testing in general. Then, people argue until they're blue in the face, because they're talking about completely different things. (This happens with OO all the time.)
Yeah, very true, especially with things like Rails where the community is somewhat .. faddish. Someone whose first real exposure to programming was Rails - and there are a lot of such people, coming from other web development areas - might well confuse TDD with automated testing itself. The two are taught/preached as one and the same.
The Rails community does tend to be a bit more interested in the "new shiny", but the heavy weight placed on testing has done wonders for improving the quality of the work I do. As for automated testing and TDD being considered one and the same, I believe there's a strong argument to suggest that they are. Allow me to explain.
When I am coaching, mentoring, or training people, I have no problem with bringing up both TDD and automated testing at the same time.
* We write a failing test
* We then fire up the automated test runner
* We make changes to our code until the test passes
* We repeat, adding new tests.
I can show that our automated test runner catches anything we do wrong immediately.
I don't believe you get the full benefits of TDD if you don't automate the process, and you can't have automated testing without good tests. I used to write lots of code without tests, and I paid the price: lots of wasted time tracking down bugs, and delayed releases because I couldn't figure out why my new feature broke old stuff. The usual.
Investing time and energy learning how it's supposed to work has been the most valuable thing I've ever done for myself, my clients, and my company. The process I teach others is the process that I learned, and I feel that we're all much happier developers for it.
I just want to come back to the point I made earlier... a lot of the people who evangelize this stuff believe in it not because it's "cool", but because it has actually saved us and our projects. If you want to separate the hype machines from the true believers, ask one to show you. The ones who believe in it will almost certainly devote their time to show you how they do it. I know I would.
I agree with you that it can be an excellent way to develop, but in some languages the same testing-as-you-go is typically handled by the type system or other tools instead.* While I'm quite convinced of the value of automated testing, TDD is just one method, and one better suited to, e.g., Ruby than to languages like OCaml. (This is sometimes lost on its most vocal proponents.)
In all fairness, the language about type systems is often impenetrably mathematical, so I'm not surprised that the parallel isn't clear. (I really like OCaml, so I tried reading _The Definition of Standard ML_ to get a better understanding of the language family. Wham, brick wall. Then again, I'm not a mathematics grad student...I studied history.)
* Which is just a different way of communicating semantic constraints ("this is never null", "this list always has at least two unique UTF-8 string values", "this can only be used on writable files", etc.) to the language and having them automatically checked.
Your point about typing is well put. However, Ruby has strong typing but no compiler to check it, so you're going to have tests for that anyway; the existence of a compiler, and strong typing enforced by said compiler, is not an excuse to avoid testing. My tests check logic. If I decide that User.to_string() needs to return "User: Brian Hogan <brianhogan@example.com>", then I write a test for that first. Then I go make to_string() do what that test says. Seems trivial, but no compiler is going to catch it if I implement that wrong.
That example seems trivial, but here's a real one: I helped write a very complex health insurance system. A mistake in the formula could have cost someone thousands of dollars in premiums. We asked the end users to give us names of people and their health scores: "Jon, with this height, weight, BMI, and all these other criteria, should get a score of x."
Writing the test cases first helped out a ton. Even with a compiled, strongly typed language, I still would have needed tests to make that work.
And it's been six months since then and I've heard of no issues with the calculation routine. From that, I'm reasonably happy with test-driven development and that's why, regardless of compilers or type systems, I still find it vital to my success as a developer.
Oh, I'm definitely not saying that it replaces automated testing. A mixed approach is almost certainly best. I like randomized testing, too. It's just that (conservatively) half of the things people write as unit tests could be expressed in the type system in OCaml.* And rather than checking known input->output pairs at runtime, a violated property prevents the program from compiling at all and leads you to everywhere the property doesn't hold, which is arguably a stronger guarantee. (In all fairness, much of it can also be done with Java's or C++'s type systems, but without type inference it becomes incredibly tedious, and ML type constructors are far more direct than wrapping things in classes all over the place.)
Type systems are also "just another tool", though, and more useful in some cases than others. I wonder how many other properties of a program could be declared, inferred, and verified for the complete domain of input at compile time: this is safe to use concurrently, that is fully isolated from non-deterministic input, this can have its results memoized, those can be fully evaluated once at compile time and don't even need to run in the final executable, etc. (Haskell can do some of this, but I don't like lazy evaluation as a default, among other things.) Playing with Prolog and logic programming has gotten me curious about how far that sort of analysis could be pushed.
About the insurance example -- I know exactly where you're coming from. I've done something similar for the architectural industry, and a huge suite of example results helped tremendously.
* And very likely Haskell, SML, Clean, and others, but my "advanced type system" experience is mostly with OCaml.