The Rails community does tend to be a bit more interested in the "new shiny", but the heavy weight placed on testing has done wonders for improving the quality of the work I do. As for automated testing and TDD being considered one and the same, I believe there's a strong argument that they are. Allow me to explain.
When I am coaching, mentoring, or training people, I have no problem with bringing up both TDD and automated testing at the same time:
* We write a failing test
* We then fire up the automated test runner
* We make changes to our code until the test passes
* We repeat, adding new tests.
I can show that our automated test runner catches anything we do wrong immediately.
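To make the loop concrete, here's a single red-green iteration sketched in OCaml with a plain assert. The slugify function and its spec are invented for illustration; any language with a test runner follows the same rhythm.

```ocaml
(* The function under test. In TDD the assert below is written first,
   fails (here it wouldn't even compile), and then this minimal
   implementation is added to make it pass. *)
let slugify s =
  String.lowercase_ascii s
  |> String.map (fun c -> if c = ' ' then '-' else c)

(* The failing-then-passing test. *)
let () =
  assert (slugify "Hello World" = "hello-world");
  print_endline "green: slugify behaves as specified"
```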
I don't believe you get the full benefits of TDD if you don't automate the process, and you can't have automated testing without good tests. I used to write lots of code without tests and I paid the price: lots of wasted time tracking down bugs, delayed releases because I couldn't figure out why my new feature broke old code, the usual story.
Investing time and energy learning how it's supposed to work has been the most valuable thing I've ever done for myself, my clients, and my company. The process I teach others is the process that I learned, and I feel that we're all much happier developers for it.
I just want to come back to the point I made earlier... a lot of the people who evangelize this stuff believe in it not because it's "cool", but because it has actually saved us and our projects. If you want to separate the hype machines from the true believers, ask one to show you. The ones who believe in it will almost certainly devote their time to show you how they do it. I know I would.
I agree with you that it can be an excellent way to develop, but in some languages the same testing-as-you-go is typically handled by the type system or other tools instead*. While I'm quite convinced of the value of automated testing, TDD is just one method, and one better suited to e.g. Ruby than to languages like OCaml. (This is sometimes lost on its most vocal proponents.)
In all fairness, the language about type systems is often impenetrably mathematical, so I'm not surprised that the parallel isn't clear. (I really like OCaml, so I tried reading _The Definition of Standard ML_ to get a better understanding of the language family. Wham, brick wall. Then again, I'm not a mathematics grad student...I studied history.)
* Which is just a different way of communicating semantic constraints ("this is never null", "this list always has at least two unique UTF-8 string values", "this can only be used on writable files", etc.) to the language and having them automatically checked.
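For instance, here are two of those constraints sketched in OCaml; the names greet, nonempty, and head are mine, made up for illustration, and the second simplifies the footnote's two-element case to one element for brevity:

```ocaml
(* "This is never null": a plain string is always present; absence must
   be spelled out as string option and handled explicitly. *)
let greet (name : string) = "Hello, " ^ name

(* "This list always has at least one element": bake it into the shape
   of the type, so the empty case cannot even be constructed. *)
type 'a nonempty = Cons of 'a * 'a list

let head (Cons (x, _)) = x  (* total: there is no empty case to test *)

let () =
  print_endline (greet "world");
  print_endline (head (Cons ("first", ["second"])))
```

Once those properties live in the types, the corresponding unit tests simply have nothing left to check.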
Your point about typing is well put. However, Ruby is strongly typed but has no compiler to check those types, so you're gonna have tests for that too. Still, the existence of a compiler enforcing strong typing is not an excuse to avoid testing. My tests check logic. If I decide that User.to_string() needs to return "User: Brian Hogan <brianhogan@example.com>", then I write a test for that first. Then I go make to_string() do what that test says. Seems trivial, but no compiler's going to catch it if I implement that wrong.
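And to be clear, the same gap exists in a typed language: the compiler can verify that such a method returns a string, but only a test pins down which string. A sketch in OCaml, with a made-up user record and formatter:

```ocaml
(* Even with static types, the compiler only proves to_string returns
   *a* string, not *the right* string; the content needs a test. *)
type user = { name : string; email : string }

let to_string u = Printf.sprintf "User: %s <%s>" u.name u.email

let () =
  let u = { name = "Brian Hogan"; email = "brianhogan@example.com" } in
  assert (to_string u = "User: Brian Hogan <brianhogan@example.com>")
```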
That example seems trivial, but here's a real one: I helped write a very complex health insurance system. A mistake in the formula could have cost someone thousands of dollars in premiums. We asked the end users to give us names of people and their expected health scores: "Jon, with this height, weight, BMI, and all these other criteria, should get a score of x."
Writing the test cases first helped out a ton. Even with a compiled, strongly typed language, I still would have needed tests to make that work.
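The shape of it, heavily simplified: each named example from the users became a row in a table-driven test. The formula below is an invented stand-in, nothing like the real scoring criteria.

```ocaml
(* Each (name, inputs, expected score) row came from the end users. *)
type applicant = { height_cm : float; weight_kg : float; age : int }

(* Hypothetical scoring formula, purely for illustration. *)
let health_score a =
  let bmi = a.weight_kg /. ((a.height_cm /. 100.) ** 2.) in
  int_of_float (100. -. bmi -. float_of_int a.age /. 2.)

let cases = [
  ("Jon", { height_cm = 180.; weight_kg = 80.; age = 40 }, 55);
  ("Ann", { height_cm = 165.; weight_kg = 60.; age = 30 }, 62);
]

let () =
  List.iter
    (fun (name, a, expected) ->
       let got = health_score a in
       if got <> expected
       then Printf.printf "FAIL %s: expected %d, got %d\n" name expected got
       else Printf.printf "ok   %s\n" name)
    cases
```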
It's been six months since then and I've heard of no issues with the calculation routine. From that, I'm reasonably happy with test-driven development, and that's why, regardless of compilers or type systems, I still find it vital to my success as a developer.
Oh, I'm definitely not saying that it replaces automated testing. A mixed approach is almost certainly best. I like randomized testing, too. It's just that, conservatively, half of the things people write as unit tests could be expressed in the type system in OCaml*. If the property doesn't hold for all possible input, the program won't compile, and the errors lead you to everywhere it fails. That's arguably a stronger guarantee than passing tests for known input->output pairs at runtime. (In all fairness, much of it can also be done with Java's or C++'s type systems, but without type inference it becomes incredibly tedious, and ML type constructors are far more direct than wrapping things in classes all over the place.)
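To pick one concrete case, the "only on writable files" constraint from the footnote above can be encoded with a phantom type. The file wrapper here is invented for the sketch, not OCaml's actual I/O API:

```ocaml
(* Phantom types: 'mode never appears in the representation, it only
   tags the type, so misuse is rejected at compile time. *)
type readonly
type writable
type 'mode file = { path : string }

let open_readonly path : readonly file = { path }
let open_writable path : writable file = { path }

(* write accepts only writable files. *)
let write (f : writable file) (data : string) =
  Printf.printf "writing %S to %s\n" data f.path

let () =
  let log = open_writable "app.log" in
  write log "hello";
  let cfg = open_readonly "app.conf" in
  ignore cfg
  (* write cfg "oops"   <- won't compile: cfg is a readonly file *)
```

The unit test "writing to a read-only handle raises an error" just evaporates; the program that would need it can't even be built.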
Type systems are also "just another tool", though, and more useful in some cases than others. I wonder how many other properties of a program could be declared, inferred, and verified for the complete domain of input at compile time -- this is safe to use concurrently, that is fully isolated from non-deterministic input, this can have its results memoized, those can be fully evaluated once at compile time and don't even need to run in the final executable, etc. (Haskell can do some of this, but I don't like lazy evaluation as a default, among other things.) Playing with Prolog and logic programming has gotten me curious about how far that sort of analysis could be pushed.
About the insurance example -- I know exactly where you're coming from. I've done something similar for the architectural industry, and a huge suite of example results helped tremendously.
* And very likely Haskell, SML, Clean, and others, but my "advanced type system" experience is mostly with OCaml.