Hacker News

I believe that, contrary to conventional wisdom, one should write tests from the top down: integration tests first, then move down to testing individual functions if necessary. Not the other way around.

On my current project, for the first time in my long career, I have 100% code coverage. How did I achieve it? By ignoring best practices on what constitutes a unit test. My "unit" tests talk to databases, read and write files, etc. I'll take 100% coverage over unit-test purity any day of the week.
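A minimal sketch of what such an impure "unit" test might look like, using a hypothetical `save_user`/`get_user` persistence layer backed by a real (in-memory) SQLite database instead of a mock:

```python
import sqlite3

# Hypothetical code under test: a thin persistence layer.
def save_user(conn, name):
    conn.execute("INSERT INTO users(name) VALUES (?)", (name,))

def get_user(conn, user_id):
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None

# The "unit" test exercises real database behavior rather than a mock.
def test_save_and_get_user():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)")
    save_user(conn, "alice")
    assert get_user(conn, 1) == "alice"
    assert get_user(conn, 99) is None

test_save_and_get_user()
```

An in-memory database keeps such tests fast while still covering the SQL itself, which a mocked connection never would.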



100% coverage doesn't mean much if you aren't testing that the low-level code is doing the right thing.

For example, imagine you're testing a calculator app. Your integration tests make sure that it never crashes, the UI works, basic math is correct, etc., but maybe the sin() function is only accurate to two decimal places.
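To illustrate with a contrived example: a hypothetical low-precision sine (a truncated Taylor series) easily passes a coarse "basic math works" check, while a focused unit test on precision catches it:

```python
import math

# Hypothetical low-precision sin: only the first two Taylor terms,
# so it drifts badly as x grows toward pi/2.
def approx_sin(x):
    return x - x**3 / 6

# A coarse integration-level check ("basic math works") passes:
assert abs(approx_sin(0.1) - math.sin(0.1)) < 1e-3

# A focused unit test on precision tells the two apart:
def sin_is_accurate(f, tolerance=1e-6):
    return all(abs(f(x) - math.sin(x)) < tolerance
               for x in [0.1 * i for i in range(16)])

assert not sin_is_accurate(approx_sin)  # truncated series fails
assert sin_is_accurate(math.sin)        # the real thing passes
```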

Edit: I do not mean to imply that you, specifically, are missing things. Rather, it is possible to write tests that have 100% code coverage while missing many possible bugs, and I think those risks are increased without the presence of unit tests.


> 100% coverage doesn't mean much

That's absolutely right: 100% coverage doesn't say much. In fact, by itself it's a pretty meaningless measure of code quality. But the top-down idea still holds. In practice, bugs are more likely to happen at the seams between components, and that's where one should test first, not last. It depends on the type of project, of course, but I believe this holds for the majority of projects out there.
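A sketch of the "bugs live at the seams" point, with two hypothetical components that each pass their own unit tests while the seam between them (a cents-to-dollars conversion) is where a bug would hide:

```python
# Component A: returns a price in integer cents.
def price_in_cents(item):
    return {"book": 1250}[item]

# Component B: formats a price given in dollars.
def format_price(amount_dollars):
    return f"${amount_dollars:.2f}"

# The seam: forgetting the /100 here would pass both components'
# unit tests but produce "$1250.00" in the UI.
def display_price(item):
    return format_price(price_in_cents(item) / 100)

# An integration-style test exercises the seam directly:
assert display_price("book") == "$12.50"
```

Neither component's unit tests would ever exercise the unit conversion; only a test spanning both does.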


100% means nothing, but 30% means something is wrong. Coverage tells you nothing, but the absence of coverage is interesting and useful.


Repeatedly talking to the database (i.e. retesting known database functionality over and over) is not intrinsically a problem. In a smallish codebase it's fine. But once you get into the hundred-thousand-line range and beyond, those tests become a serious time sink on test runs, which need to be fast for high team productivity.

I learned this the hard way working on a million-line codebase for a couple of years. A 40-minute test-suite runtime before you know whether you broke something is a serious flow-breaker.

It is of course up to you whether you want to design for that kind of scalability.


Top-down / outside-in test-first development is BDD.


The TDD-style bottom-up approach can be handy too. On a recent project I wrote most of the tests first, to think through the API before implementation. These tests should come in handy when future developers join the codebase and need to understand the intended behavior of these components.


Would a future developer care if you wrote your tests before or after the API was implemented?


Actually, yes: it's really hard to write a test for an existing (non-pure) API without mocking.
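A sketch of the problem, using a hypothetical non-pure function that mixes logic with network I/O; testing it in isolation forces you to patch out the I/O with `unittest.mock`:

```python
from unittest.mock import patch
import urllib.request

# Hypothetical non-pure API: the logic is entangled with a network
# call, so it can't be unit-tested without patching.
def fetch_status(url):
    with urllib.request.urlopen(url) as resp:
        return "up" if resp.status == 200 else "down"

def test_fetch_status_up():
    with patch("urllib.request.urlopen") as mock_open:
        # Simulate a context manager yielding a response with status 200.
        mock_open.return_value.__enter__.return_value.status = 200
        assert fetch_status("https://example.com") == "up"

test_fetch_status_up()
```

Had the status-to-string logic been a pure function taking an integer, no mocking machinery would be needed; writing the test first tends to push the design that way.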


Interestingly, if you go right back to Kent Beck's TDD book, you'll see you're in agreement with him. Also with Rails testing as it comes out of the box. Testing individual functions is orthogonal to testing a unit.


How big is your project? What's your test suite's run time? I honestly think unit-test purity is very worthwhile on a large project. It can mean the difference between a 1-minute build and a 6-minute build.


What is your stack? I feel like you are doing something odd if total build times are a bottleneck on development.



