When TDD goes red

The past four days have been particularly grueling: I grossly underestimated the singleton abuse I talked about. I ended up fixing more than 500 compile errors (the 236 errors mentioned in the previous post were just the start of it!). There are 330 compile errors left, but I've given up on those.


Two reasons: the remaining errors were all in the unit tests, and those tests had themselves abused the very singleton I was fighting against.

Faced with this, I started to think that the use of Test-Driven Development has, to some extent, failed in this project. I hope I don't offend any previous project members (no offense is intended), but here are my insights as to why TDD failed here:

TDD was treated as a task, when it should have been treated as an approach. Enforcing TDD on those who haven't heard of it, or who find it preposterous and expensive, makes the affair an uphill battle; and once the enforcement stops (e.g., its proponents leave), the rest of the team regresses into easier, test-less or test-last coding.

There was also a failure to recognize that TDD is not about tests; it's about design. The rampant singleton abuse in the unit tests made this obvious: instead of asking "WTF are these singleton = value; statements doing in my tests?", the test writers simply propagated the singleton into the tests. 330 times.
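To illustrate the smell, here is a minimal sketch in Java (the names Config and PriceFormatter are invented for illustration and are not from the actual project). The hidden singleton dependency is exactly what forces every test to mutate global state first:

```java
import java.util.Locale;

// Config stands in for the project's global singleton (hypothetical name).
class Config {
    private static final Config INSTANCE = new Config();
    public static Config getInstance() { return INSTANCE; }
    public String currency = "USD";
}

// PriceFormatter has a hidden dependency: it reaches into the singleton
// instead of receiving its configuration as a constructor parameter.
class PriceFormatter {
    String format(double amount) {
        return Config.getInstance().currency + " " + String.format(Locale.US, "%.2f", amount);
    }
}

public class SingletonInTests {
    public static void main(String[] args) {
        // Every test must poke global state before asserting -- the
        // "singleton = value;" line that got copied into test after test.
        Config.getInstance().currency = "EUR";
        System.out.println(new PriceFormatter().format(9.5)); // prints "EUR 9.50"
    }
}
```

The design fix TDD was supposed to surface is making the dependency explicit (pass the configuration in), at which point the tests no longer need to touch any global state.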

The unfortunate consequence is that the build-server-enforced tests were made to pass, whatever it took. As I went through the test classes, I found that many tests were either commented out or tagged with an Ignore attribute. The "DDD" comments jokiz found were just the tip of the iceberg. Unfortunately, we no longer know what to do with these tests, because we have no idea whether they are up to date with our current requirements.

However, I'm not giving up on unit-testing entirely.

I'm proposing to our lead that we start with a clean slate with regard to the tests, and move towards a Behavior-Driven Development (BDD)-like approach, where we write tests to check that we fulfill our requirements. That is, we will enforce writing tests that check whether the code does what the system requirements say it should. The nitty-gritty won't matter for the meantime -- no need to test each and every method.
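As a sketch of what such a requirement-level test could look like (OrderService and the discount rule are hypothetical examples, not our actual requirements), the idea is one test per requirement, exercised through the public API:

```java
import java.util.List;

// Hypothetical requirement: orders totalling 1000.00 or more get a 10% discount.
class OrderService {
    double totalDue(List<Double> lineAmounts) {
        double sum = lineAmounts.stream().mapToDouble(Double::doubleValue).sum();
        return sum >= 1000.0 ? sum * 0.9 : sum;
    }
}

public class RequirementTests {
    // One test per requirement, exercising the public surface only --
    // no test for each and every internal method.
    public static void main(String[] args) {
        OrderService service = new OrderService();
        double discounted = service.totalDue(List.of(600.0, 500.0)); // expect 990.00
        double regular = service.totalDue(List.of(100.0));           // expect 100.00
        if (Math.abs(discounted - 990.0) > 1e-9 || Math.abs(regular - 100.0) > 1e-9) {
            throw new AssertionError("requirement not met");
        }
        System.out.println("requirements hold");
    }
}
```

The point is that the test names and assertions trace back to a stated requirement, so a failing test tells us which requirement broke rather than which private method changed.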

This is definitely not genuine BDD (we don't exactly have user stories -- system requirements can never be equated with user stories), but I hope it will make it easier for the team to appreciate the tests.

I also hope that it is a step in the right direction.

About Jon Limjap

Jon Limjap has been programming since he was 12 and hasn't stopped yet. He was gone for a while in iOS and Java land, but is now back in .NET searching for unicorns and hunting down dragons.
This entry was posted in Tech Musings. Bookmark the permalink.

4 Responses to When TDD goes red

  1. cruizer says:

    in parts of the project you speak of, i tried to implement a bit of BDD. what i noticed is that TDD/BDD doesn’t necessarily lead to bug-free code (because some issues reported were actually mine!) but it helped me make the code easy to refactor/change. it also helped flesh out the collaboration between objects. the issues that were reported during QA were actually misinterpretations of the requirements on my part. so when clarification came in which necessitated some changes to the classes i was easily able to adapt to it and add new test cases which described the proper behavior before finally coding the necessary changes to pass the tests.

  2. LaTtEX says:

    I know bushing — that’s why I particularly like the categorization review process tests. Those are the kinds of tests that I will encourage be written in the application.

  3. Pingback: Ang Kape Ni LaTtEX » Blog Archive » Captive Audience

  4. Pingback: .NET @ Kape Ni LaTtEX » Blog Archive » Repeat after me: Test Driven Development is about design, NOT testing!

Comments are closed.