When TDD Goes Bad #1
Although Test Driven Development (TDD) is one of the greatest steps forward in software engineering, especially when combined with modern languages and testing frameworks (e.g. xUnit), there's a definite anti-pattern lurking in there - Test-Oriented Development.
At its most basic level, writing tests is done for one reason and one reason only. It's not about ensuring you meet requirements, it's not to have a repeatable means of knowing the code functions properly after changes, and it's not to aid design. The key reason for writing tests is the same as everything else in software development - to make the company commissioning the software more profitable. So, the justification for TDD is based upon the premise that, in the long run, writing tests will save the company money on downtime, effort in re-working, and so on - but these are just implementation details of the underlying principle. Now, although most developers that do TDD understand the long-term payback view of the approach, I'd question how many actually consider the value of each test or set of tests being developed.
Using an agile methodology, tying the (acceptance) tests to the stories is fairly simple, seemingly giving visibility of the value of tests. But, given that many verbally specified tests are quite open-ended, how do you know how much testing is enough? The test may be stated as "When I enter the correct data, but there is a connectivity issue, I will see error message X". A verbally described test such as this can have a hundred different programmatic tests implemented - simulating standard firewall issues, XML-firewall issues, web servers being down, latency in connections, time-outs, corruption of data, etc. There is no immediately visible answer to "how much testing is enough?" in such a scenario. This is even worse with unit tests, which aren't likely to be defined formally (verbally) at all, and which depend largely on how meticulous a developer is.
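To make the fan-out concrete, here's a minimal sketch in Python's unittest of how that single verbal test spawns a whole family of programmatic ones. The `submit_data` function and the mocked transport are invented purely for illustration - they stand in for whatever the real system does:

```python
import unittest
from unittest import mock

# Hypothetical function under test: submits data over some transport
# and surfaces a single error message on any connectivity failure.
def submit_data(payload, transport):
    try:
        transport.send(payload)
        return "OK"
    except (ConnectionError, TimeoutError):
        return "error message X"

class ConnectivityTests(unittest.TestCase):
    """One verbal test ("connectivity issue -> error message X")
    expands into a separate programmatic test per failure mode."""

    def _failing_transport(self, exc):
        # A mock transport whose send() raises the given exception.
        transport = mock.Mock()
        transport.send.side_effect = exc
        return transport

    def test_connection_refused(self):
        t = self._failing_transport(ConnectionError("refused"))
        self.assertEqual(submit_data({"a": 1}, t), "error message X")

    def test_timeout(self):
        t = self._failing_transport(TimeoutError("timed out"))
        self.assertEqual(submit_data({"a": 1}, t), "error message X")

    # ...and so on: firewall drops, XML-firewall rejections, latency,
    # corrupted responses - each a distinct test method, all tracing
    # back to the same single sentence in the story.
```

Each new failure mode is another test method, and nothing in the verbal statement tells you where to stop adding them.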
Added to the lack of a metric is the fact that the developer is concentrating on testing before "coding" - the primary function hence becomes the development of tests, leading to a situation of "test-oriented development". This is where the production of tests becomes the overriding concern and output of a team, with the development of the required functionality being secondary. When this occurs, the design, quality, and scope of the tests all become greater than those of the system actually being created. As the development of tests follows the law of diminishing returns, beyond a point each additional test you write for a given circumstance adds an ever more marginal increase in quality. Eventually, you reach a point where the development of the tests will never achieve ROI over the lifetime of the system. Basically, the development of tests becomes counter-productive to the aims of the business. This is compounded by the Heisenberg's-Uncertainty-Principle-like facet of TDD - its alteration of the design of the code it's testing. But that's a subject for another post in the next few days...
Avoiding test-oriented development can be quite difficult - changing a developer's mindset to take on TDD is difficult enough to start with, and trying to then temper the creation of tests goes back on that principle to some extent. My current thinking on this is that, along with acceptance tests and development tasks, a separate list of requirements needs creating for each story - a list of required unit tests. Rather than just estimating the effort required to complete a story point in its entirety, the effort required for each unit test (and its value in pound-notes/dollar-bills) should also be calculated to some extent. Each test is then subject to the same cost-benefit analysis and prioritisation as stories at a higher level. The task of doing this should also help developers to consider the reason for creating a test: how realistic or contrived the situation it's testing is, whether the effort could be better expended elsewhere, and finally, what they're actually trying to achieve at a less programmatic level. If this seems like too much effort, then an alternative may be to look at the ratio of time spent writing tests to time spent on the production code itself for a given story point, to ensure that the focus isn't shifting from the system to the tests.
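As a rough sketch of what that per-test cost-benefit exercise might look like, here's a few lines of Python that rank candidate unit tests by a value-to-effort ratio. The test names, values, and effort figures are entirely invented for illustration:

```python
# Candidate unit tests for a single story, each with a rough
# estimated value (in pounds) and effort (in hours). All figures
# are made up for the sake of the example.
candidate_tests = [
    ("standard firewall failure", 500, 1.0),
    ("XML-firewall failure",      150, 4.0),
    ("web server down",           500, 0.5),
    ("connection latency",        100, 3.0),
    ("data corruption",           400, 2.0),
]

def value_per_hour(test):
    name, value, effort = test
    return value / effort

# Highest-payback tests first; a team can draw a cut-off line where
# the ratio no longer justifies writing the test at all.
ranked = sorted(candidate_tests, key=value_per_hour, reverse=True)
for name, value, effort in ranked:
    print(f"{name}: £{value / effort:.0f} per hour of effort")
```

The absolute numbers matter far less than the act of producing them - forcing an estimate per test is what prompts the "is this test worth writing?" conversation.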