December 2004 - Posts
I've been debating whether to post another entry on this for the last couple of weeks due to another bad smell I've spotted around Agile practices. However, it's my blog, and I think it's worth saying. But first, the bad smell...
Zealotry. Many advocates of Agile practices have had to fight their corner hard against supporters of traditional methodologies. Having had several such confrontations myself, I know it becomes tiring, and you do begin to expect fairly incongruent arguments against it. There is a danger, though, that the habit of dealing with arguments against Agile that don't stack up will lead to a blinkered view where it's assumed that any negative statement about the principles and approach should be dismissed. No methodology is perfect - different circumstances call for different approaches, etc. Also, just because a methodology when applied correctly is appropriate and should result in success, it doesn't mean that it will be applied correctly and appropriately. People can, and will, get things wrong - especially when lacking experience.
So, TDD Gone Bad #2... Mock oriented development. Mocks are a good thing where you really want to extract a dependency and ensure you're testing the right thing. But there are a couple of issues that can arise:
- Poor integration tests, as everything is being tested in isolation - we can end up with a system where the constituent parts are clean, isolated, well tested, and known to be correct. But how they fit together is a greyer (or even blacker) area unless mocking is accompanied by a complement of integration tests. Dave describes this well in his blog, so there's no need for me to go on about it here: http://www.twelve71.org/blogs/dave/archives/000616.html
- Mock oriented design. Whilst mocking is good at removing dependencies, introducing mocks regularly alters the design of the system. I've seen numerous occasions where the introduction of mocks has added a large amount of complexity to an otherwise simple design. This complexity leads to higher implementation costs, a higher cognitive load on the developers working on the system, and higher maintenance costs (as there's more code to maintain). All of which go against the principle of "the simplest thing". The irony is that the introduction of mocking can, sometimes, make completing a system far more time consuming, due to the different levels of granularity and the additional code required to implement interfaces, etc. Again, that's not to say mocks aren't very useful things, just that they're a tool to be used where appropriate, not a pattern to base the foundations of your entire implementation on...
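To make the complexity point concrete, here's a minimal Python sketch (all names are hypothetical, not from any particular project). The direct version is a few lines; making the same behaviour mockable means adding an abstract role, a production adapter, and injection plumbing - more code to write, read, and maintain:

```python
# Direct version: simple, but the file-system dependency is baked in.
def report_total(path):
    with open(path) as f:
        return sum(int(line) for line in f)

# Mock-friendly version: the same behaviour now needs an interface,
# a production implementation, and constructor injection.
class LineSource:                      # abstract role, exists only so a mock can fit
    def lines(self):
        raise NotImplementedError

class FileLineSource(LineSource):      # production adapter around the real file
    def __init__(self, path):
        self.path = path
    def lines(self):
        with open(self.path) as f:
            return list(f)

class Reporter:
    def __init__(self, source):        # dependency injected so a mock/stub fits here
        self.source = source
    def total(self):
        return sum(int(line) for line in self.source.lines())

# The isolated unit test, using a hand-rolled stub instead of the real file:
class StubSource(LineSource):
    def lines(self):
        return ["1\n", "2\n", "3\n"]

assert Reporter(StubSource()).total() == 6
```

Three classes and an interface where one function used to do - sometimes that indirection pays for itself, but it's exactly the granularity cost described above.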
Whenever mocking is used, the value the mock gives versus the cost it will introduce over the lifetime of the system should be measured, or at least estimated and considered.
So, #1.1 was all about the "business" - people defining requirements, and how these can cause issues. #1.2 is just a short entry about the underlying statement I was trying to make in the original post:
Anti-pattern: Where inexperienced/misguided developers keep on testing where it's not realistic or financially astute to do so, thinking that's what TDD is, and thinking they're "adding value". As we all know, that's not what TDD is. And it's something we should try to avoid by coaching, mentoring, and working closely with development teams; helping to give them ways of judging the value of any piece of work.
It's all too easy to work with a group of savvy technical people that get TDD, and not be able to see how it could go wrong. If you try to scale this across an enterprise where some people just don't get it, and the support isn't there to keep things moving in the right direction, things can, and do, go wrong.
OK, so people don't seem to get what I was trying to say in my previous TDD post, to the point of claiming (on certain newsgroups) that I "don't get TDD". There were two points I was trying to make. So, I'll try to lay out the first of these now, in a simpler manner:
- Every test you write has a cost associated with it - the development time/cost: C
- Every test you write has a value associated with it - the time/money it'll save over the lifetime of the system: V
- At its simplest level, V must be greater than or equal to C for it to have been worth implementing the test (I'll probably post something separately on the mathematical modelling of this in the future)
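As a toy illustration of the C/V comparison above (all figures invented for the example): a test's value is the cost of the failure it prevents, weighted by how likely that failure is to ever occur. Half a day spent guarding against a likely two-day defect clears the bar; the same effort spent on an improbable one-hour defect doesn't:

```python
# Toy cost/value check for a candidate test (all figures are invented).
def worth_writing(cost_hours, failure_cost_hours, p_failure_over_lifetime):
    # Expected value of the test = cost of the failure it would catch,
    # weighted by the probability that failure ever occurs.
    value = failure_cost_hours * p_failure_over_lifetime
    return value >= cost_hours

# V = 16 * 0.5 = 8 hours saved, C = 4 hours to write: worth it.
assert worth_writing(cost_hours=4, failure_cost_hours=16, p_failure_over_lifetime=0.5)

# V = 1 * 0.1 = 0.1 hours saved, C = 4 hours to write: not worth it.
assert not worth_writing(cost_hours=4, failure_cost_hours=1, p_failure_over_lifetime=0.1)
```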
V can end up lower than C in several situations, viewed from a "story" (business) perspective:
- Developers have been provided with open ended requirements - where there is no bound on the tests that have to be written. At its most extreme, this could be something like "Test: An unexpected physical occurrence takes place outside the system. Result: The data will be backed up correctly". That's an open ended test, though. What if someone cuts the power? Or a lightning bolt hits the server rack? Or a tiny black hole forms over the CPU? "C" would become infinite if all scenarios were dealt with (and just because I've used an extreme example here doesn't mean it couldn't be something much more subtle). Some of these scenarios are more likely than others, and there's a diminishing return on each test being implemented as the likelihood of the scenario occurring drops. To stop this situation from occurring, requirements need to be closed and finite. If they're not closed/finite, you arrive at the situation in my previous post where a developer can simply "judge" how much testing is enough. Unless developers are calculating the C:V ratio of each test in a situation with any open-endedness, TDD can go bad.
- Developers have been provided with requirements with negative value - where the cost of implementing the test and code is inherently greater than the value it will deliver. Accurate estimation of tasks, compared against the value of any given story, will avoid this situation.
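One way to operationalise the two points above is to rank candidate tests by their value-to-cost ratio and stop once the ratio drops below one. This is a hypothetical sketch (the scenarios, hours, and likelihoods are all invented), not a prescription:

```python
# Hypothetical sketch: rank candidate tests by V:C ratio and drop those
# whose expected value doesn't cover their cost (diminishing returns).
candidates = [
    # (name, cost to write in hours, failure cost in hours, likelihood over lifetime)
    ("invalid input rejected",     1,  8, 0.9),
    ("disk full during save",      4, 16, 0.3),
    ("power cut mid-transaction", 16, 40, 0.02),
]

def ratio(test):
    name, cost, failure_cost, likelihood = test
    # V = failure cost averted, weighted by likelihood; C = cost to write.
    return (failure_cost * likelihood) / cost

worthwhile = [t[0] for t in sorted(candidates, key=ratio, reverse=True)
              if ratio(t) >= 1]
print(worthwhile)  # → ['invalid input rejected', 'disk full during save']
```

The open-ended "power cut" style scenario falls out naturally: its ratio (0.05) is nowhere near covering its cost, which is the diminishing-return point made above in numbers.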
[In part #1.2, I'll discuss the second part of this]