Test-Driven Development vs. working at the right level of abstraction

I like the idea of TDD. I think it could be a good fit for part of the kind of development I do, but I still don't have the discipline it takes to start doing it.

On the other hand, suppose we could write something like:

throw NotEnoughInventoryException if Product.Inventory < 0;

and we had a framework that ensured the constraint was checked in the right places (the UI, the business logic layer, the database, whatever). Let's suppose that framework has already been built and tested by whoever built it.
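
Just to make that concrete, here is a minimal sketch of how the declaration could look in C#; the ConstraintAttribute and the framework that would read it are imaginary, purely to illustrate the idea:

using System;

// Hypothetical attribute: the imagined framework would read this and
// enforce the rule in the UI, the business logic layer and the database.
[AttributeUsage(AttributeTargets.Class)]
public class ConstraintAttribute : Attribute
{
    public ConstraintAttribute(string rule, Type exceptionType)
    {
        Rule = rule;
        ExceptionType = exceptionType;
    }

    public string Rule { get; private set; }
    public Type ExceptionType { get; private set; }
}

public class NotEnoughInventoryException : Exception { }

// The business rule, declared exactly once:
[Constraint("Inventory >= 0", typeof(NotEnoughInventoryException))]
public class Product
{
    public int Inventory { get; set; }
}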

Now suppose I want to write this code in a TDD way. I start by writing a test that sets the product's Inventory to -1 and tries to save it. The test fails first; then I write my implementation code, and then it succeeds.
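
That first test could look more or less like this in NUnit (Product, its Save method and the exception are just the illustrative names from the example above, not a real API):

using NUnit.Framework;

[TestFixture]
public class ProductInventoryTests
{
    // Written before the implementation exists: it fails (red) until the
    // inventory check is in place, and passes (green) afterwards.
    [Test]
    [ExpectedException(typeof(NotEnoughInventoryException))]
    public void Saving_A_Product_With_Negative_Inventory_Throws()
    {
        Product product = new Product();
        product.Inventory = -1;
        product.Save();
    }
}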

But I'm actually a little (OK, more than a little) lazy, and I want the same test to work for the Win UI, the Web UI, the business logic layer, the database, whatever. So I write a framework that lets me write something like:

throw NotEnoughInventoryException if Product.Inventory < 0;

and generates the code for the tests.
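
As a sketch, reusing the imaginary ConstraintAttribute from above, that "test generator" could be little more than a loop over the declared constraints that emits one NUnit test per rule:

using System;
using System.Reflection;

// Purely hypothetical: walks every [Constraint] declaration in the assembly
// and prints the NUnit test it would generate for it.
public static class ConstraintTestGenerator
{
    public static void Main()
    {
        foreach (Type entity in Assembly.GetExecutingAssembly().GetTypes())
        {
            foreach (ConstraintAttribute constraint in
                     entity.GetCustomAttributes(typeof(ConstraintAttribute), false))
            {
                Console.WriteLine("[Test]");
                Console.WriteLine("[ExpectedException(typeof({0}))]", constraint.ExceptionType.Name);
                Console.WriteLine("public void {0}_Violates_{1}()", entity.Name, constraint.ExceptionType.Name);
                Console.WriteLine("{{ /* build a {0} that violates \"{1}\", save it */ }}", entity.Name, constraint.Rule);
                Console.WriteLine();
            }
        }
    }
}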

Now I start feeling a little stupid, as I'm writing the same code twice: once for the test and once for the implementation.

I can't help thinking that if we could write code at the right level of abstraction, then we wouldn't need TDD or even Unit Testing. No one (I hope) writes a test to check whether a statement like 'a = 1' works OK.

That's probably why I feel DSLs are much more interesting than TDD.

If we had a language that lets us express ourselves at the right level of abstraction, would we need TDD or Unit Testing?

Can any TDD convert help me out here?

18 Comments

  • Does the law of leaky abstractions ring any bells?

  • I think the aim of all testing should be to minimise risk, and an appropriate level of abstraction is the lowest level where the following hold:



    1) Where something is likely to go wrong, given the experience of the programmer developing it. Adding 1 to a variable isn't likely to go wrong, and is too low a level to test.



    2) Where, if something does go wrong, it is likely to cause damage to life or financial damage to your organisation. There's less need to test everything in throwaway code than in an application that monitors a nuclear power station.



    3) Where you wouldn't discover the problem easily by other means - if a problem in the thing you are testing would crash your program the first time you ran it in a debug environment, there probably isn't a need to write a unit test for it.



    TDD is sometimes the best way to test (especially for library functions that will be used by multiple applications). However, in my experience, the places where things most often go wrong, and where problems are difficult to discover, are the joins between layers of the application (e.g. where code connects to a database), and TDD should not be used exclusively as it does not reduce the risks in these areas.
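
    For example, a test at one of those joins is only worth much when it actually talks to the real database; something along these lines (the connection string and the Products table are made up for the example):

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class ProductRepositoryIntegrationTests
    {
        // Only meaningful when run against the real database schema;
        // a mocked connection would hide exactly the problems that matter here.
        private const string ConnectionString =
            "Server=(local);Database=Shop;Integrated Security=true";

        [Test]
        public void Can_Read_Inventory_For_An_Existing_Product()
        {
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "SELECT Inventory FROM Products WHERE Id = 1", connection);
                object inventory = command.ExecuteScalar();
                Assert.IsNotNull(inventory);
            }
        }
    }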

  • > Does the law of leaky abstractions ring any bells?



    Yes, let's see which bells.



    All abstractions leak.



    a) Does it mean that I need to test all the abstractions I'm using? I don't think so.



    b) Does it mean that sometimes I won't be able to express what I want using the abstraction? Probably. And in those cases, I might need to do TDD. Or I could put more effort into refining the abstraction.



    Which bell did I miss?





  • I agree that DSLs are much more interesting than TDD, but when you use DSLs, you must test your code AND the code generated by DSLs.

    No DSL is 100% bulletproof yet. :(







  • The main benefit of TDD is that it helps you design each module of a program. In other words: what level of abstraction do I need to expose for consumers to use this module, and does what I exposed work properly? I haven't used DSLs much, but it seems to me that no matter how domain-specific your language is, you still have to decide how to use that language to solve your problem. I can't see how a DSL in any way replaces the benefits of TDD.

  • Reggie,



    You are right in the sense that if you are developing a software module, it looks like you still need TDD.



    Your post made me realize that DSLs are mostly used to configure an existing software module.



    Let's think about another example. Should we use TDD to build a WWF Workflow that we design with the WWF designer?









  • > It doesn't duplicate the functionality of the exception itself.



    Instead of writing



    if (Inventory < 0)
        throw new Exception();



    I write something like:



    [Test]
    [ExpectedException(typeof(Exception))]
    public void Test()
    {
        Product p = new Product();
        p.Inventory = -1;
        p.Update();
    }



    Conceptually, it still looks similar.



    I want to check this in several layers for a couple of reasons: I want the user to be notified ASAP (i.e., in JavaScript, without a postback, rather than only after trying to update), and sometimes you want the database to do the ultimate check. Anyway, this is just an example, let's not focus on it ;)



  • TDD will likely turn out to be one of the most unfortunate terms ever chosen for a worthwhile concept (running a close second to Design By Contract). I've been doing things the "TDD way" more often over the last couple of years, especially after I finally "got it". The trouble I have had in trying to convince peers to try it out is the four-letter word in TDD - "Test". The thing I always hear is, "...but I already test my code." Most everyone misses that it is a better way to _Design_ software - it took me a while to see it myself.



    DBC naturally blends into TDD, but again, people throw around the "Contract" word with such abandon that pre/postconditions and invariants are totally absent from most everyone's concept of Contract.
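
    To tie it back to the post's example, even a hand-rolled guard like the one below is a contract - a precondition plus an invariant - and it's exactly what a TDD test ends up pinning down (RemoveFromInventory is just an illustration):

    using System;
    using System.Diagnostics;

    public class NotEnoughInventoryException : Exception { }

    public class Product
    {
        private int inventory;

        public void RemoveFromInventory(int quantity)
        {
            // Precondition: callers may only remove a positive quantity they actually have.
            if (quantity <= 0)
                throw new ArgumentOutOfRangeException("quantity");
            if (quantity > inventory)
                throw new NotEnoughInventoryException();

            inventory -= quantity;

            // Invariant: inventory never drops below zero.
            Debug.Assert(inventory >= 0);
        }
    }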

  • Although I agree with the need to test software for accuracy, I too feel too lazy to write unit tests, especially when it seems to be more busy-work than anything. I don't think the technique of DSLs removes the need for unit testing, because I think you still have to write functional pieces that act upon your DSLs. BUT, you could write a UnitTest DSL that allows you to simplify your unit testing.

  • How do you know that you're working at the inherently optimal level of abstraction?



    TDD is a design technique that helps software workers to arrive at the right level of abstraction by providing methodology and patterns that allow the design to be tuned - like a guitar string - to approach 0 degrees of phase shift in the multiple frequencies of the multiple concerns inherent in the code under design.



    TDD can be used to arrive at a design-tuned DSL, but once that DSL is statically locked down, it only continues to deliver value going forward if the change forces coming from the business environment or the underlying technology environment cease to evolve.



    Ultimately, if a tool were able to generate code that approaches near-100% confidence, why generate tests? The answer has to do with the need to change software, evolve the design, and leave open opportunities to re-leverage code that is factored to enable reuse harvesting in contexts that we possibly can't even conceive of.



    TDD is like a word in one language that another language doesn't have - a word that requires multiple words in the other language to approximate, and even then only an approximation is ever arrived at, rather than a deep inherent understanding.



    Developers who become fluent in TDD tend to understand that traditional software design verbiage doesn't really have a whole lot of tools to account for what TDD does. To understand TDD, it has to be understood in its native context. That requires spending some time in its native land engaging in the practices that give it context.



    I think most people underestimate the extent of TDD. In my experience, it has turned out to be a paradigm shift rather than merely a minor adaptation of existing waterfall approaches that ultimately beget high-end code generation.



    The question of code generation and TDD is a really interesting one. I wish there were more writing, investigation, and research on the subject. My suspicion at this point in the game is that they are orthogonal practices whose artifacts can be used together on the same project, but rarely to address the same concerns.



    My 2 cents, anyway.

  • I'm not a fan of TDD... as in, the name chosen for this design/development methodology. I think a *better* acronym would be TIOASIIIWYM... as in "Try It Out And See If It Is What You Meant"... pronounced "Tee-OH-See-WHIM". TDD is all about challenging your design abstractions by "trying it out". If your abstractions suck, then your tests will have trouble doing what you want.

  • > If we had a language that lets us express ourselves at the right level of abstraction, would we need TDD or Unit Testing?

    You're right, provided the code generation framework itself works correctly (proven by unit test cases).

  • I am currently in a project where NUnit tests written in C# are being used to test BizTalk Orchestrations (=DSL). A conceptual mess, but it seems to work.

  • It may look similar but it is actually very different. In the first case, your actual implementation code is ensuring that an invalid situation cannot happen. In the second case, your unit test is making sure that your implementation code enforces business cases properly. Unit tests don't test business logic! Unit tests ensure that your application handles business logic properly. The two snippets you posted are complementary, not competitive.

  • I would still use TDD :)

    IMHO it helps me to check all my assumptions rather than take them on faith or rely on the success of one execution path.

    I haven't had much exposure to DSLs, but for me TDD helps my design/software/system talk back to me about what I need to do next or do better.

  • Hi Andres,



    I see now what you're asking. There have been discussions on the TDD mailing list about Test-Driving code generated from a Wizard and other generation frameworks.



    I think it comes down to how much you want to trust the tool you are using, and the pain points you are experiencing. So, with your WWF example, my question would be, what problems have you run into with it? When you have to change something, how can you know that nothing broke, or how can you know what is going to break?



    I guess for me I sketch out a design or prototype, then use TDD to fill in the details, being cautious to allow my code to evolve where TDD is leading me without sacrificing where I need to end up. I like to work in small chunks so that I can get rapid feedback about whether it works, and also so that if it doesn't, I'm Ok with throwing it away.



    I can't see that holding true for things like WWF. Again, I haven't seen it, but I have seen some Speech Server stuff, and the workflow generation in it. It's cool. But how do you test it?



    And maybe the answer is that you don't. You trust the tool.

  • As a full-time TDD'er, it seems to me your design is flawed. Shouldn't the inventory check be a business function in the business tier only? Wouldn't that reduce duplicated logic and allow a single unit test to cover it?

  • Hi Rob,



    Yes, you are probably right, it's a bad example, but don't focus on the example ;)



    Even if it's a simpler validation (like a date range), I'll still want to perform it in several places.



    I think the workflow sample is better. Would you test a Workflow diagram? Would you test rules written for a business rules engine? How?

    If they are difficult to test, should they be disregarded as a good solution?
