November 2008 - Posts

C# 3.0 has introduced lots of great features to make our lives easier and the syntax sweeter. Lots of people have talked about it already, and I am not scooping anything new here. What I do want to demonstrate is how to make the code easier to understand.

I confess that I rely on a tool to help me achieve what I will demonstrate – ReSharper. Not only is it a great refactoring tool, it is also an excellent guide to C# 3.0 features (and not only those). So let's roll to an example.

The test is trying to confirm that whatever path is returned by the system under test (SUT) ends with either “\bin”, “\debug”, or “\release”. Normally, this is what I would write:

    [Test]
    public void Should_return_the_path_represented_by_system_as_a_BaseDirectory()
    {
        var isMatching = result.EndsWith(@"\bin", StringComparison.InvariantCultureIgnoreCase)
                       || result.EndsWith(@"\debug", StringComparison.InvariantCultureIgnoreCase)
                       || result.EndsWith(@"\release", StringComparison.InvariantCultureIgnoreCase);
        Assert.IsTrue(isMatching);
    }

Way too chatty. The whole EndsWith dance is more of a distraction than a help.

Next step – introduce an extension method that encapsulates the testing. One option would be to write a method that takes a boolean and asserts on it.

    public static void should_be_true(this bool item)
    {
        Assert.IsTrue(item);
    }

Then the code would read a bit more descriptively, but still with lots of ‘noise’.

    (result.EndsWith(...) || (...) || (...)).should_be_true();

Not nice :) Another way is to have an extension method accepting a Func&lt;T, bool&gt; and leverage a method group.

    public static void should_be<T>(this T item, Func<T, bool> evaluateWith)
    {
        Assert.IsTrue(evaluateWith(item));
    }

To escape the ‘noise’ I do what I described in the previous post (Test Helper with Fluent Interface).

    private class True
    {
        public static bool for_the_given_path(string path)
        {
            return path.EndsWith(@"\bin", StringComparison.InvariantCultureIgnoreCase)
                   || path.EndsWith(@"\debug", StringComparison.InvariantCultureIgnoreCase)
                   || path.EndsWith(@"\release", StringComparison.InvariantCultureIgnoreCase);
        }
    }

Bringing the test to this form makes it cleaner.

    [Test]
    public void Should_return_the_path_represented_by_system_as_a_BaseDirectory()
    {
        result.should_be(s => True.for_the_given_path(s));
    }

With some hints from R#…


It becomes a “now I get it” test, as my teammate Terry Thibodeau says – a hint that the test has finally become readable and simple enough to be considered good.

    [Observation]
    public void Should_return_the_path_represented_by_system_as_a_BaseDirectory()
    {
        result.should_be(True.for_the_given_path);
    }

Develop code, not bugs!

Today was a great day. One of the things we do as a team is experiment with how we write our tests. Experimenting seems the most effective way of figuring out what our testing approach should be. At this point we are mostly doing specification-driven tests (unit tests).

We had a few questions about code duplication and keeping tests DRY and self-explanatory at the same time, hiding no important details. On the other hand, overwhelming detail should not annoy the test reader/explorer.

One crucial thing is to keep the spotlight on what matters most in the test – the subject under test (SUT) and the assertions about its behavior/state. Anything else is secondary, though not unimportant. Therefore, anything that is not primary should be communicated in the simplest manner, preferably in human language with simplistic logic, so it does not steal the scene from the SUT and the result.

The component my pair partner and I were working on was a strategy component based on time. A very simple one, with the following rules:

  1. A report can be issued to a client if the reporting time is within 10 minutes in the past of the current system time
  2. Anything outside the time frame defined in #1 means no report is issued to the client

Simple one.

The first step was to abstract the System.DateTime object in order to manipulate the current time. There are plenty of resources on that, and although I loved a particular testing implementation proposed by Oren here, we ended up with a contract-based implementation and an instance-based custom SystemDateTime solution.

    public interface ISystemDateTime
    {
        DateTime Now { get; }
    }

    public class SystemDateTime : ISystemDateTime
    {
        public DateTime Now
        {
            get { return DateTime.Now; }
        }
    }
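The post never shows TimeBasedReportExecutionStrategy itself, so the sketch below is a hypothetical reconstruction of the two rules. To keep it self-contained it takes a DateTime directly instead of the ClientEscalationInfo object, and it treats the 10-minute boundary as inclusive – the post's tests only pin down 5 and 11 minutes, so the exact boundary is an assumption:

```csharp
using System;

public interface ISystemDateTime
{
    DateTime Now { get; }
}

// A fake clock for demonstration purposes.
public class FixedDateTime : ISystemDateTime
{
    private readonly DateTime now;
    public FixedDateTime(DateTime now) { this.now = now; }
    public DateTime Now { get { return now; } }
}

// Hypothetical reconstruction of the strategy under test.
public class TimeBasedReportExecutionStrategy
{
    private readonly ISystemDateTime systemDateTime;

    public TimeBasedReportExecutionStrategy(ISystemDateTime systemDateTime)
    {
        this.systemDateTime = systemDateTime;
    }

    // Rule 1: the report time must lie within the last 10 minutes.
    // Rule 2: anything outside that window (including the future) is rejected.
    public bool CanIssueReportFor(DateTime reportExecutionTime)
    {
        var age = systemDateTime.Now - reportExecutionTime;
        return age >= TimeSpan.Zero && age <= TimeSpan.FromMinutes(10);
    }
}

public static class Demo
{
    public static void Main()
    {
        var reportTime = new DateTime(2008, 1, 1, 12, 0, 0);

        // System time is 5 minutes after the report time: within the window.
        var strategy = new TimeBasedReportExecutionStrategy(new FixedDateTime(reportTime.AddMinutes(5)));
        Console.WriteLine(strategy.CanIssueReportFor(reportTime)); // True

        // System time is 11 minutes after the report time: too old.
        var lateStrategy = new TimeBasedReportExecutionStrategy(new FixedDateTime(reportTime.AddMinutes(11)));
        Console.WriteLine(lateStrategy.CanIssueReportFor(reportTime)); // False
    }
}
```

Injecting ISystemDateTime through the constructor is exactly what makes the stubbing in the tests below possible.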

Once that was in place, we started to craft the tests for the strategy object. But something was wrong.

    [Concern(typeof(TimeBasedReportExecutionStrategy))]
    [TestFixture]
    public class When_time_based_report_execution_strategy_is_asked_to_determine_can_report_be_executed_for_a_given_client_information : SpecificationContext<IReportExecutionStrategy>
    {
        private ISystemDateTime systemDateTime;
        private DateTime clientReportExecutionTime = new DateTime(2008, 1, 1, 12, 0, 0);

        protected override IReportExecutionStrategy EstablishContext()
        {
            systemDateTime = Stub<ISystemDateTime>();

            return new TimeBasedReportExecutionStrategy(systemDateTime);
        }

        protected override void BecauseOf() {}

        [Test]
        public void Should_allow_report_execution_when_escalation_time_is_within_executable_threshold()
        {
            systemDateTime.Stub(t => { var readProperty = t.Now; })
                .Return(clientReportExecutionTime.AddMinutes(5));

            var result = system_under_test.CanIssueReportFor(new ClientEscalationInfo("client_id", clientReportExecutionTime, "recipients"));
            result.should_be_true();
        }
    }

What bothered us was the expression setting up the stubbed version of the system's current time: “clientReportExecutionTime.AddMinutes(5)”. It was too much information to process just to determine that we want to test what happens when the SUT gets a client report time that is less than 10 minutes old.

So we started to think – and gosh, I love pair programming. I know it's unnecessary to promote it any further, but I just can't stop getting excited about pair programming when I recall “the days of silo”. And if you read these lines and think this dude went from the “days of silo” to the “days of psycho” – then yes, I am mad about it :)

Anyhow, this test was not the only one; we wanted to test more than 10 minutes in the past, as well as what happens when the report time is in the future, etc. How could we make it as clear as possible? This is where we went to the Test Helper class pattern. It was nice, but still not what we wanted – we wanted something that would feel natural. The next step was to think towards a natural syntax – a fluent interface. After a few iterations, name changes, and debates, this is what we got:

    [Concern(typeof(TimeBasedReportExecutionStrategy))]
    [TestFixture]
    public class When_time_based_report_execution_strategy_is_asked_to_determine_can_report_be_executed_for_a_given_client_information : SpecificationContext<IReportExecutionStrategy>
    {
        private ISystemDateTime systemDateTime;
        private DateTime clientReportExecutionTime = new DateTime(2008, 1, 1, 12, 0, 0);

        protected override IReportExecutionStrategy EstablishContext()
        {
            systemDateTime = Stub<ISystemDateTime>();

            return new TimeBasedReportExecutionStrategy(systemDateTime);
        }

        protected override void BecauseOf() {}

        [Test]
        public void Should_allow_report_execution_when_escalation_time_is_within_executable_threshold()
        {
            systemDateTime.Stub(t => { var readProperty = t.Now; })
                .Return(TestHelper.When(clientReportExecutionTime).Is(5).minutes_in_the_past);

            var result = system_under_test.CanIssueReportFor(new ClientEscalationInfo("client_id", clientReportExecutionTime, "recipients"));
            result.should_be_true();
        }

        [Test]
        public void Should_not_allow_report_execution_when_escalation_time_is_within_executable_threshold_in_the_future()
        {
            systemDateTime.Stub(t => { var readProperty = t.Now; })
                .Return(TestHelper.When(clientReportExecutionTime).Is(5).minutes_in_the_future);

            var result = system_under_test.CanIssueReportFor(new ClientEscalationInfo("client_id", clientReportExecutionTime, "recipients"));
            result.should_be_false();
        }

        [Test]
        public void Should_not_allow_report_execution_when_escalation_time_is_outside_of_executable_threshold_in_the_past()
        {
            systemDateTime.Stub(t => { var readProperty = t.Now; })
                .Return(TestHelper.When(clientReportExecutionTime).Is(11).minutes_in_the_past);

            var result = system_under_test.CanIssueReportFor(new ClientEscalationInfo("client_id", clientReportExecutionTime, "recipients"));
            result.should_be_false();
        }

        [Test]
        public void Should_not_allow_report_execution_when_escalation_time_is_outside_of_executable_threshold_in_the_future()
        {
            systemDateTime.Stub(t => { var readProperty = t.Now; })
                .Return(TestHelper.When(clientReportExecutionTime).Is(11).minutes_in_the_future);

            var result = system_under_test.CanIssueReportFor(new ClientEscalationInfo("client_id", clientReportExecutionTime, "recipients"));
            result.should_be_false();
        }
    }

The TestHelper in this case was a private class that implemented the fluent interface.

    private class TestHelper
    {
        private DateTime anchorTime;
        private int minutesToUse;

        private TestHelper(DateTime anchorTime)
        {
            this.anchorTime = anchorTime;
        }

        public static TestHelper When(DateTime anchorTime)
        {
            return new TestHelper(anchorTime);
        }

        public TestHelper Is(int minutes)
        {
            minutesToUse = minutes;
            return this;
        }

        public DateTime minutes_in_the_future
        {
            get { return anchorTime.AddMinutes(minutesToUse); }
        }

        public DateTime minutes_in_the_past
        {
            get { return anchorTime.AddMinutes(-minutesToUse); }
        }
    }
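Pulled out of the fixture, the helper is easy to exercise on its own. This standalone sketch reproduces the class above (made public so it compiles by itself) plus a tiny demo with a fixed anchor time, so the arithmetic is easy to eyeball:

```csharp
using System;

// The fluent helper from the post, reproduced as a public class so it can run standalone.
public class TestHelper
{
    private DateTime anchorTime;
    private int minutesToUse;

    private TestHelper(DateTime anchorTime)
    {
        this.anchorTime = anchorTime;
    }

    public static TestHelper When(DateTime anchorTime)
    {
        return new TestHelper(anchorTime);
    }

    public TestHelper Is(int minutes)
    {
        minutesToUse = minutes;
        return this;
    }

    public DateTime minutes_in_the_future
    {
        get { return anchorTime.AddMinutes(minutesToUse); }
    }

    public DateTime minutes_in_the_past
    {
        get { return anchorTime.AddMinutes(-minutesToUse); }
    }
}

public static class Demo
{
    public static void Main()
    {
        var anchor = new DateTime(2008, 1, 1, 12, 0, 0);

        // 12:00 minus 5 minutes.
        Console.WriteLine(TestHelper.When(anchor).Is(5).minutes_in_the_past.ToString("HH:mm"));   // 11:55

        // 12:00 plus 5 minutes.
        Console.WriteLine(TestHelper.When(anchor).Is(5).minutes_in_the_future.ToString("HH:mm")); // 12:05
    }
}
```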

An interesting observation we made for ourselves: just by doing it this way, we not only simplified reading the tests, but also took the concern of date/time math out of the real concern – testing the real SUT, the strategy.

Conclusions

  1. I am not 100% confident this is the best way to implement a test, but it is something my team likes, and we will keep exploring it and getting better at expressiveness in our tests.
  2. Tests should be ‘clean’ (according to Uncle Bob) or they stop communicating the idea, become unmaintainable down the road, and are eventually abandoned.
  3. If it feels wrong, it is wrong. Do everything it takes to make it feel right.

I was reading through the book, combining several subjects such as "help tests" and "error handling", when I realized that the core principle – "Depend upon abstractions. Do not depend upon concretions." – is underused by me.

Normally, I apply this great principle while thinking of another, no less good one – "Design to an interface and not an implementation" – and usually when I am writing new pieces of code. But it can and should be applied to existing code as well, especially if it's 3rd party code. I am going to demo the concept, along with the other subjects, based on the book's example – sorry for the lack of 'creativity on an early weekend morning' :)

Let's pretend our code depends on a 3rd party component that has some strict rules about the values it can operate on (for the sake of simplicity). Its documentation says that 0 and 100 should never be used. The first thing is to verify this behaviour, document it, and make sure we learn it and can verify the same behaviour – or capture any changes – in the next versions of the 3rd party component when we upgrade it (another reason to have learning/exploratory tests).

    [TestsOn(typeof(ThirdPartyComponent))]
    [TestFixture]
    public class ThirdPartyComponent_LearningTests
    {
        [Test]
        [ExpectedException(typeof(InitializationException))]
        public void When_passing_zero_Should_get_an_exception()
        {
            var sut = new ThirdPartyComponent();
            sut.ActOn(0);
        }

        [Test]
        [ExpectedException(typeof(ExceptionalValueException))]
        public void When_passing_a_hundred_Should_get_an_exception()
        {
            var sut = new ThirdPartyComponent();
            sut.ActOn(100);
        }
    }

Result - behaviour verified and documented.


You can leave the job knowing that someday someone will thank you for doing that. Or maybe not. Well, don't quit yet ;) Using the component now is quite simple – we don't have the 'surprise' factor – but it is painful:

    try
    {
        new ThirdPartyComponent().ActOn(0);
    }
    catch (InitializationException e)
    {
        Logger.Log(e.Message);
    }
    catch (ExceptionalValueException e)
    {
        Logger.Log(e.Message);
    }
    // more exception cases
    finally
    {
        Logger.Log("Done with 3rd party component.");
    }

This will haunt you if you use the component several times in several places. The solution (as per the book) – abstract it from the system in a way that suits our needs (logging, in this case) and does not affect the client code when exception cases are added or removed in the future. Again, through the test we define the interface of the abstracted component:

    [TestFixture]
    public class When_abstracted_component_is_asked_to_act_on_special_case_values
    {
        [Test]
        [Row(0)]
        [Row(100)]
        [ExpectedException(typeof(AbstractedComponentException))]
        public void Should_throw_an_abstracted_exception(int value)
        {
            create_system_under_test().ActOn(value);
        }

        private AbstractedComponent create_system_under_test()
        {
            return new AbstractedComponent();
        }
    }

Now the client code is shielded from all the different exceptions specific to the 3rd party component, while other exceptions propagate as normal. The implementation of AbstractedComponent encapsulates what used to be in the client code:

    public class AbstractedComponent : IAbstractedComponent
    {
        private ThirdPartyComponent component;

        public AbstractedComponent()
        {
            component = new ThirdPartyComponent();
        }

        public void ActOn(int value)
        {
            try
            {
                component.ActOn(value);
            }
            catch (InitializationException e)
            {
                Logger.Log(e.Message);
                throw new AbstractedComponentException(e);
            }
            catch (ExceptionalValueException e)
            {
                Logger.Log(e.Message);
                throw new AbstractedComponentException(e);
            }
            // more exceptions
        }
    }

AbstractedComponentException is basically a carrier for ThirdPartyComponent's various current (and future) exceptions, allowing the client to concentrate on the component's work/failure without going into the implementation details of possible failures.

    public class AbstractedComponentException : Exception
    {
        public AbstractedComponentException(Exception exception)
            : base("AbstractedComponent exception", exception) { }
    }

The new client code now looks cleaner and DRYer.

    try
    {
        new AbstractedComponent().ActOn(0);
    }
    catch (AbstractedComponentException e)
    {
        Logger.Log(e.Message);
    }

The exception message can be improved to provide the required information without too much hassle, but that's not the main point. The point is that the client code is shielded from unnecessary details, is not replicated, is simple to read, and most of all is verifiable/testable down the road. A change of the 3rd party component is not an agony for the application, but a routine exercise of the already existing learning tests (personally, I like the word 'exploratory' more).


As always, comments and notes are welcomed. Have a great week!

Update: side note – I used R# 4.1 and Gallio for the examples.

I finally got a chance to get to the next book on my reading list. Highly recommended. "Uncle Bob" has not only produced a quality book, but also captured a significant amount of wisdom that helps make coding more rational.

It's no secret that I prefer certain tools/frameworks/applications over others. The same is true of unit testing frameworks. My currently preferred one is MbUnit. My team had been using NUnit so far, and honestly, I respected the choice and didn't mind that much. Until this week.

This week we moved to MbUnit (yippee!). There were several reasons for that:

  • Row testing was not possible – despite the NUnit.RowTest extension, it did not work with the R# unit test runner
  • With multiple assertions, the remaining ones would not run if the first one did not pass
  • Assert.AreEqual behaved weirdly

The last item was quite a surprise. What happened is that we decided to try to be more expressive in our tests, and therefore replaced Assert.AreEqual() with the extension method should_be_equal_to(), allowing a more fluent syntax and human expressiveness (yes, a side effect is abstracting away the unit testing framework, but that was not the primary intent).

One interesting fact about how we implemented it: as I mentioned, with an extension method. The code is quite simple:

public static class AssertionsExtensions
{
    public static void should_be_equal_to(this object actual, object expected)
    {
        Assert.AreEqual(expected, actual);
    }
}

It was working just fine. So an expression like

var result = CreateSystemUnderTest();
result.Id.should_be_equal_to(1);

would work – until we introduced MbUnit and it started complaining. It complained that if the actual value were the string "1" and the expected value were the numeric 1, the assertion would still report them as equal, despite the fact that they are not of the same type. Our fix was to get away from the object type and use generics:

public static class AssertionsExtensions
{
    public static void should_be_equal_to<T>(this T actual, T expected)
    {
        Assert.AreEqual(expected, actual);
    }
}

Now MbUnit was satisfied and we were happy - tests passing again.

So what happened? Apparently Assert.AreEqual() expects either two strings or two objects. When a string and another type are supplied, it calls ToString() and compares the values: 1.ToString() is the same as "1", but the two are not equal in the normal sense. MbUnit was smart enough to pick that up and warn about it.
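The ToString() half of that pitfall is easy to reproduce with plain equality, outside of any testing framework – a minimal sketch:

```csharp
using System;

public static class Demo
{
    public static void Main()
    {
        object actual = "1";
        object expected = 1;

        // Comparing the values after ToString() makes them look equal...
        Console.WriteLine(actual.ToString() == expected.ToString()); // True

        // ...but as objects they are not equal: different types, different values.
        Console.WriteLine(actual.Equals(expected)); // False

        // With the generic should_be_equal_to<T>(this T actual, T expected),
        // mixing string and int does not even compile without an explicit
        // type argument, so the mismatch surfaces as early as possible.
    }
}
```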

Kudos to MbUnit. Good for us :)

I have posted a question in the VisualSVN user group regarding reverting a delete operation on an item in a project. The workaround is very much manual, and feels wrong. At this point I was able either to revert the change for the deleted file only and manually add it back to the project, or to revert both the deleted file and the modified project file and manually remove the references to the files reported as 'missing'.
