Archives

Archives / 2008 / March
  • xUnit.net RC2 Released

    UPDATE: More posts on the subject

    UPDATE: xUnit.NET RC2 New Drop includes ASP.NET MVC support and better GUI runner.  Details here.
    UPDATE: Added Static Methods mention and F# - Thanks to DevHawk!


    I've been a big fan of such testing frameworks as NUnit and MbUnit, but recently I've found myself getting pulled more towards xUnit.net, at least to play around with for the code samples I write for this blog and on my own time.  I'm not really a fan of MSTest, and I think many would agree about its deficiencies.  I won't go as far as, say, Jay Flowers and wear the shirt though...

    Another Release?

    Brad Wilson and Jim Newkirk recently announced the release of xUnit.net RC2 on CodePlex.  I'd encourage you to download the latest bits here.  For those wondering what changed between RC1 and RC2, Brad has a good writeup on his blog here.  What's interesting about this release is the removal of the Assert methods which take a user-defined message should the assertion fail.  I was never really a fan of those in the first place though.
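
    Just to illustrate the change, an assertion that used to carry a user-defined failure message now simply drops it.  This is only a rough sketch of the before and after; the exact overloads that went away are covered in Brad's writeup:

        [Fact]
        public void TotalsShouldMatch()
        {
            int expected = 42;
            int actual = 40 + 2;

            // RC1-style overloads accepted a user-defined message, e.g.
            // Assert.Equal(expected, actual, "the totals should match");
            // In RC2 you let the assertion speak for itself:
            Assert.Equal(expected, actual);
        }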

    Another interesting addition is IUseFixture<T>, which allows you to put the setup and teardown for your fixtures in a separate class and therefore make them reusable, unlike the current way of using a parameterless constructor as your setup and IDisposable.Dispose for your teardown.  See the tests in the FixtureExample for details.  But here's a snipped version of that code:

        public class DatabaseFixture : IDisposable
        {
            SqlConnection connection;
            int fooUserID;

            public DatabaseFixture()
            {
                string connectionString = ConfigurationManager.ConnectionStrings["DatabaseFixture"].ConnectionString;
                connection = new SqlConnection(connectionString);
                connection.Open();
                string sql = @"INSERT INTO Users VALUES ('foo', 'bar'); SELECT SCOPE_IDENTITY();";

                using (SqlCommand cmd = new SqlCommand(sql, connection))
                    fooUserID = Convert.ToInt32(cmd.ExecuteScalar());
            }

            public SqlConnection Connection
            {
                get { return connection; }
            }

            public int FooUserID
            {
                get { return fooUserID; }
            }

            public void Dispose()
            {
                string sql = @"DELETE FROM Users WHERE ID = @id;";

                using (SqlCommand cmd = new SqlCommand(sql, connection))
                {
                    cmd.Parameters.AddWithValue("@id", fooUserID);
                    cmd.ExecuteNonQuery();
                }

                connection.Close();
            }
        }


    What the above code allows us to do is define a class that holds shared state from the initialization before the first test through the cleanup after the last test.  Our state is therefore maintained in a reusable manner.  As you will note, the setup logic resides in the parameterless constructor and all teardown logic is in the IDisposable.Dispose method.

        public class FixtureTests : IUseFixture<DatabaseFixture>
        {
            DatabaseFixture database;

            public void SetFixture(DatabaseFixture data)
            {
                database = data;
            }

            [Fact]
            public void FooUserWasInserted()
            {
                string sql = "SELECT COUNT(*) FROM Users WHERE ID = @id;";

                using (SqlCommand cmd = new SqlCommand(sql, database.Connection))
                {
                    cmd.Parameters.AddWithValue("@id", database.FooUserID);
                    int rowCount = Convert.ToInt32(cmd.ExecuteScalar());
                    Assert.Equal(1, rowCount);
                }
            }
        }


    Then we can go ahead with our tests, while using the SqlConnection as defined on our DatabaseFixture.  After we're done with our test, it goes ahead and calls Dispose on the fixture.  I tend to like this approach and it's definitely growing on me.

    Why xUnit.net?

    For those new to xUnit.net, there are some decent links to help you along.  Some of the more interesting ones can be found here:
    But, why am I interested in it?  Well, let's just say that I think it tackles things in a slightly different manner.  I think one of the key pieces that I really like is Assert.Throws instead of the clumsy ExpectedExceptionAttribute with which you must clutter the top of your tests.  I would rather assert programmatically that such a thing happened, so that I may analyze the exception.  I can also specify exactly which line in my test I expect to throw the exception, instead of taking it on blind faith that my test threw an exception.  It may be of the right type, but it may not be the one you wanted, thus giving a false sense of security.

    To use this, simply call Assert.Throws<TException>(Assert.ThrowsDelegate), which I've found to be very helpful.  Let's look at a quick test of that being used.

        [Fact]
        public void PopEmptyStack()
        {
            Stack<string> stack = new Stack<string>();
            Assert.Throws<InvalidOperationException>(() => stack.Pop());
        }


    As you can see, we're pretty explicit about what line will throw the exception, and that's really the key to this scenario.  There are a good number of samples provided on the releases page that you should check out.  As always with most products that I talk about, I highly recommend reading the tests to really fully understand what's going on underneath the covers.  Not only does it help you understand the intent of the program, but you can learn about good coding techniques, design patterns, testing patterns and so on.
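
    Speaking of Assert.Throws, one nice touch is that it returns the exception it caught, so you can hang on to it and inspect it further.  Here's a quick sketch of my own (the extra assertion on the message is just for illustration):

        [Fact]
        public void PopEmptyStackReturnsTheException()
        {
            Stack<string> stack = new Stack<string>();

            // Assert.Throws hands back the caught exception for further inspection.
            InvalidOperationException ex =
                Assert.Throws<InvalidOperationException>(() => stack.Pop());

            Assert.NotNull(ex.Message);
        }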

    Another point where xUnit.net separates itself from the pack is the ability to decorate static methods as facts.  This frees you from having to create an instance of your test class in order to call them.  Harry Pierson, aka DevHawk, demonstrates its use with regards to F# and testing the parse buffer here.  It definitely opened my eyes and a few more avenues to pursue as I take on more F# related work items in the future.  Here's just a quick and dirty sample showing how you can use xUnit.net with F# quite easily, just as Harry's post did.

    #light

    #R @"E:\Tools\xunit-build-1223-samples\Samples\xunit\xunit.dll"

    open System
    open System.Collections.Generic
    open Xunit

    type Stack<'t> = class
      val elements : LinkedList<'t>
     
      new() = { elements = new LinkedList<'t>() }
     
      member x.IsEmpty
        with get() = x.elements.Count = 0
       
      member x.Push element =
        x.elements.AddFirst(element:'t)
       
      member x.Top
        with get() =
          if x.elements.Count = 0 then
            raise (InvalidOperationException("cannot top an empty stack"))
          x.elements.First.Value
         
      member x.Pop =
        let top = x.Top
        x.elements.RemoveFirst()
        top
    end

    [<Fact>]  
    let NoElementsShouldBeEmpty () =
      let stack = new Stack<string>()
      Assert.True(stack.IsEmpty)


    If you notice, the FactAttribute is placed on a static method called NoElementsShouldBeEmpty, and sure enough it works like a champ through xUnit.net.  I like this approach instead of the pomp and circumstance required for creating classes as shown above with my Stack class.  Note the use of the empty parens, which makes it a method that takes no arguments and returns nothing.  Run it through xunit.console and sure enough it succeeds.
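
    For completeness, the same idea from C# is just a static class with a static method marked as a fact.  This is a quick sketch of my own rather than one of the shipped samples:

        public static class StackFacts
        {
            [Fact]
            public static void NewStackShouldBeEmpty()
            {
                // No instance of the test class is ever created; the runner
                // invokes the static method directly.
                Stack<string> stack = new Stack<string>();
                Assert.True(stack.Count == 0);
            }
        }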

    What are we missing though?  Well, I'm in favor of having a standalone GUI Test Runner much like NUnit and MbUnit have.  In fact, Brad has started this and you can get these features from the latest commits here.  Mind you I haven't gotten it to work just right yet, but it's a work in progress.

    Conclusion

    There is a lot to like about xUnit.net.  It takes a lot of lessons learned from the use of NUnit, MbUnit and others, and I think they're doing a good job incorporating them.  This project isn't as active as MbUnit and NUnit, but it's definitely one to keep an eye on.  Recent releases of NUnit and the Gallio Automation Platform will probably be covered shortly as well, as they have a lot to offer.  Until next time...


  • Understanding AOP in .NET

    In my previous posts I have talked a bit about Inversion of Control (IoC) containers with respect to interception and Aspect Oriented Programming (AOP).  It's not only important to understand the uses and strategies for implementing your solutions with them, but also how interception and AOP work deep down in .NET.  Instead of a long, drawn out post, I think I'll just include some articles and posts that do a very good job of explaining some of the ideas behind it.

    Articles and Posts

    I think it'd be good if we just start out with some basic MSDN articles and such regarding AOP and interception.  Some of them may be older but the concepts will still apply to this day:


    Just Read the Code

    There are many AOP frameworks out there in the wild right now for .NET.  To understand them pretty well, it's best if you just crack open the code and follow the unit tests.  Most of these are no longer active.  Let's cover some of the AOP frameworks out there:
    Many containers also implement AOP through the IoC container such as:
    Conclusion

    For those willing and able to go ahead and learn about AOP, it's actually quite interesting.  It's also quite a challenge, especially when dealing with IL emitting.  Go ahead and look at the source code and samples and give some of it a try.  Next time we pick up, I'll be talking about AOP in the Enterprise and Spring.NET.  Until next time...


  • IoC and Unity - Configuration Changes for the Better

    In my previous post about Unity and IoC containers, I made note of some changes in the latest drop of the Unity Application Block.  As Grigori Melnik, the PM of the Unity and Enterprise Library team, noted, Unity should be released in its final form on April 7th, so stay tuned.  In the meantime, the latest drop of Unity was on March 24th, so go ahead and pick it up.

    Configuration Changes

    As I noted above, the public APIs really haven't changed all that much.  Instead, most of the efforts recently have been around performance improvements in the ObjectBuilder base and the configuration of the container itself.  I must admit that previous efforts left me a little cold with having to decorate my classes with the DependencyAttribute.  Well, you shouldn't have to do that anymore, now that the TypeInjectionElement has been added so that you can map your constructor arguments and so on.  Let's walk through a simple example of doing so.

    First, let's go through the basic anti-corruption container that I use with Unity, or any other container, for registration and resolution.

    namespace UnitySamples
    {
        public static class IoC
        {
            private static IDependencyResolver resolver;

            public static void Initialize(IDependencyResolver resolver)
            {
                IoC.resolver = resolver;
            }

            public static T Resolve<T>()
            {
                return resolver.Resolve<T>();
            }

            public static T Resolve<T>(string name)
            {
                return resolver.Resolve<T>(name);
            }
        }
    }


    Remember, this is just a quick spike of my anti-corruption container, an idea taken from Ayende.  The UnityContainer itself then gets configured through the implementation of my IDependencyResolver interface.  Let's take a brief look at that:

    namespace UnitySamples
    {
        public interface IDependencyResolver
        {
            T Resolve<T>(string name);

            T Resolve<T>();
        }
    }


    And then the implementation of the interface for Unity would look like:

    using System.Configuration;
    using Microsoft.Practices.Unity;
    using Microsoft.Practices.Unity.Configuration;

    namespace UnitySamples
    {
        public class UnityDependencyResolver : IDependencyResolver
        {
            private IUnityContainer container;

            public UnityDependencyResolver()
            {
                container = new UnityContainer();
                UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
                section.Containers.Default.Configure(container);
            }

            public T Resolve<T>()
            {
                return container.Resolve<T>();
            }

            public T Resolve<T>(string name)
            {
                return container.Resolve<T>(name);
            }
        }
    }


    Now, if you look at the constructor for the UnityDependencyResolver above, I am using the default container in order to configure things.  But I also have the option of specifying a name for the container, or even just an index.  I could just as easily change that code to the following, and it would work as long as I name my container "default".  This is a bit of a change from before, when I had to use the GetConfigCommand() method in order to configure the container, which was a bit too chatty for my tastes.

                container = new UnityContainer();
                UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
                section.Containers["default"].Configure(container);


    So, the idea is to create an object graph where I have classes with dependencies that themselves have dependencies.  Without doing AOP and some basic interception, we could take the approach of keeping around something like an IContext which gathers our cross-cutting concerns, such as logging, in one location, so your objects don't sit there with 16 constructor parameters and instead have a context from which to pull.  This approach has worked for me in the past, so let's just go through that one right now.

    First, let's look at the context that has those cross-cutting concerns and the actual implementation:

    namespace UnitySamples
    {
        public interface IContext
        {
            ILogger Logger { get; }
        }
    }


    And then the concrete implementation might look something like this:

    namespace UnitySamples
    {
        public class UnityContext : IContext
        {
            private readonly ILogger logger;

            public UnityContext(ILogger logger)
            {
                this.logger = logger;
            }

            public ILogger Logger
            {
                get { return logger; }
            }
        }
    }


    So, what I do here is inject my logger into my context, and probably anything else that might be cross-cutting as well.  Now one of my classes can accept the IContext and use it to do what it needs to do.  That would look something like this.

    namespace UnitySamples
    {
        public class Customer
        {
            public string CustomerId { get; set; }

            public string FirstName { get; set; }

            public string MiddleName { get; set; }

            public string LastName { get; set; }
        }

        public class CustomerTasks
        {
            private readonly IContext context;

            public CustomerTasks(IContext context)
            {
                this.context = context;
            }

            public void SaveCustomer(Customer customer)
            {
                // Save customer
                context.Logger.LogEvent("Saving customer", LogLevel.Information);
            }
        }
    }


    I'm not doing anything special, instead, just showing how this pattern might apply.  And then just tying it all together is my console application (sometimes my favorite UI for quick spikes).

    namespace UnitySamples
    {
        class Program
        {
            static void Main(string[] args)
            {
                IoC.Initialize(new UnityDependencyResolver());
                CustomerTasks tasks = IoC.Resolve<CustomerTasks>();
                tasks.SaveCustomer(new Customer{ CustomerId = "12345", FirstName = "Joe", LastName = "Smith", MiddleName = "Frank"});
            }
        }
    }


    But the more interesting part about this is the XML configuration.  I know many people such as Ayende have declared war on XML configuration, but I think for this quick example it does quite well.  If we start talking about complex object graphs, then I'd certainly agree and I'd rather do it programmatically.  But, let's first look at how I'd wire up the whole thing in the app.config file.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
        <configSections>
            <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
        </configSections>

        <unity>
            <typeAliases>
                <typeAlias alias="string" type="System.String, mscorlib" />
                <typeAlias alias="ILogger" type="UnitySamples.ILogger, UnitySamples" />
                <typeAlias alias="ConsoleLogger" type="UnitySamples.ConsoleLogger, UnitySamples" />
                <typeAlias alias="DebugLogger" type="UnitySamples.DebugLogger, UnitySamples" />
                <typeAlias alias="IContext" type="UnitySamples.IContext, UnitySamples" />
                <typeAlias alias="UnityContext" type="UnitySamples.UnityContext, UnitySamples" />
                <typeAlias alias="CustomerTasks" type="UnitySamples.CustomerTasks, UnitySamples" />
            </typeAliases>
            <containers>
                <container>
                    <types>
                        <type type="ILogger" mapTo="ConsoleLogger" name="defaultLogger"/>
                        <type type="ILogger" mapTo="DebugLogger" name="debugLogger"/>
                        <type type="IContext" mapTo="UnityContext">
                            <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">
                                <constructor>
                                    <param name="logger" parameterType="ILogger">
                                        <dependency name="debugLogger"/>
                                    </param>
                                </constructor>
                            </typeConfig>
                        </type>
                        <type type="CustomerTasks">
                            <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">
                                <constructor>
                                    <param name="context" parameterType="IContext">
                                        <dependency/>
                                    </param>
                                </constructor>
                            </typeConfig>
                        </type>
                    </types>
                </container>
            </containers>
        </unity>
    </configuration>


    As you may notice from above, Unity gives us the ability to define type aliases so that we can reference short names instead of the long, ugly, fully qualified type names, by using the <typeAliases><typeAlias> nodes in the configuration.  Much as before, we still register our types with a name, type and so on.

    But, what's interesting is that we now have the ability to do type injection through a Unity extension called the TypeInjectionElement.  This allows us to inject into the constructor and put in our parameters as needed.  We could also replace that with <method> or <property> in order to do method injection and property setter injection respectively.
    Within the <param> element, we specify the injection parameters.  This lets us give the name and type of a given parameter, and for a dependency, we can specify the name and type that alias the other <type> elements we wish to get a reference to.  We can also specify literal values if we so desire, such as strings, integers and so on.  Below is a simple example of using both dependency references and values.

    <constructor>
        <param name="logger" parameterType="ILogger">
            <dependency name="debugLogger"/>
        </param>
        <param name="dbName" parameterType="string">
            <value value="AdventureWorks" />
        </param>
    </constructor>


    I find the best way to learn about these features usually isn't through documentation, although it's a nice thing to do, but instead the tests.  I look for the functionality that I'm interested in and go deep.  A project this size without good unit tests == FAIL in my opinion.  Anyhow, this is better looking than it used to be and much more intuitive.

    Interception Revisited

    As Chris Tavares, the developer of Unity, noted on my blog before, he could easily put in the interceptors from ObjectBuilder2, since ObjectBuilder2 is modular and Unity is built upon it.  It shouldn't have been news, because I've been working on that myself to get lightweight interception working on it.  Maybe when I get it fully implemented I'll share it.  But in the meantime, it's just a spike.

    Conclusion

    I think this approach has helped with the XML configuration goo that had been a bit confusing.  I didn't want to clutter my domain models and processing code with excess attributes stating intent, and would rather keep them clean.  This is a step in that direction.  The next installment in this look at IoC will revolve around Spring.NET and AOP in the Enterprise, so stay tuned...  Until next time...


  • FringeDC March 2008 Video Now Online - Haskell and XMonad Extensibility

    As I've discussed before with my dive into functional programming and F#, there is a user group of language geeks in the Washington DC area called FringeDC that specializes in Haskell, Lisp, Scheme, OCaml, Erlang and so on.  Brent Yorgey, well known in the Haskell community and a contributor to XMonad, presented an introduction to Haskell and explained a bit about extending XMonad.  Fortunately for those who couldn't attend, like myself, due to scheduling conflicts, Conrad Barski recorded this session and posted it to Google Video.  The slides have also been made available here.  For those not aware of XMonad, it's a tiling window manager for X that is extensible in Haskell.  Unfortunately, the opening talk by Philip Fominykh on the Zipper, a purely functional data structure in Haskell for manipulating a location within a structure rather than the data itself, was not recorded.

    If you want to get your language geek on, come check it out for their next informal meeting on May 10th over a beer or two.  More details on the site here.  Until next time...


  • IoC and Unity - The Basics and Interception

    I realize it's been a while since my last post on Inversion of Control containers and looking at Unity as one of them.  Since that time, Scott Hanselman linked to some of the comparisons that I did for IoC containers here.  I'll be the first to admit that the look was a bit naive, but it was meant to get you all interested in looking at IoC containers and how they can improve your applications.  It was suggested here that my posts weren't a complete comparison, although in my previous posts I covered a lot of those topics.  Even so, after talking with said individual, I need to cover more ground on the basics and dive deeper into the details.

    Where We Are

    Before we begin today, let's see what we've already covered in the past:

    So, we have a bit of back history, but I think I dived too far into things without giving some of the back story.

    DI Frameworks Galore

    As Scott Hanselman noted, there are a number of IoC containers in the .NET space.  Most of these I was already aware of and had played with, so I'm only going to list those.  Here they are in no particular order:
    So, as you can see, there are plenty to choose from.  They all serve basic IoC needs and I'll talk about that a little bit more later.

    Getting Back to Basics

    When we talk about Inversion of Control and the Dependency Inversion Principle, we're talking about loosely coupled applications.  It follows the Hollywood Principle, the trite "Don't call us, we'll call you" mantra.  In a way, it's about the dependencies you have inside your class, say your logger and its associated formatter.  In the tightly coupled world, you would have a private member of a concrete Logger.  Instead, what you would have in the other case is an interface or abstract class that represents the functionality of that logger, with an instance given to your class.  Throw on top of that a container to manage those dependencies, mapping the interfaces to concrete classes.  Your container can manage the object lifetimes as well.  Ayende shows how easy it is to create a simple container (I'll sketch something similar after the requirements list below), although it misses an important point about lifetime management.
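
    To make the coupling point concrete, here's a bare-bones before-and-after sketch of my own; the ILogger and ConsoleLogger types are just placeholders for illustration:

    using System;

    public interface ILogger
    {
        void Log(string message);
    }

    public class ConsoleLogger : ILogger
    {
        public void Log(string message) { Console.WriteLine(message); }
    }

    // Tightly coupled: the class news up its own concrete logger.
    public class TightlyCoupledTasks
    {
        private readonly ConsoleLogger logger = new ConsoleLogger();

        public void DoWork() { logger.Log("Doing some work"); }
    }

    // Loosely coupled: the dependency is an abstraction handed in from the outside,
    // so a container (or a test) decides which implementation to supply.
    public class LooselyCoupledTasks
    {
        private readonly ILogger logger;

        public LooselyCoupledTasks(ILogger logger)
        {
            this.logger = logger;
        }

        public void DoWork() { logger.Log("Doing some work"); }
    }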

    To put it succinctly, a container should satisfy the following requirements:
    • Configuration - Registering types and mappings through code, XML or script
    • Lifetime Management - How the lifetimes of the objects are managed: singleton, transient, etc.
    • Resolution - Resolve dependencies and instantiate objects
    • Extensibility - Be able to add new facilities for additional features (interception, AOP, and so on)
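
    As a rough illustration of the configuration and resolution bullets, a naive container really is just a dictionary of mappings plus Activator.CreateInstance.  Like Ayende's example, this sketch of mine deliberately punts on lifetime management and constructor dependencies, which is exactly what the real containers add on top:

    using System;
    using System.Collections.Generic;

    // A deliberately naive container: registration and resolution only.
    public class TrivialContainer
    {
        private readonly Dictionary<Type, Type> typeMap = new Dictionary<Type, Type>();

        public void Register<TFrom, TTo>() where TTo : TFrom
        {
            typeMap[typeof(TFrom)] = typeof(TTo);
        }

        public T Resolve<T>()
        {
            return (T)Activator.CreateInstance(typeMap[typeof(T)]);
        }
    }

    Usage is nothing more than container.Register<ILogger, ConsoleLogger>() followed by container.Resolve<ILogger>().  Every object resolved this way is transient, which is exactly why lifetime management makes the requirements list above.
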
    Before we get into any advanced topics, let's get into more detail.  James Kovacs wrote an article in the March 2008 edition of MSDN Magazine called "Loosen Up - Tame Your Software Dependencies For More Flexible Apps".  This article is a great start for those trying to understand dependency inversion and dependency injection.  James walks through a simple application with tight coupling, and then works to loosen it.  Once that is complete, Castle Windsor is introduced, and he walks through using the XML configuration or Binsor to auto-wire the container.

    One of the key points in this article, and it came from Ayende, is to have an anti-corruption layer for your IoC container.  Yes, that's the same Domain Driven Design term, in which we isolate things that don't conform to our given architecture and translate from one context to another.  Let's throw up a simple example of using this.  Remember, it's just a quick spike of an anti-corruption layer.

    namespace UnitySamples
    {
        public static class IoC
        {
            private static IDependencyResolver resolver;

            public static void Initialize(IDependencyResolver resolver)
            {
                IoC.resolver = resolver;
            }

            public static T Resolve<T>()
            {
                return resolver.Resolve<T>();
            }

            public static T Resolve<T>(string name)
            {
                return resolver.Resolve<T>(name);
            }
        }
    }


    So, what we have is an IDependencyResolver given to the singleton IoC container.  Like I said, this isn't best practice as I haven't done any checking on whether the resolver was initialized or not.  But, what we want is to expose the way of resolving dependencies.  Now let's take a look at the IDependencyResolver interface and how that's used.

    namespace UnitySamples
    {
        public interface IDependencyResolver
        {
            T Resolve<T>(string name);

            T Resolve<T>();
        }
    }


    Not much to this, but it gives us the ability to resolve our dependencies by type, or by type and name.  Pretty simple code here once again.  But, if I wanted to suddenly use Unity as the backing store, I'd simply implement it as the following:

    using System.Configuration;
    using Microsoft.Practices.Unity;
    using Microsoft.Practices.Unity.Configuration;

    namespace UnitySamples
    {
        public class UnityDependencyResolver : IDependencyResolver
        {
            private IUnityContainer container;

            public UnityDependencyResolver()
            {
                container = new UnityContainer();
                UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
                section.Containers.Default.GetConfigCommand().Configure(container);
            }

            public T Resolve<T>()
            {
                return container.Resolve<T>();
            }

            public T Resolve<T>(string name)
            {
                return container.Resolve<T>(name);
            }
        }
    }


    So, all I'd have to do to switch from one container to another is swap out the IDependencyResolver and I'm finished.  Unfortunately, there are some techniques that are particular to one container or another, so there may be some things I'd lose there.  But for most cases, this pattern works, and works well at that.
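
    For example, a Windsor-backed resolver might look something along these lines.  This is a sketch assuming Windsor's XML configuration in the app.config; the point is that only this class changes while the IoC facade and the consuming code stay put:

    using Castle.Windsor;
    using Castle.Windsor.Configuration.Interpreters;

    namespace UnitySamples
    {
        public class WindsorDependencyResolver : IDependencyResolver
        {
            private readonly IWindsorContainer container;

            public WindsorDependencyResolver()
            {
                // Reads the castle section from the config file, much like the
                // Unity version above reads the unity section.
                container = new WindsorContainer(new XmlInterpreter());
            }

            public T Resolve<T>()
            {
                return container.Resolve<T>();
            }

            public T Resolve<T>(string name)
            {
                return container.Resolve<T>(name);
            }
        }
    }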

    Of course loose coupling, dependency injection and using IoC containers are nice in terms of making unit tests easier, but as you can see, there's a bit more to it than that, including the lifetime management, interception, the flexibility of changing implementations and so on.

    Interception and Aspect Oriented Programming

    Aspect Oriented Programming (AOP) and basic interception are interesting pieces of some containers.  This technique allows us to handle cross-cutting concerns such as logging, transaction management, security and so on at a different level, instead of littering our code and domain model with checks everywhere.  Instead, we centralize those concerns and have them intercept calls and do their work.  Let's get some jargon straight here before we go any further.  An aspect is the part that encapsulates the cross-cutting concern, such as logging.  These aspects can give advice, which alters the behavior of a given program.  The advice is given at certain join points, which are where the aspect's advice and the program intersect.  A group of these join points is called a pointcut.

    Interception is a feature of some of the IoC containers out there.  A few come to mind, such as Castle Windsor, Spring.NET, LinFu, S2Container, Puzzle.NET and even ObjectBuilder2.  These containers provide at least a basic level of interception, meaning that you can intercept calls to methods, analyze the contents of the method call, proceed with the call and even modify the return value.  Since most containers support only simple interception, I haven't found it more useful than logging or enforcing security on various things.  However, it can be extended to support a more robust Policy Injection style of programming, such as you would find in the Policy Injection Application Block.

    Interception in Unity?

    Quite a bit has changed with Unity since I last blogged about it in terms of the configuration.  You can grab the latest bits here.  It would be nice to have changesets and such checked into CodePlex for things like this instead of zip files or MSIs, but oh well.  Anyhow, most of the changes revolve around managing lifetimes and handling the parameters for constructor injection.  You can read the tests and figure it out.

    As I've covered before, simple interception in Unity is not supported at the moment.  The intention is to have the Policy Injection Application Block fill that role.  There is also PostSharp4Unity, which can fit that bill for quick fixes, but I want deep down support for simple interception.  Conversations with Brad Wilson led me to ObjectBuilder2 and some spikes they had with a custom DependencyContainer.  So part of me wondered why that wasn't leveraged at all.  Let's take a look at what was implemented in the base of ObjectBuilder2.

    Interception in ObjectBuilder2

    ObjectBuilder2 was a project by Brad Wilson and Scott Densmore which took the original ObjectBuilder, which was really meant for CAB and not a real container, and made some changes and improvements.  Such improvements also came with a few samples of things you can do with it, such as a DependencyContainer sample.  In here there are a few interesting pieces, most of which lie in the interception support.  Let's take a look at each of the supported interception types below.

            public void InterceptInterface<T>(MethodInfo interfaceMethod,
                                              params IInterceptionHandler[] handlers)

            public void InterceptInterface(Type typeToIntercept,
                                           MethodInfo interfaceMethod,
                                           params IInterceptionHandler[] handlers)

            public void InterceptRemoting<T>(MethodInfo method,
                                            params IInterceptionHandler[] handlers)

            public void InterceptRemoting(Type typeToIntercept,
                                          MethodInfo method,
                                          params IInterceptionHandler[] handlers)

            public void InterceptVirtual<T>(MethodInfo method,
                                            params IInterceptionHandler[] handlers)

            public void InterceptVirtual(Type typeToIntercept,
                                        MethodInfo method,
                                        params IInterceptionHandler[] handlers)


    Now with this, you can see there are three different ways of intercepting, through interfaces, virtual methods or through remoting.  We also have the ability to intercept through attributes or through code.  Let's walk through a test to see how intercepting through attributes might work. 

                [Fact]
                public void InterceptViaAttributes()
                {
                    Recorder.Records.Clear();
                    DependencyContainer container = new DependencyContainer();
                    container.RegisterTypeMapping<ISpy, SpyInterfaceAttributes>();

                    ISpy obj = container.Get<ISpy>();
                    obj.InterceptedMethod();

                    Assert.Equal(3, Recorder.Records.Count);
                    Assert.Equal("Before Method", Recorder.Records[0]);
                    Assert.Equal("In Method", Recorder.Records[1]);
                    Assert.Equal("After Method", Recorder.Records[2]);
                }


    Of course there wouldn't be tests in here without using xUnit.net.  I hope to cover that soon enough.  Anyhow, as you can see, it's pretty simple to register types in here and then check the expectations through the recorder to see whether the actions happened.  To find out what ISpy and SpyInterfaceAttributes are actually doing, let's dig through the code.

                public interface ISpy
                {
                    void InterceptedMethod();
                    void ThrowsException();
                }

                internal sealed class SpyInterfaceAttributes : ISpy
                {
                    [InterfaceIntercept(typeof(RecordingHandler))]
                    public void InterceptedMethod()
                    {
                        Recorder.Records.Add("In Method");
                    }

                    public void ThrowsException()
                    {
                        throw new Exception();
                    }
                }


    What this class allows us to do is register an intercepting interface handler, in this case the RecordingHandler.  The intent is that we record a message once we're in the method, and then our interceptor adds its own records before and after the method call, just to prove a point.

    namespace ObjectBuilder
    {
        public class RecordingHandler : IInterceptionHandler
        {
            readonly string message;

            public RecordingHandler()
            {
                message = "";
            }

            public RecordingHandler(string message)
            {
                this.message = string.Format(" ({0})", message);
            }

            public IMethodReturn Invoke(IMethodInvocation call,
                                        GetNextHandlerDelegate getNext)
            {
                Recorder.Records.Add("Before Method" + message);
                IMethodReturn result = getNext().Invoke(call, getNext);
                Recorder.Records.Add("After Method" + message);
                return result;
            }
        }
    }


    This implementation of the IInterceptionHandler interface handles the basic interception task of doing some before and after work on the given method.  Here it is doing nothing special but recording a message before and after the intercepted method call.  I encourage you to dig through this a bit more and understand where it is coming from, because it's good stuff and, heck, you might learn some IL emitting as well.  It's a shame I didn't see some of this IP get rolled into Unity, but we can hope some of it does.

    Interception in Castle Windsor

    Like ObjectBuilder2 above, Castle Windsor has a lightweight interception capability as well.  I covered this in a previous post, but I'll elaborate on it further.  There isn't much documentation on Windsor interception, but you can read about it here.  To take advantage of method interception, simply implement the IInterceptor interface, which is shown below.

    public interface IInterceptor
    {
        void Intercept(IInvocation invocation);
    }


    And then we can implement a simple logger interceptor which goes ahead and logs the before and after information as well as anything else you might want to log that comes from the method signature such as arguments, method name, targets, etc.  Here, I'm doing nothing special at all, but play around with the IInvocation interface to find out what you can do.

    using Castle.Core.Interceptor;

    namespace CastleSamples
    {
        public class LoggingInterceptor : IInterceptor
        {
            private DebugLogger logger = new DebugLogger();

            public void Intercept(IInvocation invocation)
            {
                logger.Log("Before Method");
                invocation.Proceed();
                logger.Log("After Method");
            }
        }
    }


    Like I said above, I am doing nothing special here other than showing that I can log before and after calls, and I have the option of doing more, including deciding whether I want to proceed with the method call or not.  I can easily register the interceptor either through code, through Binsor or through the XML muck.  A post of mine goes through that here.
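
    As a rough sketch of the code route, using the LoggingInterceptor from above, you register the interceptor itself as a component and then point the service at it, for example with the InterceptorAttribute from Castle.Core.  The class and key names here are made up purely for illustration:

    using Castle.Core;
    using Castle.Windsor;

    namespace CastleSamples
    {
        // The attribute names the interceptor component Windsor should wrap around this service.
        // The method is virtual so the generated class proxy can intercept it.
        [Interceptor("logging.interceptor")]
        public class OrderTasks
        {
            public virtual void SaveOrder(string orderId)
            {
                // Save the order...
            }
        }

        public static class Bootstrapper
        {
            public static IWindsorContainer Configure()
            {
                IWindsorContainer container = new WindsorContainer();
                container.AddComponent("logging.interceptor", typeof(LoggingInterceptor));
                container.AddComponent("order.tasks", typeof(OrderTasks));
                return container;
            }
        }
    }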

    To me, these lightweight interceptors are nice, but I don't get much value out of them just yet outside of logging and such.  Ayende has similar thoughts about the uses of AOP in Windsor here.  He also covers doing a Policy Injection Application Block style approach in Windsor here as well.

    One of the topics discussed at DC ALT.NET was using Windsor interception and the Unit of Work pattern to implement undo logic for a WPF application.  It was a pretty interesting approach to the issue and I'll definitely have to dig into that a little deeper as well.  I'm hoping Phil McMillan starts blogging more anyways...  But obviously there are uses of interception and AOP in the enterprise, and I'll get to that in the next post in the series.

    Conclusion

    As you can see, interception is an interesting topic that needs to be explored a bit more before I'd call this subject done.  After all, I haven't touched most other containers, and I definitely want to get into AOP and Spring.NET, as it is a topic I'd like to dive a little deeper into, as well as AOP in the enterprise model.  Things that make the .NET Framework bend in all sorts of ways are always a worthy cause.  Anyhow, I hope this dive into the subject is useful and any feedback is appreciated.  Until next time...


  • ALT.NET Thinking From The Outside

    As I've noted before, Dave Laribee was recently interviewed by Scott Hanselman on Episode 104 of Hanselminutes.  The reaction has been pretty positive from what I've seen.  It's great to see the ALT.NET message being spread outside the core believers group.  Many people can be turned off after a few discussions on the altdotnet mailing list and not get the real gist of what the group is about.

    Dave did a great job explaining the core principles of ALT.NET which are:

    • Use the right tool for the right job
    • Look outside the .NET community for new and different ways of solving problems
    • Get involved with the community through teaching and learning
    • A good emphasis on agile methodologies
    • Design patterns and principles
    Scott Hanselman, during the interview, brought up the canonical person of someone who is out in the field and maybe not at the cutting edge, the Chief Architect at the Nebraska Department of Forestry.  How would he explain ALT.NET to them?  Dave had a great answer on this, but as it is, the ALT.NET community is meant to be divisive.  Not in a bad way, but more in the sense that what ALT.NET stands for is not for everyone.  If you're a person who just drags and drops controls onto a page, is happy with the status quo and isn't doing anything to learn anything new, well, then ALT.NET is probably not for you.  Those who come to the table wanting to learn new things, well, that's what we're here for.  When I started the DC ALT.NET group, these are some of the things I had in mind.  We want to reach out to the Washington DC .NET developer community to spread the message, but also to show there are like-minded people who have that passion for bettering themselves.

    But how do we convey that message to the development community as a whole?  That was a part of the conversation that was interesting.  And to me, I think we should have that "street kit" which includes such bare essentials as a manifesto (which we'll probably get to in Seattle), frameworks, design patterns, ways of spreading the message, etc.

    Christopher Bennage had a good wrapup of the show as well here.  I'd tend to agree that ALT.NET isn't about convincing, it's about conversing, having that conversation about what pains them and working through on a solution.  It's about spreading the community.  But, a direct command to learn Ruby, meh...  I think Scala has a few more things to offer right now which is why I'm chomping at the bit to get to more of it.

    A post by Leon on community though caught my eye recently.  What he says is pretty accurate.  I tend to think that what ALT.NET is preaching is what many communities such as the Ruby on Rails and Java communities have been doing for years with regards to design patterns, TDD and so on.  A lot of the innovation such as TDD frameworks, design patterns and such just hasn't come from the .NET community.  Many people wait on Microsoft to provide these things such as TDD frameworks (MSTest), Logging (Enterprise Library), O/RM (Entity Framework) and won't pay attention to the OSS world which I find an utter shame.  Instead, what I'd like people to do is take those frameworks and look what's available from the community as a whole and compare them, much as I have with Unity and some other IoC containers.  Yes, many people worry about licensing issues and that's something for your legal team to work out. 

    But, to his point, I'm glad he's learned Ruby on Rails and is happy.  In fact, I think it's great that he is expanding his horizons outside of C#; I think most developers ought to know a few languages and not just C#, VB.NET or Java.  For example, this past year I spent time learning Ruby and F#.  I plan on taking up Scala soon as well and maybe a couple of other languages.  In the past I was a Java, PHP and C++ programmer, so I've run the gamut.  You can take some of these practices back to your other languages and learn from the successes and the failures of each community.  It only makes you a better developer over time.  I think many of the innovations that happen in the .NET space should come from the outside, and not just from Microsoft.

    Do I think it's having an effect?  Absolutely!  If you look at such frameworks as Unity, there is a lot of feedback being provided, the ASP.NET MVC framework, same way.  This willingness on both sides to engage is a wonderful thing and maybe to the point where "ALT" isn't the alternative anymore and instead the de facto standard.


  • Adventures in F# - F# 101 Part 6 (Lazy Evaluation)

    Time for another adventure in F#, covering some of the basics of functional programming and F# in particular.  This is intended to look not only at the language, but also at the implementation as it relates to C#.

    Where We Are

    Before we begin, let's catch up on where we are today:

    So, today we'll be covering the topic of lazy evaluation, so, without further ado, let's get started.

    Lazy Evaluation

    Another fascinating topic in the land of functional programming is lazy evaluation.  This technique is basically delaying the execution of a given code block until it is needed.  Imagine for a minute that you have a function that is basically an if-else function.  This function would take a boolean condition and two blocks of code; if the condition is true it executes the first block, otherwise the second.  The last thing you'd want to do is evaluate the else block when the condition is true; the results could be disastrous.  Instead, we can delay that evaluation until we ultimately need it.  But why is it useful?  Well, think of infinite calculations or infinite sequences: the last thing you want to do is evaluate all of them eagerly, and instead you evaluate them lazily, on demand.
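
    In C# terms, you can fake the same thing by passing thunks instead of already-evaluated values, so the branch you don't take is never run.  This ifThenElse helper is hypothetical, purely for illustration:

    using System;

    static class LazyIfDemo
    {
        // Each branch is a Func<T> thunk, so only the chosen branch ever executes.
        static T IfThenElse<T>(bool condition, Func<T> ifTrue, Func<T> ifFalse)
        {
            return condition ? ifTrue() : ifFalse();
        }

        static void Main()
        {
            int denominator = 0;

            // The division never runs because the condition is true.
            int result = IfThenElse(denominator == 0, () => 0, () => 100 / denominator);
            Console.WriteLine(result);
        }
    }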

    It's one of the more important features in languages such as Haskell, where Simon Peyton Jones, the principal designer of the Glasgow Haskell Compiler (GHC), really likes this feature and talks about it extensively on DotNetRocks Episode 310.  In Haskell, the computation of all function parameters is delayed by default.  If you'd like to see more about Haskell's lazy evaluation of a Fibonacci sequence, check it out on the HaskellWiki.

    Back to F# now.  In a pure functional language, the compiler should be free to choose the order in which to evaluate the argument expressions.  F#, however, does allow for side effects, such as printing to the console, writing to a file, etc., so reordering or holding off on those kinds of items just isn't possible.  F# by default does not perform lazy evaluation; instead it has what is called eager evaluation.  To take advantage of laziness, use the lazy keyword in the creation of your value and wrap the area to be lazily evaluated inside that block.  You then use the Lazy.force function to evaluate the expression.  Once that is called for the first time, the result is cached for any subsequent call.  Let's walk through a quick sample of how that works:

    #light

    let lazyMultiply =
      lazy
      (
        let multiply = 4 * 4
        print_string "This is a side effect"
        multiply
      )

    let forcedMultiply1 = Lazy.force lazyMultiply
    let forcedMultiply2 = Lazy.force lazyMultiply

    What you'll notice from the above sample is that I intentionally put a side effect inside my lazily evaluated code.  But since the evaluation only happens once, I should only see it in my console once, and sure enough, when you run it, that's the case.  This is called memoization, which is a form of caching.  This uses the Microsoft.FSharp.Control.Lazy class.  But I wonder how it's actually done through the magic of IL.  Since I think C# is a little bit more readable, let's crack open .NET Reflector and take a look.



    What we notice is that it creates a new class for us called lazyMultiply.  There is a lot of syntactic sugar behind the scenes, but as you can see, it's calling the get_lazyMultiply function off my main class.  It's nothing more than a simple property that computes the value on first access and then hands back the cached result.
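
    The decompiled code itself is pretty noisy, but conceptually it boils down to a memoized property, roughly along these lines.  This is a hand-written approximation rather than the literal Reflector output:

    internal class LazyMultiplyHolder
    {
        private int value;
        private bool isComputed;

        public int Value
        {
            get
            {
                // Compute on first access, run the side effect exactly once, then cache.
                if (!isComputed)
                {
                    value = 4 * 4;
                    System.Console.Write("This is a side effect");
                    isComputed = true;
                }
                return value;
            }
        }
    }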



    If you'd like to dig more into the guts of how that works exactly, check out Jomo Fisher's blog post about the Lazy Keyword here.  I'm definitely glad to see F# team members blogging about these sorts of things.

    Lazy Collections?

    F# also has the notion of lazy collections.  What this means is that you can calculate the items in your collection upon demand.  Some of the collections inside the F# library may also cache those results as well.  We have two types that we're interested in, the LazyList class (full name: Microsoft.FSharp.Compatibility.FSharp.LazyList in FSharp.Compatibility.dll) and the Seq class (full name: Microsoft.FSharp.Collections.Seq in FSharp.Core.dll).

    The LazyList is a list which caches the computed results for the items inside the collection.  In order to create a truly lazy list, you must use the unfold function.  What this does is return a sequence of values starting with the given seed.  Nothing is computed until the first element is accessed; the rest of the elements are calculated from the residual state, as I'll show below:

    #light

    let sequence = LazyList.unfold (fun x -> Some(x , x + 1)) 1
    let first20 = LazyList.take 20 sequence

    print_any first20

    What the above sample allows us to do is create an infinite list of numbers, starting at the seed of 1.  The values aren't computed until they are first accessed, and then the results are cached.  In order to get any values out, you need to take x number of values from the list, and LazyList provides that capability with the take function.  What you'll also notice is that we can use None or Some.  Some, as shown above, carries the current value as its first element, and its second element is the state the next call will start from.  None, not shown above, represents the end of the list.

    You may also use the Seq class to represent these lazy values.  Think of Seq as the magic collection class inside F#, as it is compatible with most, if not all, arrays and most collections as well.
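
    If you squint, this on-demand behavior is also what C# iterators give you; something along these lines, purely as an analogy rather than how Seq is implemented:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class LazySequenceDemo
    {
        // An infinite sequence; elements are only produced as the caller pulls them.
        static IEnumerable<int> From(int seed)
        {
            while (true)
            {
                yield return seed;
                seed++;
            }
        }

        static void Main()
        {
            // Take forces only the first 20 elements to be computed.
            foreach (int n in From(1).Take(20))
                Console.WriteLine(n);
        }
    }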

    But what does the F# code above actually compile to?  Very interesting question.  Let's take a look at the main function first:



    Ok, so that looks a bit interesting.  What we see is that we're giving it a seed of 1 and then getting the first 20 from the list.  So what does the sequence@3 really look like?  Let's now take a look at it.  What we're interested in is not the actual constructor, but the Invoke method since, as you remember, it's a lazy list.



    So, as you can see, it sure enough looks like what we had in our code above, with a heck of a lot more angle brackets.  I'm sure that's part of why var was introduced in C# 3.0: to hide this ugliness.

    Conclusion

    Just to wrap things up here on a Friday, I hope you enjoyed looking at some of the possibilities of lazy evaluation.  This could also help with abstract syntax trees, loading of data from files, XML documents, etc.  The real power is being able to take infinite data sequences and consume chunks of them at a time.  And we really don't have to worry about the stack when we do this.  I hope you've enjoyed the series so far; there are a few more things I want to cover before I consider this series done.  Until next time...


  • ASP.NET MVC Source Code Now Available

    As of yesterday, the ASP.NET MVC Release 2 source code has been made available on CodePlex.  ScottGu made the source drop announcement earlier this morning.  Congrats to Phil Haack and the ASP.NET MVC team for shipping the source code.  It's worth noting that it's not open source in the full sense; it's just a zip file, and no outside patches are to be accepted, unlike IronRuby.  The plan is to make incremental drops of the source code going forward.

    As always, check out Jeffrey Palermo's MVCContrib project for contributions to the code base, which is open source.


  • DC ALT.NET March Meeting Wrapup

    Another month and another highly successful DC ALT.NET meeting.  I want to thank Kevin Hegg for hosting the event, as he was a very gracious host.  We had a better than expected turnout, which was very cool.  It's good to associate names with faces after chatting, emailing and whatnot.  I also want to thank Phil McMillan for stepping up to the plate at the last moment to fill in for the scheduled speaker.  It's even more refreshing to not have any Death By PowerPoint (DBPPT) (TM).  I appreciate Phil's talk even more due to the fact that he had a laptop meltdown the night before, so we talked about concepts and implementations without showing any real code.

    What Did We Talk About?

    The format we follow is one hour for our scheduled topic, and the rest is Open Spaces.  Bring a topic and talk about it.  So, for the first hour, Phil led the discussion around interception facilities in Castle Windsor with regards to handling a Unit of Work pattern for a custom written WPF application.  There is a lot of headache that comes with undo logic in WPF controls, so using interception and the unit of work pattern can get around this, although it's not the cleanest code known to man.  We also talked about the headaches of registration inside IoC containers, whether it be in code or in XML, and about Binsor and whether it is an answer to the registration headaches caused by massive XML config files.  Craig Andera just loves bringing up Lisp and Lisp macros, as they solve all programming problems ever invented by man.

    The second hour was an open spaces discussion where we had pretty lively debate about:

    • Dynamic Language Runtime (DLR) and its uses and extensibility model
    • Design by Contract
    • Functional Programming with Lisp/Scheme, Functional Javascript, Erlang, and F# and the value proposition it has
    • Finding the right developer for your organization
    • TDD/Test First Development
    • ALT.NET Open Spaces, Seattle topics
    How We're Different

    Like I said before, we tend to be different from most user groups in the Washington DC area due to the fact that we're an Open Spaces event, for at least half of it.  We don't really do PowerPoint presentations; instead it's a more intimate environment where everyone participates.  I don't think anyone stayed silent during the meeting.  Instead of being lectured to, you're part of the conversation.  We want passionate developers to attend, those who are looking for a better way.  It's even effective when not looking at code or slides for any given product.  I feel we can augment any discussion from any user group in the DC area; we're not here to compete, but to complement them with a more intimate and passionate discussion.

    Where We Go From Here

    After the March meeting, we're looking to hold the next meeting the week after ALT.NET Open Spaces, Seattle.  That should bring some lively discussion and wrapup from the event.  We also look forward to having Jay Flowers make it to the event to discuss Continuous Integration and CI Factory.  Stay tuned for details on our next meeting.  April is going to be a busy month for at least me with the CMAP Code Camp on April 12th, speaking at RockNUG and ALT.NET Open Spaces, Seattle as well as our own DC ALT.NET Meeting.

    Wrapping It Up

    If you're interested in a better way to discuss .NET and related topics and you're in the ALT.NET mindset, then come and join the conversation.  We're always looking for passionate individuals to join in and be a part of it.  Join our mailing list and find out more here.

    kick it on DotNetKicks.com

    Read more...

  • Adventures in F# - FringeDC User Group

    During my Adventures in F# series that I've been posting, I've always wondered where the interest in these languages comes from.  Sure, we have a lot of user groups in the Washington DC area, just to name a few that I've been to or belong to:

    But what I was missing was a place where the true language geeks hang out.  At the last DC ALT.NET meeting, Craig Andera pointed me to a group that does just that, which he came across while looking at Lisp and Scheme over the past year.  The group is called FringeDC, and it's not interested in mainstream languages such as C#, C++, Java, Ruby and so on.  Instead it focuses on fringe languages such as Lisp, Scheme, Haskell, Erlang, Prolog, OCaml, Squeak and so on.  I'm sure we could pile F# and Scala onto that bandwagon as they're both fringe-ish and not mainstream just yet.  If you're in the Washington, DC area, go ahead and check them out.  I'll be sure to attend some of their meetings when time permits.  And if they're interested in some F# material, I'm sure I could deliver.

    kick it on DotNetKicks.com

    Read more...

  • Looking at DSLs in .NET

    As I've mentioned in recent posts such as here, here and here, I've been very interested in Domain Specific Languages (DSLs), especially with regards to F# and the DLR as well.  I recently re-listened to Software Engineering Radio Episode 52 with Obie Fernandez discussing DSLs in Ruby.   One of the things that attracted me to Ruby for this was the flexibility of the syntax for closures, mixins, etc.  Anyhow, it's a good listen and if you're new to the subject, you should give it a go.  Also, there is a slide deck of DSLs in Ruby which accompanies this episode which can be found here

    So, of course this gets me excited about the possibilities of seeing such things in IronRuby.  After seeing John Lam's presentations at MIX08 and listening to him on various podcasts, I'm excited that they are making such progress and hopefully get it into our hands soon.

    But, before we get too deep into things, I just want to take a step back and look quickly at what DSLs are.

    Internal and External DSLs?

    So, what are DSLs?  To put it succinctly, a DSL is a small language used for a very narrow task.  You can think of these as languages specific to a domain such as medical claims processing or stock trading, languages that only have meaning there and probably wouldn't make sense to anyone outside the problem domain.  I'm well aware of this, having worked in the medical claims processing industry: the terms, calculations and so on are very specific, and to solve the problem well, it's best to suit the language to expressing those solutions.

    Martin Fowler wrote an article entitled "Language Workbenches: The Killer-App For Domain Specific Languages?" in which he talks about the history of DSLs, especially in the Lisp world, and how until now they really haven't caught on.  Martin argues that XML structures such as configuration files qualify for DSL status, since they're readable by a human and probably by a domain expert as well.  The Lisp world, meanwhile, is well known for its work with parsing (think Lex/Yacc), expression trees and so on.

    Now the real interesting part comes in when we talk about internal versus external DSLs...

    External DSLs, quite simply, are languages that are not the same language as the main application itself.  This means I'm free to write any free-form code I wish in order to suit my domain specific need.  It also means I need to write parsers and ultimately have a translation boundary between my DSL and my application.  This is where I think something like the Dynamic Language Runtime (DLR) could come into play: you write your language parser against the DLR, which I'll get into shortly.  There is a bit of overhead with this of course, plus the need for a good debugger and IDE, but with time and patience things like this can be overcome.  Extending something like #Develop to encompass those pieces is feasible.

    Internal DSLs, on the other hand, are the little languages you can create inside your current language of choice.  Languages such as Lisp, Ruby, Scala, Boo, and F# seem a bit more suited to these than the mainstream C++, C# and Java.  One of the bigger obstacles is the pesky curly brace, which Ruby allows you to discard; F# doesn't have that concept either, and instead uses indentation to scope values and functions.  Martin has an interesting DSL written in Java that could be applied even better in other languages.  A quick sketch of the idea in C# is below.
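
    To give a feel for what an internal DSL looks like in a curly-brace language, here's a minimal fluent-interface sketch in C#.  The domain and every name in it are made up purely for illustration:

    using System;

    // A tiny made-up "order a drink" internal DSL built as a fluent interface.
    public class Order
    {
        private string drink;
        private int shots;
        private bool toGo;

        public static Order Of(string drinkName)
        {
            Order order = new Order();
            order.drink = drinkName;
            return order;
        }

        public Order WithShots(int count) { shots = count; return this; }
        public Order ToGo() { toGo = true; return this; }

        public override string ToString()
        {
            return string.Format("{0}, {1} shot(s), {2}", drink, shots, toGo ? "to go" : "for here");
        }
    }

    class InternalDslDemo
    {
        static void Main()
        {
            // Reads close to the domain language, but it's still just chained C# calls.
            Console.WriteLine(Order.Of("latte").WithShots(2).ToGo());
        }
    }

    The curly braces, type annotations and method-call punctuation are exactly the noise that languages like Ruby, Boo and F# let you shed.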

    Thinking About The DLR

    As I said before, I'm pretty excited about the DLR and the flexibility it can give me as a software engineer looking for new and better ways to solve my customer's problems.  Not only that, but I'm a language geek at heart, what can I say?  I've posted several items on building on the DLR in the context of external DSLs as well as writing custom compilers for .NET.  Projects such as Irony also appeal to me in that way. 

    If you want to play around with the DLR, you can get it in one of two places, the IronPython download on CodePlex or on RubyForge with the IronRuby project.

    Martin Maly, a member of the DLR team, has continued his posts about building on the DLR.  He took some time off to work on some DLR related issues and is now back with some more posts.  Let's continue where we left off last time:

    So, as you can see, the DLR has progressed quite a bit, and maybe creating that DSL using the DLR isn't so far off after all.  That is, of course, if you understand expression trees and so on.  The great part about the DLR is that the ToyScript source code ships with the download for you to play with and learn from.
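
    To give a rough feel for the expression tree part, here's a small C# sketch using the LINQ expression tree API as a stand-in (the DLR has its own, richer tree types, but the idea is the same):

    using System;
    using System.Linq.Expressions;

    class ExpressionTreeDemo
    {
        static void Main()
        {
            // Build the tree for (x, y) => x * x + y by hand rather than from source text.
            ParameterExpression x = Expression.Parameter(typeof(int), "x");
            ParameterExpression y = Expression.Parameter(typeof(int), "y");
            BinaryExpression body = Expression.Add(Expression.Multiply(x, x), y);

            // Compile the tree into a delegate and invoke it, much as a DSL's parser
            // would turn its own syntax into trees and hand them off for execution.
            Func<int, int, int> squarePlusY =
                Expression.Lambda<Func<int, int, int>>(body, x, y).Compile();

            Console.WriteLine(squarePlusY(4, 2));  // prints 18
        }
    }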

    DSLs In Boo?

    Oren Eini, aka Ayende, has been working on a book about DSLs called "Building Domain Specific Languages in Boo".  Recently, he posted that some sample chapters are now available online along with the source code.  I highly recommend that you at least give it a look.  You now have access to the Early Access Edition, and of course you can buy it online.  It'll be interesting to see whether Ayende wants to cover more of this stuff at ALT.NET Open Spaces, Seattle.

    If you're not familiar with Boo, it's one of Ayende's preferred languages.  Tools such as Binsor (the DSL for Windsor) were written in Boo, and Binsor easily fits Martin Fowler's category of an external DSL.  If you're not familiar with the Boo language itself, it has a syntax very similar to Python and formats very nicely.  #Develop has some support for Boo, making it a first class language in the .NET family, and it also ships as part of the Castle Project.  Anyways, a quick sample of Binsor shows what a nice DSL for type registration in Castle Windsor looks like in Boo script:

      import Rhino.Commons
      logger = Component("console_logger", ILogger, ConsoleLogger)

    Check out the first chapter of Ayende's book which is available free online and explore it yourself for DSLs.

    DSLs in F#?

    Another part that had me intrigued was the possibility of doing this not only in Boo, but in F# as well.  Robert Pickering covers this in his book, Foundations of F#.  In Chapter 11 he covers DSLs in F# and gives a few examples of how you can build them.  To me, it's pretty powerful because you have a lot of the built-in features of a functional programming language such as lists, pattern matching and so on.  One example given is the arguments parser described by the Arg class in the Microsoft.FSharp.Compatibility.OCaml namespace, which lets you parse arguments into well known data structures as first class citizens.  Martin Fowler also gave such an example in the article I quoted from above.  F# lends itself quite well to DSLs thanks to its support for lambda expressions.

    Don Syme also covers these topics in his book Expert F# in Chapter 9.  This covers more language oriented programming techniques, but you can scan some information about building DSLs in F#.  Some interesting parts of this come down to Active Patterns which I covered partially yesterday.  In the coming weeks, I hope to post some of my forays into this, taking some samples from the Ruby community and applying some of the same functionality in F#.

    Conclusion

    There is still much yet to be covered in this topic of DSLs in .NET languages.  We can go on and on with regards to internal and external DSLs and argue about which language is suited for each.   But in the coming weeks, I hope to take some samples and show how they can apply cleanly in F#, and probably run into some language problems where it might not be the best fit.  But, that's the fun part about it.

    kick it on DotNetKicks.com

    Read more...

  • Reminder - DC ALT.NET Meeting March 19th

    Just a reminder: we will be holding the meeting tomorrow, March 19th, from 7-9PM.  The meeting this month brings ALT.NET to Arlington, Virginia.  I want to thank Kevin Hegg for hosting this event.  Unfortunately, Jay Flowers will not be able to attend, so instead Phil McMillan and I will be presenting on IoC containers and interception with Castle Windsor.

    At our last meeting, Stelligent hosted our event and we discussed a lot of great topics.  You can read a wrapup of our last meeting here.  Our format is as follows: the first hour or so is for the scheduled topic, and the second hour, or whenever the talk is done, is for Open Spaces.

    Looking for Sponsors

    As always, we're looking for sponsors for our events.  We bring a lot of passionate developers to your site, and we feel we have a lot to offer.  Sponsorship opportunities are always appreciated!

    Who We Are

    Are you a developer who always keeps an eye out for a better way? Do you look outside the mainstream to adopt the best practices of any development community, including Open Source, Agile, Java, and Ruby communities? Are you always looking for more elegant, more simple, more maintainable solutions? If so, then you might be an ALT.NET practitioner!
     
    This group follows the Open Space Technology model.  In Open Space, a facilitator explains the process and then participants are invited to co-create the agenda and host their own discussion groups.  So, we'll take a vote and present the topics.  We're in the pub-club mindset occasionally, so it's not surprising to find us geeking out at a bar...
     
    This model follows the four basic principles:

    • Whoever comes is the right people
    • Whatever happens is the only thing that could have
    • Whenever it starts is the right time
    • When it's over, it's over
    Time and Location:
    Wednesday, March 19th, 7:00 PM

    Arlington, VA.  See thread for details.

    Come, participate, and make your voice heard!  Come meet passionate developers like yourself in the active discussion.  Hoping for a great turnout...   If you haven't signed up for our list, go ahead and do that here.

    kick it on DotNetKicks.com

    Read more...

  • Adventures in F# - F# 101 Part 5 (Pattern Matching)

    Time for another adventure in F#, covering the 101 level basics of the language and why I think it's useful and how it can even help your C# as well.  This time, I want to spend a good deal of time on pattern matching and a few other topics.

    Where We Are

    Before we begin today, let's quickly catch up on where we are in the series.

    So, like I mentioned before, here are the topics I'm going to cover today:
    • Pattern Matching
    • Active Patterns
    Pattern Matching

    One of the more interesting and powerful features of the F# language is pattern matching.  Don't be tempted to think of it as a simple switch statement, as it is much more powerful than that.  Pattern matching is used to test whether the value under test has a desired shape, to pull out relevant information, or to substitute parts with other parts.  Most functional languages such as Haskell, ML, and OCaml have it, and F# is no exception.

    At first glance, like I said, you might be tempted to think of it as the C# switch statement.  But this is more powerful: you have support for ranges, you can match on .NET types, and you can do plenty of other things you cannot do in other .NET languages.  Let's walk through a simple example of the old Fibonacci sequence that I've shown in the past.  This time, I'll walk through it several times, making improvements each iteration by adding features to handle failure and so on.

    In order to use pattern matching with explicit values, you can use the match and with keywords.  This allows you to match a particular parameter with the following conditions.

    #light

     

    let rec fib n =

      match n with

      | 0 -> 1

      | 1 -> 1

      | x -> fib(n - 2) + fib(n - 1)

     

    print_any (fib 10)


    Ok, so what we have here is a simple match which checks for a 0, then a 1, and then the default case does the Fibonacci sequence calculation.  The first two cases serve as the base cases for the recursion.  But we can do better than this.  Let's try again by compacting it just a little.

    #light

     

    let rec fib n =

      match n with

      | 0 | 1 -> 1

      | x -> fib(n - 2) + fib(n - 1)

     

    print_any (fib 10)


    Now the code is a bit more compact, as we can combine two values in one case as you can see.  But we haven't handled the failure case when the value is less than 0, and that's not a good thing.  Luckily, F# gives us a way to deal with something like this.  We can use a when guard if we want to check for ranges, and we can signal that a condition is a failure with the failwith function.  Let's now walk through the more complete scenario.

    #light

     

    let rec fib n =

      match n with

      | x when x < 0 -> failwith "value must be greater than or equal to 0"

      | 0 | 1 -> 1

      | x -> fib(n - 2) + fib(n - 1)

     

    print_any (fib (-1))


    And sure enough when you run the program through the debugger, you get the nice picture that tells you of the exact error.  It fails with a FailureException and our message shows up as well.



    What I always like to do in situations such as these is to look through .NET Reflector to look at how it might show up if it were rendered in C#.



    Pretty simple: it uses switch statements where it can, and then handles the failure outside of them.  As you can see, the F# is much cleaner and more to the point.  Jacob Carpenter took it to a different level in C#, going through generics and lambda abuse to reproduce the same behavior.  You can read more about that here.
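
    If you don't have Reflector handy, the shape of it is roughly this hand-written C# (my own approximation, not the actual decompiled output):

    using System;

    // Rough hand-written C# equivalent of the F# fib above: a switch where possible,
    // with the failure case handled outside of it.
    public static class FibSample
    {
        public static int Fib(int n)
        {
            if (n < 0)
                throw new Exception("value must be greater than or equal to 0");

            switch (n)
            {
                case 0:
                case 1:
                    return 1;
                default:
                    return Fib(n - 2) + Fib(n - 1);
            }
        }
    }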

    But, that's easy stuff that can be done in C# easily enough, albeit differently.  Let's try matching against .NET types instead, which is something C# cannot do via the switch statement.  In other words, you can definitely tell that it's a bit more powerful than a simple C# switch statement.

    #light

     

    let stringType (t : obj) =

      match t with

      | :? System.Boolean -> "Boolean"

      | :? System.Byte -> "Byte"

      | :? System.Int32 -> "Int32"
      | :? System.Double -> "Double"
      | _ -> "Unknown"

     

    print_string (stringType (box 14uy))

    print_string (stringType (box false))

    print_string (stringType (box "Foo"))


    What the above example does is check for the corresponding .NET types using the :? operator, which is reserved for this type-test behavior.  Also, note the use of the underscore "_" as a wildcard.  Let's take a look at the results in .NET Reflector again to see what they look like.  Again, it's nothing spectacular just yet.
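
    For comparison, a rough hand-written C# version of the same type test (again, my approximation rather than the actual Reflector output) is just a chain of is checks, since the switch statement can't do this:

    // Hand-written C# equivalent of the F# type match: a chain of 'is' tests.
    public static class TypeNameSample
    {
        public static string StringType(object t)
        {
            if (t is bool) return "Boolean";
            if (t is byte) return "Byte";
            if (t is int) return "Int32";
            if (t is double) return "Double";
            return "Unknown";
        }
    }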



    Let's look at yet another sample when calling a recursive function.  This time, let's concatenate a few strings together from an F# list.  This is a pretty simple example of using pattern matching to tell when to stop adding onto the existing string.

    #light

     

    let stringList = ["foo"; "bar"; "foobar"; "barfoo"]

     

    let rec concatStringList = function

      | head :: tail -> head + concatStringList tail

      | [] -> ""

     

    print_string (concatStringList stringList)


    So, what we're doing is concatenating the head value onto the result of processing the tail while there is still a value to process, and returning an empty string once we hit the empty list.  The names head and tail aren't special keywords; they're just identifiers bound by the :: (cons) pattern, where head gets the first value in the list, in our case a string, and tail gets the rest of the list.  What's really cool is that ideas from functional programming like this can actually help your C# and imperative code thinking.  We'll get into that at a later time...  Anyways, I'm always curious what this looks like in Reflector and how it gets translated into C#-ese.
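
    As a rough hand-written analogue (not Reflector's output), the same head/tail recursion in C# might look like the following, with First and Skip playing the roles of head and tail:

    using System.Collections.Generic;
    using System.Linq;

    public static class ConcatSample
    {
        // Recursively concatenate the strings in a list, head-and-tail style.
        public static string ConcatStringList(IEnumerable<string> list)
        {
            if (!list.Any())
                return "";                                         // the [] case
            return list.First() + ConcatStringList(list.Skip(1));  // the head :: tail case
        }
    }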



    You can also match over two or more parameters at once.  This is especially handy when you want to cover all variations of a particular combination without messy C# if statements.  Let's take a look at matching over a tuple to decide whether we allow a URL and port to be accessed:

    #light

     

    let allowUrl url port =

      match (url, port) with

      | "http://www.microsoft.com/", 80 -> true

      | "http://example.com/", 8888 -> true

      | _, 80 -> true

      | _ -> false

     

    let allowMicrosoft = allowUrl "http://www.microsoft.com/" 80

    printfn "%b" allowMicrosoft


    What seems like simple code is actually quite squirrelly to translate into C#.  As you can see from the above code, if we pass in the Microsoft address and port 80, we return true; otherwise we look at the other conditions, and if all else fails, we return false.  But let's take a look at how it's translated into C#.



    Definitely handled very elegantly in our F# code, as this is one of the language's strengths.  With C#, not so much, as you can see from the pretty ugly code it translates to.  Matching against enums is pretty easy as well, with very little coding effort, such as this:

    #light

     

    let calcRateByDay (day:System.DayOfWeek) =

      match day with

      | System.DayOfWeek.Monday -> 0.42

      | System.DayOfWeek.Tuesday -> 0.67

      | System.DayOfWeek.Wednesday -> 0.56

      | System.DayOfWeek.Thursday -> 0.34

      | System.DayOfWeek.Friday -> 0.78

      | System.DayOfWeek.Saturday -> 0.92

      | System.DayOfWeek.Sunday -> 0.18

      | _ -> failwith "Unexpected enum value"

     

    print_any (calcRateByDay System.DayOfWeek.Monday)


    So, just wrapping up this section for now: as you can see, we can do all sorts of things with pattern matching, and I've barely scratched the surface of it.  I won't cover pattern matching over unions just yet; I'll save that for when I talk about them exclusively.

    Active Patterns

    Now that we understand how pattern matching works, let's take this to a different level.  Active patterns allow you to partition input such as various .NET classes into a set of named cases and match against them much like a union.  Unfortunately, Robert Pickering couldn't fit active patterns into his Foundations of F# book, so you can go ahead and read about them here instead.  Luis Diego Fallas also posts about them here.

    To give you a brief sample of what one looks like, let's look at a sample that distinguishes files from directories.

    #light

     

    let (|File|Directory|) (fileSysInfo : System.IO.FileSystemInfo) =

      match fileSysInfo with

      | :? System.IO.FileInfo as file -> File (file.Name)

      | :? System.IO.DirectoryInfo as dir -> Directory (dir.Name, { for x in dir.GetFileSystemInfos() -> x })

      | _ -> assert false


    What this allows us to do is work with files and directories as a tree structure.  You can also use active patterns to walk XML documents and so on.  Read the samples above to get more information about them.
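
    For contrast, here's roughly what you'd write in C# 3.0 today without active patterns: plain type tests and casts, with nothing checking that you've covered every case:

    using System.IO;

    public static class FileSystemSample
    {
        // The C# equivalent is a chain of type tests rather than a single match.
        public static string Describe(FileSystemInfo info)
        {
            FileInfo file = info as FileInfo;
            if (file != null)
                return "File: " + file.Name;

            DirectoryInfo dir = info as DirectoryInfo;
            if (dir != null)
                return "Directory: " + dir.Name + " (" + dir.GetFileSystemInfos().Length + " entries)";

            return "Unknown file system entry";
        }
    }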

    Conclusion


    Pattern Matching to me is one of the most fundamental pieces to F# and functional programming as a whole.  What I've started you with are some basic samples, but these can be applied to binary trees, complex objects and so on.  I hope to cover more scenarios soon, and until next time...

    kick it on DotNetKicks.com

    Read more...

  • CMAP Code Camp April 2008 Registration Open

    As mentioned in my previous post about my speaking schedule, I am helping organize the ALT.NET track at the CMAP Code Camp.  I plan to speak on a few topics and will post them once the schedule is finalized.  Either way, it should be a great time, and bringing ALT.NET ideas to a new audience is always a good thing.  I feel there is a lot of momentum around the movement right now with the help of the "What is ALT.NET?" MSDN Magazine article by Jeremy Miller and Dave Laribee's appearance on Hanselminutes.

    Anyhow, back to the topic at hand.  Registration is now open for the CMAP April 2008 Code Camp. Space is limited for this FREE event, so register soon. Click here to register.

    The Central Maryland Association of .NET Professionals (CMAP) will be holding its Code Camp 2008 Spring Edition on April 12, 2008.  The Code Camp will be held at the Loyola College Graduate Center in Columbia, MD from 9am - 5pm.

    A great source of information for FREE, with a FREE lunch and chances to win great giveaway items.  What a great way to spend a Saturday...

    http://www.cmap-online.org
    http://www.cmapcodecamp.org

    kick it on DotNetKicks.com

    Read more...

  • Upcoming Speaking Schedule for April 2008

    While I'm finishing up my F# post on pattern matching, I thought I'd throw out my speaking schedule for April.  It's going to be a busy one with code camps, user groups and conferences.  Somewhere in there I'm sure I'll find time to sleep and get my own work done.  Below is my current schedule as of right now:

    • RockNUG - April 9th
      Decouple Your Applications with Dependency Injection and IoC Containers

    • CMAP Code Camp - April 12th
      Heading up the ALT.NET track at the CMAP Code Camp to talk about ALT.NET topics such as IoC containers, TDD/BDD, O/RM frameworks, etc

    • DC ALT.NET - April 16th
      Inversion of Control Containers for Cross-Cutting Concerns

    • ALT.NET Open Spaces, Seattle - April 18th-20th
      Design by Contract panel discussion (Proposed).  No guarantees since it is Open Spaces
    Hope to see some of you at these events.  Feedback is always appreciated.

    kick it on DotNetKicks.com

    Read more...

  • Why I'm Excited About ALT.NET Open Spaces, Seattle

    Update:  Catch Dave Laribee on Hanselminutes discussing ALT.NET here.

    It's almost a month away until ALT.NET Open Spaces, Seattle, and things are coming along nicely.  As you may have noted, we filled up rather fast while keeping some spots open for pre-invites.  We have a great crowd of people not only from inside Microsoft, but from outside as well.  We've put a lot of effort into getting this off the ground, and I can admit I've spent a fair bit of time doing so.  But to see names like Ward Cunningham, Jim Shore and Martin Fowler from the Agile space; folks from Microsoft such as Scott Guthrie, Scott Hanselman, Phil Haack, John Lam, Jim Hugunin, Brad Abrams, Charlie Calvert, Brad Wilson and P&P; Microsoft Research folks such as Rustan Leino and Peli; the CodeBetter guys; and the Israeli crowd (Ayende, Osherove, Dahan) just warms my heart.  To give you an idea, I'll put the full list at the end.

    In case you missed it, Jeremy Miller had a great article in the latest MSDN magazine called "What Is ALT.NET?" which sums up my thoughts exactly on the topic.  Very nice stuff!  And I think Ayende might be onto something with these ALT.NET logos here.  Something tells me t-shirts need to be made.

    But where do we go from here?  Dave Laribee and others, myself included, have been kicking around the idea of a RailsConf, QCon, Spring Experience, No Fluff Just Stuff kind of conference.  I really do like that idea: an active learning conference where we in the ALT.NET community can learn from each other, but also spread the message outside.  Dave set out four basic criteria that I think are well worth noting as parameters for such an event:

    1. It would be longer: four or five days.
    2. It would start with 2-3 days of workshops or classes upfront on advanced topics: DDD, T/BDD, Agile, Patterns, SOA/Messaging, etc.
    3. The final days would lead into a kind of "dream conference" with talks given by well-known speakers.
    4. It would cost money, not a lot, but some...
    Anyhow, I love the idea, but I'd rather keep the cost as low as possible.  I realize that many of the "dream conference" speakers we're talking about put a lot of effort into these events, and efforts should be made to accommodate their needs.  I would love to have something like this next year, and who knows, it might just happen.  I would also like some of that effort to help grow things at the local level, as I have been doing with DC ALT.NET.  During this process, we'd love to hear what you think works, what doesn't, and so on.  It's going to be a great time!

    It's hard not to be excited when you see names like these:

    Jonathan de Halleux, Trevor Redfern, Russell Ball, Jonathan Wanagel, Ayende Rahien, Brad Abrams, Shawn Wildermuth, Anil Verma, James Franco, Wendy Friedlander, David Pehrson, Scott Hanselman, James Shore, Donald Belcham, Eric Holton, Michael Bradley, Joey Beninghove, Greg Young, Jesse Johnston, Tom Opgenorth, Harry Pierson, Anand Raju Narayan, Justin-Josef Angel, Chris Sells, Matt Pisut, Jeff Olson, Martin Fowler, Rustan Leino, Oliver, Roy Osherove, Rob Reynolds, Brian Donahue, Alan Buck, Jeff Certain, Sean Solbak, Dave Laribee, Dennis Olano, Owen Rogers, Bertrand Le Roy, Jarod Ferguson, Douglas Schroeder, Terry Hughes, Simon Guest, Rod Paddock, Jean-Paul S. Boodhoo, Dustin Campbell, Eric Ness, David Airth, Aaron Jensen, Wade Hatler, Adam Dymitruk, Chris Salahub, Charlie Poole, John Lam, Ben Scheirman, Brandon Lang, Miguel Angel Saez, Dave Woods, Ashwin Parthasarathy, Matt Hinze, James Kovacs, Alex Hung, Joe Ocampo, Alvin Lee, Steven "Doc" List, Kevin Hegg, D'Arcy Lussier, jakob Homan, Pete Coupland, Rob Zelt, Tom Dean, Joseph Hill, Arvind Palaniswamy, Chris Sutton, khalil El haitami, Kelly Leahy, John Nuechterlein, Troy Gould, Kyle Baley, Rhys Campbell, Joe Pruitt, Ronald S Woan, Michael Nelson, Matthew Podwysocki, Piriya Thongtanunam, Howard Dierking, Pete McKinstry, Dan Miser, Eli Lopian, Raymond Lewallen, Neil Blake, Jacob Lewallen, Mike Stockdale, Kirk Jackson, Brad Wilson, Eric Farr, Jeff Brown, Ian Cooper, John Quach, Cameron Frederick, David Pokluda, Charlie Calvert, Shane Bauer, Rajiv Das, Jeff Tucker, Phil MCmillan, Udi Dahan, Bil Simser, Martin Salias, Bill Zack, Chris Patterson, Greg Banister, Osidosi, Gabriel Schenker, James Thigpen, Phil Haack, Ray Houston, Colin Jack, Robert Smith, Sergio Pereira, Brian Henderson, Michael Henderson, Chantal Laplante, Dave Foley, Ward Cunningham, Bryce Budd, Chris Bilson, Scott Guthrie, Robin Clowers, Craig Beck, Phil Dennis, Jeffrey Palermo, Robert Ream, Carlin Pohl, Glenn Block, Tim Barcz, Dru Sellers, Scott Allen, Jeremy D. Miller, Grant Carpenter, Chris Ortman, Drew Miller, Weston Binford, Buchanan Dunn, Rajbeer Dhatt, Justin Bozonier, Jason Grundy, Greg Sangha , david p buchanan , Don Demsak , Jay Flowers , Adam Tybor , Scott C Reynolds , Chad Myers , Nick Parker , John Teague , Daniel , Jim Hugunin , Scott Koon , Justice Gray , Julie Poole , Neil Bourgeois , Luke Foust

    Still working on my F# posts and IoC container posts, so stay tuned.  Until next time..

    kick it on DotNetKicks.com

    Read more...

  • DC ALT.NET Meeting - March 19th

    I've held off on announcing the DC ALT.NET meeting recently due to scheduling issues.  Anyhow, that has been resolved and we are good to go.  We will be holding it on March 19th from 7-9PM.  The meeting this month brings ALT.NET to Arlington, Virginia.  I want to thank Kevin Hegg for offering his office for our get-together.

    At our last meeting, Stelligent hosted our event and we discussed a lot of great topics.  You can read a wrapup of our last meeting here.  This time, we're going to have Jay Flowers discuss Continuous Integration and CI Factory.  It should be a great discussion, as it's been weighing on my mind lately.  Our format is as follows: the first hour or so is for the scheduled topic, and the second hour, or whenever the talk is done, is for Open Spaces.

    Looking for Sponsors

    As always, we're looking for sponsors for our events.  We bring a lot of passionate developers to your site, and we feel we have a lot to offer.  Sponsorship opportunities are always appreciated!

    Who We Are

    Are you a developer who always keeps an eye out for a better way? Do you look outside the mainstream to adopt the best practices of any development community, including Open Source, Agile, Java, and Ruby communities? Are you always looking for more elegant, more simple, more maintainable solutions? If so, then you might be an ALT.NET practitioner!
     
    This group follows the Open Space Technology model.  In Open Space, a facilitator explains the process and then participants are invited to co-create the agenda and host their own discussion groups.  So, we'll take a vote and present the topics.  We're in the pub-club mindset occasionally, so it's not surprising to find us geeking out at a bar...
     
    This model follows the four basic principles:

    • Whoever comes is the right people
    • Whatever happens is the only thing that could have
    • Whenever it starts is the right time
    • When it's over, it's over
    Time and Location:
    Wednesday, March 19th, 7:00 PM

    Arlington, VA.  See thread for details.


    Come, participate, and make your voice heard!  Come meet passionate developers like yourself in the active discussion.  Hoping for a great turnout...   If you haven't signed up for our list, go ahead and do that here.

    kick it on DotNetKicks.com

    Read more...

  • Singularity - C# OS Released on CodePlex

    Update:  If you want the .iso I used for the VPC, check it out here on my SkyDrive.

    During my research and posts about Design by Contract and Spec#, and my interactions with folks from Microsoft Research, I came across the Singularity OS, an operating system written in an offshoot language based upon C#.  Along the way, I learned that the Singularity team extended Spec#, along with its Design by Contract and static verification pieces, into a new language called Sing#.

    Fast forward to last Tuesday.  Almost five years after the start of development, Singularity has finally been released onto CodePlex under a non-commercial academic license and can be found here.  After reading about it and talking with some Microsoft Research folks, I had to give it a shot.  That's one of the things I love about working at Microsoft: I get to interact with people like these on a regular basis.

    History of Singularity

    During my long commute to and from work, I have the pleasure of listening to many podcasts.  Although I like the ones in the .NET space, such as Hanselminutes and DotNetRocks, I also like to venture into the Ruby community and beyond, where I'm pretty comfortable as well.  One of my absolute favorites is Software Engineering Radio, for the serious talk and geeking out about languages and architecture.  Lo and behold, the latest episode, Episode 88, covers Singularity: Galen Hunt talks with Markus, the host, about the history and features of the OS.  I suggest you listen to that before we go any further.  A good overview can also be found here in PDF format.

    If you think about most operating systems we run today, their essence dates back to the 1970s and is based on C and assembly.  Back in 2003, Galen and team started this effort to write an operating system in managed code.  Over 90% of the system is written in a language called Sing#, an extension of Spec#, which I will get into shortly.  Singularity consists of three major parts: Software Isolated Processes (SIPs), contract-based channels, and manifest-based programs.

    SIPs are interesting parts of Singularity.  They provide a sandbox as it were for program execution free from meddling from outside processes.  This includes its own memory space, threads and so on.  In fact, memory and threads cannot be shared from one SIP to the other, so the vectors for malicious code are cut way down. 

    Contract-Based Channels are another interesting aspect of Singularity.  It's a built-in feature of the Sing# language which I will get to in the next section.  In short, what it provides is a quick and verifiable way of communicating between processes with messages.  To support this, the Spec# language had to be extended to support this.

    Lastly, manifest-based programs are interesting because the manifest defines the code that runs within the SIP and its behaviors.  In Singularity, there is no such thing as Just In Time (JIT) compilation, as all code needs to be loaded into memory and statically verified before it can be executed, which is something a JIT cannot do.  On the other side of this, it makes dynamic languages and late binding impossible as well.  To work around this, they devised a scheme called Compile Time Reflection, so you know your dependencies beforehand, and the appropriate dependencies are injected in a Dependency Injection style.  Really slick stuff!

    Sing#

    Rustan Leino and others at Microsoft Research had already begun an effort called Spec# to provide Design by Contract features to the C# language, along with a static verifier to prove that code actually works as its contracts were written.  Just a quick aside: we're lucky enough to have Rustan at ALT.NET Open Spaces, Seattle to talk about it and Design by Contract (shameless plug).  Anyhow, back to the topic at hand.  Spec# on its own didn't have enough for the static verification that needs to happen here, so Sing# adds contract-based channels, with message declarations and sets of named protocol states.  Any communication that crosses processes must use contract-based channels.  The message declarations state the number and types of arguments for each message and an optional message direction, and each state specifies the possible message sequences leading to other states in the state machine.

    I just want to dig through some code to see exactly what that looks like:

        class DirectoryServiceWorker
        {
            private TRef<DirectoryServiceContract.Exp:Start> epRef;
            private DirNode! dirNode;

            private DirectoryServiceWorker(DirNode! dirNode,
                                          [Claims] DirectoryServiceContract.Exp:Start! i_ep)
                requires i_ep.InState(DirectoryServiceContract.Start.Value);
            {
                epRef = new TRef<DirectoryServiceContract.Exp:Start>(i_ep);
                this.dirNode = dirNode;
                base();
            }

    If you notice from above, you can see some Spec# goodness in there including NonNull types using the ! keyword and also requires preconditions.  It's pretty well written and a lot of fun to dig through.  If you want to learn more about compilers and operating systems, now is the time to sift through the source code and get your geek hat on.

    Building the Image

    If you want to actually run Singularity, the team has provided as part of the zip file, a way to build the operating system.  You'll simply need the following:

    • Windows Debugging Tools
    • .NET Framework 1.1
    • Virtual PC 2007
    • MSBuild
    There is a really well documented PDF file that comes as part of the download to walk you through the build procedure step by step.  Basically, you kick off an MSBuild process which builds the image and then you can mount an iso image to see the results.

    The build process took me about 10 minutes or so.  Then again, if you're running Vista, you need to be sure to launch configure.cmd as an elevated process in order to kick things off properly.  That was the first hurdle, but once I got past it, the rest was easy.  And I got a pretty cool result when I ran the VPC image.  Look at the goodness:



    I haven't played with it all that much just yet.  I'm still figuring out what I can do with it next.  But, that's part of my copious spare time, which doesn't seem to exist much anymore.

    Conclusion

    I've done well with my learning plan this year, keeping to what is on it and not deviating from it.  Luckily, languages such as Spec# and Sing# still fall into that category.  It's pretty fascinating stuff, and it's great to get my hands on an operating system written in managed code.  It's pretty impressive from the things I've read and the code I've read through.  I'm only hoping that research projects such as this make a significant impact on future versions of Windows, let alone future versions of most operating systems.  Until next time...

    kick it on DotNetKicks.com

    Read more...

  • Videos and Interviews from MIX08

    Well, I've had the urge to find all the videos I could and watch them to catch up on all the goodies I missed by not being at MIX08.  If you missed any of the main sessions, you can find them here.  Note that 88 sessions were recorded, so there's a lot of good viewing material.

    Best of all, Scott Hanselman's MVC videos can be seen here.  He also covers the MVC Mock Helpers, which make it easier to write unit tests using various mock frameworks including Rhino Mocks, TypeMock.NET and Moq.

    Dave Laribee was great on Twitter, making sure we were all kept up to date with all the good things that were happening.  Brendan Tompkins supplied Dave with a video phone so that he could capture impromptu videos, and best of all, they were broadcast live.  He was able to talk to guys like Rob Conery, Phil Haack, Steve Harman, Miguel, John Lam, Scott Hanselman and Josh Holmes.  Very cool stuff!  They were pretty good and entertaining, although the video wasn't always superb and sometimes you needed motion sickness pills.  But, the sessions of note are:

    More of them can be found here.  And it's enough to make Phil think that he has enough of a fidgeting problem...  Well, enjoy!

    kick it on DotNetKicks.com

    Read more...

  • RockNUG Meeting 3/12/2008 - Refactoring in C#

    The Rockville .NET User Group (RockNUG) will be holding its next meeting on Wednesday, March 12th, 2008 from 6:30PM-9:00PM.  This month, they have a pretty interesting topic: refactoring in C# with Jonathan Cogley.  I've had my refactoring and agile boots on lately, so I can definitely relate.  I don't know what I would do without my ReSharper 4.0 nightly builds...  I've had a few issues here and there, but nothing to discourage me from continuing to use them.

    Anyhow, here are the details:

    Location:
    Montgomery College, Rockville
    Humanities Building - Room 103


    Refactoring in C# - Bad code to better code
    presented by Jonathan Cogley

    What could be more fun on a Wednesday evening than critiquing some bad
    code and making it better? :) Come along to learn how to clean code like
    the Thycotic team. What do we look for? How do we take small steps to
    keep it working? What tips and tricks make it easier? This session
    will be code, code and more code (and a few unit tests of course!).

    Jonathan Cogley is the founder and CEO of thycotic - a software
    development company operating in the Washington DC Metro Area with
    offices in Vienna, Virginia. Jonathan has worked for many interesting
    companies over the last decade as a software consultant in both the UK
    and the USA. His company has released various .NET projects and APIs
    including an implementation of Remote Scripting for .NET, a database
    platform independent data access layer for ISVs and various tools for
    the Test Driven Developer. Test Driven Development (TDD) is the
    cornerstone of the thycotic approach to software development and the
    company is committed to innovate TDD on the Microsoft .NET platform with
    new techniques and tools. Jonathan is also a columnist and editor for
    the popular ASP.NET Web site, ASPAlliance. He is an active member in the
    developer community and speaks regularly at various .NET User Groups,
    conferences and code camps across the US. Jonathan is recognized by
    Microsoft as an MVP for C# and has also been invited to join the select
    group of the ASPInsiders who have interactions with the product teams at
    Microsoft.

    The schedule for the event is as follows:

    • n00b Session (6:30 - 7:00): ASP.NET GridView Part III
    • Pizza/Announcements (7:00 - 7:30)
    • Featured Presentation (7:30 - 9:00): Refactoring in C#

    Hope to see a great crowd there!  I'm looking forward to it.

    kick it on DotNetKicks.com

    Read more...

  • IoC Container, Unity and Breaking Changes Galore

    Update: IoC and Unity - The Basics and Interception

    As Grigori Melnik noted on my blog previously, as well as on his own, there was a brand new drop of the Unity Application Block on March 4th.  This was by far a huge update with a lot of breaking changes.  That'll teach me to use a CTP of any product and blog about it actively as it compares to other Inversion of Control (IoC) containers.  Glad I didn't do a lot on ASP.NET MVC just yet, though I have a few good projects going on the side with that now.

    Where I've Been Before

    As noted here before, I've been spending some time actively comparing the Unity Application Block to other IoC containers and what each offers.  Let's get caught up to my previous posts:

    I'm probably going to update the code in each of those posts so that you all don't get angry about code from me not working, so stay tuned, it will get done.  But keep in mind it is still a CTP, and I'd rather concentrate on the concepts anyhow.

    What's Changed

    As you may notice if you did any of the code snippets I published, they sure as heck don't build anymore.  So, let's enumerate the changes made so far:
    • UnityContainer method Register<TFrom, TTo>() becomes RegisterType<TFrom, TTo>()
    • UnityContainer method Get<T>() becomes Resolve<T>
    • UnityContainer method SetSingleton<T> removed
    • UnityContainer method RegisterType<TFrom, TTo> accepts a LifetimeManager to create Singletons
    • Added ContainerControlledLifetimeManager and ExternallyControlledLifetimeManager
    • Removed DependencyAttribute NotPresentBehavior property and enum
    • Added UnityContainer CreateChildContainer method to create child sandboxes
    • Reversed the parameters for UnityContainer method RegisterInstance<T>(value, key) to RegisterInstance<T>(key, value)
    As you can see, some of these changes can be quite annoying, especially when I have a method such as the following and can't find the error until I sit and read the code through and through:

    Old:

    IUnityContainer container = new UnityContainer();

    container

        .RegisterType<ILogger, ConsoleLogger>(new ContainerControlledLifetimeManager())

        .RegisterInstance<string>("administrator@example.com", "defaultFromAddress");


    And sure enough, it blew up at runtime: since the signature is RegisterInstance<string>(string, string), the old argument order still seemed just fine to the compiler, and I had to change it to the following to get it to work.

    New:

    IUnityContainer container = new UnityContainer();

    container

        .RegisterType<ILogger, ConsoleLogger>(new ContainerControlledLifetimeManager())

        .RegisterInstance<string>("defaultFromAddress", "administrator@example.com");


    Nothing from what I can tell changed in the XML configuration, but I prefer to register my types by code instead.  The only thing I really like to switch out from time to time is a Logger of some sort.

    Another interesting aspect is the concept of child containers, each in its own sandbox, as it were.  Take, for example, two loggers, a DebugLogger and a ConsoleLogger.  We have the ability to register these in two different child containers, both managed by the parent.  Let's walk through a simple example of this.

    using Microsoft.Practices.Unity;

     

    namespace UnitySamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                UnityContainer parentContainer = new UnityContainer();

                IUnityContainer childContainer1 = parentContainer.CreateChildContainer();

                childContainer1.RegisterType<ILogger, ConsoleLogger>(new ContainerControlledLifetimeManager());

                IUnityContainer childContainer2 = parentContainer.CreateChildContainer();

                childContainer2.RegisterType<ILogger, DebugLogger>(new ContainerControlledLifetimeManager());

     

                ILogger logger1 = childContainer1.Resolve<ILogger>();

                ILogger logger2 = childContainer2.Resolve<ILogger>();

                logger1.Log("Foo");

                logger2.Log("Bar");

            }

        }

    }


    What seemed a little annoying to me is that CreateChildContainer is not available on the IUnityContainer interface, only on the UnityContainer concrete class.  But as you can see, I create two loggers, each in its own sandbox: one a ConsoleLogger and one a DebugLogger.
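
    For completeness, the snippets above assume some simple logger types along these lines; these are my own stand-in definitions, not anything that ships with Unity:

    using System;
    using System.Diagnostics;

    // Stand-in logger types used by the Unity samples above.
    public interface ILogger
    {
        void Log(string message);
    }

    public class ConsoleLogger : ILogger
    {
        public void Log(string message) { Console.WriteLine(message); }
    }

    public class DebugLogger : ILogger
    {
        public void Log(string message) { Debug.WriteLine(message); }
    }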

    What I'd Like To See

    As I've said before, I'm not entirely sold on the Unity Application Block, as I like a lot of the other offerings in the Open Source community as well, including Castle Windsor, StructureMap and Spring.NET.  It's heartening that they're taking a lot of advice from the community to heart and doing quick drops for public feedback.  When you're evaluating IoC containers, it's critical to have a list of essential features the container must support.  You may also note that your needs can change from project to project, so don't assume that because a container worked well here it will suit your needs on all projects.  Due diligence must be done in tool and framework analysis to get the right tool for the job.

    With that in mind, here are a few things I'm looking for right now in an IoC container:
    • Must be lightweight (No heavy XML lifting required)
    • Must support interception and AOP with runtime weaving (compile time might be nice)
    • Should be opinionated so that I don't need to mark my dependencies in my constructor, instead match by name
    • When I ask for a logger, give me one and I don't care which
    • Support easy object registration through fluent interfaces
    But like I said, it evolves quickly and changes from project to project.  What's your list and why?

    Further Thoughts About IoC

    Nick Malik recently blogged about IoC containers and their overuse in a two-part series here and here.  In the first post, he argues that people are religious about the use of IoC containers rather than just the Dependency Injection pattern.  In the second post, he covers how the use of an IoC container is only as good as the team that understands its impact and applicability.  Most tools have their limits and uses; mocking frameworks have the same issue, and once you start mocking parts of your tests, there is a good chance for abuse and overuse as well.  I see his point, but I disagree that many teams aren't up for IoC containers and the generally good design practice of Dependency Injection.

    Wrapping It Up

    Next time, I'd like to get into the extensibility model of each container in terms of registering extensions and so on.  In the meantime, I hope you go and evaluate your needs for an IoC container, or even whether you need one at all.  Until next time...

    kick it on DotNetKicks.com

    Read more...

  • ASP.NET Team Releases for Mix 2008

    For all those interested in the information and the latest bits from the ASP.NET Team, here are the latest links.

    Downloads:


    ASP.NET Updates:
    Jeffrey Palermo, the founder of MVCContrib and CodeCampServer has posted the changes from what he can gather here.  As noted, you must uninstall the old bits first before installing the new ones.

    I've been playing with the bits a bit lately and I must admit it's a lot better now.  But I'm noticing that the ASP.NET team seems to want us to use more of the Supervising Controller/Presenter pattern and less of the Passive View pattern.  Brad Wilson also notes that here.  It hasn't dampened my usage of it yet, as I have adapted my designs since then.  After all, you have to be a little flexible when using a CTP.

    kick it on DotNetKicks.com

    Read more...

  • Adventures in F# - F# 101 Part 4

    Time for another adventure in F#, covering the 101 level basics of the language and why I love it as much as I do.  This time we're going to cover some topics such as custom operators, lists and so on. 

    As I want to stress in every installment of this series, functional programming has had a real influence on the .NET framework.  Don Syme, the creator of F#, was instrumental in bringing generics into the .NET framework, and such things as lambdas, object initializers, collection initializers and implicit initialization came at least partially from ideas in functional programming and F#.

    Where We Are

    Before we begin, let's quickly catch up on where we are in the series:

    Drink up the goodness and let's continue.  Today, I'd like to cover the following topics:
    • A Brief History of F#
    • Operators
    • Lists
    The list may seem small, but there is a lot to learn with all of this.  But before we get technical again, let's step back and get a brief lesson of how F# came to be.

    A Little History Lesson

    As you may know, functional programming is one of the older programming paradigms, dating back to the 1950s.  In fact, as with most things, it can be traced back to Lisp in the late 1950s.  Like I've said before, most good things in programming nowadays can trace their history to Lisp or Smalltalk.

    Anyhow, back in 2002, Don Syme and a team at Microsoft Research wanted a Metalanguage (ML) approach to language design on the .NET platform.  While this project was going on, Don was also working on implementing generics in the .NET platform.  These two projects worked their way into becoming F#, which made its first appearance in 2005.

    As you may note from looking at F# libraries such as FSharp.Compatibility.dll and the Microsoft.FSharp.Compatibility.OCaml namespace, F# shares a common base with Objective Caml (OCaml), another language in the ML family, which has been around since 1996.  F# also borrowed from Haskell in regards to sequences and workflows.  But in the end, as you may know, F# splits from those languages quite a bit because it allows for imperative and object oriented programming.  The really interesting piece of this puzzle is being able to interoperate with other .NET languages as a first class citizen.

    One thing that I found particularly interesting about F# is that it is very portable, unlike some other .NET languages.  What I mean is that you're provided with the source so that you can use it with Mono, the Shared Source CLI or any other ECMA 335 implementation.  I've been able to fire up a Linux box with Mono and sure enough it compiles and runs without fail.  Now onto the technical details...

    Operators

    Operators are an interesting and compelling feature in F#.  Most modern languages, with the notable exception of Java, let you overload operators, and I think it's a compelling feature when dealing with DSLs and such.  F# supports two kinds of operators, prefix and infix.  A prefix operator takes one operand and precedes it, such as (-x).  An infix operator takes two or more operands, as in (x + y).

    There are many operators in the F# libraries quite frankly and I'll list just a few of them:

    Arithmetic Operators
    Operator Meaning
    + Unchecked addition
    - Unchecked subtraction
    * Unchecked multiplication
    / Division
    % Modulus
    - Unary Negation

    Bitwise Operators
    Operator Meaning
    &&& Bitwise and
    ||| Bitwise or
    ^^^ Bitwise Exclusive Or
    ~~~ Bitwise Negation
    <<< Left shift
    >>> Right shift

    Assignment and Others
    Operator Meaning
    <- Property Assignment
    -> Lambda Operator
    :: List cons (prepend)
    |> Forward Operator
    .[ ] Index Operator
    :? Pattern Match .NET Type

    And many many more.  It's a massive array of built-in operators...

    Back to the subject at hand.  As in C# and other .NET languages, operators can be overloaded.  Unlike C#, though, both operands for an operator must be of the same type.  Also unlike C#, F# will allow you to not only define new operators but redefine existing ones as well.  Below I'll show an example of using the common concat operator with strings, and then redefine the division operator to do something really stupid: multiplication instead.  Aren't I crafty?

    #light

     

    let fooBar = "Foo" + "Bar"

    print_string fooBar

     

    let (/) a b = a * b

    printfn "%i" (4 / 2)


    So, as you can see, it's rather powerful what we can do with existing operators.  But with F#, we can also define our own operators.  In fact, we can use any of the characters below in a new operator:

    ! $ % & * / + - . < = > ? @ ^ | ~ as well as a :

    As I did above, it should be no different for us to create a brand new operator.  Let's walk through a simple operator that squares the first operand and adds the second.

    #light

     

    let ( *:/ ) a b = (a * a) + b

    printfn "%i" (45 *:/ 42)


    If you're curious like me what that actually looks like, let's look through .NET Reflector and see what it creates for us.  I'll just copy and paste the results here:

    public static int op_MultiplyColonDivide(int a, int b)

    {

        return ((a * a) + b);

    }


    But we can also define operators for our custom types as well.  Let's go ahead and define a simple type called Point and then define the addition and subtraction operators.  This syntax should look rather familiar and to me, it looks a little Ruby-esque. 

    #light

     

    type Point(dx:float, dy:float) =

      member x.DX = dx

      member x.DY = dy

     

      static member (+) (p1:Point, p2:Point) =

        Point(p1.DX + p2.DX, p1.DY + p2.DY)

     

      static member (-)(p1:Point, p2:Point) =

        Point(p1.DX - p2.DX, p1.DY - p2.DY)

     

      override x.ToString() =

        sprintf "DX=%f, DY=%f" x.DX x.DY

     

    let p1 = new Point(5.0, 6.0)

    let p2 = new Point(7.0, 8.0)

    let p3 = p2 - p1

    let p4 = p2 + p1

     

    printfn "p3 = %s" (p3.ToString())

    printfn "p4 = %s" (p4.ToString())
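
    For comparison, here's roughly the same Point type written in C# with overloaded + and - operators; notice how much more ceremony is involved than in the F# version:

    using System;

    public class Point
    {
        public double DX { get; private set; }
        public double DY { get; private set; }

        public Point(double dx, double dy)
        {
            DX = dx;
            DY = dy;
        }

        public static Point operator +(Point p1, Point p2)
        {
            return new Point(p1.DX + p2.DX, p1.DY + p2.DY);
        }

        public static Point operator -(Point p1, Point p2)
        {
            return new Point(p1.DX - p2.DX, p1.DY - p2.DY);
        }

        public override string ToString()
        {
            return string.Format("DX={0}, DY={1}", DX, DY);
        }
    }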


    Lists

    Lists are one of the most fundamental pieces built right into the F# language itself.  The vocabulary around lists is a bit verbose, but I'll cover the basics through the use of code.  Lists in F# are immutable, meaning that once they are created, they cannot be altered.  Think of them like a .NET System.Array in a way, but a little bit more powerful.  Do not confuse them with F# arrays, though, which are entirely different and will be talked about later.  Let's show the basics below:

    #light

     

    let empty = [] (* Empty List *)

    let oneItemInList = 1 :: [] (* One Item *)

    let twoItemsInList = 1 :: 2 :: [] (* Two Items *)

    List.iter(fun x -> printfn "%i" x) twoItemsInList (* Iterate Over List *)

     

    let threeItems = [1 ; 2 ; 3] (* Shorthand notation *)

    let listOfList = [[1 ; 2 ; 3]; [4 ; 5; 6]; [7 ; 8 ; 9]]

    List.iter(fun x -> print_any x) listOfList (* Iterate over list of list *)


    As you can see from the example above, there are a couple of ways of declaring lists: the verbose cons notation that builds up from the empty list, or the shorthand style of putting the items inside square brackets.  What's more interesting is that we can have lists of lists as well.

    Another interesting aspect of F# lists is list comprehensions.  Using this syntax, we can specify ranges to use in our lists.  If you specify a double period (..) between two bounds, F# fills in the values in between.  I have three samples below which include 0 to 100, lower-case a to z and upper-case A to Z.

    You can also specify a step, much like a for loop with a step value.  Note that character ranges do not support stepping.  To use the stepping capability, the format is [start value .. step value .. end value].  Below I have a sample which pulls all even numbers from 100 down to 0, stepping -2 each time.

    For loops can also be used inside a comprehension to transform each item along the way: use the for keyword with a range, followed by -> and the expression that produces each value.  Below I have an example where I double each number from 1 to 20.

    Another variation is a for loop with a when guard, which keeps only the values that match your criteria.  The example below produces all odd numbers using the guard when x % 2 <> 0.

    #light

     

    let zeroToOneHundred = [0 .. 100] (* 0 to 100 *)

    let lowerCaseAToZ = ['a' .. 'z'] (* Lower-case a-z *)

    let upperCaseAToZ = ['A' .. 'Z'] (* Upper-case a-z *)

    let evensFromOneHundred = [100 .. -2 .. 0] (* Even numbers from 100 down to 0 *)

    let doubledValues =

      { for x in 1 .. 20 -> x + x } (* Double each value from 1 to 20 *)

     

    let odds n =

      { for x in 1 .. n when x % 2 <> 0 -> x } (* Odd numbers from 1 to n *)

    print_any (odds 20)


    And I've only just begun to touch the basics of lists.  Dustin Campbell also covered lists recently in his Why I Love F# series.

    Conclusion

    We still have plenty to go before I'd say we were done with a 101-level exercise in F#.  We have yet to cover pattern matching, lazy evaluation, exception management and even using F# with imperative and object-oriented code.  So, subscribe and stick around.  Until next time...


  • Live From Mix08

    No, I'm not at Mix08 right now, but I'm busy paying attention to every detail.  I'm missing a lot of really cool things such as:

    • Silverlight downloads at 1.5 million a day
    • IE 8 preview with Firebug?
    • SQL Server Data Services
    Also, you can now download IE8 from here, so the news just keeps coming...

    You can keep up too by paying attention to the following places:
    • Microsoft's live streaming video from Mix, plus some short videos on:
      • IE 8
      • Scott Guthrie
      • Ray Ozzie
      • Dean Hachamovitch
    • CodeBetter's live stream by Dave Laribee which features short snippets from the event.
    • Josh Holmes is covering the event and has noted about Ray Ozzie's keynote as well as other things.
    • John Lam will be at Mix as well and will be tweeting throughout the conference with his Twitter name john_lam.  I know I'm following it now...
    Also, if you're at Mix08, make sure you stop by Josh Holmes's Open Spaces at Mix.  You can find more details about it here.


  • IoC Containers, Unity and ObjectBuilder2 - The Saga Continues

    Update:  Fixed code that changed from the CTP, and more posts in the series:


    I just wanted to revisit the Unity Application Block once more to look at a few more things, including handling parameters, instances and so on.  If you hadn't seen, there was another source drop of Unity as of 2/26, so if you haven't picked it up, feel free to do so here.

    So, we're going to continue our look at DI and IoC containers as they pertain to different needs.  I'm not going to be a gloryhound for any particular IoC container; instead, I'll just lay out how each one does its job and let you decide whether you want to use it or not.  And that's the main point I want to emphasize here: each tool has its purpose and the needs it solves, and it's up to you to decide whether it fits your needs and programming style.

    Where We Are

    Anyhow, I covered the Unity Application Block in a few other posts lately as it relates to other Inversion of Control (IoC) containers.  I covered the basics with the Unity Application Block, StructureMap and Castle Windsor.  Check them out before we go any further today:
    Now that we're caught up, let's dive into the other subjects in the same area of concern.  Unity has an interesting way of dealing with several issues that I want to cover with respect to the other IoC containers.  They are:
    • Managing Instances and Parameters
    • Method Call Injection
    • Unity Application Block versus ObjectBuilder2
    Managing Instances or Parameters?

    The previous two times, we talked about constructor injection and setter injection, but not really about managing simple types and mapping them to our constructors.  It's usually a simple thing, and yet we have three different ways of attacking the very same problem.  Pointing that out is the very point of these blog posts, and it's echoed once again here.

    To set up this scenario, we have a simple SmtpEmailService which is used to send emails to users in our system.  It's a pretty naive sample, but it will illustrate simple constructor injection with both simple and complex types.  We're going to inject a default from address and a logger into our service and go from there.

    We'll start with our Unity sample of how we're going to map those simple and complex types to the constructor.  Let's start off with the basic logger interface and implementation.  I'll only post this part once since it applies the same way to each of the samples.

    namespace UnitySamples

    {

        public interface ILogger

        {

            void Log(string message);

        }

    }


    And once again, we have our concrete implementation with the console logger.

    using System;

     

    namespace UnitySamples

    {

        public class ConsoleLogger : ILogger

        {

            public void Log(string message)

            {

                Console.WriteLine(message);

            }

        }

    }


    Now, let's tie it together with the point of today's post, the SmtpEmailService.  We will have two parameters in this scenario, the default email address, and the logger used for logging.

    using Microsoft.Practices.Unity;

     

    namespace UnitySamples

    {

        public class SmtpEmailService : IEmailService

        {

            private readonly string defaultFromAddress;

            private readonly ILogger logger;

     

            public SmtpEmailService([Dependency("defaultFromAddress")] string defaultFromAddress, ILogger logger)

            {

                this.defaultFromAddress = defaultFromAddress;

                this.logger = logger;

            }

     

            public void SendEmail(string to, string subject, string body)

            {

                // Send email

                logger.Log(string.Format("Sending email from {0} to {1} with the subject {2} and body {3}", defaultFromAddress, to, subject, body));

            }

        }

    }
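
    One thing these samples reference but never show is the IEmailService interface itself.  It isn't in the original snippets, so the version below is just a minimal guess based on how SmtpEmailService uses it, included only so the code compiles:

    namespace UnitySamples
    {
        // Assumed interface; inferred from the SendEmail call above.
        public interface IEmailService
        {
            void SendEmail(string to, string subject, string body);
        }
    }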


    As you may note, I'm using the DependencyAttribute to decorate the parameter defaultFromAddress.  This correlates to the key specified in the config file or when you register an instance, and I'll show you that later.  
    Let's take a look at the program.cs to use the XML config file to wire up our instances.

    using System.Configuration;

     

    using Microsoft.Practices.Unity;

    using Microsoft.Practices.Unity.Configuration;

     

    namespace UnitySamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                IUnityContainer container = new UnityContainer();

                UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");

                section.Containers.Default.GetConfigCommand().Configure(container);

                SmtpEmailService service = container.Resolve<SmtpEmailService>();

                service.SendEmail("me@example.com", "Test email", "This is a test email");

            }

        }

    }


    The above code is pretty standard, as we've used it before, so no real surprises here.  So, let's take a look at the XML configuration necessary to make this work:

    <?xml version="1.0" encoding="utf-8" ?>

    <configuration>

        <configSections>

            <section name="unity"

                    type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection,

                      Microsoft.Practices.Unity.Configuration" />

        </configSections>

        <unity>

            <containers>

                <container>

                    <types>

                        <type type="UnitySamples.ILogger,UnitySamples"

                              mapTo="UnitySamples.ConsoleLogger,UnitySamples"

                              lifetime="Singleton"/>

                    </types>

                    <instances>

                        <add name="defaultFromAddress" type="System.String" value="administrator@example.com" />

                    </instances>

                </container>

            </containers>

        </unity>

    </configuration>


    What I did was utilize the instances node and add an instance called defaultFromAddress, then set its value.  This node is only used to initialize simple types such as strings, DateTimes, System.Doubles, System.Int32 and so on.  So, complex types get initialized in the types node, and simple types in the instances node.
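
    As a sketch, registering a couple of other simple types in the instances node would follow the same add-node format.  The maxRetryCount and defaultTimeout entries below are made up purely for illustration; only defaultFromAddress comes from the actual sample, and I'm assuming the Int32 and Double type names behave the way the documentation describes:

    <instances>
        <add name="defaultFromAddress" type="System.String" value="administrator@example.com" />
        <add name="maxRetryCount" type="System.Int32" value="3" />
        <add name="defaultTimeout" type="System.Double" value="30.5" />
    </instances>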

    But, we can also register things through the regular programmatic interfaces as well such as this:

    using Microsoft.Practices.Unity;

     

    namespace UnitySamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                IUnityContainer container = new UnityContainer();

                container

                    .RegisterType<ILogger, ConsoleLogger>(new ContainerControlledLifetimeManager())

                    .RegisterInstance<string>("administrator@example.com", "defaultFromAddress");

                SmtpEmailService service = container.Resolve<SmtpEmailService>();

                service.SendEmail("me@example.com", "Test email", "This is a test email");

            }

        }

    }


    As you'll note, I used the fluent interface approach to register the ConsoleLogger as well as a string instance holding my value and the key I wanted to use.  The first argument is the value, and the second is the key that correlates to my DependencyAttribute.

    Now, let's look at the StructureMap instance using the same approach.  The only pieces I intend to show are just the pieces I changed.  So, let's take a look at the SmtpEmailService once again.

    namespace StructureMapSamples

    {

        public class SmtpEmailService : IEmailService

        {

            private readonly string defaultFromAddress;

            private readonly ILogger logger;

     

            public SmtpEmailService(string defaultFromAddress, ILogger logger)

            {

                this.defaultFromAddress = defaultFromAddress;

                this.logger = logger;

            }

     

            public void SendEmail(string to, string subject, string body)

            {

                // Send email

                logger.Log(string.Format("Sending email from {0} to {1} with the subject {2} and body {3}", defaultFromAddress, to, subject, body));

            }

        }

    }


    As you may note from above, I had no need for the dependency attribute or anything like that.  I haven't bothered to add the PluginAttributes either, since I want to do it the XML config way.  So, let's next look at the program.cs which does the simple email wireup. 

    using StructureMap;

     

    namespace StructureMapSamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                SmtpEmailService service = ObjectFactory.GetInstance<SmtpEmailService>();

                service.SendEmail("me@example.com", "Test email", "This is a test email");

            }

        }

    }


    And then we wire things up with our StructureMap.config file.  Note that we set up things through an Instance node and then set the constructor injection parameters through property nodes.

    <?xml version="1.0" encoding="utf-8" ?>

    <StructureMap>

        <Assembly Name="StructureMapSamples"/>

      <PluginFamily Type="StructureMapSamples.ILogger" Assembly="StructureMapSamples" DefaultKey="Console" Scope="Singleton">

        <Plugin Assembly="StructureMapSamples" Type="StructureMapSamples.ConsoleLogger" ConcreteKey="Console" />

        </PluginFamily>

      <PluginFamily Type="StructureMapSamples.SmtpEmailService" Assembly="StructureMapSamples" DefaultKey="EmailService">

          <Plugin Assembly="StructureMapSamples" Type="StructureMapSamples.SmtpEmailService" ConcreteKey="SmtpEmailService" />

          <Instance Key="EmailService" Type="SmtpEmailService">

              <Property Name="defaultFromAddress" Value="administrator@example.com" />

              <Property Name="logger" Key="Console" />

          </Instance>

      </PluginFamily>

    </StructureMap>


    I tidied up the XML configuration a bit after taking Jeremy Miller's advice and trying to collapse it down as much as possible.  I'm sure I still don't have it to the smallest amount possible, but at least it's legible.  Now, let's go ahead and use the DSL for configuration instead of the simple program.cs we have above.

    using StructureMap;

    using StructureMap.Configuration.DSL;

     

    namespace StructureMapSamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                StructureMapConfiguration.UseDefaultStructureMapConfigFile = false;

                StructureMapConfiguration.BuildInstancesOf<ILogger>()

                    .TheDefaultIsConcreteType<ConsoleLogger>().AsSingletons();

                StructureMapConfiguration.BuildInstancesOf<SmtpEmailService>().TheDefaultIs(

                    Registry.Instance<SmtpEmailService>().UsingConcreteType<SmtpEmailService>()

                        .WithProperty("defaultFromAddress").EqualTo("administrator@example.com"));

                SmtpEmailService service = ObjectFactory.GetInstance<SmtpEmailService>();

                service.SendEmail("me@example.com", "Test email", "This is a test email");

            }

        }

    }


    As you can see from above, it's pretty simple to use the fluent interface here as well to register what we're interested in.  We make sure it doesn't use the default StructureMap.config file, build up the ILogger as a singleton, and then configure the SmtpEmailService to use administrator@example.com for its defaultFromAddress.  StructureMap figures out the ILogger dependency on its own.

    Now, let's finally move on to Castle Windsor.  I must admit, I'm most familiar with Castle Windsor, but I'm learning something new about each of these every day, which is a good thing.  The only pieces I really need to show are the program.cs and the configuration file, since that's all that changes.  So, let's start with the program.cs file:

    using Castle.Windsor;

    using Castle.Windsor.Configuration.Interpreters;

     

    namespace CastleSamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                WindsorContainer container = new WindsorContainer(new XmlInterpreter());

                SmtpEmailService service = container.Resolve<SmtpEmailService>();

                service.SendEmail("me@example.com", "Test email", "This is a test email");

            }

        }

    }


    This is a pretty simple example of using the XmlInterpreter, which reads the XML configuration file to figure out what we want to set up.  And then our XML configuration file should look something like this:

    <?xml version="1.0" encoding="utf-8" ?>

    <configuration>

        <configSections>

            <section name="castle"

                type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />

        </configSections>

        <castle>

        <components>

            <component id="emailservice.smtp"

                       type="CastleSamples.SmtpEmailService, CastleSamples">

                <parameters>

                    <defaultFromAddress>administrator@example.com</defaultFromAddress>

                </parameters>

            </component>

            <component id="logger.console"

                       service="CastleSamples.ILogger, CastleSamples"

                       type="CastleSamples.ConsoleLogger, CastleSamples"

                       lifestyle="singleton" />

        </components>

        </castle>

    </configuration>


    I could also have specified the logger as a parameter by adding <logger>${logger.console}</logger> inside the parameters node to say explicitly that it uses the ConsoleLogger, but it's not really necessary, since Windsor resolves the ILogger dependency on its own.  Just to show the shape of it, the fully spelled-out component node is sketched below.
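
    This is simply the emailservice.smtp component from the configuration above with the logger parameter added; the ${logger.console} reference points at the component id declared earlier, and nothing else changes:

    <component id="emailservice.smtp"
               type="CastleSamples.SmtpEmailService, CastleSamples">
        <parameters>
            <defaultFromAddress>administrator@example.com</defaultFromAddress>
            <logger>${logger.console}</logger>
        </parameters>
    </component>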

    Now, let's use the XML-free way of configuring things, just to prove the point that we can do it both ways:

    using System.Collections.Generic;

    using Castle.Core;

    using Castle.Windsor;

     

    namespace CastleSamples

    {

        class Program

        {

            static void Main(string[] args)

            {

                WindsorContainer container = new WindsorContainer();

                container.AddComponentWithLifestyle("logger.console", typeof (ILogger), typeof (ConsoleLogger),

                                                    LifestyleType.Singleton);

                Dictionary<string, object> properties = new Dictionary<string, object>();

                properties.Add("defaultFromAddress", "administrator@example.com");

                container.AddComponent("emailservice.smtp", typeof (SmtpEmailService));

     

                SmtpEmailService service = container.Resolve<SmtpEmailService>(properties);

                service.SendEmail("me@example.com", "Test email", "This is a test email");

            }

        }

    }


    So, as you can see, it's pretty simple to set the parameters on our SmtpEmailService through the use of a Dictionary<TKey, TValue> generic collection.  The rest is straightforward.

    Method Call Injection

    The Unity Application Block introduces another concept called method call injection.  Yes, that's right: instead of using constructor injection or setter injection, you can have dependencies injected through a method call.  The basic premise is that you mark a method and Unity supplies its parameters, just as it would for a constructor or setter.

    When would you use this though?  If you read the documentation, it lays out the explanation as follows:
    • You want to instantiate dependent objects automatically when you instantiate the parent object
    • You want a simple approach that makes it easy to see in the code what the dependencies are for each class 
    • The parent object requires a large number of constructors that forward to each other, making debugging and maintenance difficult 
    • The parent object constructors require a large number of parameters, especially if they are of similar types and the only way to identify them is by position 
    • You want to hide the dependent objects by not exposing them as properties 
    • You want to control which objects are injected by editing the code of the dependent object instead of the parent object or application
    Unless one of these matches your scenario, you should just go ahead and use constructor injection by default.  But let's walk through a small sample of using method injection:

    using Microsoft.Practices.Unity;

     

    namespace UnitySamples

    {

        public class SmtpEmailService : IEmailService

        {

            private string defaultFromAddress;

            private ILogger logger;

     

            [InjectionMethod]

            public void Initialize([Dependency("defaultFromAddress")] string defaultFromAddress, ILogger logger)

            {

                this.defaultFromAddress = defaultFromAddress;

                this.logger = logger;

            }

     

            public void SendEmail(string to, string subject, string body)

            {

                // Send email

                logger.Log(string.Format("Sending email from {0} to {1} with the subject {2} and body {3}", defaultFromAddress, to, subject, body));

            }

        }

    }


    The rest of our program shouldn't have to change in this scenario.  But, as you will note from above, we decorated our Initialize method with an InjectionMethodAttribute, which tells Unity to call that method and inject its parameters instead of using constructor injection.  And it works like a charm.  I haven't run into a scenario yet where I would find it useful, but I find it interesting.  If anyone has a scenario that they've run into, please let me know.

    Unity == ObjectBuilder2?

    If you've downloaded the Unity Application Block, you may have noticed that it is built on ObjectBuilder2, but also that it hides a lot of the functionality ObjectBuilder2 provides.  When you mark your classes with DependencyAttribute and the like, you'll see that there are two of them, one in Unity and one in ObjectBuilder2, so this got me curious about what else I was missing out on.

    If you've played around with Enterprise Library in the past, you have probably dealt with ObjectBuilder.  It was not the most delightful piece of code I've ever had to deal with and not very intuitive.  Brad Wilson and Scott Densmore took it upon themselves to rewrite this into a much more usable piece of code.  You can download the original and unencumbered ObjectBuilder2 from CodePlex here.

    Take a look at the CodePlex DependencyInjection project and its associated tests in Tests.CodePlex.DependencyInjection for a good idea of how well Brad and Scott did with their DI framework.  The best way to learn about any framework is to read the tests or specs (BDD style, of course).  Let's look at two ways of doing code interception with the DI framework: using code and using attributes.

                [Fact]

                public void InterceptViaAttributes()

                {

                    Recorder.Records.Clear();

                    DependencyContainer container = new DependencyContainer();

                    container.RegisterTypeMapping<ISpy, SpyInterfaceAttributes>();

     

                    ISpy obj = container.Get<ISpy>();

                    obj.InterceptedMethod();

     

                    Assert.Equal(3, Recorder.Records.Count);

                    Assert.Equal("Before Method", Recorder.Records[0]);

                    Assert.Equal("In Method", Recorder.Records[1]);

                    Assert.Equal("After Method", Recorder.Records[2]);

                }

     

                [Fact]

                public void InterceptViaCode()

                {

                    Recorder.Records.Clear();

                    DependencyContainer container = new DependencyContainer();

                    container.RegisterTypeMapping<ISpy, SpyInterface>();

                    container.InterceptInterface<SpyInterface>(typeof(ISpy).GetMethod("InterceptedMethod"),

                                                               new RecordingHandler());

     

                    ISpy obj = container.Get<ISpy>();

                    obj.InterceptedMethod();

     

                    Assert.Equal(3, Recorder.Records.Count);

                    Assert.Equal("Before Method", Recorder.Records[0]);

                    Assert.Equal("In Method", Recorder.Records[1]);

                    Assert.Equal("After Method", Recorder.Records[2]);

                }


    Ok, so you may note that we registered this against the ISpy interface.  Then we have two implementations: one with interception wired up in code and one using attributes.  Let's now inspect that code.

                public interface ISpy

                {

                    void InterceptedMethod();

                    void ThrowsException();

                }

     

                internal sealed class SpyInterface : ISpy

                {

                    public void InterceptedMethod()

                    {

                        Recorder.Records.Add("In Method");

                    }

     

                    public void ThrowsException()

                    {

                        Recorder.Records.Add("In Method");

                        throw new Exception();

                    }

                }

     

                internal sealed class SpyInterfaceAttributes : ISpy

                {

                    [InterfaceIntercept(typeof(RecordingHandler))]

                    public void InterceptedMethod()

                    {

                        Recorder.Records.Add("In Method");

                    }

     

                    public void ThrowsException()

                    {

                        throw new Exception();

                    }

                }



    As you may note, we have two methods that we're interested in: InterceptedMethod and ThrowsException.  Then we can verify through the Recorder that the interception code was in fact hit.  xUnit.net is used quite frequently throughout the code, as it is also a project by Brad and Jim Newkirk.  Overall, I'm very impressed with this code and I wish it had made more of an impact.
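
    For reference, the Recorder those specs lean on is essentially just a static list of strings that the handlers and the intercepted methods append to.  A minimal sketch is below; the real implementation lives in the CodePlex project and may differ in the details:

    using System.Collections.Generic;

    // Minimal stand-in for the Recorder used by the interception tests above.
    public static class Recorder
    {
        public static readonly List<string> Records = new List<string>();
    }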

    Chris Tavares and others from the Unity team have taken it upon themselves to redo a lot of the guts of ObjectBuilder2 as they continue to extend Unity.  For AOP, the plan is to fold the Policy Injection Application Block into the solution, whereas ObjectBuilder2 already had that functionality and did it quite well.  Some of Unity seems a little less featured than ObjectBuilder2 at this point, but that can change.  I have hopes for Unity, but ObjectBuilder2 already did a lot of what Unity claims to do, just a little bit better.

    Wrapping it Up

    That wraps up another installment of looking at IoC containers and the various frameworks associated with them.  I hope it whets your appetite not only to understand these containers, but to see how they work deep down and which one matches your programming style.  Get to know your tools just a little bit better and find out what good quality code can be.  I hope you take this to heart and see what works for you.


  • Coming to Terms with Behavior Driven Development

    A while ago, I posted about Behavior Driven Development (BDD) using NBehave, and I think I went too far into the tool without going into the thought process behind it.  I've had a series of these blog posts in my head but have been fighting writer's block trying to get them out the door.

    BDD Introduction

    Anyhow, there has been a lot of discussion on the altdotnet mailing list lately around the definition and applicability of BDD in regard to Test Driven Development (TDD).  Subsequently, this led to forming a new group on Google Groups about Behavior Driven Development.  It's great to see the community start to gain momentum and talk about it more.  I've been following Dan North, Joe Ocampo, Scott Bellware, JP Boodhoo and Dave Laribee on this for a while, but having a centralized place for that knowledge sharing has been invaluable.  I've seen too many times that there is a high noise-to-information ratio out there, and we need to clarify a few things before BDD can really take hold.

    Getting back to the subject at hand, if you're unfamiliar with BDD, you should check out Dan North's explanation here.   There are a good number of links inside to whet your appetite.  To put things succinctly, BDD aims to bridge the gap between Domain Driven Design and Test Driven Development.

    Dave Astels also has a great video about BDD on Google Video, which you can find here.  It's a great talk about how BDD differs from and evolved out of TDD.  It also delves into RSpec, the BDD framework for Ruby.  Well worth your time to check it out.

    Dave's summation of BDD comes down to the following bullet points (there's a quick code sketch of the naming idea after the list):

    • Don't talk about Units in regards to Unit Testing, instead talk about Behaviors
    • All software we write has behaviors
    • There should be no correlation between public methods on a class and test classes
    • Structure your tests around the behavior of your application
    • Put emphasis on behaviors (How do I get it to do what I want it to)
    • Get rid of state based testing and look at interactions instead
    • Get rid of tests and instead use specifications of behavior (specs)
    • Get rid of assertions and instead set up expectations
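
    To make the naming point concrete, here's a rough sketch in C# using the xUnit.net [Fact] attribute from the earlier posts.  The Account class and the scenario are entirely made up, and this only captures the naming and structure side of things; the shift from assertions to expectations needs a mocking framework or something like the SpecUnit work mentioned below.

    using Xunit;

    // Hypothetical domain class, purely for the sake of the example.
    public class Account
    {
        private decimal balance;

        public Account(decimal openingBalance)
        {
            balance = openingBalance;
        }

        public bool Withdraw(decimal amount)
        {
            if (amount > balance)
                return false;

            balance -= amount;
            return true;
        }
    }

    // The class name describes a behavior of the application, not a unit under test.
    public class When_a_user_withdraws_more_than_their_balance
    {
        [Fact]
        public void The_withdrawal_should_be_rejected()
        {
            Account account = new Account(50m);

            bool accepted = account.Withdraw(100m);

            Assert.False(accepted);
        }
    }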
    More Places To Turn

    I agree with Jimmy Bogard that you don't get from TDD to BDD in one fell swoop just by renaming your tests.  Instead, we should focus on the interactions between parts of the system and less on the physical implementation of the code.  We should also focus on the expectations inside our code.  Mocking, with Rhino Mocks or whichever mocking framework you prefer, fits in nicely here.  Brian Donahue has also taken some nice stabs at BDD here, here and here.  If you're up in the Philly area soon, Brian will be making the rounds doing BDD presentations...

    Dave Laribee has also put some thought into BDD, including a set of classes he has written to help with it that you can check out here.  And if you've been paying attention to the BDD mailing list on Google Groups, you will have noticed that Scott Bellware has a framework up on Google Code called SpecUnit, an extension to NUnit that supports BDD naming conventions inside the xUnit-style frameworks.  Very cool ideas coming out!

    Wrapping It Up

    I encourage you to join the BDD list, read the links, and learn more.  I find it's a more intuitive way of proving the behavior of your application (note that I didn't say test).  I find that it bridges the gap between Domain Driven Design and Test Driven Development quite handily, especially in regard to the ubiquitous language.  Naming and the ubiquitous language really matter when using BDD, because you can share your specs and the people you're building the software for should be able to read them as well.  It's a great bridge between user stories and the code you write.  If you need help with user stories, check out Mike Cohn's User Stories Applied book.  Anyhow, I hope you found this useful as an introduction to some of my many upcoming BDD posts.
