Archives / 2007
  • Lost in a sea of consciousness

    The ALT.NET mailing list is pretty overwhelming. A week or so ago David Laribee proposed splitting it into various groups, but that thread seems to have gone by the wayside. The list is the proverbial firehose of babble around ALT.NET (and everything else you can think of). Even longer ago came the notion of summing up the threads and pulling the choice information into a digestible form. Of course that would mean work and, well, it's much better to yap about it than to do it.

    Whilst doing my weekly ego surf, I came across a fairly new site, Alt.Net Pursefight! Of course it's anonymous, but it's a brilliant no-nonsense wrap-up of the goings-on in the list. No thread is left unturned and no person is left without exposure. So hey, if you're looking for an unbiased opinion of what's going on, check it out. It's like the Mini-Microsoft of the ALT.NET world.


  • File and File Container Wrapper Library on CodePlex

    Some time ago we were talking about file wrappers and testing these things on the ALT.NET mailing list. It's a rather boring task to test, say, a file system without actually writing files (which you might not want to do). Wouldn't it be nice to have a wrapper around a file system so a) you could forget about writing one yourself and b) you could throw something at Rhino Mocks, because you really don't want to test writing files.

    I've put together a project implementing this and it's now released on CodePlex. It's composed of interfaces (IFile and IFileContainer) that provide abstractions over whatever kind of file or file container you want. This allows you to wrap up a file system (individual files or containers of files like folders) and test them accordingly. Concrete implementations can be injected via a Dependency Injection tool and you can sit back and forget about the rigors of testing file access.
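    To give you a flavour of the idea, here's a rough sketch (note: the member names here are my shorthand for this post, not necessarily the exact signatures in the CodePlex release):

    ```csharp
    // Illustrative sketch only - these members are simplified stand-ins,
    // not necessarily the exact signatures from the CodePlex project.
    public interface IFile
    {
        string Name { get; }
        void Write(byte[] contents);
    }

    public interface IFileContainer
    {
        void AddFile(IFile file);
        bool ContainsFile(string name);
    }

    // Code that depends on the abstraction instead of System.IO can be
    // handed a Rhino Mocks stub in tests - no files ever hit the disk.
    public class ReportArchiver
    {
        private readonly IFileContainer _container;

        public ReportArchiver(IFileContainer container)
        {
            _container = container;
        }

        public void Archive(IFile report)
        {
            _container.AddFile(report);
        }
    }
    ```

    In a unit test you hand ReportArchiver a mocked IFileContainer and assert that AddFile was called; in production you inject the real file system or zip implementation.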

    There are 3 concrete implementations (found in FileStrategyLib.dll) that implement a file system (IFile), folders (IFileContainer), and zip files (IFileContainer using SharpZipLib). There's also a test project with a whopping 10 unit tests (providing 97% coverage, darn) included. More documentation and sample code is available on the CodePlex site.

    You can check out the project here on CodePlex:

    1.0 is available and released under the MIT License. It contains the binary and source distributions. I'll check the code into the source repository later tonight.

    Note that this is a very simple library consisting of 2 interfaces and about 50 lines of code. Of course, being an open source project, I encourage you to check it out and enhance it if you want. It's not the be-all, end-all system for getting all kinds of information out, but it should get you started. It was originally written for a simple system to create and add files to a folder and zip file, but it can be extended. Otherwise, consider it an example of wrapping and testing a system like a file system, something you may run into from time to time.


    Update: Source code tree is checked in and posted now.

  • Big Visible Cruise WPF Enhancement

    Having some fun with Ben Carey's Big Visible Cruise app. BVC is a WPF app that watches CruiseControl (the .NET, Ruby, and Java versions) and displays a radiator dashboard of the status of your projects. As each project builds, the indicator turns yellow, then green if it succeeds or red if it fails. It's all very cool and I jumped all over this so we could have a visible display of our projects in the office.

    Here's the default look that Ben provided:

    I submitted a request to be able to control the layout and he reciprocated with a layout option (using skins in WPF). Here's the updated layout he provided:

    I had a problem because with only a few (8 in my case) projects, the text was all goofy. The layout was all cool but I wanted something a little flashier and better to read, so some XAML magic later I came up with this:

    Here's a single button with some funky reflection:

    Okay, here's how you do it. You need to modify two files. First here's the stock LiveStatusBase.xaml file. This file is the base for displaying the bound output from the CruiseStatus for a single entry:

      <DataTemplate x:Key="SimpleStatusDataTemplate">
        <Border BorderBrush="Black" BorderThickness="1">
          <TextBlock TextAlignment="Center"
                     Background="{Binding Path=CurrentBuildStatus, Converter={StaticResource BuildStatusToColorConverter}}"
                     Text="{Binding Path=Name, Converter={StaticResource BuildNameToHumanizedNameConverter}}" />
        </Border>
      </DataTemplate>

    It's just a TextBlock bound to the data source, displaying the name of the project and using the background color for the status of the build. Here are the modifications I made to make it a little sexier:

      <DataTemplate x:Key="SimpleStatusDataTemplate">
        <Grid Margin="3">
          <Grid.BitmapEffect>
            <DropShadowBitmapEffect />
          </Grid.BitmapEffect>
          <Rectangle Opacity="1" RadiusX="9" RadiusY="9" StrokeThickness="0.35"
                     Fill="{Binding Path=CurrentBuildStatus, Converter={StaticResource BuildStatusToColorConverter}}">
            <Rectangle.Stroke>
              <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                <GradientStop Color="White" Offset="0" />
                <GradientStop Color="#666666" Offset="1" />
              </LinearGradientBrush>
            </Rectangle.Stroke>
          </Rectangle>
          <Rectangle Margin="2,2,2,0" RadiusX="9" RadiusY="9">
            <Rectangle.Fill>
              <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                <GradientStop Color="#ccffffff" Offset="0" />
                <GradientStop Color="Transparent" Offset="1" />
              </LinearGradientBrush>
            </Rectangle.Fill>
          </Rectangle>
          <Grid Margin="5">
            <TextBlock TextWrapping="Wrap" TextAlignment="Center"
                       HorizontalAlignment="Center" VerticalAlignment="Center"
                       FontSize="32" FontWeight="Bold" Padding="10,10,10,10"
                       Foreground="Black" FontFamily="Segoe Script, Verdana"
                       Text="{Binding Path=Name, Converter={StaticResource BuildNameToHumanizedNameConverter}}" />
          </Grid>
        </Grid>
      </DataTemplate>

    I added a grid. The top rectangle defines the entire area for each project (filling it in with the build status color) and the next one is the highlight (using a LinearGradientBrush) on the button. Then the TextBlock with the name of the project and its build status gets filled in.

    Now here's the stock BigVisibleCruiseWindow.xaml (the main window):

    <Border DockPanel.Dock="Bottom">
      <DockPanel LastChildFill="False">
        <TextBlock DockPanel.Dock="Left" Text="Big Visible Cruise" FontSize="16" FontWeight="Bold" Padding="10,10,10,10" Foreground="White" FontFamily="Segoe Script, Verdana" />
        <Button DockPanel.Dock="Right" Content="Options..." FontSize="9" Margin="10" IsEnabled="False" />
      </DockPanel>
    </Border>
    <Viewbox DockPanel.Dock="Top" Stretch="Fill">
      <ItemsControl ItemsSource="{Binding}" Style="{DynamicResource LiveStatusStyle}" />
    </Viewbox>


    The main window used a DockPanel and displayed some additional things (there's a button there for choosing options, but it's non-functional). Here are my changes:

    <Grid>
      <ItemsControl ItemsSource="{Binding}" Style="{DynamicResource LiveStatusStyle}" />
    </Grid>

    I simply replaced the DockPanel with a Grid.

    That's it! Feel free to experiment with different looks and feels, and maybe submit them to Ben so he can include them in a set of skins.

    Note: You need to download the code from the repository as the 0.5 release Ben put out doesn't include the skins.

  • TreeSurgeon Updates - 2005/2008 support

    Just a quick update to TreeSurgeon, as we've been working on some plans to give the tool a more flexible framework for adding new tools and to update the output for newer versions of the .NET framework.

    Donn Felker added VS2005 support so that's in right now. If you grab the latest from source control or download the latest ChangeSet from CodePlex here, you can get TreeSurgeon spitting out VS2005 and .NET 2.0 solutions. I'm just finishing up VS2008 support now and that'll be in the planned 1.2 release that we're coming out with shortly (probably by the end of the week). In addition, based on votes from the Issue Tracker (which we use as a Product Backlog) I'm looking to add MbUnit support so that will probably get into this release.


    The UI is pretty ugly and could use some graphical loving. I'm not going to get all worked up about the UI right now, but it does need something. I've been meaning to talk to Jay Flowers as he pinged me awhile back about hooking up and seeing if there's some crossover between CI Factory and TreeSurgeon. I'm not so sure, as I still have yet to get my head wrapped around CI Factory (nothing against Jay, but if I can't grok an install in 5 minutes I usually move on until I can come back and kill some time figuring out how it works). To me, CI Factory and TreeSurgeon are like Marvel vs. DC (or Mac vs. PC): I'm not sure I see the synergies, but we'll see where that goes.

    We're also looking at doing some kind of plugin pattern for creating different types of tree structures. The original structure Mike Roberts came up with is great, but as with anything, evolution happens and we move on. I personally use a modified structure that's based on Mike's but accommodates different features. JP Boodhoo has a blog entry here on his structure; again, slightly different. In addition to the updates on the tree structure, we're looking at adding better support in the generated build file (and options for it in the ugly UI) for choosing a unit testing framework and how to compile using it (straight, through NCover, or NCoverExplorer). Again, some stuff is solid and other bits are up in the air, so feel free to hook up on the forums here with your thoughts and ideas, as we're always open to driving this the way you, the community, want it to go.

    Like I said, you can grab the 2005 support right now from here and the 2008 support will be up on the server when we do the 1.2 release later this week.

    Back to the grind.

  • Game Studio 2.0, Visual Studio 2005, VisualSVN, and me

    If you're like me and dabble in everything, then you might have Visual Studio 2005 installed along with VisualSVN (a plug-in that integrates Subversion source control with Visual Studio). The latest version of XNA Game Studio (2.0) now lets you build XNA projects inside any flavor of Visual Studio. Previously you could only use C# Express Edition, which meant you couldn't use any add-ins (the license for C# Express forbids 3rd party add-ons), which meant I couldn't use ReSharper. Anyone who's watched my XNA demos knows this peeves me to no end as I stumble pressing Alt+Enter to try to resolve namespaces or Ctrl+N to find some class.

    Oh yeah, getting back to the point. If you've installed the latest Game Studio 2.0 you can now use it with Visual Studio 2005 (not 2008 yet). And if you've got the combo I mentioned installed (VS2005, GS2.0, and VisualSVN) you might see this when you create a new XNA project:


    It's a crazy bug but I tracked it down via the XNA forums. Apparently there's a conflict if you're running the combination I mentioned. Don't ask me why a source control plugin would affect a project creation template. I just use this stuff.

    Anyways, you can fix it with the latest version of VisualSVN (1.3.2). I was running 1.3.1 and having the problem; you may be in the same boat. Here's the conversation on the problem in the XNA forums; here's the bug listed on the Connect site; and here's the link to VisualSVN 1.3.2 to correct the problem. All is well in developer land again as we enter the world of ReSharper goodness sprinkled with a topping of XNA.

    Happy gaming!

  • Another Kick at the Can

    Looks like a new development community is forming, this time around XNA and game development. It uses the DotNetKicks code and has launched as a new site for game development content. There are only 5 articles submitted so far, but it looks like it's off to a good start. A good set of categories to start with, and a bit of a redesign from the typical DotNetKicks look and feel (something like a cross between Digg and DNK), but it's just starting off.

    So if you're into the game development scene, feel free to help out and see if this kicks community can grow and prosper. Check out GameDevKicks here and give it a whirl.

    More game development news to come shortly from your friendly neighborhood SharePoint-Man.

  • Soaking up the ASP.NET MVC Framework

    The ASP.NET MVC framework is out and I'm sure people will be messing around with it. I'm still not sure how much AJAX is possible right now with this so we should see some samples hopefully (I'm trying to build one right now but not getting very far).

    If this is all new to you and you're trying to get your head around the framework, here are the links to Scott Guthrie's mini-series on this epic. Recommended, no, required reading to get used to the framework, and now that it's out you can build the samples yourself.

    There are more entries coming from the big guy, including information about the HtmlHelpers and AjaxHelpers (and how to build your own) but this will get you off the ground and flying in no time.

    If you missed the link, you can grab the framework here.

  • ASP.NET MVC now available

    You've read about it on the Internet, you've seen us talking about it, and if you were at DevTeach last week and soaked up Justice's inhuman presentation (and Jeffrey's more-than-human one) on the tool, you'll know what the buzz this week is. Now you can see what the hype is all about.

    The ASP.NET MVC framework is now available here. It's part of the ASP.NET 3.5 Extensions, which not only include the MVC framework but also some new stuff for AJAX (like back button support), the ADO.NET Entity Framework, and two new ASP.NET server controls for Silverlight.

    Grab it, try it out, watch the skies for demos and tutorials and all that jazz (or read ScottGu's 20 page post on the subject which is more than anyone will ever need) and start building .NET web apps the smart way! The framework is available here for download and there are some QuickStarts that will help you get up and running here.

  • Terrarium Anyone?

    Anyone out there got a copy of the Terrarium client and server they can flip me? I'm working on something new and need to find a copy of it. It seems to have all but vanished from any Microsoft site I can find. For example, the download page is here on the WindowsClient.NET site but doesn't work. It continues to be listed as a Starter Kit for Windows Forms (it hasn't been updated since .NET 1.1) but I can't seem to track it down anywhere. If you have a copy, let me know via email; if you can send it that would be great, or I can provide a place for you to upload it to. Thanks in advance.

  • ALT.NET keeps on ticking

    I can't say I've seen a community with more spirit, enthusiasm, opinion, views, experience, concealed lizards, logging chains, and gumption than the ALT.NET community.

    Stats for the Yahoo! Groups mailing list, which only started 2 months ago on October 7th:

    • Almost 2000 posts in October (remember, that was only about 23 days of posting)
    • Over 3200 posts in November
    • Already 1300+ posts in December (and we're only 7 days in)

    Prediction (based on velocity) for December: 6250+ (yes, I actually used my Scrum Velocity tool [a spreadsheet] to calculate this). That's insane (the number of posts, not the fact that I used a spreadsheet for this blog entry).

    Anyways, keep the firehose going. We're all still trying to figure out how to consume this information without blowing our heads off. Just waiting to see if we'll get a special community award: "Most Active Yahoo! Groups List... Ever"

    PS we've been doing some updates on the Wiki site as well so feel free to drop by and be a reader or writer there too!


  • Learning the Model View Presenter Pattern

    The guys over at Patterns and Practices got it right. They've put together a package (available via a CodePlex project here) on learning and understanding the Model View Presenter (MVP) pattern. It's a kind of "mini-guidance" package, not the big behemoth you normally see from these guys, that:

    • Provides guidance on how MVP promotes testability and separation of concerns within the UI
    • Illustrates how to implement MVP with standard ASP.NET
    • Illustrates how to implement MVP with ASP.NET and the Composite Web Application Block

    The package contains full documentation on the pattern, unit tests, and source code (for both WinForms and CAB) demonstrating it. Very nice and very easy to digest! Check it out here if you're just getting started and want to see what MVP is about.
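    If you want the ten-second version of the pattern before cracking open the package, it boils down to something like this (hypothetical types for illustration, not the ones in the P&P code):

    ```csharp
    // Bare-bones MVP sketch with made-up names. The view is a dumb
    // interface; the presenter holds the logic, so it can be unit
    // tested against a fake view without spinning up ASP.NET or WinForms.
    public class Customer
    {
        public string Name { get; set; }
    }

    public interface ICustomerView
    {
        string CustomerName { set; }
    }

    public class CustomerPresenter
    {
        private readonly ICustomerView _view;

        public CustomerPresenter(ICustomerView view)
        {
            _view = view;
        }

        public void Display(Customer customer)
        {
            _view.CustomerName = customer.Name;
        }
    }
    ```

    The page or form implements the view interface and delegates to the presenter, which is where the testability comes from.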

  • TRUE, True, true, and FALSE, False, false

    I'm just in rant mode the last few days, trying to get my head wrapped around 3000 posts by Oren (who just happens to be an android and takes his orders from his dog) and 3000 posts on the altnetconf mailing list in the last 30 days (yes, 100 new messages a day, and so many blog entries to come, somehow...)

    However this is just plain rude:


    This is the XML "Intellisense" in Visual Studio 2005/2008.

    Yeah, 6 definitions of True and False.

    Let me qualify this post, which I should have done in the first place. This screenshot was for an XML file in SharePoint, and the options presented in Intellisense are completely driven by your XSD. So really my brain wasn't working yesterday when I blamed the IDE for this, as it's only as dumb as it's told to be.

    However, I guess if I was building the tool and the attribute was a boolean value I would only present two options "True" and "False" so maybe it is a deficiency of the IDE.
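    For what it's worth, if the attribute really is a boolean the schema could just declare it as one, something along these lines (a made-up fragment, not SharePoint's actual XSD):

    ```xml
    <!-- Hypothetical attribute declaration. xs:boolean accepts
         true/false (and 1/0), so Intellisense would offer two
         sensible options instead of six case variations. -->
    <xs:attribute name="Hidden" type="xs:boolean" use="optional" />
    ```

    SharePoint's schemas instead enumerate the casing variants as string values, which is where the six-option Intellisense comes from.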

    Or the XSD.

    Or SharePoint.

    Or me.

    All of the above? Yeah, probably.

  • I love Sony, I hate Sony

    You know. It's a love/hate thing. This Christmas the family decided we would get each other consoles. Hey, what's a house with 3 Xbox 360s without some more console love? So I would buy the PS3 (and a couple of choice games) for Jenn, and she would get me a Wii (plus the proverbial look-like-an-idiot games). I came through on my end, but unfortunately there is no Wii anywhere in Calgary. Anyways, so change of plan: we've bought the PS3 as our "family" present and we'll just fill each other's stockings and whatnot with other goodies.

    Now here's the kicker (and where my <rant/> begins). I own a Harmony 880. Best damn remote on the planet (well, at the time; the 1000 kicks its butt). Controls everything. If I want to watch TV, it shuts other devices off; turns on the TV; selects the correct input; and turns on the digital receiver. Switch over to the 360 and it shuts down the digital receiver; turns on the 360; and changes the input on the TV to HDMI. Brilliant.

    Then I tried to get it to work with the newly acquired PS3. That was a farce. Guess what Sony decided to do with their console (and I'm only finding this out now). It's a proprietary Bluetooth device (okay, not sure about the "proprietary" part or if it's standard Bluetooth). No Harmony remote (mine or anything else on the market) handles Bluetooth. I mean, why would they? That's a silly technology to support. Everything (and I do mean *everything*) is IR these days.

    Sheesh. So basically my "universal" remote handles everything in the house *except* that crazy Sony device. There are some crazy hacks, like buying a USB->IR thingy that requires the Sony remote, so I would probably end up dropping *another* $200 or so on top of the $400 I paid for the remote in the first place. Not something I'm going to entertain, so I guess it's get off my lazy butt and walk across the room to turn the freakin' console on. Sigh.

    Thanks Sony.

  • DevTeach Day 3 - The Mad Mexican Strikes Again!

    Poor Beth Massi. There she was, innocently giving her LINQ to XML in VB talk and up comes this.


    Well, the Mad Mexican is in Vancouver and has struck again. He burst into Beth's session screaming "I LOVE YOU BETH MASSI!".


    And just as quick as he came, he was gone.


    Best Presentation Ever.

  • DevTeach Day 3 - Justice Gray is my Hero

    Justice is just starting up his talk and wow, we're off to a great start. Justice starts with a list of things that may offend, which include women beating up men, 5 level if statements, dissing datasets, and unchained masculinity. Luckily nobody left at the start.


    We've been going for a few minutes and Justice is focusing on what the session is not (including the fact that Justice is not JP Boodhoo even though they both have an incredibly uncanny physical resemblance).

    BTW, the women kissing men came into play when he showed the MonoRail slide. I wasn't offended.


    Like David Laribee's presentation, this is the first time I've seen Justice present. I like his style as again it's relaxed and has some nice departures from the typical PowerPoint crap that we all see. It's funny, it's casual, and it's to the point. It's a good way to take in information in a contextual way that makes it fun and easy to digest. In this day and age, when we're looking at huge frameworks and technologies, there's a lot to take in. Presentations like this make it easy to eat the elephant, one bite at a time.

    Justice has the weirdest story of Steven Rockarts and his descent into drug-induced Hell. However, it all relates (in Justice's strange and demented way) to MVC. Again, top skills as a presenter here. He finally wound his way into the problems between WebForms and MVC with a compare-and-contrast example, and then the code samples came.

    The code samples are fun (and I'm assuming available on the web somewhere) and an interesting read (especially the tests PuttingSteveInDetoxShouldGetRidOfMeth() and GivingSteveAHugGivesHimABlackEye()). Give a look see and in early December when the CTP release of the MVC framework is available you'll be able to build your own samples and start working with it.

  • DevTeach Day 3 - David Laribee, The Coding Hippie

    The model is the code. That's the message David Laribee started off with his Fundamentals of Domain Driven Design. As described by Kyle Baley, David walks us through DDD with the laid back and relaxed view as only he can do it.

    "When you say design everybody has a definition which doesn't correspond with yours..."

    David posted a quote from Paul Rand, one of my favourite graphic artists (he was responsible for the Art Nouveau movie posters from the 60s). The quote is completely relevant to DDD since the client has one notion of an Invoice and you have something else. This is the foundation of the ubiquitous language. One of my favourite Agile tools is the customer, and listening to them is a key action you take during the life of building a solution for them.


    I really like David's slide approach. Mind you, he's using a MacBook Pro, and perhaps his background is Mac-like and more visually focused, but he has a nice approach to presentation. All of the slides are simple in nature and really focus on the message. This is very much the Lawrence Lessig approach to presentation, where you don't need a lot of fluff and flashy lights. For example, a picture from the movie 300 with the simple caption "Impossible Odds". Brilliant!

    It's tough trying to find the domain experts, the subject matter experts, on a project. However, you have to work with them. It's more of an art than a science to extrapolate the information you need to build systems out of your customers or experts. Of course, with 7 experts in the room you'll probably get 10 different answers as to what an Invoice is. Or an Employee. Or a Product.

    David walks through the basic patterns used in DDD (Entities, Value Objects, Aggregates, Repositories, Services).

    A couple of tips for Repositories: one Repository for each Aggregate Root. David's preference (and mine too) is to have a Customer Entity and a CustomerRepository Repository. There's a big debate out there about calling Repositories "Repositories" and you can stand on either side. Sometimes it makes more sense to name a Repository after a domain concept rather than the pattern. For example, a FileRepository might be called a Folder. I would call it Folder in the domain rather than FileRepository.
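    The two naming styles side by side (hypothetical interfaces, just to show the contrast):

    ```csharp
    // Stand-in domain types for the example.
    public class Customer { }
    public class Document { }

    // Pattern-name style: one repository per aggregate root,
    // named after the pattern itself.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    // Domain-concept style: the same role, but named in the
    // ubiquitous language instead of after the pattern.
    public interface IFolder
    {
        Document GetFile(string name);
    }
    ```

    Either way it's the same Repository pattern underneath; the question is whether the name speaks the customer's language or the developer's.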

    All in all a great presentation; however, we just got a fire alarm, which has basically put an end to the session. Well, off into the cold now with the rest of the geeks.

  • DevTeach Day 2 - XNA with Live!

    Sitting for the first session as we're going through the XNA session with Pat McGee. The session was an intro to XNA (and filled with a room full of guys who haven't done any XNA work so it was a good audience) but was fun with the people involved.


    XNA on the Xbox requires you to be connected to Xbox Live, and the Xbox networking was down, so the guys couldn't demo anything on the Xbox. However, the fallback plan was to demo some networking on the laptops they had (of course, when you come for an XNA demo you bring extra laptops and Xboxes; it's all about the hardware). Luckily it was John "The Pimp" Bristowe who saved the day and got the Xbox hooked up to the network, so the demos were good to go.

  • DevTeach Day 1 - ReSharper Madness!

    Despite the fact that Oren's presentation is about ReSharper and as an Agilist I try to follow the manifesto "Individuals and Interaction over Processes and Tools", ReSharper *is* something that developers need. It is hard as it flies in the face of the manifesto but I think there are exceptions to the rule, and this is one of them.


    During his ReSharper talk, Oren opened up the PetShop client, ran a copy of Keyboard Jedi (which was flying, of course), and started his refactoring madness. 4 keystrokes later he had converted all of the crappy public fields in the codebase to public properties.

    So we're about 8 minutes into the demo and, if you ignore the 4 minutes it took him to explain what was going on, he's converted the PetShop login code to use Castle ActiveRecord and achieved persistence ignorance in the codebase. Nice.
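    For reference, a Castle ActiveRecord mapping looks roughly like this (my own from-memory sketch, not Oren's actual demo code; the table and property names are invented):

    ```csharp
    using Castle.ActiveRecord;

    // Attributes map the class to a table so the class itself carries
    // no hand-rolled persistence plumbing - that's the point of the
    // refactoring Oren was driving at.
    [ActiveRecord("Users")]
    public class User : ActiveRecordBase<User>
    {
        [PrimaryKey]
        public int Id { get; set; }

        [Property]
        public string UserName { get; set; }
    }
    ```

    Once the framework is initialized, calls like User.FindAll() replace the hand-written data access code.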

    Again, watching Oren (even as he's doing a demo here and pausing for questions) work in ReSharper is like watching da Vinci paint the Last Supper (not that I was there or anything). It's pretty slick (and a little overwhelming), especially when he undoes 5 minutes of coding in 5 seconds just to find something, then redoes all that code again. Then again, Oren probably sees code like Neo sees the Matrix.

    In the end it took some time to extract the crappy code out of PetShop (which is no easy feat) and, a lot of refactorings and keystrokes later, the goal was accomplished. Not bad for an hour session introducing ActiveRecord to a legacy codebase. With some time and care and feeding you can do the same (maybe not at the speed of Oren, but then that would be inhuman, or JP-like).

  • DevTeach Day 1 - User Stories for You!

    Just sitting through David Laribee's talk on User Stories (well starting really). No code, just fluff. No wait, that's not right. There's no code but that's okay as we're talking User Stories however there's redemption as David isn't cracking open PowerPoint but rather using Keynote on his MacBook Pro.


    How do you sell Agile (or User Stories) to your boss if you're stuck in Waterfall land, where there's a design phase, a construction phase, a testing phase (unit testing phase), and the be-all end-all deployment phase? Agile is an alternative that gets away from the giant Gantt chart from hell, where you're predicting what you're going to do 9 months from now. Projects can't work this way; babies can.

    TDD, DDD, BDD, Patterns, etc. are all engineering practices. They're all good and needed. However if all you're doing is these practices then you're just doing a half-baked job. Let's go back to the Agile Manifesto of "Responding to change over following a plan". Agile does have planning and it can be hard, but it's the type of planning that alters the roadmap as you travel along the journey.

    Just because you're doing iterative development doesn't mean you're doing Agile development. There's a term called Scrumifall (an anti-Agile pattern) where you break up the waterfall approach into chunks. What you really want is more aligned to Analysis/Design/Code/Test/Deliver in each iteration. There is no design phase; there is no testing phase. At the end of the iteration, done is done and released to the customer (but not necessarily released to the public).

    How do you work with stories or tasks? There was a good example by Oren: "As a User I want to search the CRM system". This seemed like an epic and, for him, involved 3 developers for a month. That's pretty big, so a technique is to break this down into smaller pieces (not tasks) so they're more digestible. One rule of thumb is that if your story is longer than your iteration, then it's probably too big on its own and needs to be broken down.

    User Stories have smells too. For example if you have a story like "As a manager I want to approve/reject a document so that...". You might want to stay away from CRUD like functions in your stories. If you've got a CRUD story, you might want to keep them together and not get them too small. Small, but not too small. It's more of an art than a science here.

    David mentioned the Story Point Hall of Fame which is a cool idea. The idea is to take a real story where it really is a good example (in your organization) of a 1 point story. Or a 3 point. Or 10. Here's the idea. Put the idea up on a wall, cork board, whatever and tell everyone that this is a good, proven example of a 1 point story. Then other teams can come back and look at the wall to say "That's a good example" and model from it.

    How do you deal with inter-dependent stories? A good practice for stories is to follow the INVEST principle (Independent, Negotiable, Valuable, Estimable, Small, Testable). When your story has a dependent story, the technique to remove the dependency is to fake it out (almost like a mock for a user story, but it's more of a fake or stub). David calls these connectors: they basically get the dependency out, and you create a new story to handle the fake-out.

  • DevTeach Day 1 - Let the Games Begin!

    I'm not focusing very well today. I've been staring at the DevTeach schedule for about 10 minutes and still not sure what I'm seeing. No, I'm not being figurative here. I mean I've literally been staring at the schedule for 10 minutes and can't make heads or tails out of where I want to go. I also don't know how I could be a bigger fan of Dave Woods. I'm not sure if it's the lack of sleep or the inability to read right now. It's certainly not the clarity of the schedule itself. That's pretty clear and straightforward.

    The claw! It's all in the claw!

    Anywho, good news (or bad news if you were expecting to see someone else at DevTeach). I'm filling in a missing slot tomorrow in the Server Track. James Kovacs and I are going to pair up and do a session on source control best practices. Here's the abstract (cut me some slack as I wrote this at 3:05 last night):

    Do you use source control? Does it work for you or do you work for it? Join Bil Simser and James Kovacs as we explore the wonderful world of source control and how it makes your source code sing and dance and sometimes do tricks. We’ll explore the ABCs of setting up your initial tree, managing code branches, dealing with evil merge scenarios, multiple users, conflicts, all the way to scaling up source control to large teams, integration with other tools, and generally making your life easier when dealing with this precious commodity in your Enterprise. In this session you’ll take away some best practices, tips and tricks, and new techniques you can bring to your teams.

    Should be a blast. James and I are in the Port Hardy room at 3PM so check it out if that's your thing.


    Off to lunch, back later with some more stuff as the afternoon is full of Agile. David Laribee is doing a session on user stories, James Kovacs follows up with his PI Domain Model speak, and the day (for me) wraps up with Oren and ReSharper. Just hoping my eyes will be able to keep up with Oren and his ReSharper Kung-fu.

    Also tomorrow should be cool as I'm going to be on the Agile Q&A session hosted by David Laribee so join Jeremy Miller, James Kovacs, myself, and others as we talk ALT.NET for an hour or so.

    Speaking of ALT.NET, we're looking to get some Rebel Alliance faction going Thursday night so ping me if you're down here (up here?) in Vancouver and we can get together for an informal ALT.NET chat.

  • DevTeach Day 0 - Vancouver Virgin

    Well, I'm not really a Vancouver Virgin. I've been here a few times. Never for a conference though. And this is the first time DevTeach hits Vancouver so hey, the title sticks and it's my blog so deal.

    Here we are again kids, first day of the conference and man it's been a day. It's 3:22 when I'm starting the blog and we've been going all night long. This poor old tired body can't take much more and I think I'm going to be skipping the 7:30 breakfast call tomorrow morning. I've already put the Do Not Disturb sign on the door and won't be getting up until Thursday. Unfortunately other MVPs were a little less happy-go-lucky than I was and fell by the wayside throughout the night. More on that later.

    So sit back for the next few days and again live vicariously through Unka Bil's blog, a train wreck full of fun, adventure, and photos that should have been confiscated at the bar.

    I did arrive in Vancouver pretty uneventfully. There was a Santa Claus parade during the day so of course traffic was a gong show. The only redeeming quality was the fact that I could harass the locals when they asked me questions I couldn't answer.


    Outside of the local Subway a couple asked me what all the commotion was about and why all the traffic delays. I had a hard time swallowing that locals wouldn't know about the parade, especially since they had parade bags and one of them had a parade scarf. This of course brought out the insane Calgary guy in me with the only response I could think of that was appropriate.

    "It's aliens! They're taking over the downtown core!! Run for your lives."

    At which point I tore off down the street and around the corner. Not sure if they followed but later at the market they were nowhere to be seen so I assume they found their own evacuation route.

    Dinner with the MVPs

    It was a great dinner at Steamworks with Microsoft, the MVPs (about 40 of us), and the user group leaders from all around (Vancouver, Victoria, Edmonton, and even Saskatoon!). Of course, as I turned around and checked a few tables out, who was the only person at a table with a laptop? Oren.


    No doubt masterminding another new Rhino Mocks feature that we'll all hear about tomorrow (or is it yesterday? I always get that confused).

    The other event for the night was the quote that almost had me spewing my drink at dinner. Of course it appeared courtesy of Justice as we were talking about and exchanging business cards.

    "If my business card was a penis it would be 15,000 feet long!"

    Yeah, it was going to be a long night...

    Party with Palermo


    Jeff, as usual, kicked off an awesome party. Looked like around 100 people crammed into a small room but then I was never good at guessing numbers. Books were given away, hats were thrown into the crowd (literally), and this lucky winner walked off with a ticket to DevTeach after picking a number between 1 and 1000.


    During the course of the party, much rejoicing went on. Here I caught Kyle and Justice together although I'm still not convinced they're separate entities:


    Beth Massi, ex-MVP now softie was out and about with the guys having a great time:


    It seems everyone at the party was somehow affiliated with the Justice Gray Fan Club. In fact, I tracked down this little fellow who's not only the president, but he's also a member:


    Our own Justice Gray won a copy of JetBrains ReSharper. Now we think it'll make him a better developer, but we can only hope.


    David Laribee explains the finer workings of Domain Driven Design to Jeremy Miller, while James Kovacs and our own Canadian MVP guy Sasha look on.


    Shooters Don't Always Stay Down

    Just ask our Igloo coder, coming off his worldwide tour. He wasn't looking too good. Luckily I don't have a wife factor I have to deal with when it comes to publishing incriminating photos on the net or talking bad about him and his evil whiskey ways. Pictures speak louder than words.


    Donald, dude, whiskey and shooter don't mix.

    What Lies Ahead

    Who knows? It's late.

    We polished off the bar then headed up Robson Street in search of food, which was delivered unto us in the form of Earls. Ahh, bless Earls and their burgers and, well, food stuff. Along the way we sort of picked up Greg Young, who schooled us on DDD (as he should), and met the lovely and talented Dave the Chef and Emma the Social Butterfly (who's aspiring to work in Client Service). Good food, good conversation, and good times.

    So it's late and most geeks are in their beds sleeping away. Others are furiously hammering on their keyboard (no, not *that* hammering) trying to get their presentations "just right". Oren is probably still jet lagged and re-writing Rhino Mocks to incorporate .NET 3.5 features. And the rest of us, well, we're just trying to make it through to the next day...

    For the rest of the trip, you can view my entire Flickr set of DevTeach 2007 Vancouver here.


    I'll be blogging from the floor during sessions and trying to keep things moving here at Fear and Loathing Central. I'll also be outfitted with my new Eye-Fi card. This is an uber-cool (geekwise) SD card that you set up with a network to upload to Flickr (or some other service, I chose Flickr) as you take the pictures. Yup, it means mere moments after I snap a picture on the DevTeach floor it'll be in your hot little hands courtesy of Flickr and a $100 SD card. Very slick.

  • So who would you cast?

    I just finished watching my DVD of Red vs. Blue Season 5. I've gone through all 100 episodes of the hit internet series and it's a blast to watch. From the first moments of "why are we here?" to the chuba-thingy/puma debate to 1,000 copies of Church trying to do the right thing, the series is one of my favorites.

    I was thinking this morning with all the brouhaha about the now defunct Halo movie, what if RvB was a live action flick? Of course everyone running around in full Spartan armor isn't any fun, but who would you want behind the masks? What do you think would be a good set of current actors to play our beloved Rooster Teeth characters? Here are my picks:

    • Church: The leading man of course is central to the show. Maybe Nicolas Cage could do it, but I would probably go for someone like Colin Farrell or Hugh Jackman.
    • Doughnut: Doughnut is a hard one to nail down, but I might go for Jude Law or Matt Damon here.
    • Tex: Not my first choice, but Angelina Jolie might work here. Can be a bit of a tomboy but hot too (even if she is whacko).
    • Sarge: You need a rough and tough guy for this and the first person I thought of was Tommy Lee Jones, although (except for his age) Clint Eastwood could nail this.
    • Caboose: This was tough as he has to be dumb but believable. Maybe Will Ferrell, although he might be over the top and you might want to slot in someone like Heath Ledger.
    • Grif: I thought Mark Wahlberg might make a good Grif here with his off-the-cuff remarks and attitude.
    • Doc/O'Malley: John Lithgow as he can play a crazy evil mastermind, but he's not really military grade. I think Johnny Depp could pull this one off with flying colours, or even John Travolta.

    I didn't include the whole cast here, but feel free to leave your choices in the comments.

  • What's an ALT.NET Girl to do...

    I ran into a lot of problems this week with VS2008 as I was trying to get my machine up to snuff for my geekSpeak on Wednesday. VS2008 wouldn't run properly last week and I couldn't do any WPF demos. So I uninstalled 2008 thinking I could re-install it. The re-install went even worse and crashed all the time. So I uninstalled both VS2008 *and* VS2005 then re-installed 2008. After a few days of installing/uninstalling/re-installing, I finally got something up and limping and the demo went off without a hitch (I think VS only crashed once on me).

    So now I'm on the precipice of re-imaging my system to get it back to some kind of normalcy, and I turn to you, kind and gentle reader.

    What should Bil do:

    1. Re-install XP, spend a few hours updating service packs and hotfixes, knowing that it'll work and I'll be back where I am now but a little faster (always is after a re-install) and all my software will work.
    2. Install Vista Ultimate, hope to heck I can get all my tools working, and watch my Core 2 Duo crawl to sloth-like speed.
    3. Take a chance on installing Windows 2003 Server with no network or video drivers in the hopes that IJW
    4. Install Windows Server 2008 RC0, because you couldn't get enough of Vista on the desktop... now you can get it on your servers!
    5. Break down and convince my wife that buying a new MacBook Pro is a good thing (because she'll get my current Inspiron 9400 which is faster and better than her Inspiron 6000)

    Decisions, decisions.

  • WPF geekSpeak Webcast

    Thanks to everyone who attended my first (but not last) geekSpeak webcast yesterday. We had an awesome turnout and a lot of great questions and interaction.

    I'll be posting the source code to the shell application with all the demos I used (including the 5 or 6 demos we didn't get to) after I add some comments and instructions to make the code a little more tutorial-like rather than just a brain dump of the end result. I'm still waiting for the geekSpeak guys to put together the webcast in a recording so once that's done I'll put the code up with a link on Channel 9 to the webcast (you won't have to register for it, you can just watch/download it). I'll also include my own answers to the questions asked during the webcast as I have a capture of those and a few things were skipped over or missed during the session.

    Thanks again and here's to more geekSpeak sessions in the future!

    PS And no, I wasn't wearing any pants during the session.

    PPS The photo of me used was taken in Florida during a DevConnections I presented at last year so that's not snow behind me but rather it was white sand.

  • Everything You Always Wanted To Know About MS-MVC * But Were Afraid To Ask

    Scott Guthrie, bless his heart, has posted one of the longest and most in-depth blog entries I've seen from him in a long time (and most of his tend to be long and full of great nuggets).

    This is everything you need to know about the new MVC framework that is coming out from his group soon as an alternative to WebForms (not a replacement). It walks through a typical storefront example showcasing how the MVC does its thing. It's a nice piece of work although you might want to read his overview post which will get you familiar with MVC.

    For the record, his post has 5,648 words; 34,768 characters; 146 paragraphs; has 177 comments (so far); and is about 32 pages long.

    Now that's a blog post!

  • WPF geekSpeak Webcast Wednesday Reminder

    Just a reminder that I'll be doing my first geekSpeak webcast Wednesday on WPF. I'm calling the talk "Tricks of the WPF Programming Gurus". We'll wallow through WPF and talk about what it is and how to use it, dive into code (after about 5 minutes of the blathering) and then do some code, write some more code, look at some code, and finally check out some more code. In short, it's going to be an hour and a half of code, code, and more code. Should be fun and we'll see where things go. The talk starts at 12pm Pacific Time, and it's touted as level 200. I guess I'll keep it at level 200 although I might go crazy and kick it up a notch to level 1200 (promise, no assembly code will be harmed in the making of this screencast) so who knows. You can register via the attendee URL right here right now. Be there or be WinForms based for the rest of your natural life.

  • Two things...

    Two things I learned this morning and it's not even 7AM yet.

    1. You can open up an image in Paint.NET in the File Open dialog by specifying a URL to an image on the internet. I'm assuming this is nothing new and maybe any File Open dialog can do this (not sure) but it works in Paint.NET. I was opening a file and thought I had copied the local path name to the clipboard. Instead I had the URL to it on a website. So I let it go and do its thing and lo and behold it brought the image down and opened it up for me in Paint.NET. Cool.
    2. Visual Studio (2005) holds a reference to your solution files even if you select File | Close Solution. This has bugged me for a while: I was re-syncing my local folder with what was in TFS and needed to blow away my local directory. I selected File | Close Solution and waited a bit, then proceeded to delete the files. No luck, so I ran a Sysinternals tool to see what had the files locked and up it popped telling me that devenv.exe had a hold on them. No matter what I did I couldn't tell Visual Studio to let my files go (even opening another solution). So the only way to delete the directory of a project you've opened is to shut Visual Studio down. Very annoying to say the least.

    Wonder what the rest of the day holds?

  • Party with Palermo losing steam?

    I was just over at the Party with Palermo site for DevTeach Vancouver that's coming up in a few weeks. Jeff's banner lists 250 attendees (approx) but I'm only seeing a couple dozen names there. C'mon people. If you're going to be at DevTeach you must attend this party. Heck, if you're just in the Vancouver area why not come (if you're a geek and can tell me the 3rd parameter to System.IO.Compression.GZipStream.BeginRead())?

    Anyway, check out the site, sign up and let everyone know you'll be there. We'll even have Kyle and Justice together again for the first time. You just can't miss that!

    See you there!

  • Learning to live with the squiggly line

    My name is Bil Simser, and I'm a ReSharpaholic.

    There. I said it. I'm addicted to ReSharper in a very heroin-like dependency way. I refuse to work without it. Yes, I will actually turn down contracts if the firm doesn't provide or forbids me from using my own copy of R#. My value to a customer is providing the maximum productivity that I can, and that productivity comes from using this tool.

    I know it seems wrong to depend on a tool like this but it's reality. The way I code works perfectly within the way R# aids me in being productive. It grants me a fluid motion to refactoring and code efficiency and helps me get into a groove when I code. A groove that is both productive and efficient and fun. That keeps me going throughout the day. No longer do I have to worry about namespaces, or perform manual acts of extracting interfaces, pushing methods up to base classes, or creating classes. I don't get bogged down finding classes or files and I've almost abandoned my mouse.

    However, being so dependent on it, I'm now at a crossroads. Visual Studio 2008 RTMs later this month and I was looking forward to using it. It's faster and seemingly better than its 2005 counterpart and works well (at least throughout all the beta testing we've done on it). R# provides a version for VS2008, however it's lacking some major compatibility not with Visual Studio itself but rather with the new C# 3.0 language features. This means that if I choose to move to VS2008 and R# 3.x I have to live with squiggly lines in my code whenever I'm writing LINQ, lambda expressions, or extension methods (to name a few examples).

    What are these squiggly lines you ask? Take this screenshot for example:

    Here you see things like "lot" underlined with a squiggly line (I'm sure there's a more technical term for this) indicating there's an error of some kind. Sometimes it's as simple as a syntax error, other times it's the wrong type being passed to a method. This helps tremendously combined with the right gutter indicator that shows there's something amiss in your code. You can immediately jump to the spot and fix it. This is insanely useful as I like my code clean. Squiggly lines are dirty to me and mean I have to do something (and without them, how would you know something was wrong until compile time, maybe?).

    Unfortunately things like extension methods in C# 3.0 are alien to ReSharper, and will be for some time to come. Extension methods allow you to extend a class (any class, including string or object or your own) and enhance it. For example, here's some code from Scott Guthrie that extends System.Object by adding a method called "In" to it:
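    The screenshot of that code didn't survive the archive, but reconstructed from the "In(this object o, IEnumerable c)" signature ReSharper complains about, it looked roughly like this (the containing class name is my placeholder, not necessarily ScottGu's):

    ```csharp
    using System.Collections;

    // An extension method on System.Object: In() returns true if the
    // object appears in the given collection. The class name is a
    // placeholder; any static class will do.
    public static class ObjectExtensions
    {
        public static bool In(this object o, IEnumerable c)
        {
            foreach (object item in c)
            {
                if (item.Equals(o))
                    return true;
            }
            return false;
        }
    }
    ```

    Because the first parameter is `this object o`, the compiler lets you call `something.In(someCollection)` on anything at all, which is exactly the construct R# 3.x flags as a syntax error.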

    This allows me to write code for example like checking if a particular ASP.NET control is within a container control collection:
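    That screenshot is gone too, but the call site would have looked something like this (the control and panel names are mine, and this assumes the In() extension method from ScottGu's snippet is in scope):

    ```csharp
    using System.Web.UI.WebControls;

    // Hypothetical check: does a TextBox live inside a Panel's control
    // collection? The names are invented for illustration.
    TextBox nameBox = new TextBox();
    Panel sidebar = new Panel();
    sidebar.Controls.Add(nameBox);

    bool inSidebar = nameBox.In(sidebar.Controls);      // true
    bool elsewhere = nameBox.In(new Panel().Controls);  // false
    ```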

    However, in the declaration of the "In" extension method, ReSharper is going to show its squiggly lines as it doesn't understand "In(this object o, IEnumerable c)". It'll consider this a syntax error and indicate it in the right gutter column. This seems dirty to me. It makes me feel like I've got something wrong in my code and I need to react, but there's nothing wrong here. These aren't the syntax errors you're looking for. Move along.

    So it's a toss-up between using VS2008 and 3.0 projects and living with squiggly lines, or not. On the flip side, I could stick with 2005 and just wait until JetBrains comes out with version 4.0 early in 2008. The other alternative is to use VS2008 but not write 3.0 code (or the features that ReSharper doesn't support yet). However, that in essence makes me feel like I'm restricting myself because of a tool. Not a corner I like to be in, but I seem to have painted myself there.

    While I understand we're talking about two different companies on two different schedules here, it bugs me. Maybe you'll say "Suck it up, Princess" and deal with it. After all, the code will compile; it's really only the UX inside the IDE. However, to me that's important too as it makes me stop and wonder: did I miss something? Do I need to stop and do something extra here? Maybe it's not the same for you, but for me it's a question I'm still pondering. YMMV.

    Update: Ilya Ryzhenkov, the .NET Tools Product Manager at JetBrains, posted an update in the ALT.NET mailing list with some information about ReSharper 4 and C# 3.0. While this isn't "official" information, it might be useful in making decisions:

    "It is not possible to convert C#-2.0 "brains" of ReSharper to semi-C#-3.0. Also, it doesn't make any sense. Not only new constructs appeared in new version of C#. There are many changes in type inference, symbol binding and other very internal things. Even if we just parse the syntax, we will not be able to resolve overloads because we can't infer the type from, say, lambda. It will still be all red code. Refactorings and other features will break all the time, because they should be updated to support new constructs. Feature as simple as Expand Selection should be updated to work consistently.

    Even now, when we have ReSharper 4 in development and it parses almost all new constructs (we use C# 3.0 actively), and even (almost) fully supports "var", extension methods, object and collection initializers, implicitly typed arrays, autoproperties - it is still a pain to use ReSharper with C# 3.0. We are working hard to make it consistent and support language in its full power, but it's going to take time.

    Also note, that EAP builds will be available pretty soon, December (optimistic) or January (more realistic). So you will not have to wait for 4-6 months, if you are going to jump pre-release wagon and provide us with your invaluable feedback."

  • Not Attending TechEd Barcelona this week

    I'm here in Calgary this week, not here:


    No. I will not be attending TechEd Barcelona. In not doing this, I will not be checking out the Agbar tower at night:


    Nor will I be taking a tour of the Barcelona Palace:


    And I definitely will not be visiting Casa Milà (La Pedrera).


    If you are at TechEd Barcelona, feel free to check out these sites. I will however be working in Calgary, Alberta where the weather is basically crap and nothing like Barcelona.


    Just wanted to make this clear for everyone who was wondering.

  • SharePointPedia goes live

    The news came a little early as it was originally slated for public release on Monday, but the genie is out of the bottle now. SharePointPedia is a MOSS-based website that Microsoft is running for SharePoint content. It's not WikiPedia with the SharePoint name, so don't expect to see content blocks here with all kinds of information. Instead it's more like an information portal that takes you to other places, places with SharePoint content (blogs, whitepapers, etc.), and the content is submitted and recommended by you. All you need is a Windows Live ID and once you're online, you can add your own content or recommend others. This is very much along the lines of a Digg-like community for SharePoint.

    I know, I know. Some of you may be saying "But Bil, isn't that what SharePointKicks was all about?" (and we know where that site is today). Well, yes and no. It is all links to content. Rated, categorized, and vetted content. However it goes beyond what SharePointKicks offered. Content can be related to other content, recommendations bubble up to the top of the list, and users who submit the content are featured so you can see who's talking about what.

    In any case, it's new and spiffy and is yet another showcase of what SharePoint can do (unfortunately no, the source code is not available). Check it out today as a contributor, a reader, or both.

  • Being a Better Presenter

    I've talked previously on great presentations and presenters. For some of us, we live in two worlds. One is the eat-sleep-breathe code world, and the other is the present-to-the-masses one. I think I'm a better developer than I am a presenter, but I try to come up with good presentations, be flexible and friendly with my style, and above all provide value for the hard-earned time you're sitting watching/listening to me. On that front, here are some tips for those looking to advance their presentation skills.

    • I feel that the majority of speakers make the common first mistake of hitting the lights and sinking everyone into a semi-comatose state. Whenever possible I suggest presenting with the lights on and tweaking slide decks to accommodate.
    • Slide decks are the #1 flaw I see in almost all presentations and I've personally tried not to use them. People do not, can not, and will not remember pages and pages of code splattered into a presentation; 2 or 3 key words with a slick visual to invoke a reaction will almost always work better (the possible exception being web-based/virtual presentations). When I do have to resort to them, I apply a 7-7-7 rule: 7 slides, 7 points, 7 words in each point. This gets the point across and lets me tell the story I want to tell without repeating what's on screen.
    • More confidence and attention to the presenter is always a huge plus.  The presenter should be the first point of attention, the slide deck is just a support blanket when you really need to resort to it. I was told once that a "good presentation" should be almost useless to someone (without the presenter).  You almost always see people looking for the "slide deck for such and such" which always amuses me when I think of the previous statement.  Again the exception being web/virtual presentations.
    • Better story telling. People will always respond to a good story; any time it's possible to tell a couple of 5 minute stories that are funny or interesting and in some way tie into what you're talking about, I'd say go for it. In my mind good story telling goes hand in hand with a good presenter.
    • I point almost everyone I know who is serious about presenting, and getting better at presenting, to: - awesome resource for tips/tricks. (Other obvious favorites - Seth Godin, Guy Kawasaki.)
    • Know your audience going in when possible - but be ready to change gears if you see eyes glazing over.  I've been mid-presentation on the finer points of some tool when I know I've lost the audience so a shift is needed.
    • Practice, Practice and more Practice - No surprise here.
    • Learn from others. Often I'm attending user group meetings or conferences simply for learning and picking up presentation tips from people. Seeing how others present topics and discuss ideas helps me be a better presenter. You might want to check out Al Gore's traveling presentation (An Inconvenient Truth) - not so much for the environmental education, but it's possibly one of the most compelling presentations in history. It's out on DVD.

  • Geek Toys: Digital Camera Wi-Fi!

    This is totally cool. An SD card with a built-in Wi-Fi adapter for uploading your pics to Flickr, Facebook, and other sites.

    About a month ago John Bristowe showed me some Nikon camera that had Wi-Fi capabilities so you could upload your pics directly from the camera to Flickr. I thought it was cool so if I was say at Party with Palermo, I could snap some pics and upload them live. Pretty slick. However it meant buying a new camera which wasn't cool since I just dropped $800 on a new digital Nikon. Also the camera really didn't take great pictures so that was a bit of an issue, but the concept was what I wanted.

    Now this comes into my inbox. I just ordered one so I'll let you know how it works when I get it. If you have an SD card camera and are looking for a way to get your pics uploaded without the need for your laptop, this is the way to go. The card is just an SD card and comes with 2GB of storage, which is decent. Then after you set it up, you just snap your pics and the SD card will upload them to the site you configure it for whenever your camera is in range of the wireless network. Slick!

    You can check out the webpage here.

  • Regions == Evil

    I had an email thread at work with a bunch of the guys on regions and this is the consensus we've generally come to (some of the statements are theirs, not mine, just paraphrased here). I was once a convert who liked regions. I enjoyed them. They made me happy. In all my code I would do something like this:

    class UserCondition : IActionCondition
    {
        #region Fields

        private int _condition;

        #endregion

        #region Constructors

        public UserCondition(int _condition)
        {
            this._condition = _condition;
        }

        #endregion

        #region Properties

        public int Condition
        {
            get { return _condition; }
            set { _condition = value; }
        }

        #endregion

        #region Public Methods

        public bool CanExecute(string action, WorkItem context, object caller, object target)
        {
            string userName = Thread.CurrentPrincipal.Identity.Name;
            return userName.ToLower().Equals("domain\\joeuser");
        }

        #endregion
    }
    That felt good and organized and neat (almost in an OCD way).

    However I have seen the errors of my ways, as others have before me. Regions are evil. Pure and simple. The absolute incarnate concentrated type of evil that only the Time Bandits would fear and not the watered down, garden variety kind of evil.

    They're great for hiding the annoying details of an IConvertible implementation or designer generated code (when it's not already in a partial class). But I often create methods on the fly using ReSharper and it is not going to look for the correct region to place the method. So having everything separated into regions actually slows me down because I have to find where to put the method.

    ReSharper is your friend. Ctrl+F12 is all you need to find stuff in a file. Using ReSharper's type member layout to enforce code layout in a file, you can get consistency across all teams so one code file isn't organized vastly differently (say that 3 times fast) than any other project's. With the custom pattern on, formatting puts all the members in the right places and keeps layout and code style somewhat consistent across teams. It makes diff comparisons and merging a more pleasant experience.

    Going forward, we're purging regions from all projects and have forbidden their use in new code. YMMV.

  • The CardSpace Value Prop

    Last night I took in the Calgary .NET User Group presentation on CardSpace. It was great to see Michele Leroux Bustamante again as she's an awesome presenter. CardSpace is a relatively new technology but basically it makes identity easy for end users. You can find out more about CardSpace in general here.

    The thing that I'm not sure about is the value prop for this. Currently CS is really only happening behind the firewall. There's very little penetration in the "real world" so we're not seeing CS logins on Visa sites, PayPal, or even Facebook or Yahoo Groups. That was the one thing I noticed when I looked at CardSpace a while back. I thought it was neat and perhaps solved a few problems (mainly around phishing and issues of users entering ids and passwords in clear text) but there was very little implementation out there.

    Discussing it last night Michele brought up an example of how she's using it behind the firewall with a client. Essentially they're looking for a SSO layer that allows them to identify users across multiple disparate data sources, and remove the issue of managing identity on each instance of a data source. If you have SharePoint installed (2007 but 2003 will work to a certain extent) and combine their SSO services with the BDC (Business Data Catalog) you essentially get something like this. However there's the issue of tracking so if you're interested in who actually logged the request this might take some work, whereas CardSpace would help solve this problem.

    However, behind the firewall I have a problem with CardSpace in general. I already know the user. Sure, sure. In heterogeneous environments where my users are on Mac, Linux, and PC I have problems. I also might have problems in environments where I have corporate employees that I can identify (say via Active Directory) but non-employees (contractors or external customers) that I can't. Do I force my non-employees to be members of Active Directory? Do I create a cross-trust to other forests or domains to identify them? How do I handle federated identity in the enterprise? Maybe this is the place where CardSpace helps.

    Outside the firewall I see there's benefit. There's benefit for managed cards for sure, so when Visa, Mastercard, and PayPal come on board (and I'm sure they will) it will make signing into sites certainly easier, and perhaps a little more secure (I'm still debating if there's more security from CS given SSL-enabled sites when you're doing banking, but there are other advantages). Certainly for managed cards issued by banks and other places, I'm all over that like white on rice. Everyone does it but probably won't admit it: we use fairly weak passwords and probably share the same passwords across multiple institutions. With something like CardSpace in place, managing passwords becomes a non-issue (the card is my password, verify me) and really all I have to do is manage my cards, much like how I manage my credit cards in my wallet now. For the geek type, we know that an SSL-enabled site, a valid URL, etc. all give us a warm and fuzzy that we can enter our credit card info on ThinkGeek and not expect charges to appear at Phil's House of Bondage. For the non-geeks out there, having them select a card from a friendly UI knowing that it's pretty safe makes me feel better (and cuts down on calls from the father-in-law about this PayPal site he's never been to).

    Of course there's still the roaming issue that needs to be addressed, but that's a different problem. The poor man's solution right now is exporting cards and importing them around (or carrying them around on, say, a fingerprint-enabled USB drive), however it's not a happy-happy-joy-joy scenario for someone like Jason Bourne who just wants to pop into an Internet cafe and log on (okay, bad example as Jason really doesn't want to be identified, but you get the idea).

    All in all, CardSpace looks fun and secure and will help solve some problems of both external sites and internet identity, as well as help deal with issues of FedSpace and complex corporate user identification. It's not the silver bullet (has there ever been one for anything?) but it's certainly an enabler. I'm planning on doing some cool stuff with it in the SharePoint space, so stay tuned on this towards the end of the year. There's also some neat stuff that I think I'm going to do on a personal level, like enabling some of my own sites with it. Even though it's not widespread, it is out there, and it's easy enough for you to just create a personal card to save yourself the hassle of tracking user ids and passwords all over.

    Things I learned last night at the session (maybe not completely related to CardSpace):

    • Michele is Canadian! That just rocks.
    • The iPhone really kicks the llama's butt (thanks JP for the look-see) but I'm not sure if I'm going to ditch my CrackBerry for one just yet.
    • Michele used to work at Canadian Pacific Railway about 4 years before I started in '96 (this was in Michele's pre-developer days).
    • CardSpace is simple to implement (web based or services) but does take some code to get tokens and decrypt info. This is all code Michele provides in her demos, but it will eventually make its way into the core platform.
    • Garrett Serack wrote the identification code for CardSpace and worked at CP with me for a short time (he's now at Microsoft in the Open Source space).
    • It's a small freakin' world
    • I finally learned how to properly pronounce Michele's full name (and in French too!)

    In any case, an interesting technology to track and some cool stuff for developers to try out. Check out CardSpace for yourself and be sure to check out Michele's demos and code as it's one of the few resources out there today for playing around.

  • 42

    The answer to life, the universe, and everything.

    Also the number you'll get if you cut me open today and count the rings.

  • Going Microsoft

    We talk a lot about the ALT.NET community and how it's not anti-Microsoft but rather alternatives to it (and better software development in general). Rather than blabbering on about DataSets over Web Services we talk about objects and domains and cool things that go bump in the night. I wonder as I look at my RSS feeds in the last few months how much of the "alternative" crowd are making more common bedfellows in Redmond?

    Look at the history here. John Lam, Scott Hanselman, Phil Haack, Don Box, and now Rob Conery (and there are probably dozens of others I missed, sorry). MVPs or community leaders from another world, all bringing their super-brains to Microsoft to fold into the collective. Hey, don't get me wrong. I think it's a good thing. A great thing. It just seems all the cool kids are being gobbled up by the Blue Monster and I have to ask: where is this all heading? At this rate of adoption, what will Microsoft look like in 5 years?

    Like I said, this isn't a bad thing by far. I just wonder if Roy, Oren, Scott B., Jeremy, David, and JP (to name a few) are slated for acquisition? If there's a group working next to the software acquisition guys, looking at people and playing "what-if" scenarios with humans. I'm sure Scott would turn over in his grave before that happened, but hey, Microsoft isn't a bad place to work. It's actually a positive experience, a great environment, and recently it seems to have been grabbing some of the top talent on the street. So where does it go from here?

    How "alternative" is it if ALT.NET becomes the norm at the place that spawned the original term? Should be an interesting year. Or two. Or three. Or ten.

  • Taming the ActionCatalog in SCSF

    The Smart Client Software Factory provides additional capability above and beyond what CAB (the Composite UI Application Block) offers. A little known gem is the ActionCatalog, which can ease your pain in security trimming your application.

    For example, suppose you have a system where you want to hide a menu item from people that don't have access to it. This is pretty typical and generally ends up with you scattering your code with if(User.IsInRole("Administrator")) statements, which can get pretty ugly real quick. The ActionCatalog in SCSF helps you avoid this.

    Here's a typical example. I've created a new Business Module and in the ModuleController I'm extending the menu by adding items to it:

    public class ModuleController : WorkItemController
    {
        public override void Run()
        {
            ExtendMenu();
        }

        private void ExtendMenu()
        {
            ToolStripMenuItem conditionalMenu = new ToolStripMenuItem("Conditional Code");

            if (canManageUsers())
            {
                conditionalMenu.DropDownItems.Add(new ToolStripMenuItem("Manage Users"));
            }

            if (canManageAdministrators())
            {
                conditionalMenu.DropDownItems.Add(new ToolStripMenuItem("Manage Administrators"));
            }
        }

        private bool canManageAdministrators()
        {
            string userName = Thread.CurrentPrincipal.Identity.Name;
            return
                Thread.CurrentPrincipal.Identity.IsAuthenticated &&
                userName.ToLower().Equals("domain\\admin");
        }

        private bool canManageUsers()
        {
            string userName = Thread.CurrentPrincipal.Identity.Name;
            return
                Thread.CurrentPrincipal.Identity.IsAuthenticated &&
                userName.ToLower().Equals("domain\\joeuser");
        }
    }

    For each menu item I want to add, I make a call to a method to check if the user has access or not. In the example above I'm checking two conditions. First the user has to be authenticated to the domain, then for each specific menu item I'm checking another condition (in this case comparing the user name, although I could do something like checking whether they're in a domain group or not).

    Despite the fact that I could do a *little* bit of refactoring here, it's still ugly. I could for example extract the duplicate code on checking to see if the user is authenticated then do my specific compares. Another thing I could do is call out to a security service (say something that wraps AzMan or maybe the ASP.NET Membership Provider) to get back a conditional true/false on the users access. However with this approach I'm still stuck with these conditional statements and no matter what I do, my code smells.
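    For the record, that small extraction might look something like this (a hypothetical helper just to show the idea; it tidies things up, but the conditionals and the smell are still there):

```csharp
using System;
using System.Security.Principal;
using System.Threading;

public class MenuSecurity
{
    // Hypothetical helper: the duplicated "authenticated, then compare
    // the user name" check pulled into one place.
    public static bool IsAuthenticatedAs(string expectedUser)
    {
        IIdentity identity = Thread.CurrentPrincipal.Identity;
        return identity.IsAuthenticated &&
               identity.Name.ToLower().Equals(expectedUser.ToLower());
    }

    public static bool CanManageUsers()
    {
        return IsAuthenticatedAs("domain\\joeuser");
    }

    public static bool CanManageAdministrators()
    {
        return IsAuthenticatedAs("domain\\admin");
    }
}
```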

    Enter the ActionCatalog, a small set of classes inside SCSF that makes security trimming easy and your code more maintainable. To use the ActionCatalog there are a few steps you have to follow:

    • Create a class to hold your actions
    • Register the class with a WorkItem
    • Add conditions for allowing actions to be executed 
    • Execute the actions

    Setting up the Catalog 
    Let's start with the changes to the ModuleController. You'll add some new methods to set up your actions and conditions, and then execute the actions. In this case the actions are directly manipulating the UI by adding menu items to it, but actions can be anything (invoked directly or tied to CommandHandlers) so you decide where the most appropriate split is. Here's the modified ModuleController:

    public class ModuleController : WorkItemController
    {
        private ToolStripMenuItem _rootMenuItem;

        public override void Run()
        {
            ExtendMenu();
            RegisterActionCatalog();
            RegisterActionConditions();
            ExecuteActions();
        }

        private void ExtendMenu()
        {
            _rootMenuItem = new ToolStripMenuItem("Action Catalog");
        }

        private void ExecuteActions()
        {
            ActionCatalogService.Execute(ActionNames.ShowUserManagementMenu, WorkItem, this, _rootMenuItem);
            ActionCatalogService.Execute(ActionNames.ShowAdministratorManagementMenu, WorkItem, this, _rootMenuItem);
        }

        private void RegisterActionConditions()
        {
            ActionCatalogService.RegisterGeneralCondition(new AuthenticatedUsersCondition());
            ActionCatalogService.RegisterSpecificCondition(ActionNames.ShowUserManagementMenu, new UserCondition());
            ActionCatalogService.RegisterSpecificCondition(ActionNames.ShowAdministratorManagementMenu, new AdministratorCondition());
        }

        private void RegisterActionCatalog()
        {
            // register the actions class with the WorkItem so its [Action] methods get picked up
            WorkItem.Items.AddNew<ModuleActions>();
        }
    }

    Here we've added RegisterActionCatalog(), RegisterActionConditions(), and ExecuteActions() methods (I could have put these into one method, but I felt the act of registering actions, registering conditions, and executing them violated SRP so they're split out here).

    Action Conditions
    ActionNames is just a series of constants that I'll use to tag my action methods later using the Action attribute. The conditions are where the security checks are performed. Here's the general condition first which ensures any action is performed by an authenticated user:

    class AuthenticatedUsersCondition : IActionCondition
    {
        public bool CanExecute(string action, WorkItem context, object caller, object target)
        {
            return Thread.CurrentPrincipal.Identity.IsAuthenticated;
        }
    }
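    And for reference, ActionNames is nothing more than a holder for constants. A minimal sketch (the string values here are my assumption; anything works as long as each name is unique and the [Action] attributes and Execute calls agree):

```csharp
public class ActionNames
{
    // Assumed values; the only requirement is that each action name is
    // unique and matches what the [Action] attributes and Execute calls use.
    public const string ShowUserManagementMenu = "ShowUserManagementMenu";
    public const string ShowAdministratorManagementMenu = "ShowAdministratorManagementMenu";
}
```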

    Next are specific conditions for each action. As you saw from the AuthenticatedUsersCondition, you do get the action passed into the CanExecute call, so you could either pass this off to a security service or check for each action in a common method. I've just created separate classes to handle specific actions but again, how you organize things is up to you.

    class UserCondition : IActionCondition
    {
        public bool CanExecute(string action, WorkItem context, object caller, object target)
        {
            string userName = Thread.CurrentPrincipal.Identity.Name;
            return userName.ToLower().Equals("domain\\joeuser");
        }
    }

    class AdministratorCondition : IActionCondition
    {
        public bool CanExecute(string action, WorkItem context, object caller, object target)
        {
            string userName = Thread.CurrentPrincipal.Identity.Name;
            return userName.ToLower().Equals("domain\\admin");
        }
    }

    Both conditions contain the same code as before but are separated now and easier to maintain. Finally we call the Execute method on the actions themselves. Execute will pass in a work item (in this case the root WorkItem, but it could be a child work item if you wanted), the caller, and a target. In this case I want to add menu items to the UI so I'm passing in a ToolStripMenuItem object. The ModuleActions class contains our actions, with each one tagged with the Action attribute. This keeps our code separate for each action but still lets us access the WorkItem and whatever objects we decide to pass into the actions.

    The Action Catalog Itself

    public class ModuleActions
    {
        private WorkItem _workItem;

        [ServiceDependency]
        public WorkItem WorkItem
        {
            set { _workItem = value; }
            get { return _workItem; }
        }

        [Action(ActionNames.ShowUserManagementMenu)]
        public void ShowUserManagementMenu(object caller, object target)
        {
            ToolStripMenuItem conditionalMenu = (ToolStripMenuItem) target;
            conditionalMenu.DropDownItems.Add(new ToolStripMenuItem("Manage Users"));
        }

        [Action(ActionNames.ShowAdministratorManagementMenu)]
        public void ShowAdministratorManagementMenu(object caller, object target)
        {
            ToolStripMenuItem conditionalMenu = (ToolStripMenuItem) target;
            conditionalMenu.DropDownItems.Add(new ToolStripMenuItem("Manage Administrators"));
        }
    }

    Registering The Action Strategy
    Calling ActionCatalogService.Execute isn't enough to invoke the action. In order for your action to be registered (and called), the ActionStrategy has to be added to the builder chain. The ActionStrategy isn't added by default to an SCSF solution (even though you can resolve the IActionCatalogService, since services and strategies are separate). Without the strategy in the builder chain, the Action attribute isn't taken into account when the object is constructed.

    So you need to add this to a stock SCSF project to get the action registered:

    protected override void AddBuilderStrategies(Builder builder)
    {
        base.AddBuilderStrategies(builder);
        builder.Strategies.AddNew<ActionStrategy>(BuilderStage.Initialization);
    }
    Once you've done this your action is registered and called when you invoke the catalog.Execute() method.

    A few things about actions:

    • You don't have to call catalog.CanExecute for your actions. Just call catalog.Execute(). The Execute method makes a call to CanExecute to check if the action is allowed
    • You have to register an implementation of IActionCondition with the catalog in order to do checks via CanExecute. If you don't register a condition, any action is allowed
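    If it helps, here's a drastically simplified toy version of that behaviour (my own sketch, not the actual SCSF source): Execute runs the CanExecute checks itself, and with no conditions registered everything is allowed.

```csharp
using System;
using System.Collections.Generic;

// Toy sketch of the catalog idea: callers just call Execute; the catalog
// consults its general and specific conditions before running the action.
public class TinyActionCatalog
{
    private readonly List<Func<string, bool>> _generalConditions = new List<Func<string, bool>>();
    private readonly Dictionary<string, Func<string, bool>> _specificConditions = new Dictionary<string, Func<string, bool>>();
    private readonly Dictionary<string, Action> _actions = new Dictionary<string, Action>();

    public void RegisterAction(string name, Action action) { _actions[name] = action; }
    public void RegisterGeneralCondition(Func<string, bool> condition) { _generalConditions.Add(condition); }
    public void RegisterSpecificCondition(string name, Func<string, bool> condition) { _specificConditions[name] = condition; }

    public bool CanExecute(string name)
    {
        // No registered conditions means the action is allowed.
        foreach (Func<string, bool> condition in _generalConditions)
            if (!condition(name)) return false;

        Func<string, bool> specific;
        if (_specificConditions.TryGetValue(name, out specific) && !specific(name))
            return false;

        return true;
    }

    public void Execute(string name)
    {
        // Execute does the CanExecute check for you.
        if (CanExecute(name) && _actions.ContainsKey(name))
            _actions[name]();
    }
}
```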

    Alternative Approach
    There are lots of ways to use the ActionCatalogService; this is just one of them. For example, in your ModuleController you can set everything up, disable all commands, then execute your ActionCatalog, which will enable menu items based on roles and security.

    The ActionCatalog lets you keep your execution code separate from permissions management and who can access what. This is a simple example but with little effort you can have this call out to say a claims based WCF service, retrieve users and roles from something like an ASP.NET Membership Provider, and make applying feature level security (including UI trimming) to your Smart Client a breeze!

    Hope that helps you understand the ActionCatalog in SCSF! It's a pretty cool tool and can be leveraged quite easily in your apps.

  • Agile in your schools

    After the fishbowl session from the Edmonton Code Camp, one of the big aspects that came out of the discussion was around the lack of Agile and good software development practices (aka ALT.NET) in our schools and universities.

    Chris Chapman posted a comment on my entry about a series of blogs he did last month on this exact topic. Let me tell you this has got to be one of the most definitive and well-researched pieces on the subject I've ever seen. Chris not only highlights a series of key practices (Agile/lean development, TDD, refactoring, etc.) that each university offers (or doesn't as is the case) but has a scorecard and breakdown of the best-of courses to check out.

    If you're about to enter University and feel like you're going to be left out because you're a forward thinker and want to really challenge yourself, check out Chris' findings which may help you see what's out there!

  • Lessons Learned from Edmonton Code Camp 2007

    Got back from the Edmonton Code Camp last night around midnight, after having dinner with the group. It was funny because we all sat around the table (James, Justice, Donald, Rockarts, et al.) and realized that the entire group was also going to be at DevTeach. Not sure if that's a good thing or a bad thing, but I'm sure we'll be sharing a few beers over it again in November. Anyway, a full day of fun and discussion with a few surprises. I thought I would wrap everything up into this entry, although some of these could almost deserve their own post.

    Education Needs to Get Agile

    In a fishbowl session something really came out about learning from a gent who recently (April) graduated from the UofA. He stated that these strange new concepts (MVP/MVC, ReSharper, Domain Driven Design, IoC, etc.) were all alien and he had never heard of any of them during his Comp Sci studies. It was events like the Code Camp that brought them out (or blogs or whatever) and it excited him. There was passion there and I could see a budding ALT.NET developer just wanting to burst out all over the room.

    It's true that Java, C#, and Ruby are new when it comes to the academic space. I mentioned that the world was running on Internet time and schools seem to be running on glacial time, just waiting for something to happen and not moving very quickly when it does. This I think is a huge problem. If people are coming fresh out of university with lofty goals of building huge enterprise applications using what they've learned from school, it's obvious this is going to be an issue.

    It says to me that we, the forward-thinking and always-moving-in-some-direction crowd, have a responsibility to educate the educators. We need to get out there and let people know that alternatives exist and the universe is not made up of DataSets and XML. This might go a long way to helping foster diversity and knowledge in the community at large, and help make the shift from academic to implementation a little smoother. People should be coming out of education hitting the ground running, not being stopped dead in their tracks, told to abandon what they learned, and sent to pick up a few good books from the Martin Fowler signature series. There's also a responsibility on the educators to know that we are out here and that enterprise development isn't just regurgitating patterns and practices that were abandoned years ago. There are alternatives and better ways of building the mousetraps; you just have to be open to understanding, discussing, and validating them for the appropriate solution at hand.


    XNA Programming

    I gave my dual-head presentation on XNA programming, including some remote debugging into the XBox 360 from my laptop. Unfortunately I was in a rush and forgot that I didn't have audio for the 360, so I bought a pair of speakers that morning from the local Staples (I'll use them at work to peeve off the QA people sitting next to me when I blast some Don Ho out on Monday morning). In my fit of excitement I forgot about getting an adapter to plug the speakers into the 360, so no sound during the demos except when I ran the Windows versions. My laptop wasn't beefy enough to render the Racing Starter Kit (but the sound was there) so it chugged along at 12 frames per second.

    I think there was good interest from the community on XNA programming (although a guy asked me if this session was about "games" and proceeded to walk out when I told him yes) so hopefully we'll see some fun stuff coming out of the Edmonton community. I'm really looking forward to XNA Game Studio 2.0 and the networking support but most of all getting it to run on my regular Visual Studio. You don't know how many times I was hitting Ctrl+F12 and Alt+Insert in Express trying to get ReSharper to work.

    Domain Driven Design

    I think I had fun with my Domain Driven Design session. I struggled with this topic as I couldn't figure out how to squeeze a fairly large topic like DDD into 50 minutes of discussion. I left the session open to talking about issues with development in general, what pain points people were facing, and how DDD might serve to ease the pain (or not in some cases). I covered the basics of DDD which was really only scratching the surface and we spent a little time looking at a fairly rich domain (Ben Scheirman's NHibernate video store series) and dived into writing a builder object and a test using a fluent language to describe the domain better through code.

    For future sessions on DDD I'm probably going to spend more time on building fluent interfaces, maybe a DSL using Ruby or something. Writing academic examples of DDD is a little too brief and doesn't really help grok the principles or values of what DDD personifies. I did talk to a few people after the session and a few lightbulbs turned on as a result of it, so it wasn't a complete loss and people did enjoy the session (based on the feedback forms, although maybe they were being nice because it was so late in the day ;)

    Community and Adoption

    This was probably the coolest session of the day, focused on talking about good software design, the community, and how to get the word out (whatever that word is). It's funny having a session at a code camp where there's no code, but this worked (although it wasn't as full as James' ALT.NET session). One of the recurring themes during the session was how the community gets to know about things like ReSharper, DDD, and patterns (to name a few). And more importantly, how does one adopt them in your own organization or community of practice when you're the lone wolf?

    This is a challenge. Going back to your office and telling everyone "I just saw this great tool/technique at Code Camp and we should all change to using it" isn't going to fly. My message is clear. Practice, practice, practice. And do. Or do not. There is no try (sorry Yoda). Take what you see and practice it against your work. If there's a pain point in how you build or deploy your application then download NAnt and automate the task. There's no need to ask for permission from upper management. How would that conversation go?

    You: I would like to use [insert tool here] as it would reduce our deployment time by half.
    Pointy-haired boss: Sorry [insert tool here] isn't on our list of approved tools and your job is to code. Get back to work!

    Maybe not the best advice (and some upper-management types may come back at me on this) but just do it. Do something that works, and if it's better than what you're doing now, isn't it worth it?

    All in all, a great day and one that spurred new ideas and things to try that might make your geek life a little better.

  • Edmonton Code Camp - here I am!

    Jacked up on Rockstar and hanging out in the Holiday Inn in Edmonton as I hit the Edmonton Code Camp tomorrow. Of course in my fit of leaving I forgot the video and still cameras, so no pics from me, but I'm sure there'll be Flickr coverage of the event from others (sorry Kyle).

    Here's to tomorrow, and I'll see if I can sneak into a few other sessions and blog about them for you while I'm there (the Donald/Justice session on the future of .NET and James' session on ALT.NET should prove interesting).

    Ahh.. wireless networking and HBO. What more can a geek ask for?

  • Working Remotely

    Kyle just finished up a series of posts on remote pairing and remote working. He's working on our Smart Client project from the Bahamas while we're stuck here in Calgary as the winter months start to show their ugly face. It's a great series and wraps up with some tips (although I have to wonder what he does now as the last few sessions he's been awfully quiet). Now I know the truth.

    Top 10 reasons to work remotely:

    1. You can frag people in Halo 3 during a design session.
    2. You can frag even more people in Halo 3 during the morning Scrum.
    3. Did I mention Halo 3?
    4. Fuzzy slippers! (although I have these and wear them at work anyway, but for those basement geeks here's your big chance to get comfy)
    5. Halo 3 anyone?
    6. Matchmaking on Halo 3 while your remote pair compiles
    7. Waiting for the Matchmaker to pair you up when someone breaks the build and you can't check in anyways
    8. Uhmm... Halo 3?
    9. Halo 3 r0ck$!
    10. And finally you can play Halo 3 without disturbing your co-workers

    So as you can see Kyle gets a tremendous amount of work done remote working with us, and you can too! All you need is a pair of fuzzy bunny slippers, a house in the Bahamas and a copy of Halo 3 (oh yeah, and an XBox 360 helps).

  • Kyle, Justice and Me

    The race is on. Ever go out and ego-surf? It's the art of finding yourself on the Internet: either blog posts or websites you author, people talking about you, or the amazing coinky dink that you are the same person as someone else. Just a mind-numbing experiment for those that have no life.

    The score so far:

    • "Bil Simser" - Web hits: 45,000 Blog search: 407
    • "Kyle Baley" - Web hits: 682 Blog search: 6
    • "Justice Gray" - Web hits: 95,400 Blog search: 412

    I need to cut down on coffee on Tuesday mornings.

  • Camping out in Edmonton - Code Camp 2007

    I'm heading up to Edmonton this weekend for the Edmonton flavor of Code Camp. Seems only fitting as the freaks came down to our Calgary Code Camp, so I'll reciprocate. I have two sessions I'm talking about and will see if I can bring my video camera (where the heck is that charger?) and post some sessions (or at least Justice grooming himself) on my Silverlight space. Here are the sessions.

    IEntity<Hello World>(new GettingStartedWith(DomainDrivenDesign))
    We'll spend an hour wallowing through aggregate roots, entities, value objects, repositories, and the principles and concepts behind Domain Driven Design. Demos will be on the fly based on what you guys want to see but I'll prep a few things around specifications, identifying aggregate roots, TDDing the domain, and other goodness.

    So you want to play a game? How about thermo-nuclear war, XNA style!
    I won't be teamed up with John The Pimp Bristowe on this, but we'll go double-head on the display as we step through XNA, write a few games and duke it out on the big screen deploying and debugging XBox 360 games from my Windows laptop. It's like rockem-sockem robots, but with less carnage.

    Watch for a wrapup, pics, and potentially damaging video on the weekend after the dust settles. See you there!

  • Trifecta announcements from the CKS Group

    The Community Kit for SharePoint Group (of which I'm a member, though I haven't participated recently, bad Bil bad) has three awesome announcements today (along with the software releases that go with them).

    Forms-Based Authentication

    FBA has been available in SharePoint 2007 since release, however it's been a little hard to implement and, worse, there are no facilities for managing users or recovering passwords. No longer. With the FBA modules, you can now basically give SharePoint a DotNetNuke facelift and provide that functionality. This includes user login, membership requests and management, password management, and user and role management.

    Virtual Earth Maps and the Campus Map App

    When I was at Microsoft, I would sometimes see this uber-cool app which showed the Redmond campus along with facilities, all rendered via Virtual Earth and served up in SharePoint. This app is finally released along with the source code, graciously donated by the MSIT Information Services team. Very cool for those that have SharePoint set up and would like interactive, live maps added to their sites.

    Windows Live Authentication

    It just had to happen. FBA is nice (now with the new FBA managers) and integrated is cool, but we really want Windows Live ID integration. Here it is in all its glory.

    If there's any doubt you can't do cool things with SharePoint in the public space, take a look at CKS for ideas, inspiration, solutions, samples, and code. And get implementing!

  • Another SharePoint Sucks Post Released into the Wild

    I'm in a quippy mood tonight and stumbled across a post titled "Why SharePoint Server is Terrible". Of course you know flame-bait like this just gets my blood boiling, especially since there are points here that are completely inaccurate and don't represent the people the author is talking about (like MVPs). A few months ago there were blog posts about how bad a development environment SharePoint is, and for the most part I agreed with what was being said. It's not great, but it's better than hand coding the world yourself. Now we get this post that goes deep into why (as the author states it, not me) SharePoint is terrible and a failure.

    I'm going to nitpick points here because overall the article is just plain negative on SharePoint. Like I said, in some cases SP blows monkey chunks. Yup, you heard it. A SharePoint MVP saying SharePoint bites. However, take that with a grain of salt: while it can create major suckage in some areas, it excels in others, so you have to balance what you read (even on this blog) with the context of reality and what business problem you're trying to solve. No technology in the world is going to solve problems on its own; it's how you apply the tool or technology to aid you in a solution.

    The author states that SharePoint is a great idea, but failed for three key reasons:

    1. Far too complex to install, configure, and customize.  It is not agile.
      1. Any particular reason why people associate "agile" with "complex"? I build enterprise systems that cater to thousands of users, written using TDD, built on top of SharePoint infrastructure, and following agile principles, but they're all complex systems IMHO.
    2. It is being sold as a solution to organize unorganized companies.  It is being sold as a system to add process's to organizations that don't have them.
      1. I'm making an assumption the author is one of these "unorganized companies". I for one (and most of my cohorts in crime) would never recommend a product if it didn't suit the needs of an organization. Installing SharePoint for the sake of installing it (because some CEO read a glossy brochure) is nothing about what SharePoint is or does. I for one don't make a dime on the millions billions that SharePoint has made for Microsoft but would never stand behind pushing the product into a place where it didn't belong. I've also never seen it sold by Microsoft to corporations that didn't need or even want it. Grant you, user adoption can be slow as it's IT that sometimes makes the decision to bring it in but I've never seen an IT organization bring in the product when they weren't ready for it, or didn't have an idea of how it fit into an already existing process. Processes don't fit tools, appropriate tools are adapted to support them.
    3. It's rendered HTML is brutal, along with the CSS files.
      1. +1 for the rendering of SharePoint pages, which can be painful, although it has gotten 10x better with 2007, Master Pages, and better support for ASP.NET. It's still a little brutal and can be cumbersome due to the large number of styles and classes in the CSS. Brutal rendering? Have you seen the HTML output from other portals (Oracle, PeachTree, etc.)? I have, and SharePoint is a walk in the park compared to them.
      2. SharePoint OOTB doesn't conform to W3C standards so if that's an issue you'll have to wait or do a lot of customization. Yeah, this sucks and is one of the biggest problems with SharePoint rendering but I don't consider it as a result of HTML vs SomethingElse. Just bad rendering by Microsoft.
    4. It is one of the most unflexable applications I have ever used.
      1. Unflexable? Is that even a word? Unflexible? Maybe inflexible? Inflexible is the inability to change. Considering that SharePoint is an extensible platform that I can build solutions on top of to deliver legal document systems, help desks, training portals, or drive the XBox public internet site, I don't consider that even remotely close to "inability to change".

    Okay, so that was 4 reasons not 3, but who's counting right?

    The author goes on to state you have to be a master of many tools. I will admit that SharePoint encompasses a lot of tools and technologies, but I am far from a master in many and some I don't even use. Ever. So do you really have to be a master in everything below to harness SharePoint?

    • SharePoint
      • Redundant, see redundant. So to be a master in SharePoint I have to be a master in SharePoint? This makes total sense.
    • SQL Server
      • If you've ever read my blog, tried to access the database directly, or wrote me and asked me what the name of the sproc is to delete a revision of a document from SharePoint you'll know that my motto is "stay out of the database!". So other than setting up (i.e. installing) SQL Server I fail to see what anyone has to master. In any case, there are guys far smarter than I who can make SQL dance and sing. Let them deal with it.
    • Internet Information Server
      • Again, what must I master here? Install it and turn it on.
    • Active Directory
      • Why must you be a "master" in AD to make it easy to configure SharePoint? In every setup I've done (virtual, development, production, and otherwise) you install, and generally AD is known (as long as the server is part of the domain, well, duh) and it just works. True, I've had problems with, say, domain groups with "&" and other characters in the name, but being a "master" in AD might help in knowing SharePoint the same way it might help in knowing anything connecting to AD (like, say, your desktop).
    • File Stores
      • With each point here, my brain just gets fuzzier and fuzzier. File stores? Master? SharePoint can index a file share. Point it at one, make sure the crawler account has the right privileges, and you're done.
    • Indexing
      • Indexing in SharePoint can be more of an art than a science, but for all but the largest of installations I've seen or done (meaning 100 million documents spread over 3 data centres), indexing was OOTB and pretty much a brain-dead exercise.
    • Software Development
      • You must be a master in Software Development in order to use SharePoint. If I said this at a conference, half the room would walk out (the half that didn't already walk out when my demos blew up). I mean seriously, "software development" is such a broad term. Yes, if you want to build web parts and solutions using SharePoint you'll need to know what you're doing, but that's true for most anything. Try doing brain surgery without training. It's a messy subject.
    • Search Engines (To help customize the brutal search built into it)
      • I think the search engine needs work and customizing it can be difficult, however again it's perception and what is crap to some is fine for others. I once had a client who said "make it more like google" because they didn't like the search. We changed the style sheet to look like a Google search page and apparently that fixed the "problem" they had with it. Go figure.
    • Database Design and Development
      • I have no idea what database design and development have to do with SharePoint. You don't "design" or "develop" databases with SharePoint, and if you are, then you're doing something wrong.
    • XML
      • Calling any web service requires some understanding of XML. I don't consider the XML coming out of SharePoint web services or the XML config files to be any more complex than anything else that uses XML.
    • .NET 2.0
      • Again, if you're doing development you'll need to know .NET to build web parts but for the majority of clients I work with, they just want a tool that works and don't want (or need) to invest the time/money/effort to build custom solutions with code. A lot can be done with templates, packages, features, and what you can leverage OOTB.
    • ISA Server
      • For front-end web servers knowledge about this is a must, but for the majority of clients SharePoint installs are internal and don't need this. Again, if you need an externally facing system you'll probably have someone that knows ISA.
    • Master Pages
      • Being a master of master pages does not help you with SharePoint. It merely lets you build dazzling looking websites (if you know what you're doing and have a little design savvy) for use with SharePoint.

    I think not.

    I just don't get the whole "Document Management Process" excuse. He basically goes on to say Microsoft is so busy pushing their SharePoint crack that they're ignoring people who need to understand document management, and so people miss the mark. Document Management, or better yet, Information Architecture, is bloody hard. It's not something you can sit down with 2 guys and a small dog and bang together in an hour (unless you really do have 2 guys and a small dog, in which case your IA is probably 5 emails a day and 3 PDF files). It takes a lot of work to figure out taxonomies, best practices, placement, structural and organizational change, and adoption. All of which completely hinges on your customer and how mature their understanding of their own information is. No tool on Earth is going to help you do this right, as it's all contextual to the organization, so blaming SharePoint for this failure is just asinine.

    Wiki's Rock. Uhh, yeah. I guess so. Grant you, Wikipedia is pretty good and at least you can search for stuff. However, wikis suffer from two major problems. First, you have to know what you're looking for before you go looking for it. Second, structurally they suck when trying to organize information in a sensible way. Substituting another tool (MediaWiki) for SharePoint to solve an information problem misses the root cause of the issues. While it may look better in the short term, years from now, when you have thousands (or millions) of wiki pages and you're trying to discover something useful, you'll kick the IT guy who came up with this flavor of the day in the head.

    The author makes a comparison to MediaWiki and BaseCamp. Again, I don't get it. BaseCamp is an online project management tool (with some document management bolted on the side). True, it's highly customizable in look and feel, but at the core it's a mess and isn't extensible to do anything but what it was designed to do. And MediaWiki, well, it's a wiki. You create content. That's about it. If we want to compare apples to apples, let's talk about SharePoint vs PeachTree vs Oracle vs LiveLink.

    What the author mentions isn't anything new. We've known it for years and it's a sore point for those of us working with SharePoint. However, as with any technology, it's got its growing pains and it's far from being done. We'll continue to learn, adapt, and make things better. Any software the size of SharePoint *is* inherently complex. The KISS principle doesn't apply across the entire system, but it can be applied (and is) in isolated parts of it where appropriate. A comment on the original blog about Photoshop is interesting. I don't go into Photoshop (much), and when I do it's for simple things (resizing an image); it's an alien world. I consider myself a fairly bright guy who can pick up most anything, but PS is a crazy beast with all kinds of secrets of the underworld just waiting to be revealed. There are people on this planet who can make it sing and dance and stand on its head. I am not one of these people. I can, however, do similar things with SharePoint, so it's all complex, from a certain point of view.

    As Bill English stated, this is a product with a huge install base and revenues in the billions. Yeah, billions of dollars are going into BillG's pocket (or maybe it's Ray now, I can't keep track of those two crazy kids) as a direct result of this product. That speaks volumes. Millions of documents and thousands of sites spread across multiple data centres housing most everything Microsoft does on a day-to-day basis. That speaks volumes. The drive and demand we put on Microsoft to make the product bigger, better, and faster than it ever was (well, at least easier to work with; faster would be nice too... and works on Vista maybe?) is what we do and what people want, and there's demand for it. If there weren't, we wouldn't be asked to speak and continue to communicate and educate people about SharePoint, and we would all be living in a WebSphere world, wouldn't we?

    I fail to see how this author can write this sort of post without having first-hand experience with SharePoint (meaning setting it up in an enterprise environment, not just looking at the pretty pictures on the web or running a VM for a few minutes).

  • DasBlonde, Calgary bound October 29th

    One of my favorite speakers, whom I rarely get to see, is coming to Calgary. Courtesy of the Calgary .NET User Group, this month's speaker, Michele Leroux Bustamante, aka DasBlonde, will be presenting at the Nexen Conference Centre Theatre on October 29th at 5pm (my birthday! Yay for me). Michele will be presenting on Windows CardSpace.

    CardSpace is a client technology that is part of the .NET Framework 3.0 that allows users to create, manage and share their digital identities in a secure and reliable manner. CardSpace makes it possible to create personal identities that replace the common user name and password for an application - with the strength of certificates and without the complexity. CardSpace also supports installing identities issued by third parties for authentication. This session will provide an overview of the identity metasystem in which CardSpace plays a role, and describe how it helps to prevent identity theft and increase trust in online transactions. You'll learn how to create and manage information cards, how they are used to generate tokens with the help of a Security Token Service (STS), and the role of the STS when CardSpace is incorporated in the authentication story for ASP.NET Web applications and WCF services and clients. You'll also learn how to trigger CardSpace from ASP.NET or WCF applications and services.

    Should be fun so come on out and welcome Michele to Calgary. You can register for the event (free) here.

  • Deep Thought of the Day

    Some people are like slinkies,
    They don't really have a purpose,
    But they still bring a smile to your face
    when you push them down the stairs.

    (found on Facebook, where else)

  • RYO AltNetConf

    There's a tremendous amount of goodness (the "new" goodness?) that's circulating around the 'sphere. Martin Fowler chimed in with his take on it and I'm glad we're all generally singing from the same song sheet.

    Jeremy Miller and Jeffrey Palermo brought up how the original Code Camp spread like wildfire, as the format and idea were easy to implement. As time goes on, I think this will be true for the AltNetConf idea. Jeff summed up the idea of the AltNetConf best with this quote:

    AltNetConf's are open spaces conferences where DotNetters get together to discuss how to build better .Net software.

    Short and sweet. Just the right amount of description.

    Given this, the idea of new conferences springing up and spreading the new goodness is a great one. What does it take to start up your own AltNetConf? The passion and desire to do so. So why not? There's nothing stopping you.

    On the heels of the first one in Austin there are a few good ideas that you could use when you're building your own AltNetConf:

    • Keep the size manageable. I think the 100 person limit was great for the Austin one. This also helps you locate a place for it.
    • Self-organizing agenda. Rather than a pre-canned agenda of topics, the first day/night of the conference is the time to collaborate and drill out what people are passionate about. What bugs people? What do they want to talk about? This is an agenda driven by both speaker and speakee (I would consider everyone a speaker for each session, with someone keeping the conversation on topic rather than coffee-talk, much like a Scrum Master does during the daily standups).
    • Nothing but .NET. This isn't Alt.JAVA so the conversations follow building on Microsoft platforms using the most appropriate tool, technology, and technique that makes sense for the problem at hand.
    • Don't turn it into a vendor fest. While it may be Microsoft related, I think the last thing an AltNetConf needs is "Brought to you by [insert .NET vendor product here]". True, it should be free and things cost money these days, but too many ideas spiral out of control and become product showcases rather than guys and girls talking about software development.
    • Follow the OpenSpace approach to organization and flow. It just resonates with the ideas above.

    I'm at a disadvantage as I didn't attend the conference in Austin, so I'm looking for those who were there to maybe put together an AltNetConf retrospective. What worked well? What didn't? What can we do better?

    So spread the news, pick a location, and start doing it. For me, I'm looking to see if we can get an AltNetConfCalgary or AltNetConfEdmonton (or AltNetConfAlberta for that matter) going so ping me if you're interested. Let's keep the momentum going!

    Hopefully lessons learned and ideas here would be applied to future conferences like this (which we all hope to see soon everywhere as we don't all need to coalesce to one single place once a year).

  • Blogging Time Machine

    Okay, I'll be the first to admit that I ego-surf. Type your name into Google and see what comes up. With Google's blog search, I like to see who's referencing things I've mentioned and whatnot.

    However, today I saw this link on some Windows Live Spaces site (not even sure what that is, some bastard child of MySpace?) posted October 11. My name came up so I started skimming through the entry. I knew it was about DotNetNuke and SharePoint so I assumed the writer was referencing my blog post from January 2006. After reading through the Windows Live space I realized it was a copy of my own blog entry I was reading (I *thought* it looked familiar as I was skimming). I got down to the end and it said "Published Tuesday, January 31, 2006 by Bil Simser".

    Right. I get it now. It's some aggregator that copies content. I've seen them before. However, what befuddles my meager brain is why in the name of all that is holy my post from January 2006 is showing up in October 2007. Slow mail delivery or something? The guy's crawler *just* got around to finding my blog entry?

    I just don't get it.

  • In the Queue...

    Stuff that is swirling in my head and being worked on coming shortly to this blog:

    • Updates on SharePoint Forums and KB Web Projects for Office 2007 support
    • Continuous Integration Build Indicators (via X10 and some fancy integration with Cruise Control.NET)
    • What does Alt.NET mean to you and growing the Alt.NET community
    • Greenfielding Agile in the Enterprise
    • Making sense of the *DDs
    • Sessions for Edmonton Code Camp (Oct. 20)
    • Getting ready for XNA 2.0 and driving out games with TDD
    • Fixing a dead TFS server with Subversion

    Feel free to toss your own ideas on the pile that you would like to see...

  • Join me for my first geekSpeak show in November

    I'll be presenting a webcast in November (November 14th at noon PST to be exact) via the MSDN geekSpeak group. Here's the blurb about what geekSpeak is from their site:

    geekSpeak is a new kind of webcast series, hosted by Jacob Cynamon and Glen Gordon (from the MSDN Events team). Dispensing with slide decks and scripted demos, geekSpeak webcasts bring you industry experts in a sort of "talk-radio" format. These experts share their knowledge and experience around a particular developer technology. You'll hear about industry trends, new technology, real-world experiences and more. During the webcasts you will be able to have your questions answered realtime, hear lively discussion and debate, and add your comments to the fray. Who knows, you might even see a whiteboard sketch or an off-the-cuff demo. It's another way for you - the developer - to engage with Microsoft in an interesting and effective way!

    So tune in and hear me blather on for an hour or so about Tricks of the WPF Programming Gurus. We'll play around with WPF, look at what we can (and can't) do with it, build some cool apps, talk about Mort (kidding!), and generally nerd out.

    I'm looking for a lot of the content to be driven by the listeners as that seems to be the geekSpeak way and hey, when in Rome...

    You can register for the event here.

  • Calgary Agile Project Leadership Network (APLN) 2007/2008 Kick-Off

    Calgary APLN is a local chapter of the Agile Project Leadership Network (APLN), a non-profit organization that looks to enable and cultivate great project leaders. I've worked with Janice before, and Mike is a well-known person in the Agile community and an awesome presenter, so check this event out.

    Description:  The Calgary chapter of the Agile Project Leadership Network (APLN) invites you to the APLN 2007/2008 Season Kick-Off Meeting.
    Guest Speakers: Janice Aston and Mike Griffiths
    Date: Wednesday, October 17th, 2007
    Time: 12:00pm - 1:00pm
    Location: Fifth Avenue Place Conference Room, Suite 202, 420 - 2 St. S.W.
    Come and experience Agile planning in action at the Calgary APLN season kick-off meeting. If you are interested in how to run effective agile projects here is your opportunity to help choose the presentation topics for this season's talks and workshops.


    • Welcome and overview of Calgary APLN group
    • Report on new initiatives from the APLN
    • Brainstorming of topics for 2007/2008 season
    • Affinity grouping and ranking of topics
    • Top 5 list identified

    About the Speakers

    Janice Aston has over 16 years of project management experience with an emphasis on delivering business value. She is passionate about building high performing teams focused on continuous improvement. Janice has a proven track record delivering on project commitments with a heart for leadership and people. She has recently founded Agile Perspective Inc. specializing in creative collaboration.

    Mike Griffiths is an independent project manager and trainer with over 20 years of IT experience. He is active in both the agile and traditional project management communities, serves on the boards of the APLN and Agile Alliance, and teaches courses for the PMI. Mike founded the Calgary chapter of the APLN in 2006 and maintains the Agile Leadership site.
    Please visit for more details and to sign up for this event.

  • Justice Gray IS Kyle Baley! (and Tyler Durden too)

    For those of you who have never seen Fight Club, ignore this post. If you haven't seen it, go rent it then come back and read this blog entry. It's okay. I'll wait.

    Back yet?


    It's okay, I'm still waiting.



    There is a conspiracy on the Internet tonight, and its name is Justice Gray.


    Perhaps you've seen the metrosexual hunk of a software consultant present at an Edmonton Code Camp. Or have you? Or were you really looking at this man:


    You see dear friends, it is my experience and understanding that we've all been shim-shamed. Hoodwinked. In reality, Justice Gray is Kyle Baley!

    First let's take the letters for Justice Gray and Kyle Baley and mix them up a bit. What do we get?

    "A celibate sly jerky guy"


    You read it here. Justice Gray is not the metrosexual guy you think he is, he's really a eunuch. Definitely a man. But is he real or not?

    Kyle I can vouch for. I've worked with him and we've stood in the same room (but not the bathroom). He's not just a disembodied voice on the phone from the Bahamas (although recently he has been) nor is he some crazed lunatic writing blog entries furiously in the dead of the night (although he does sometimes). No, he's quite real.

    Justice on the other hand falls in "another" category. You see I don't recall ever meeting him myself. Ever. He's bailed on my Edmonton User Group presentations and was absent from our spectacular-spectacular dual-screen head-to-head XNA presentation at Code Camp. Again, where's the proof that he's real?

    And more importantly, and I can categorically state this to be true, "I have never seen Justice and Kyle together in the same room at the same time!".

    Just like Batman and Clark Kent.

    Go figure.

    But wait dear reader, the conspiracy does not end there!

    I believe that Justice Gray is really the Tyler Durden of Kyle Baley. How so? Because any time Kyle wants to meet Justice, Justice just happens to not be able to show up. Edmonton User Group. Calgary Code Camp. The conference.

    The proof? I have it right here...

    Take the letters from Kyle Baley and Justice Gray and they form the name “Tyler Durden”.

    Well, almost.

    Okay. So what if we’re missing a couple of ‘D’s and a few other letters but there’s a conspiracy here I tell you!



    Missing: RD


    When you take the remaining letters and put them together they form the phrase “Say Big Cake Jay”.
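    For the terminally curious, the letter bookkeeping can be checked with a few lines of Python (a throwaway sketch, not part of the original conspiracy dossier; it just treats each phrase as a multiset of letters):

    ```python
    from collections import Counter

    def missing_letters(target: str, pool: str) -> Counter:
        """Which letters does `pool` lack to spell out `target`?"""
        letters = lambda s: Counter(c for c in s.lower() if c.isalpha())
        return letters(target) - letters(pool)

    # Can "Kyle Baley" + "Justice Gray" be rearranged into "Tyler Durden"?
    gap = missing_letters("Tyler Durden", "Kyle Baley" + "Justice Gray")
    print(dict(gap))  # the letters the conspiracy is short of
    ```

    An empty Counter would mean a perfect anagram; here you come up a couple of 'D's (and then some) short, which is about as rigorous as this theory gets.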

    So in summary:

    1. Kyle Baley's real name is Jay
    2. Justice Gray's real name is Jay
    3. Kyle Baley is Justice Gray. They are the same person!
    4. At some point in Jay's tortured life he was asked by his mother to "Say Big Cake". I believe he could not pronounce this phrase and rather it came out as "Play Fig Make" which emotionally scarred him for life and has branded him to live behind this facade I'm revealing today.
    5. Justice Gray is just a figment of everyone's imagination. You have never seen him present nor does he really exist. The man you were looking at was Jay.

    Don't believe me? Invite them both to dinner and see who shows up.

    Well that just says it all doesn’t it? Jay, wherever you are man, I love you.

  • Alt.NET, stop talking just do it!

    Hopefully this is the last of my Alt.NET soapbox posts for the day. There was a post by Colin Ramsay that, while quite negative about the whole Alt.NET thing (it was called Abandon Alt.NET), contained a single nugget that I thought was just right for the moment:

    If they really wanted to change things then they should be writing about their techniques in detail, coming up with introductory guides to DDD, TDD, mocking, creating screencasts, or giving talks at mainstream conferences, or producing tools to make the level of entry to these technologies lower than it is.

    I argue we've been doing this. Just visit the blogs at CodeBetter, Weblogs, and ThoughtWorks (these are just three aggregators that collect up musings from Alt.NET people; there are others, as well as one-offs). There's noise mixed in with the signal, so you have to sift through it, but the good stuff is there if you look hard enough.

    I totally agree with Justice (and others) in what he said on the mailing list:

    Looking at this from a perspective of the conference participants being the developers and the general .NET community being the "client" in this case, how much value is the "client" going to derive from either:
    a) what our mission statement is
    b) what we choose to name this group?
    in comparison to actual involvement with devs, recaps of sessions, evangelism efforts?

    So just do it. Enough with the name bashing, mission identity, who is and who isn't, and all that fluff. No fluff. Just code. Just go out and write. Blog. Present. Mentor. Learn. And if you're already doing that, you're ahead of the game.

  • !Alt.NET

    I live in an Alt.* world. Have been all my life. I prefer alternate movies over mainstream. I would rather sit and watch an art film from 1930s German cinema than the latest slap-fest from Ben Stiller. I prefer alternate music over mainstream. Give me Loreena McKennitt or Mike Oldfield over Britney Spears and Justin Timberlake any day. So it's only natural I'm sucked toward the Alt.NET way of software development.

    Back before I was into this software thing I was an artist. I jumped from graphic design to commercial advertising. During my 7 year itch I spent a good part of it in comics, and more precisely the alternate comics. I never tried out for Marvel or DC (although a Marvel guy who shall remain nameless liked my stuff and invited me down to New York to talk to them) so the alternate scene for me was Dark Horse, Vertigo, and Image. These were the little guys. The guys who preferred glossy paper over stock comic newsprint. The guys who were true to reality and weren't afraid to show murder, death, kill in the pages. I did a comic once about drug dealers in Bogotá, Colombia (and the band of 5 guys [think A-Team but cooler] who would bring them down). The writer put something simple on the page like "the drug industry in Colombia was everywhere". If I was at Marvel or DC, one might draw the factories and lots of trucks, people packaging up the drugs, and shipping them off to the Americas. However this was Alt.Comic land and we told it like it was. I thought showing drug addicts (including one guy shooting up in an alley in one panel) mixed in with the tourists was the way to go. It was deemed a little racy and I was asked to tone it down, but it wasn't censorship and in the end I got to express what I really intended to do. I felt like I had made a difference and wasn't going to let the mainstream way of doing things cloud my judgement.

    Alright, back to software development. I still don't know if I'll call myself an Alt.NETter, simply because I'm not sure it's clear what that means. Any Alt.* movement in the world has its basis in reality. Alternate art, movies, and music were created as a way to exercise freedom of expression, not just to be different. What is it that we see in the "mainstream" way of software development that bothers us (enough to create an Alt.NET way)? This really doesn't have anything to do with Microsoft, does it? However, many people have tagged MS as the "evil empire": using Microsoft tools is the wrong way, Alt.NET is the right way. Even the name seems to resonate against .NET and the way Microsoft does things.

    To me, Alt.NET means doing things differently than some whitepaper or robotic manager tells you how to do it because that's how it's been done for years. Alt.NET is any deviation from the "norm" when that norm doesn't make sense anymore. Maybe it made sense to shuttle fully blown DataSets across the wire at the time, because the developer who wrote it didn't know any better. However in time, as any domain evolves, you understand more and more about the problem and come to a realization you only need these two pieces of information, not the whole bucket. And a simple DTO or ResultObject will do. So you change. You refactor to a better place. And you become an Alt.NETter.

    It's not about doing things differently for difference's sake. You see a flaw in something and want to correct it (hopefully for the better). Perhaps BizTalk was chosen as a tool when something much smaller and easier to manage would have worked (even a RYO approach). Dozens of transactions a day instead of thousands and no monitoring required. If there's pain and suffering in using a tool or technology, don't use it. When you go to the doctor and say "Doc, it hurts when I do this" and he replies with "Don't do that", that's what we're talking about here. If it pains you to go in and maintain something because of the way it was built, then there's a first-order problem in how it was built (but not necessarily in the tools used to do the job). That's my indicator that something isn't right and there must be a better way.

    As software artists we all make decisions. We have to. Sometimes we make the right ones, sometimes not so right. However, it is our responsibility, if we choose to write good software, to make the right decisions at the right time. Picking a tool because it's cool doesn't make it right. Tomorrow that tool might be the worst piece of crap on the planet because it wasn't built right in the first place. Software is an art and a science. There are principles we apply, but we have to apply them with some knowledge and foresight. Even applying the principles from the Agile Manifesto requires the right context. Individuals and interactions over processes and tools. We stay true to these principles but that doesn't mean we abandon the others. I use Scrum every day and pick the right tools for the right job where possible. It's a balance and not something easy to maintain. If all you do is stick your head down and code without looking around to see what's going on around you, you're missing the point. Like Scott Hanselman said, you're a 501 developer and don't really care about what you're doing. You might as well be replaced with a well-written script. For the rest of us, we have a passion for this industry and want to better it. This means going out and telling everyone about new tools and techniques, demonstrating good ways to use them, explaining what new concepts like DDD and BDD mean, and most of all being pragmatic about it and accepting criticism where we can improve ourselves and the things we do.

    I suppose you can call it alternative software development. I think it's software development with an intelligent and pragmatic approach. Choose the right tool for the right job at the right time and be open and adaptable to change.

  • Quotes from the ALT.NET conference

    Unfortunately I couldn't make it out with my Agile folks to the ALT.NET conference, but from the blogs, various emails and IMs, and the photos it sure looked like a blast. 97 geeks (Wendy, Justice and myself couldn't make it, but there were probably others) got together and partied like only geeks can. While I wasn't there, here are some quotes that came out of the conference. Some to think deeply about, others to just... well, you decide. Remember to use this knowledge for good and not evil.

    " is in the eye of the beholder"

    "Oh I spelled beer wrong" -Dave

    "Savvy?" -Scott Hanselman

    "Scott, it's Morts like you..." -Scott Guthrie to Scott Bellware

    "Programmers Gone Wild"

    "There's the butterflies: then there's the HORNETS" -B. Pettichord

    "I think 'grokkable' is more soluble then solubility" -Roy Osherove

    "MVC is that thing that wraps URLs"

    You can view (and contribute!) the altnetconf Flickr pool here. There's also a Yahoo group set up here if you want to carry on with the discussions, since it isn't only about being at a conference.

  • Scrum for Team System Tips

    As I'm staring at my blank Team System setup waiting for the system to work again, I thought I would share a few Scrum for Team System tips with you. SFTS has done a pretty good job for us (much better than the stock templates Team System comes with) but it does have its issues and problems (like the fact that PBIs are expressed in days and SBIs are expressed in hours, which totally messes with the Scrum concept and makes PMs try to calculate "hours per point"). I'm really digging Mingle though and will be blogging more on that as we're piloting it for a project right now, and I'm considering setting up a public one for all of my projects (it's just way simpler than Rally, VersionOne, or any other Agile story management tool on the planet, hands down).

    So here's a few tips that I’ve picked up using Scrum for Team System that might be helpful to know (should you or any of your Agile force be put in this position):
    • When a sprint ends, all outstanding PBIs should be moved to the next sprint and the sprint marked as done.
    • The amount of work done in the sprint (SBIs completed) gives you the capacity (velocity) for the team for the next sprint.
    • Only allocate one sprint's worth of PBIs to a sprint when planning, and try to estimate better based on previous sprint data.
    • When the customer decides the product is good enough for shipping, have a release sprint where the goal is to "mop up" bugs and polish the product ready for shipping. This would include new PBIs like:
      • Fix outstanding bugs
      • Create documentation
      • Package/create MSIs
    • Never attach new SBIs to previously closed PBIs. If the customer changes his mind about the way something is implemented, it is recorded as a new PBI because the requirement has changed.


    PBI: Product Backlog Item, can be functional requirements, non-functional requirements, and issues. Comparable to a User Story, but might be higher level than that (like a Theme, depends on how you do Scrum)

    SBI: Sprint Backlog Item, tasks to turn Product Backlog Items into working product functionality and support a Product Backlog requirement for the current Sprint.
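    The velocity tip above is just arithmetic: sum the hours of the SBIs actually completed last sprint and use that as next sprint's planning capacity. A minimal sketch (the tuple layout here is made up for illustration; it's not the actual Scrum for Team System schema):

    ```python
    # Each hypothetical SBI is (estimated_hours, completed_flag).
    def sprint_velocity(sbis):
        """Hours of work actually completed; becomes next sprint's capacity."""
        return sum(hours for hours, done in sbis if done)

    last_sprint = [(8, True), (4, True), (16, False), (6, True)]
    capacity = sprint_velocity(last_sprint)
    print(capacity)  # plan no more than this many hours of SBIs next sprint
    ```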

  • Previously on Fear and Loathing...

    Bil was struggling with the question: what would you rather have, no source control or no tests?

    Let's go back a bit to Friday morning, when Bil came into the office to find himself alone (it was 5:30AM and what ungodly soul would be at work at that time?) and unable to connect to the Team Foundation Server. It seems the drive that contained the database files filled up. Oh, it gets better. Not only did the data drive fill up (and BTW, the data files AND log files were on the same drive, not the way I would have set things up) but (drumroll please...) there were no backups. Somewhere along the way the SQLAGENT service was disabled, and frankly, when it's disabled nothing really works, including backups. As the data drive filled up, the transaction logs filled up and eventually became corrupted, unreadable, and unrecoverable.

    The short of it is that I'm left with a Team Foundation Server and its databases with no log files. Not the end of the world (at least I don't think so), and there are techniques for re-attaching a database without the associated .ldf file. Granted, any transactions in flight would be lost, but this was the morning and the drive had filled up sometime during the night (probably during a build).

    At this point we've tried a variety of things to restore the databases, and strangely enough we got them all back online. All but one. TfsVersionControl is the database that (yup) holds all the source code for Team System projects. A single database that just refuses to restore. The single-file attach trick (which worked for all of the other databases) doesn't work for this one (of course), so we're turning to PSS to help us fix it. There are a couple of "hacks" where you rebuild the database and swap out the data, but again, for some reason it won't work. The best and closest we got was getting TfsVersionControl back online, but checking out a solution (any solution) ends with an error about "downloadUrl" being null, and the checkout stops.

    So tune in again tomorrow as the geek suffers. We'll see how things go, but as a last-ditch effort we do have the latest code on the build server (a separate box), which has the latest checkout, so we technically *could* rebuild the projects. We would just lose source control history, which, in the grand scheme of things, isn't the worst thing in the world.

  • Speaking at the Alberta Architect Forum Monday

    The Alberta Architect Forum is an opportunity to learn from your peers as well as experts on practices and directions in the real world. An exclusive, invitation-only event, the Forum allows top architects in Alberta to interact with presenters and discuss architectural issues as well as future directions and technologies from Microsoft.

    I'll be speaking at Monday's event here in Calgary on Software Factories (and specifically the Web Service Software Factory). Here's the day's agenda:

    8:00 am - 8:30 am: Breakfast and Registration
    8:30 am - 8:45 am: Welcome by Dave Remmer
    8:45 am - 9:15 am: Architectural Agility as Business Value, Dave Remmer
    9:15 am - 10:00 am: Architecting for Testability, Mike Mason
    10:00 am - 10:15 am: Break
    10:15 am - 11:30 am: Introducing Web Services Factories, Bil Simser
    11:30 am - 12:00 pm: Office Business Applications, Dave Remmer
    12:00 pm - 1:00 pm: Networking lunch
    1:00 pm - 2:15 pm: Creating Flexible Software, James Kovacs
    2:15 pm - 2:30 pm: Break
    2:30 pm - 3:45 pm: Designing and Building Real World SOA with WCF, Daniel Carbajal
    3:45 pm - 4:30 pm: Silverlight and Rich Internet Applications, John Bristowe
    4:30 pm - 4:35 pm: Wrap-up and Prize Draw

    Should be fun, although I'm only going to be there for my gig as I'm rebuilding a crippled TFS server and our source control is offline at the moment (backup, backup, backup!), but feel free to come out for the day's events as there'll be lots to soak in.

    The event is taking place in the Wildrose room at the Sheraton Suites in Eau Claire. You can register and check out the details for the event here.

  • The World Doesn't Need Yet Another .NET Testing Framework

    I have a lot of respect for Jim Newkirk, but I was reading Roy Osherove's blog entry about the new XUnit.NET framework that has just been released and I'm scratching my head wondering why. It has been a few years since the original NUnit framework was released, and .NET has evolved and continues to do so, so Jim (and others) felt it was time to clean the slate. What bothers me are some of the same things mentioned in Roy's blog.

    I do agree with some of the changes. For example, I'm now in the camp that [SetUp] and [TearDown] are evil and need to be abolished. Easy enough. I tell people not to use them, and if I really had to I could create a custom FxCop rule or something to prevent check-ins with them. However, in XUnit.NET they just don't exist. The problem I have with XUnit.NET is that other things have changed, and quite drastically (after downloading and playing with it for a couple of hours). [ExpectedException] is gone in favor of the more JUnit-like Assert.Throws() approach. This is fine, but it means more code changes to tests that already work.
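
    To illustrate the kind of churn involved, here's a sketch of the rewrite (the Account class and InsufficientFundsException are hypothetical, and the two styles wouldn't live in the same fixture):

    ```csharp
    // NUnit style: the expectation lives in an attribute, so the test passes
    // if ANY line in the body throws InsufficientFundsException.
    [Test]
    [ExpectedException(typeof(InsufficientFundsException))]
    public void Withdraw_should_fail_when_overdrawn()
    {
        Account account = new Account(10);
        account.Withdraw(100);
    }

    // XUnit.NET style: Assert.Throws pins the expectation to a single call,
    // but every existing [ExpectedException] test has to be rewritten this way.
    [Fact]
    public void Withdraw_should_fail_when_overdrawn_xunit()
    {
        Account account = new Account(10);
        Assert.Throws<InsufficientFundsException>(
            delegate { account.Withdraw(100); });
    }
    ```

    Assert.Throws is arguably the more precise assertion, but that precision is exactly the migration tax I'm complaining about.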

    On the positive side, there is extensibility, but I have to ask: why oh why can't we all get along? NUnit is great, but I've recently (the last couple of months) shifted over to MbUnit, which seems to have a little more life and some nice features I'm getting productivity out of. Why couldn't, say, the three big cheeses (Andrew, James, and Charlie) get together and say "Let's build a unified framework"? One unit testing framework for .NET to rule them all. True, it might drag along the Microsoft approach to backward compatibility, but when we're out there trying to get people to do Agile and write good code with unit tests, I think it's a good thing rather than having to tell them "Oh yeah, and you have to rewrite X% of your already-working-and-nothing-wrong-with-them tests". Sigh.

    Roy mentioned a few other things that irk me. No Assert.Fail. Well, the only reason I used it was to call it at the end of a test that had an [ExpectedException] tag so that it would fail if it fell through to there (I know, the test would probably fail on the missing exception anyway, so it might be moot). Still, dropping it completely isn't cool. The bigger thorn in my side is Assert.Equals being deprecated in favor of Assert.Equal<TypeToAssertAgainst>. That's just plain mean and goes back to rewriting X% of your working tests. That would be like changing some HTML tag just because some cool new syntax is available. I see no redeeming value in doing this based on looking at the code for both. The TheoryAttribute is nice, but I already have that (and more) with MbUnit's [RowTest] and extended attributes. Frankly, I see XUnit.NET as a framework that a) removes some features which some people use, b) forces you to rewrite perfectly working tests, and c) adds very little that you can't get elsewhere already.
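
    For the record, here's what MbUnit's row tests look like (a minimal sketch with a hypothetical Calculator class):

    ```csharp
    [TestFixture]
    public class CalculatorTests
    {
        // MbUnit runs the test once per [Row], binding each value set
        // to the method's parameters
        [RowTest]
        [Row(2, 3, 5)]
        [Row(-1, 1, 0)]
        [Row(0, 0, 0)]
        public void Add_should_sum_both_operands(int a, int b, int expected)
        {
            Assert.AreEqual(expected, new Calculator().Add(a, b));
        }
    }
    ```

    Each [Row] shows up as its own test case in the runner, which covers most of what TheoryAttribute offers.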

    Like Roy, I'm not about to jump on this bandwagon and I don't suggest you do either. If you're looking to start writing new tests with it, I guess it's okay (other than the things mentioned above), but that's for you to decide. If you're looking to replace NUnit or MbUnit with XUnit.NET, forget it. It's not worth it, as a lot of your tests might break or, worst case, you'll have to rewrite working code just to track the new syntax. It took me about 10 minutes to move 2,200 unit tests from NUnit to MbUnit in a project. Looking over the list of changes in XUnit.NET, I suspect it would be a significant effort to move those tests over. Not worth it in my books, and a new unit testing framework today, unless it's really fixing a real problem we have, doesn't make a heck of a lot of sense.

    I go back to asking why we can't all get together, build a unified testing framework for .NET, and be done with it. MbUnit (and NUnit for that matter) could have been given some attention, and perhaps the extensibility model could have been applied there. I'm not saying that we need one and only one unit testing framework, but we certainly don't need a dozen. CsUnit seems to have gone by the wayside in favor of NUnit and MbUnit. VSTSUnit (or whatever it's called, if it has a name) only exists because it's there and people don't know any different. I'm all for competition and adding something new to the mix, but only if there's value-add and we're not just telling people that the new Red is Blue. Now it's getting silly with all the choices because there's little bang for your buck with some of them (especially XUnit.NET). I personally feel like I'm getting shafted like I do at the video store with my DVD collection. DVD comes out in its normal form. Pay. DVD comes out in a Director's Cut. Pay. DVD comes out in an Extended Director's Cut, but this time it contains a special version with extra commentary you can't get anywhere else, and they decide to drop some of the additional features that came with the Director's Cut, so if you want both sets of features you have to buy both copies (plus the original, because it contains some special interview that got discarded along the way). Too many choices, too confusing for end developers to choose from and frankly, why should they? Slam them all together and call it a day.

  • Dopewars for the BlackBerry

    Huh? Dopewars? BlackBerry? That's not SharePoint. That's not Agile. Am I reading the right blog? Yup. You are. Really.

    It's been a while since I blogged here as I've been wrapped up in Baby 1.0 and a slight vacation off the grid in British Columbia (with 300 photos to pull off the camera and sift through). However, I thought I would post this here as it is development related, just not my normal train of incoherent thoughts and creations.

    A couple of months ago I got a new cell phone and decided on the BlackBerry Curve for a variety of reasons. Constant connection, compact, cheaper than the iPhone, etc. It was a toss-up between the BlackBerry and an HTC Windows Mobile phone. Of course the Windows Mobile phone would be nice since I'm a bit of a Microsoft guy, but the BlackBerry won out in features and reliability. Of course, what's the first thing you do when you get a SmartPhone? Why, hunt for games, silly rabbit.

    There I was, combing the BlackBerry archives looking for something good, when I came across DopeWars by Mark Sohm. I installed it and was immediately transported back to my BBS days, when I ranked high in the echelons of the digital drug dealer world. The game was fun and would let me kill the hour ride on the train to work. It was well done, easy to play, and stayed fairly true to the original, although with its own concessions to the UI constraints imposed by the BlackBerry. Still, it's fun and IMHO a good app.

    Anyway, the development bug got to me and I discovered that BlackBerry apps are Java based. Not a horrible language (beats the heck out of C and the PalmOS development I played with years ago), so I set out to create my development environment and try my hand at building BlackBerry apps (I'm still trying to build a CCTray for the BlackBerry). The SDK documentation was pretty good and contained a wealth of examples, but for me, I really need a good, complete example to pull all the bits together.

    That was a bigger challenge, as anything out there in open source land was pretty sad (from the Java/BlackBerry perspective). Lots of samples, but nothing that either a) worked or b) was a good example for understanding how to glue things together. Most samples were sparse, and the ones that were quite involved were so convoluted they looked like a poor man's attempt at converting Visual Basic code to C to Java (or some such silly thing).

    Then I thought about DopeWars. Maybe it was another fish in a sea of crap code, but then again maybe not. I hunted down the files but couldn't find the source anywhere. So, with a little Internet detective work, I tracked down the author, Mark Sohm, who just happens to work for Research In Motion (the makers of the BlackBerry). Someone who works for the company writing a game? Why not. Mark had put the app together in his spare time, and after a few emails I nudged him into releasing the source code as an example of writing a Java game for the BlackBerry.

    A few weeks later, and yet another new SourceForge site is born. You can download the binaries for DopeWars for your BlackBerry Curve (well, any Java-enabled BlackBerry with colour capabilities) from here. The source code is up there and available via anonymous SVN access (instructions here). I will be putting together a source code zip file for those that don't have (or don't want to install) a Subversion client. Note that there was already a "DopeWars for BlackBerry" site on SourceForge, but it consisted of C++ code that doesn't seem to compile and doesn't have a binary release, so I considered it abandoned and just made another one.

    I won't be doing too much development on the project (I barely have time for my current projects), but since it's open source (released under the MIT license) anyone can contribute to it. Feel free to fix bugs or add enhancements. Via SourceForge you can post suggestions in the forums, use the tracker (bugs, suggestions, etc.), or email me if you would like developer access to the project (just include a note about what your plans are for the code). There is one bug that bugs me, and that's how you finish the game. You end up finishing when the day turns 31 rather than at the end of the 31st day. Personally I think it should let you finish any transaction on the 31st day and then end. Otherwise, it's quite stable code and well separated (for a BlackBerry Java app) in terms of responsibility and separation of concerns.

    So give it a look-see if you have a BlackBerry, are interested in what a BB app might look like, or just want to poke around. Who knows, you might like it.

  • Problem with if block using interfaces

    We ran into a weird error today that I've never seen before. Here's a customer class and an interface to a strategy class (ICustomerStrategy) with two concrete implementations (GoodCustomer, BadCustomer):

        public class Customer
        {
            private readonly int _creditLimit;

            public Customer(int creditLimit)
            {
                _creditLimit = creditLimit;
            }

            public int CreditLimit
            {
                get { return _creditLimit; }
            }
        }

        public interface ICustomerStrategy
        {
        }

        internal class BadCustomer : ICustomerStrategy
        {
        }

        internal class GoodCustomer : ICustomerStrategy
        {
        }

    In the consumer class we have a method called GetStrategy that returns an ICustomerStrategy object.

    This statement using an if-block works fine:

        public class Strategy
        {
            public ICustomerStrategy GetStrategy(Customer customer)
            {
                if (hasGoodCredit(customer))
                {
                    return new GoodCustomer();
                }

                return new BadCustomer();
            }

            private static bool hasGoodCredit(Customer customer)
            {
                return customer.CreditLimit > 1000;
            }
        }

    However, this version using the conditional operator doesn't compile:

        public class Strategy
        {
            public ICustomerStrategy GetStrategy(Customer customer)
            {
                return hasGoodCredit(customer) ? new GoodCustomer() : new BadCustomer();
            }

            private static bool hasGoodCredit(Customer customer)
            {
                return customer.CreditLimit > 1000;
            }
        }

    The only way to make it work (thanks, ReSharper, for the tip!) is to cast the first object to the interface like so:

        public ICustomerStrategy GetStrategy(Customer customer)
        {
            return hasGoodCredit(customer) ? (ICustomerStrategy) new GoodCustomer() : new BadCustomer();
        }

    You can also make it work by using a base abstract class rather than an interface, but I prefer the interface-driven approach.

    So the question on my noodle is... why?

  • The Death of a Community?

    When I started SharePointKicks a little over a year ago, I thought we were starting a good thing. Gavin Joyce had created a small community around DotNetKicks and we started building up sister sites to it, which included SharePointKicks. The idea was solid: community-driven content, allowing you, dear reader, to bump up the stories you thought were worthy so they sat at the top, waiting to be read by others looking for the good stuff. Lots of signal, less noise.

    As with any community, it had to start somewhere. It was me posting various stories from the SharePoint blog-o-sphere and getting the word out. However, it's not something anyone should do full time. Do you think Kevin Rose feeds digg with all of its stories these days? I'm not comparing SPK to digg on the level of popularity but rather concept, since they're both the same idea, although SharePointKicks is more specialized for the IT crowd.

    After a month or two I felt the site had gathered up enough momentum and people would drive it. Guess I was wrong. Back in May I posted a note about how the community had fallen a little by the wayside and was suffering for it. At that time I thought there was still enough interest to keep the site going; however, today it doesn't seem like that's been the case.

    So here we are, looking into the chasm. The last popular post was submitted over a month ago. There are a few people that post and the items trickle in, but that's it. They trickle. Like a dried-up well with little life and less interest. And most of the entries are spam these days, which I've been cleaning out, but that's a time-consuming job I don't have the time for.

    From the looks of it, the SharePoint community (that would be you guys, myself included) doesn't seem to want to keep this beast alive. Or do you? Like the infamous Andy Kaufman vote on Saturday Night Live in the 80s, I have to ask the question. If there isn't an overwhelming "oh, don't take our SharePoint kicks away!" from a large number of people committed to seeing the site grow and flourish, I'm going to shut down the site. We'll redirect the domain name to DotNetKicks, which continues to move along nicely, but ultimately shut the site itself down. It's a shame, as I thought the site would make a difference for those wading through the SharePoint information out there, but I don't see that it has. I know there are plenty of news stories, but I can't keep up the pace of posting them all myself, so if the users of a community can't drive a community site like this, then I don't see the point in keeping it around.

    Let me know what you think via comments on this post or through email if that's your thing.

  • Being a better developer by studying other people's code

    The course is wrapped up and I have 2 more entries to get out. Next week I'm at the Edmonton User Group to present on Fit and DDD, so catch me there if you're in town. As the course winds down, my brain is still in overdrive thinking about things, both technical and personal. I'm going to post a follow-up to the "6 months to a better developer" spiel that's been going around, as I feel I didn't do the original question enough justice (that's justice, not Justice).

    One thing Scott Hanselman mentioned in his podcast on the subject was that you could get better by looking at other people's code. I'm a strong believer in this and have about 200MB of projects squirreled away that I've collected over the years. From time to time I dump them as I realize they're junk, but some examples shine on and I just keep them around as reminders and ideas. I'm always finding little development nuggets in code that I come across (even if the entire app is a gong show) and they go onto my USB drive for later reflection. When I was a graphic designer we would keep a physical file cabinet (I called mine a graveyard) where we just kept clippings. Ideas or designs or styles that we liked (or didn't like) would go there, and from time to time you would peek inside. Everything was organized by subject or topic, so sometimes I'd be doing a design that required teddy bears and out came the teddy bear folder from the graveyard to spark an idea. My USB drive is like that (although I'm trying to find a good way to organize it better than what folders can do).

    Anyway, here's a list of projects that I've looked over and find valuable to check out if you're looking for coding examples, ideas, and snippets. These are the little gems in the open source world that I refer to from time to time.

    CruiseControl.NET

    CC.NET is a great tool, but people keep forgetting it's open source and there's unit tests for everything. It's not a bad implementation and has some good ideas around separation of concern and handling providers. The code is clean and makes good use of interfaces almost everywhere.

    NUnit and MbUnit

    Both of these unit test frameworks are open source and their own source is a pretty good resource to check out. Good overall design and good use of various patterns like proxies, listeners, and reflection. You can imagine how hard it might be to unit test a unit test framework but these guys did it and did it right.

    Time and Money

    This is a utility library written in Java but it follows Eric Evan's Domain Driven Design principles and is a good reference for such. There's a C# version kicking around on SourceForge you can check out (but it's not completed yet and I can't track down a URL for it).

    dasBlog and SubText

    Blog engines that have evolved. As Scott points out, dasBlog is the abyss but there's some good stuff in there either he, Clemens, or the community has written. Especially check out the HTTP handlers if you've never written one before. Again, lots of unit tests here. I feel the model falls down a bit with the XML dependency and requirement for test files, but all in all they're both good resources to study.

    RSS Bandit

    Despite my aggravation with this project switching the UI every chance it gets and tying itself to commercial libraries, it's a pretty good example of a WinForms RSS reader. The implementation does suffer from memory bloat (running it just chews up as much memory as it can) but overall it's a good application design, and there are some nice things in here like HTML parsers, interacting with RSS, and a well-designed plug-in system.

    SharpDevelop

    I can't say this is a great example of code to learn by, but it's the only open source IDE out there that contains concepts you could follow (and for the most part, they're not that bad). You really can't crack open Eclipse and learn how it ticks, but SharpDevelop is written entirely in C# and pretty easy to follow.


    This is an odd name for a project, but it's an excellent reference app for an eCommerce web portal system, offering a forum, CMS-like functionality, blogs, polls, RSS feeds, shopping, and other goodies. A pretty good example of a nicely done ASP.NET website that can be used not only for studying, but for building your own site.

    DotNetKicks

    The codebase isn't too bad here (although I feel there's a little too much in the ASPX code-behind), but it's a good example of a digg-like community site. Lots of AJAX here with user controls, so if you're looking to build the next community, give kicks a look for ideas.

    CSLA .NET

    CSLA is an application framework for building business apps. Rocky Lhotka has done an excellent job with the codebase and while I'm not a fan of Active Record or domain objects that know how to persist themselves, the code is clean and a good tool to study from. Rocky keeps it updated frequently.

    StoryTeller

    While it's still only alpha, Jeremy Miller is doing a bang-up job on making this an indispensable tool for Fit development, using TDD to drive it out. Nice work to take a look at, even in its young stages.

    Omea Reader

    Omea Reader is supposed to come online with its source code released by JetBrains. To date they haven't released it, but they plan to. I emailed them a few months ago and they were still working on getting the code ready. Hopefully, with a company like JetBrains behind it, the codebase will be interesting to examine.

    Feel free to add your own to the list via the comments (I'm sure I missed a few key ones), and go ahead and challenge me on my findings here, as I may be off on some projects; however, this is my opinion and YMMV on your own findings.

  • Fonts and Colours

    A lot of people have been asking me about my Visual Studio settings, as I tend to favor a black background over the traditional white. Traditional. Heh. Remember the DOS days when it was all black? Guess that's just my old school showing through, but I prefer it and feel it's easier on the eyes. I use these settings in my blog posts as well as any presentations I do and my day-to-day work.

    You can download them from here. They're based on Scott Hanselman's settings file that he posted here, except mine uses only Consolas, the only programmer font you'll ever need. You'll need the Consolas font installed for it to work (though I think it'll fall back to Courier New if you don't). If you're on Vista, you've already got it. If you're on XP you can grab the Consolas Font Pack here.

    Works for me. YMMV.

  • Nothin but .NET - Tips and Tricks - Day 3

    Coffee, coffee, coffee. Oh, we need coffee. Okay, got it, now we're ready. It's been a crazy week and it's getting crazier. JP is coding at about 8,000 words a minute and using about 40,000 ReSharper shortcuts per minute now. He's also talking at about 10,000 words a minute. If you blur your eyes when looking at him, he begins to look like Neo in the Matrix. There is no mouse.

    Service Layers are Tasks

    I really like this paradigm for naming service layer classes. Rather than having something like CustomerService.GetAllCustomers(), it's named CustomerTasks.GetAllCustomers(). I've always used the ServiceXXX naming strategy, but it makes more sense to name things as tasks because, in reality, they are tasks. Tasks in the service layer to serve up information to external consumers. It's about readability, so choose whatever works for you.
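
    A minimal sketch of the naming, assuming a hypothetical ICustomerRepository sitting behind the task class:

    ```csharp
    public interface ICustomerRepository
    {
        IList<Customer> FindAll();
    }

    // "Tasks" naming: the class reads as the set of tasks the service layer
    // performs for external consumers, rather than a generic XXXService.
    public class CustomerTasks
    {
        private readonly ICustomerRepository repository;

        public CustomerTasks(ICustomerRepository repository)
        {
            this.repository = repository;
        }

        public IList<Customer> GetAllCustomers()
        {
            return repository.FindAll();
        }
    }
    ```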

    Recording Ordered Mocks

    We were working on the database today (gasp) but started mocking out a data gateway. I found it amusing that we couldn't record an ordered mock (but there was a way).

    Hey Oren, check this out (in case you didn't know). This block fails:

        [Test]
        public void Should_be_able_to_get_a_datatable_from_the_database_ordered_failure()
        {
            using (mockery.Ordered())
            {
                using (mockery.Record())
                {
                    Expect.Call(mockFactory.Create()).Return(mockConnection);
                    Expect.Call(mockConnection.CreateCommandForDynamicSql("Blah")).Return(mockCommand);
                    SetupResult.For(mockCommand.ExecuteReader()).Return(mockReader);
                    mockConnection.Dispose();
                }

                using (mockery.Playback())
                {
                    DataTable result = CreateSUT().GetADataTableUsing("Blah");
                    Assert.IsNotNull(result);
                }
            }
        }

    The error is:

    [failure] DBGatewayTest.Setup.Should_be_able_to_get_a_datatable_from_the_database_ordered_failure.TearDown
    TestCase 'DBGatewayTest.Setup.Should_be_able_to_get_a_datatable_from_the_database_ordered_failure.TearDown'
    failed: Can't start replaying because Ordered or Unordered properties were call and not yet disposed.
        Message: Can't start replaying because Ordered or Unordered properties were call and not yet disposed.
        Source: Rhino.Mocks

    In this case, the mock objects were created from interfaces that inherit from IDisposable. However, it was a simple change to put the Ordered call inside the Record block (because we don't care about ordering on playback). This slight change now works:

        [Test]
        public void Should_be_able_to_get_a_datatable_from_the_database()
        {
            using (mockery.Record())
            {
                using (mockery.Ordered())
                {
                    Expect.Call(mockFactory.Create()).Return(mockConnection);
                    Expect.Call(mockConnection.CreateCommandForDynamicSql("Blah")).Return(mockCommand);
                    SetupResult.For(mockCommand.ExecuteReader()).Return(mockReader);
                    mockConnection.Dispose();
                }
            }

            using (mockery.Playback())
            {
                DataTable result = CreateSUT().GetADataTableUsing("Blah");
                Assert.IsNotNull(result);
            }
        }

    Just in case you ever run into this problem one day.

    More Nothin

    JP runs this 5-day intensive course about once a month; it stretches people to think outside the box of how they typically program and delve into realms they potentially haven't even thought about yet. For the next little while (he's a busy dude) he's going to be a travelin' dude. In September he'll be in London, England; in October he'll be in New York City; and other locations are booked up into the new year.

    If you want to find out more details about the course itself, check out the details here and be sure to get on board with the master.

  • Being descriptive in code when building objects

    As we're working through the course, I've been struggling with some of the constructors of objects we create. For example, in the first couple of days the team was writing unit tests around creating video game objects. This would be done by specifying a whack of parameters in the constructor:

        [Test]
        public void Should_create_video_game_based_on_specification_using_classes()
        {
            VideoGame game = new VideoGame("Gears of War",
                2006,
                Publisher.Activision,
                GameConsole.Xbox360,
                Genre.Action);

            Assert.AreEqual(GameConsole.Xbox360, game.GameConsole);
        }

    This is pretty typical, as we want to construct a fully qualified object, and there are two options to do this: either via a constructor or by setting properties on the object after the fact. Neither of these really appeals to me, so we talked about using the Builder pattern along with being fluent in writing a DSL to create domain objects.
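
    For reference, the property-setting option looks like this (a sketch; the object sits half-initialized between statements):

    ```csharp
    // every property must be set by hand, and nothing stops you from
    // forgetting one and using the game half-built
    VideoGame game = new VideoGame();
    game.Name = "Gears of War";
    game.YearPublished = 2006;
    game.Publisher = Publisher.Activision;
    game.GameConsole = GameConsole.Xbox360;
    game.Genre = Genre.Action;
    ```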

    For the same test above, creating a video game domain object this way reads much better to me than parameterizing my constructors (which might also mean chaining constructors if I don't want to break code later):

        [Test]
        public void Should_create_video_game_based_on_specification_using_builder()
        {
            VideoGame game = VideoGameBuilder.
                StartRecording().
                    CreateGame("Gears of War").
                    ManufacturedBy(Publisher.Activision).
                    CreatedInYear(2006).
                    TargetedForConsole(GameConsole.Xbox360).
                    InGenre(Genre.Action).
                Finish();

            Assert.AreEqual(GameConsole.Xbox360, game.GameConsole);
        }

    Agree or disagree? The nice thing about this is that I don't have to specify everything. For example, I could create a video game with just a title and publisher like this:

        VideoGame game = VideoGameBuilder.
            StartRecording().
                CreateGame("Gears of War").
                ManufacturedBy(Publisher.Activision).
            Finish();

    This avoids the problem I have when using constructors: I don't need an overload for every combination of parameters.
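
    For contrast, here's a sketch of the constructor chaining the builder saves me from (the defaults here, like Publisher.Unknown, are hypothetical):

    ```csharp
    // every optional value means another overload chained to the next
    public VideoGame(string name)
        : this(name, 0) { }

    public VideoGame(string name, int yearPublished)
        : this(name, yearPublished, Publisher.Unknown) { }

    public VideoGame(string name, int yearPublished, Publisher publisher)
    {
        Name = name;
        YearPublished = yearPublished;
        Publisher = publisher;
    }
    ```

    With the builder, adding a new optional value means adding one method instead of yet another overload.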

    Here's the builder class that spits out our VideoGame object. It's nothing more than a bunch of methods that return the VideoGameBuilder, plus a Finish() method that returns our object created with all its values:

        internal class VideoGameBuilder
        {
            private VideoGame gameToBuild;

            public VideoGameBuilder()
            {
                gameToBuild = new VideoGame();
            }

            public static VideoGameBuilder StartRecording()
            {
                return new VideoGameBuilder();
            }

            public VideoGameBuilder CreateGame(string title)
            {
                gameToBuild.Name = title;
                return this;
            }

            public VideoGameBuilder ManufacturedBy(Publisher manufacturer)
            {
                gameToBuild.Publisher = manufacturer;
                return this;
            }

            public VideoGameBuilder CreatedInYear(int year)
            {
                gameToBuild.YearPublished = year;
                return this;
            }

            public VideoGameBuilder TargetedForConsole(GameConsole console)
            {
                gameToBuild.GameConsole = console;
                return this;
            }

            public VideoGameBuilder InGenre(Genre genre)
            {
                gameToBuild.Genre = genre;
                return this;
            }

            public VideoGame Finish()
            {
                return this.gameToBuild;
            }
        }

    This makes my tests read much better but even in say a presenter or service layer, I know exactly what my domain is creating for me and I think overall it's a better place to be for readability and maintainability.

    This isn't anything new (but it's new to me and maybe to my readers). You can check out the start of it via Martin Fowler's post on fluent interfaces, and the follow-up post early this year on ExpressionBuilder, which matches what the VideoGameBuilder class is doing above. And of course Rhino Mocks uses this type of syntax. I guess if I'm 6 months behind Martin Fowler in my thinking, it's not a bad place to be.

  • Nothing but .NET - Tips and Tricks - Day 2

    It's the second day here with JP and crew. Yes, posting on the 4th day isn't quite right, is it, but 12 hours locked up in a room with JP is enough to drive anyone to madness. Really though, it's just completely draining, and I have an hour transit ride and a 45-minute drive to get home, so needless to say, sometimes I get home and just crash. Anyways, expect days 3 and 4 to follow later today.

    Okay, so back to the course. After the gang's morning trip to Starbucks we're good to go for the day (and it's going to be a long one).

    Decorators vs. Proxies

    Small tip for those using these patterns: decorators must forward their calls onto the objects they're decorating; proxies don't have to. So if you have some kind of proxy (say, to a web service or a security service), the proxy can decide whether or not to forward the call on to the system. With decorators, all calls get forwarded, so decorators need to be really dumb, and the consumer needs to know that whatever it sends the decorator will get passed down the line no matter what. Good to know.
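    A minimal sketch of that difference (the IMessageSender, SmtpSender, and related names here are my own illustrative inventions, not anything from the course):

    ```csharp
    using System;

    public interface IMessageSender
    {
        void Send(string message);
    }

    public class SmtpSender : IMessageSender
    {
        public void Send(string message)
        {
            Console.WriteLine("SMTP: " + message);
        }
    }

    // A test spy, handy for verifying forwarding behavior.
    public class CountingSender : IMessageSender
    {
        public int Calls;
        public void Send(string message) { Calls++; }
    }

    // Decorator: ALWAYS forwards to the object it wraps, adding behavior around the call.
    public class LoggingSenderDecorator : IMessageSender
    {
        private readonly IMessageSender inner;

        public LoggingSenderDecorator(IMessageSender inner)
        {
            this.inner = inner;
        }

        public void Send(string message)
        {
            Console.WriteLine("About to send: " + message);
            inner.Send(message); // the decorator has no choice -- the call always goes through
        }
    }

    // Proxy: MAY decide not to forward at all (here, a security check).
    public class SecuritySenderProxy : IMessageSender
    {
        private readonly IMessageSender inner;
        private readonly bool callerIsAuthorized;

        public SecuritySenderProxy(IMessageSender inner, bool callerIsAuthorized)
        {
            this.inner = inner;
            this.callerIsAuthorized = callerIsAuthorized;
        }

        public void Send(string message)
        {
            if (!callerIsAuthorized)
            {
                return; // the proxy can swallow the call entirely
            }
            inner.Send(message);
        }
    }
    ```

    Consumers hold an IMessageSender either way; the difference is purely in the forwarding contract.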

    ReSharper Goodness

    I'm always happy seeing new stuff with ReSharper, and I don't know why I didn't get this earlier: NAnt support in Visual Studio with ReSharper. Oh joy, oh bliss. I was struggling with NAntExplorer, which never really worked, and I never realized ReSharper provides NAnt support until now. Load up your build script and hit Ctrl+F12 to see all the items, or do a rename of a task or property. Awesome!


    Builds and Local Properties

    A new tip I picked up is using local property files and merging them with build scripts via NAnt. Traditionally it's always been a challenge dealing with individual developers' machines and setups. Everyone is different as to what tools they have installed, where things are installed, and even what folders they work in. A clever trick I've done (at least I thought it was clever) was to create separate environment configurations with a naming standard, then use post-build events (NAnt would have worked as well) to copy them as the real app.config or web.config files. This is all fine and dandy, but it still has limitations. I think I've found utopia with local property files now.

    So you have your build script but each developer has a different setup for their local database. Do you create a separate file for each user? No, that would be a maintenance nightmare. Do you force everyone to use the same structure and paths? No, that would restrict creativity and potentially require existing setups to change. The answer my friend is blowing in the wind, and that wind is local property files merged into build scripts to produce dynamic configurations.

    Typically you would create, say, an app.config file containing your connection string to a database. Even with everyone on board and using a connection string like "localhost", you can't ensure everyone has the tools in the same place. This is where a local property file comes into play. For your project, create a local properties template file. It can contain all the settings you're interested in localizing for each developer and might go like this:

        <?xml version="1.0"?>
        <properties>
          <property name="" value="C:\program files\microsoft sql server\90\tools\binn\" />
          <property name="osql.connectionstring" value="-E"/>
          <property name="osql.exe" value="${}\osql.exe" />
          <property name="initial.catalog" value="NothinButDotNetStore"/>
          <property name="config.connectionstring" value="data source=(local);Integrated Security=SSPI;Initial Catalog=${initial.catalog}"/>
          <property name="devenv.dir" value="C:\program files (x86)\microsoft visual studio 8\Common7\IDE"/>
          <property name="" value="NT Authority\Network Service"/>
          <property name="database.provider" value="System.Data.SqlClient" />
          <property name="database.path" value="C:\development\databases" />
          <property name="system.root" value="C:\windows\"/>
        </properties>

    In this case I have information like where my tools are located, what account to use for security, and what my connection string is. Take the template file, rename it to the local properties filename, then customize it for your own environment. This file does not get checked in (the template does, but not the customized file).

    Then inside your NAnt build script you'll read this file in and merge its properties into the main build script. Any properties with the same name will be overridden and used instead of what's in your build script, so this gives you the opportunity to create default values (for example, a default database name). If the developer doesn't provide a value, the default will be used. The NAnt task is simple: it just checks whether the local properties file is there and then uses it:

        <!-- include the machine specific properties file to override machine specific defaults -->
        <if test="${file::exists('')}">
            <echo message="Loading" />
            <include buildfile="" />
        </if>

    That's it. Now each developer can have everything set up their own way. For example, I only have SQL Express installed but the others have SQL Server 2005 installed. We're all using the same build scripts because my local properties file contains the path to the osql.exe that got installed with SQL Express, and my connection string connects to MACHINENAME\SQLEXPRESS (the default instance name).

    Finally, these values are used to spit out app.config or web.config files. The key thing here is that we don't have an app.config file inside the solution. Only a template is there, which is merged with the properties from the build script to generate dynamic config files specific to each environment (and again, not checked in to the build).
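    That merge step might look something like this in the build script. The target name and App.config.template are my own placeholder names, but NAnt's <copy> task with a <filterchain> containing an <expandproperties /> filter does exactly this kind of ${property} expansion:

    ```xml
    <!-- Expand ${...} properties in the template to produce the real config file. -->
    <!-- App.config.template is a hypothetical name; the template is checked in, App.config is not. -->
    <target name="generate.config">
      <copy file="App.config.template" tofile="App.config" overwrite="true">
        <filterchain>
          <expandproperties />
        </filterchain>
      </copy>
    </target>
    ```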

    BTW, the whole build process, build scripts and differences from deployments could be an entire blog post so I don't think I'm finished with this topic just yet.


    I'm a huge fan of Notepad++ and use it all the time. I have it configured on my system to be the default editor, default viewer for source code in HTML pages, and well... the default everything. So I was a little surprised when our build ran and MbUnit launched Notepad. Nowhere could I find a place to change this, and I'm told it's using the default editor. That's a little odd to me because Notepad++ is my default editor. So maybe someone out there has info on how to get it to launch my editor instead of Notepad? Either that or I need to physically replace Notepad.exe with Notepad++.exe (which can be done; instructions are on the NP++ site).

    New Cool Tool of the Day

    Console2. When I launch my command prompt, it's a giant glowing screen of goodness. I'm a bit of a console junkie as I find I can get a lot more done by jumping around in the console rather than hunting and pecking in something like Windows Explorer. I got a tool from JP called Console2 by Marko Bozikovic which makes my command line experience that much better. It's not really a cmd.exe replacement but rather a shell that runs cmd.exe. However the really cool thing here is that a) it can launch any shell you want (like PowerShell) b) supports transparency (even under Windows XP) and c) it has a tabbed interface. It's like a command line ala Firefox (or that other browser that has tabs).


    Now I don't have to launch a whole bunch of consoles, just one will do, and I can tab back and forth and even mix things up with cmd.exe and PowerShell (insert whatever shell you like here as well).

    Wrap up

    Another draining day, but full of good things to inject into your head. I can't give enough praise for this course; although things are moving along much slower than I would like, it's still a great experience and I highly recommend it for everyone, even seasoned pros.

    As I mentioned before, I'm trying to get away from the mouse. I have always found the mouse to be a problem. It's the context switching: 1. take hand away from keyboard, 2. move hand to mouse, 3. move mouse, 4. move hand back to keyboard. Bleh. Yesterday was a mostly-keyboard day; today I'm going commando with all keyboard, as I have my mouse locked up and hopefully won't need it at all.

    More Nothin

    JP is doing this 5-day intensive course, about one per month, that stretches people to think outside the box of how they typically program and delve into realms they've potentially not even thought about yet. For the next little while (he's a busy dude) he's going to be a travelin' dude. In September he'll be in London, England; in October he'll be in New York City; and other locations are booked up into the new year.

    If you want to find out more details about the course itself, check out the details here and be sure to get on board with the master. 

  • Nothin but .NET course - Tips and Tricks - Day 1

    I'm currently popping in and out of Jean-Paul Boodhoo's Nothin but .NET course this week and having a blast. We brought JP in-house to the company I'm at right now for his .NET training course, which is arguably the best .NET course out there. Period.

    I don't claim to be an expert (at least I hope I've never claimed that) and I'm always looking for ways to improve myself and add new things to my gray matter that I can hopefully recall and use later. Call them best practices, call them tips and tricks, call them rules to live by. Over the course of the week I'll be blogging each day about what's going on so you can live vicariously through me (but you really need to go on the course to soak in all of JP and his .NET goodness).

    So here we go with the first round of tips:

    File -> New -> Visual Studio Solution

    This is a cool thing I wasn't aware of (and it makes me one step closer to not using the mouse, which is a good thing). First, we have a basic empty Visual Studio .sln file. This is our template for new solutions. Then create a new registry file (VisualStudioHack.reg or whatever you want to call it) with these contents:

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\.sln\ShellNew]
    "FileName"="Visual Studio Solution.sln"

    Double-click the .reg file and merge the changes into your registry. The result is that you'll now have a new option in your File -> New menu called "Microsoft Visual Studio Solution". This is based on the template (the empty .sln file) that you provide, so you can put whatever you want in here, but it's best to just snag the empty Visual Studio Solution template that comes with Visual Studio. Very handy when you just need a new solution to start with and don't want to start up Visual Studio and navigate through all the menus to do this.


    MbUnit rocks over NUnit. This was my first exposure to the row test, and while you can abuse the feature, it really helps cut down on writing a ton of tests or (worse) one test with a lot of asserts.

    In NUnit let's say I have a test like this:

        [Test]
        public void ShouldBeAbleToAddTwoPositiveNumbers()
        {
            int firstNumber = 2;
            int secondNumber = 2;
            Calculator calculator = new Calculator();
            Assert.AreEqual(firstNumber + secondNumber, calculator.Add(firstNumber, secondNumber));
        }

    And I need to test boundary conditions (a typical thing). So I've got two options. The first is to write one test per condition. The second is to write a single test with multiple asserts. Neither is really appealing. Having lots of little tests is nice, but a bit of a bear to maintain. Having a single test with lots of asserts means I have to re-organize my test (hard-coding values) and do something like this in order to tell which assert failed:

        [Test]
        public void ShouldBeAbleToAddTwoPositiveNumbers()
        {
            Calculator calculator = new Calculator();
            Assert.AreEqual(2 + 2, calculator.Add(2, 2), "could not add positive numbers");
            Assert.AreEqual(10 + -10, calculator.Add(10, -10), "could not add negative number");
            Assert.AreEqual(123432 + 374234, calculator.Add(123432, 374234), "could not add large numbers");
        }

    Or something like that, but you get the idea. Tests become ugly looking and they feel oogly to maintain. Enter MbUnit and the RowTest. The test above becomes a parameterized test that looks like this:

        [RowTest]
        public void ShouldBeAbleToAddTwoPositiveNumbers(int firstNumber, int secondNumber)
        {
            Assert.AreEqual(firstNumber + secondNumber, calculator.Add(firstNumber, secondNumber));
        }

    Now I can simply add more Row attributes, passing in the various boundary conditions I want to check like so:

        [Row(2, 2)]
        [Row(10, -10)]
        [Row(123432, 374234)]
        [RowTest]
        public void ShouldBeAbleToAddTwoPositiveNumbers(int firstNumber, int secondNumber)
        {
            Assert.AreEqual(firstNumber + secondNumber, calculator.Add(firstNumber, secondNumber));
        }

    That's cool (and almost enough to convince me to switch), but what's uber cool about this? In the MbUnit GUI runner it actually looks like separate tests, and if a Row fails on me, I know exactly which one it was. It's a failing test out of a set rather than a single test with one failing assert out of many.


    As with any tool you can abuse this, so don't go overboard. I think for boundary conditions, and anywhere your tests begin to look crazy, this is an awesome option. I haven't even scratched the surface with MbUnit and its database integration (I know, databases in unit tests?) but at some point you have to do integration testing, and what better way to do it than with unit tests. More on that in another blog.

    Subject Under Test

    My first exposure to this term was Gerard Meszaros' excellent book xUnit Test Patterns, and he uses it throughout. It makes sense, as any unit test is going to be testing a subject, so we call it the Subject Under Test. One naming convention JP uses in his tests is this:

        private ICalculator calculator;

        [SetUp]
        public void SetUpCalculatorTest()
        {
            calculator = CreateSUT();
        }

        private ICalculator CreateSUT()
        {
            return new Calculator();
        }

    So basically every test fixture has a method called CreateSUT() (where appropriate) which creates an object of whatever type you need and returns it. I'm not sure this replaces the ObjectMother pattern I've been using (and it's really not a pattern but more of a naming convention) but again, it's simple and easy to read. A nice little tidbit to pick up.

    In doing this, my mad ReSharper skills got the best of me. Normally I would start with the CreateSUT method, which in this case returns an ICalculator by instantiating a new Calculator class. Of course there are no classes or interfaces by these names, so there are two options here. One is to write your test and worry about the creation later; the other is to quickly create the implementation. At some point you're going to have to create it anyways in order to compile, but I like to leave that until the last step.

    Under ReSharper 2.x you could write your test line by writing CreateSUT() then pressing Ctrl+Alt+V (introduce variable). However, since ReSharper 2.5 (and it's still there in 3.x) you can't do this: ReSharper can't create the ICalculator instance (in memory) in order to walk through the method table (which would give you IntelliSense). So the simple thing is to write the CreateSUT() method and just whack Alt+Enter a few times to create the class and interface.

    Thread safe initialization using a delegate

    I have to say I never looked into how to initialize event handlers in a thread-safe way. I've never had to do it in the past (I don't work with events and delegates a lot) but this is a great tip. If you need to initialize an event handler and do it in a thread-safe way, here you go:

    private EventHandler<CustomTimerElapsedEventArgs> subscribers = delegate { };

    It's the simple things in life that give me a warm and fuzzy each day.
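    The payoff is that you can raise the event without the usual copy-to-a-local-and-null-check dance, since the delegate is never null. A small sketch to show the idea (the CustomTimer class here is my own illustration, not JP's code; only the CustomTimerElapsedEventArgs name comes from the course):

    ```csharp
    using System;

    public class CustomTimerElapsedEventArgs : EventArgs
    {
    }

    public class CustomTimer
    {
        // Initialized with an empty anonymous delegate, so subscribers is never null.
        private EventHandler<CustomTimerElapsedEventArgs> subscribers = delegate { };

        public event EventHandler<CustomTimerElapsedEventArgs> Elapsed
        {
            add { subscribers += value; }
            remove { subscribers -= value; }
        }

        public void SimulateElapsed()
        {
            // Without the empty-delegate trick this would need:
            //   EventHandler<CustomTimerElapsedEventArgs> handler = subscribers;
            //   if (handler != null) handler(this, args);
            subscribers(this, new CustomTimerElapsedEventArgs());
        }
    }
    ```

    Raising through a delegate that starts out empty also avoids the race where another thread unsubscribes the last handler between the null check and the invocation.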

    Event Aggregators and Declarative Events

    We spent the better part of the day looking at events, delegates, aggregators, and whatnot. This is apparently new content he's just added to the course, and hey, that's what Agile is about: adapting to change.

    Anyways, as we dug into it I realized how little I knew about events and how dependent I was on the framework to help me with those object.EventName += ... snippets I would always write. Oh how wrong that was, as we got into loosely coupled event handlers (all done without dynamic proxies, which come later). It's pretty slick, as you can completely decouple your handlers, and this is a good thing. For example, here's a simple test that just creates an aggregator, registers a subscriber, and checks to see if the event fired.

    First the test:

        [Test]
        public void ShouldBeAbleToRegisterASubscriberToANamedEvent()
        {
            IEventAggregator aggregator = CreateSUT();
            aggregator.RegisterSubscriber<EventArgs>("SomethingHappened", SomeHandler);
            eventHandlerList["SomethingHappened"].DynamicInvoke(this, EventArgs.Empty);
            Assert.IsTrue(eventWasFired);
        }

    Here's the delegate that will handle firing the event:

        private void SomeHandler(object sender, EventArgs e)
        {
            eventWasFired = true;
        }

    And here's part of the aggregator class which simply manages subscribers in an EventHandlerList:

        public class EventAggregator : IEventAggregator
        {
            private EventHandlerList allSubscribers;

            public EventAggregator(EventHandlerList allSubscribers)
            {
                this.allSubscribers = allSubscribers;
            }

            public void RegisterSubscriber<ArgType>(string eventName, EventHandler<ArgType> handler)
                where ArgType : EventArgs
            {
                allSubscribers.AddHandler(eventName, handler);
            }

            public void RaiseEvent<ArgType>(string eventName, object sender, ArgType empty) where ArgType : EventArgs
            {
                EventHandler<ArgType> handler = (EventHandler<ArgType>) allSubscribers[eventName];
                handler(sender, empty);
            }

            public void RegisterEventOnObject<ArgType>(string globalEventName, object source, string objectEventName)
                where ArgType : EventArgs
            {
            }
        }

    Again, basic stuff, but it lets you decouple your events and handlers so they don't have intimate knowledge of each other, and this is a good thing.

    A very good thing.
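    To see the decoupling outside the test, here's a hedged usage sketch. It repeats a trimmed copy of the aggregator above so it compiles standalone, then raises the named event through RaiseEvent rather than poking the EventHandlerList directly; the Example class and the anonymous-delegate subscriber are my own illustration:

    ```csharp
    using System;
    using System.ComponentModel;

    // Trimmed copy of the aggregator above so the sketch is self-contained.
    public class EventAggregator
    {
        private EventHandlerList allSubscribers;

        public EventAggregator(EventHandlerList allSubscribers)
        {
            this.allSubscribers = allSubscribers;
        }

        public void RegisterSubscriber<ArgType>(string eventName, EventHandler<ArgType> handler)
            where ArgType : EventArgs
        {
            allSubscribers.AddHandler(eventName, handler);
        }

        public void RaiseEvent<ArgType>(string eventName, object sender, ArgType args)
            where ArgType : EventArgs
        {
            EventHandler<ArgType> handler = (EventHandler<ArgType>) allSubscribers[eventName];
            handler(sender, args);
        }
    }

    public class Example
    {
        public static void Main()
        {
            EventAggregator aggregator = new EventAggregator(new EventHandlerList());

            // Subscriber side: knows only the event name, not the publisher.
            aggregator.RegisterSubscriber<EventArgs>("SomethingHappened",
                delegate(object sender, EventArgs e) { Console.WriteLine("Handled SomethingHappened"); });

            // Publisher side: knows only the event name, not the subscriber.
            aggregator.RaiseEvent("SomethingHappened", null, EventArgs.Empty);
        }
    }
    ```

    Note the event "name" is really just the EventHandlerList key, so publisher and subscriber need nothing in common beyond that string.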

    BTW, as we extended this we sort of built CAB-like functionality on the first day and in a few hours (only event publishers and subscribers, though). We just added two attribute classes to handle publishers and subscribers (rather than subscribing explicitly). It wasn't anywhere near as complete as Jeremy Miller's excellent Build Your Own CAB series, but nonetheless it was a simple approach to a common problem, and for my puny brain, I like simple.

    It was fun and appropriate, as the team is using CAB, so the guys in the room are probably better positioned to understand CAB's EventBroker now.

    Other Goodies and Tidbits

    JP uses a nifty macro (that I think I'll abscond and try out myself) where he types the test name in plain English but with spaces like so:

    Should be able to respond to events from framework timer

    Then he highlights the text and runs the macro which replaces all spaces with underscores:


    Should_be_able_to_respond_to_events_from_framework_timer

    I'm indifferent about underscores in method names, and it's a no-no in production code, but for test code I think I can live with it. It reads well and looks good on the screen. Using a little scripting, reflection, or maybe NDepend, I could spit out a nice HTML page (reversing the underscores back to spaces) and form complete sentences from my test names. This can be a good device to use with business users or other developers as you're trying to specify intent with the tests. I have to admit it beats the heck out of this:


    ShouldBeAbleToRespondToEventsFromFrameworkTimer

    Doesn't it?
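    That reverse transformation is about two lines of code. A sketch (the TestNameReporter and ToSentence names are mine) of turning underscored test names back into sentences for a report:

    ```csharp
    using System;

    public class TestNameReporter
    {
        // Turn Should_be_able_to_... back into a readable sentence.
        public static string ToSentence(string testMethodName)
        {
            return testMethodName.Replace('_', ' ');
        }

        public static void Main()
        {
            // In a real report you'd load the test assembly with reflection and walk
            // its [Test]-decorated methods; this just demonstrates the round trip.
            string name = "Should_be_able_to_respond_to_events_from_framework_timer";
            Console.WriteLine(ToSentence(name));
        }
    }
    ```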

    Wrap up

    The first day has been a whirlwind for the team as they're just soaking everything in and making sense of it. For me, I'm focusing on using my keyboard more and getting more ReSharper shortcuts embedded into my skull. I've now got MbUnit to plow through: check out the other fixtures, see what makes it tick, and take the plunge from NUnit if it makes sense (which I think it does, as I can still use my beloved TestDriven.NET with MbUnit as well!).

    More Nothin

    JP is doing this 5-day intensive course, about one per month, that stretches people to think outside the box of how they typically program and delve into realms they've potentially not even thought about yet. For the next little while (he's a busy dude) he's going to be a travelin' dude. In September he'll be in London, England; in October he'll be in New York City; and other locations are booked up into the new year.

    If you want to find out more details about the course itself, check out the details here and be sure to get on board with the master.

  • Looking for some Code Review Tools

    I'm looking around for some tools to help us with code reviews. Currently code reviews are set up every couple of weeks; reviewers are sent into the codebase to pick what they want to review (we had been giving them chunks, but I felt that put us at a disadvantage in picking out the good code and not seeing the bad), then during the review we'll just have Visual Studio up, walking through the solution. ReSharper is great and makes navigation a breeze, but I'm looking for something better and wondering what other people use. Ideally we only want to review code that's changed since its last review, so something that would give us visual diffs from a date might be useful (but I haven't quite figured out how that would work logistically). So what tools are you using for your code reviews to help automate or speed up the process? Printouts and projectors, or is it a little more sophisticated than that?

  • eScrum, part deux

    Microsoft has come out with an update to what I consider their Gong Show release of eScrum. The new version (v1.0 as opposed to v1, go figure) has the following fixes:

    • MSI provides prerequisites separately during installation
    • Provided start menu link to documentation, to locate Help 
    • Provided choice to install only the process template, and/or the web application 
    • Rebuilt with latest Ajax Control Library Version 
    • Automated copy of XSS library in MSI
    • Automated creation and configuration of application pool and website in MSI
    • Automated upload of process template in MSI
    • Automated share point template setup in MSI
    • Automated creation/configuration of cache and log paths in MSI

    There are still some manual steps involved but it's a vast improvement and it all sounds good, I just can't tell you first hand right now if it's all it's cracked up to be.

    You can download the package from here.

  • At least she's not named Longhorn

    Lots of great feedback (good, bad, and ugly) on our naming strategy with our daughter. I'm still compiling the list as they come in and will be posting a large assortment of links with the various humor people have added to it. Saw a new one this morning which was well written but accompanied by this cartoon which was bang-on with our experience (although I didn't survey you guys to determine her name).

    Skyping Baby Names

    Hmmm... I wonder if various recent baby names (Google, 2.0, etc. including our own) spawned this cartoon? What's really funny is the notes from people saying this was all a marketing ploy. If that were true, I'm still waiting for my cheque Microsoft!

  • FIT and DDD in Edmonton

    I'm giving a talk to the Edmonton .NET Users Group at the end of the month (man how time flies) about how to use FIT with Domain Driven Design. Should be a blast as we're heading up for a couple of days to soak in the shopping experience (what else do you do in Edmonton?). Here's the abstract:

    Many development teams undertake projects by diving straight into writing code, maybe even doing Test Driven Development based off of requirements. For business scenarios, there exists an excellent tool that can help the team both understand the domain and produce testable requirements as the understanding about the domain grows. What does it take to translate business requirements from spec to tests and how can you ensure that the domain is valid and meets the needs of the system?

    This session will outline the importance of FitNesse and the FIT library by showing how to drive out the foundations of domain-driven design, and will walk through building out testable requirements that a cross-functional team can produce in a short amount of time. We will explore the identification of domain concepts and how they "fit" in with the testing model through code and web pages.

    More information about the session with directions, times, and all that boring stuff on their website here. See you there!

  • SCSFContrib First Release

    The SCSFContrib gang (Kent Boogaart, Ward Bell, Chris Holmes, Ezequiel Jadib, Matias Woloski, and myself) got our first release out after a month of forming, storming, and norming. This includes:

    • A full implementation of the UI layer for CAB done in WPF with 100% code coverage in tests
    • The Outlook Bar Workspace with a new Quickstart application
    • A WPF CAB Visualizer that lets you see behind the scenes how WorkItems, SmartParts, Workspaces, and Services fit together
    • The BankTeller Quickstart rewritten using SCSF 2007 and WinForms SmartParts
    • The BankTeller Quickstart rewritten using SCSF 2007 and WPF SmartParts

    Not bad for a first release and a month of work. Hopefully we can maintain the momentum and crank out releases every month (I'll probably get beat up for that comment from the team).

    You can grab the release here from CodePlex.

  • Being a Better Developer... in 6 months

    I've been a little behind in blog-land (both reading and writing, gasp) as other things have occupied my time. However, my good friend Justice tagged me in his update on how he's going to be a better developer over the next 6 months, so naturally I can only reply, and tag 5 more people. Yes, this is like the tell-5-things-about-yourself meme but with a geek slant, and I think it's more interesting this way. So here goes...


    I have a list of books I normally keep on my shelf. These include Domain-Driven Design, Test-Driven Development, and such, and I plan to re-read some of these in the next 6 months (to me, something like Applying Domain-Driven Design and Patterns by Jimmy Nilsson is a book you can read over and over, but then I'm a geek). For new stuff, I'm consuming Windows Presentation Foundation Unleashed by Adam Nathan right now, and I'll round it out to say that I'm prepared to soak up 3 new books over the next 6 months. Sometimes I can gobble up a book in one sitting, but for the most part I suck one down then go back to it over the course of a few weeks to really dig into the nitty gritty parts (and maybe write some small spikes to really grok a subject, like WPF will be for me).


    Other than my blog, I intend to have a few new articles published that are currently in the works (MSDN Magazine, O'Reilly, etc.) and generally keep the pace I have on my blog. I'm no Oren and you'll never see 150 posts in a month from me (other than "All work and no play makes Bil a dull boy"), but I intend to keep up the pace to around 20 good posts a month. This is an increase from last year and who knows, maybe with some hallucinatory inciting drugs, I *can* get posting 150 times a month with some choice stuff.


    The speaking schedule this year is a little slow thanks to baby and family (which is the way it should be) but I'm still heading out to amuse and bemuse you from time to time. The next 6 months will be a few user groups (Edmonton and Calgary specifically, but I'm open to others and you can contact me through the MSDN Canada Speakers Bureau, which is painfully out of date) and I'm just submitting some talks to DevTeach Vancouver. Hopefully I'll get in, but the competition is tough and I might not make it, depending on how many heavy-hitting Agile guys are out there who are far better than me (and there are a lot of them). I don't consider myself a great speaker, but I have passion and try to emote that through my presentations with a little humor. The best advice I got from some of the top speakers out there is to keep doing it. So if you're like me, get out there and just do it. Speak at your own company first, or at user groups, then move up to the conferences if you like it and you think it's your thing. I get a lot of satisfaction when someone comes away from a session I've done and uses it in their day job. Maybe you can get the same.


    I have my own personal vanity site (down right now thanks to an unsuccessful DNN upgrade) and it really needs to be rebuilt. It's been such a pain as I try to find content, yet I have 200 gigabytes of code, documents, snippets, articles, tools, projects, and games that I've written over the years that *needs* to get out there. Why? Because I like to share and let people dissect what I do. What doesn't kill you makes you stronger, type of thing. So in the next 6 months that *will* come online and I'll get lots of goodies up there to share with the rest of the class.


    I think learning is the key to growth. I'm a sponge. I see a new toy, tool, technology, etc. and just soak it up. Although for me there's an initial soaking-inception-phase where I test the waters to see if I'm really going to like this. I talked to John Lam long ago about Ruby and still have yet to pick it up but some day (maybe now that Silverlight can do it dynamically I'll take that approach). One thing about learning is that you can't learn everything yourself. Don't be afraid to ask stupid questions. I find the MVP group and the Agile guys out there are a wealth of information. I'm often emailing people for hints or ideas or just to bounce things off them. Many times I just get back a nugget with something so subtle but I fit it into my work and I feel I'm a better developer for it. So more of that as you can never know enough.

    Code, Code, Code and more Code

    I feel you can only get better by doing it. For me, that's writing code and trying out new things. I constantly refine what I consider my reference architecture for applications. I have one for Smart Clients, one for SharePoint Apps, and one for Web Apps, and keep them lean and simple. They're not templates but more like guidelines, so any time I spin up a new project I use it and tweak it, then incorporate those tweaks into new applications. I code every day even if I have a day full of meetings, so I'm going to keep doing that. I think it's important to be there in the code (even though technically I'm an "Architect") as things change, and the software is built from code, not Visio drawings. Like speaking, the more you do it the better you get. So code, code, code every day until you drop.

    Okay, so here's my tag list. These 5 dudes, should they choose to accept this mission, should tell you how they're going to become better developers over the next 6 months (and as you can see from the list, it's going to be tough as these guys are already at the top of their game; however, everything can be improved):

    • Scott Hanselman - You can't improve perfection, but it's great to peek inside the mind of the great ones from time to time.
    • Oren Eini (Ayende Rahien) - Because he's probably writing this up right now anyways, and doesn't really post enough.
    • Jeremy Miller - Because all of the books on his "Books that Influence me" post are in my library too, I'd like to see what else makes J tick.
    • Jeffrey Palermo - For all the times I've misspelt Jeff's last name (including this post)
    • Rocky Lhotka - Because Rocky needs a reason to stop posting about CSLA ;)


    Update: Strangely enough, I was *supposed* to tag 4 peeps, not 5. However this gives Rocky, Scott, or Oren an out since both J's accepted my challenge and are prolly writing up the blog posts to institute world peace as we speak. Maybe after the first 6 months I should learn how to better read other people's blogs.

  • More Vista Baby Naming Fallout

    I have over 100 links to other blogs, news sites, and rumblings (including Windows Vista magazine and The Guardian) over the naming of our daughter last month. I'll probably post a follow-up with links to them, as some of them are quite humorous.

    However this one jumped out at me this morning:

    This time, it’s allegedly a Vice President of Microsoft, Bil Simser, who has charted new baby naming territory. According to a post in the Windows Vista Magazine, Simser’s new daughter is named Vista Avalon.

    Vice President huh? Well it's *got* to be true, it's on the Internet. What a promotion from MVP, and I don't even work for the company! Gotta love the media. As my first executive decision, I hereby declare .NET, SharePoint, and Visual Studio open source! Now I should send a note asking for my parking pass and key to the executive squash courts.

    Update: It's been noted that Vista wasn't listed on Wikipedia's page for unusual names. No longer. I updated it with her name and a link to her blog entry. She's the first entry for the letter "V".

  • Change Calendar Time Zone

    Got an odd message this morning as I was slogging through emails (and figuring out how to corrupt the world with my new CrackBerry).


    I have no idea what this means. Did someone change Mountain Standard Time and I didn't get the memo or something?

  • ReSharper Goodness?

    One of our devs was doing a refresh from source control just now and got this ReSharper exception:

    JetBrains.ReSharper.Util.InternalErrorException: Shit happened

    Shit happened ---> JetBrains.ReSharper.Util.InternalErrorException: Shit happened

                at JetBrains.ReSharper.Util.Logger.LogError(String) in c:\Agent\work\Server\ReSharper2.5\src\Util\src\Logger.cs:line 389 column 7
                at JetBrains.ReSharper.VS.ProjectModel.WebProjectReferenceManager.ProcessAssemblyReferences(AssemblyReferenceProcessor) in c:\Agent\work\Server\ReSharper2.5\src\VS\src\ProjectModel\WebProjectReferenceManager.cs:line 409 column 9
                at JetBrains.ReSharper.VS.ProjectModel.WebProjectReferenceManager.get_References() in c:\Agent\work\Server\ReSharper2.5\src\VS\src\ProjectModel\WebProjectReferenceManager.cs:line 442 column 9
                at JetBrains.ReSharper.VS.ProjectModel.WebProjectReferenceManager.UpdateAssemblyReferences() in c:\Agent\work\Server\ReSharper2.5\src\VS\src\ProjectModel\WebProjectReferenceManager.cs:line 205 column 7
                at JetBrains.ReSharper.Shell.<>c__DisplayClass1.<Invoke>b__0() in c:\Agent\work\Server\ReSharper2.5\src\Shell\src\Invocator.cs:line 225 column 33
                at System.RuntimeMethodHandle._InvokeMethodFast(Object, Object[], SignatureStruct&, MethodAttributes, RuntimeTypeHandle)
                at System.RuntimeMethodHandle.InvokeMethodFast(Object, Object[], Signature, MethodAttributes, RuntimeTypeHandle)
                at System.Reflection.RuntimeMethodInfo.Invoke(Object, BindingFlags, Binder, Object[], CultureInfo, Boolean)
                at System.Delegate.DynamicInvokeImpl(Object[])
                at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry)
                at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object)
                at System.Threading.ExecutionContext.runTryCode(Object)
                at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode, CleanupCode, Object)
                at System.Threading.ExecutionContext.RunInternal(ExecutionContext, ContextCallback, Object)
                at System.Threading.ExecutionContext.Run(ExecutionContext, ContextCallback, Object)
                at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry)
                at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
                at System.Windows.Forms.Control.WndProc(Message&)
                at System.Windows.Forms.ScrollableControl.WndProc(Message&)
                at System.Windows.Forms.ContainerControl.WndProc(Message&)
                at System.Windows.Forms.Form.WndProc(Message&)
                at System.Windows.Forms.ControlNativeWindow.OnMessage(Message&)
                at System.Windows.Forms.ControlNativeWindow.WndProc(Message&)
                at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr, Int32, IntPtr, IntPtr)


    Followup: Found a few references to this here and here. It was fixed in 2.5.2.

  • Project Structure - By Artifact or Business Logic?

    We're currently at a crossroads about how to structure projects. On one hand we started down the path of putting classes and files into folders that made sense according to programming speak. That is: Interfaces, Enums, Exceptions, ValueObjects, Repositories, etc. Here's a sample of a project with that layout:


    After some discussion, we thought it would make more sense to structure classes according to the business constructs they represent. In the above example, we have a Project.cs in the DomainObject folder; an IProject.cs interface in the Interfaces folder; and a PayPeriod.cs value object (used to construct a Project object) in the ValueObjects folder. As additional objects get added, we might also have a Repositories folder which would contain a ProjectRepository.

    Using a business-aligned structure, it might make more sense to organize things according to a unit of work or use case. So everything you need for, say, a Project class (the class, the interface, the repository, the factory, etc.) would be in the same folder.

    Here's the same project as above, restructured to put classes and files into folders (which also means namespaces as each folder is a namespace in the .NET world) that make sense to the domain.


    It may seem like a moot point (where do I put files in a solution structure?) but I figured I would ask out there. So the question is: how do you structure your business layer and its associated classes? Option 1 or Option 2 (or maybe your own structure)? What are the pros and cons of each, or in the end, does it matter at all?
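To make the two options concrete, here's a quick sketch (using a made-up Billing domain, not our actual project) of how each folder scheme plays out as namespaces, since each folder maps to a namespace in .NET:

```csharp
// Option 1: folders/namespaces named after programming artifacts
// (the Billing domain here is invented for illustration).
namespace Billing.Interfaces { public interface IProject { string Name { get; } } }
namespace Billing.ValueObjects { public class PayPeriod { } }
namespace Billing.DomainObjects
{
    public class Project : Billing.Interfaces.IProject
    {
        public string Name { get { return "Demo"; } }
    }
}

// Option 2: folders/namespaces named after the business concept;
// everything the Project concept needs lives in one place.
namespace Billing.Projects
{
    public interface IProject { string Name { get; } }
    public class PayPeriod { }
    public class Project : IProject
    {
        public string Name { get { return "Demo"; } }
    }
}
```

Same three types either way; the only difference is where a reader has to look to find everything Project-related.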

  • More ReSharper 3.0 Goodness

    Found an interesting tidbit today. I had a piece of code with the word "bug" in a comment. It showed up like this in my IDE:


    Lo and behold I found that ReSharper is finding keywords in my comments and colorizing them so they stand out.

    It also does this with TODO and other keywords that you define. In the Tools options you'll find a leaf node called Todo Items. In there you can set up patterns. Here's the pattern for Bug:


    So any time it finds "bug" (using the regular expression) it'll colorize it red and display the error icon on that line. The default items are Todo, Note, and Bug. You can add your own, so you might use this as a good way to highlight things in your code for junior developers (for example, creating one called "Pattern" to highlight an implementation of a specific design pattern).

    Note, this might not be a 3.0 thing but since I don't have 2.5 installed anymore I can't tell if it's been there all along.

    Very neat!

  • Richard Campbell in Calgary Wednesday, June 27

    Richard Campbell will be presenting to the Calgary .NET User Group on Wednesday, June 27th. Richard is the co-host of .NET Rocks and an awesome speaker. Based out of Vancouver, he's haunted our Calgary corner a few times (the last time I remember was at our 2006 Code Camp) so please do try to get out to see him.

    I had issues (read: errors) trying to register on the website so you should be able to just show up for registration (I tried seeing where you could contact someone, but their contact page doesn't seem to have any contact info like, oh, emails or phone numbers). The event is in the Nexen Centre, located at 800 7th Ave SW. Once you get in, you have to go upstairs to the +15 level, then past the Brown Bag (a sandwich shop), then over a walking bridge to the conference centre. It's poorly marked, but it's the same place I gave my MOSS 2007 presentation if anyone was paying attention. See you there!

  • A Visual Tour of ReSharper 3.0

    ReSharper 3.0 is out now in final form and looks great. Here's a visual walkthrough of some of the 3.0 features, along with some old and otherwise existing ones ReSharper has to offer.

    Code Analysis

    ReSharper 3.0 has more code analysis features than previous versions. For example, here it tells me that I can make this field read-only. Why? Because it's only ever initialized in the declaration and never gets assigned again. You'll also get this suggestion for fields that are initialized only in constructors (but this is a test fixture, so there are no constructors). A quick hit of Alt + Enter and I can apply the change ReSharper is suggesting.


    Putting your cursor on the field and hitting Ctrl + Shift + R lets you select from a list of applicable refactorings. By applicable I mean they're context sensitive to the type, scope, and value you're looking at. For example, here I get a set of refactorings I can do to a field.


    Now if I hit the same shortcut on a method I get these offerings. Note that I can now invoke the Change Signature refactoring (and others) but Encapsulate Field is no longer available. ReSharper recognizes I'm in a method and not a field and does things in a smart fashion by filtering the refactoring menu down to only what's valid.


    Another suggestion comes when a method is only ever referenced by its own class and doesn't access any instance values or objects. In that case, ReSharper will suggest that you make the method static. This will reduce execution time (but we're only talking about saving a few mips here, so don't get too excited).
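As a contrived example (this class is invented for illustration, not from my codebase), the second method below is the kind ReSharper flags with the "can be made static" suggestion:

```csharp
using System;

public class InvoiceCalculator
{
    private decimal _taxRate = 0.05m;

    // Touches instance state (_taxRate), so it must stay an instance method.
    public decimal AddTax(decimal amount)
    {
        return amount * (1 + _taxRate);
    }

    // Uses only its parameter and locals, and nothing else calls it from
    // outside the class -- ReSharper suggests making this one static.
    public decimal RoundToCents(decimal amount)
    {
        return Math.Round(amount, 2);
    }
}
```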


    With this (and other refactorings) you can press Alt + Enter to see a list of options. This also appears as a small light bulb in the left hand gutter and shows you a list of refactorings and optimizations you can perform on a method or variable.



    ReSharper not only offers great productivity with its refactorings, but it really helps out when you're trying to navigate around your codebase. With a few simple keystrokes, you'll be flying through your code in no time.

    You can search for a type name by pressing Ctrl + N. This brings up a window for you to type in and narrow down the search. For example here I entered "MI" which shows me all the classes that start with "MI". You'll also notice that "ModuleInfoElement" is also included. This is because the search filters on CamelCase names, which you can also filter down even further.


    Here we've filtered the "MI" list down a little more by entering "MIC".


    Even further we enter "MICV" which shows me the view, presenter, and fixture.


    Documentation and Guidance

    ReSharper also knows about your code and can tell you about it. This helps as sometimes you just don't know what a method is expecting or why a parameter is passed to a method.

    Here I have my mouse cursor in the parameter to the Add method and pressed Ctrl + P to show parameters and documentation. This is culled from the XML comments in your codebase so it's important to document these!
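For example, a doc comment like this (the Order class here is hypothetical, not the actual Add method from the screenshot) is exactly the kind of text Ctrl + P pulls up at the call site:

```csharp
using System.Collections.Generic;

public class Order
{
    private readonly List<KeyValuePair<int, int>> _lines =
        new List<KeyValuePair<int, int>>();

    /// <summary>Adds a line item to the order.</summary>
    /// <param name="productId">Id of the product being added.</param>
    /// <param name="quantity">How many units to add; must be positive.</param>
    public void Add(int productId, int quantity)
    {
        _lines.Add(new KeyValuePair<int, int>(productId, quantity));
    }
}
```

No XML comments means an empty popup, which is why it pays to write them.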


    ReSharper also has the ability to generate some simple documentation (via the Ctrl + Q key) in the form of a popup. This provides information about a type, its visibility, and where it's located (along with hyperlinks to types in the popup). Very handy for jumping around (although you do have to engage the mouse).


    Other Productivity Features

    A few other small features that I always find useful.

    Ctrl + Shift + V

    This pops up a dialog which contains all of the things you've recently copied to the clipboard. You can just highlight the one you want and insert it. Very handy when you have a small snippet that you want to re-use.


    Ctrl + Alt + V

    One of my favorites, as I hate typing out declarations for objects. I'd rather just create the object and not worry about it (a la Ruby), however in C# you do sometimes want a variable around. ReSharper helps by taking a method call and introducing a variable for its result. It understands the return type and even suggests a name for you. Very quick when you want to reduce keystrokes:
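A quick before/after sketch (the repository and method names here are invented for illustration):

```csharp
// Before: the return value is consumed inline.
ProcessOrder(repository.FindCustomer(customerId));

// After pressing Ctrl + Alt + V on the FindCustomer(...) call, ReSharper
// introduces a local, inferring the type and suggesting a name:
Customer customer = repository.FindCustomer(customerId);
ProcessOrder(customer);
```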


    There are a ton more features out there. If you're interested, you should check out Joe White's 31 Days of ReSharper, which he posted back in March/April, with a small tip every day covering everything from installation and setup to almost all of the refactorings and tools ReSharper has to offer. Awesome.

  • Mike Cohn is blogging

    Or maybe I'm just slow on the uptake? I got word via Mountain Goat Software, Mike's company, that his blog Succeeding with Agile is now available. However there are posts there dating back to January. In any case, whether it's new or not, it's a blog to read. Mike has always been there for me with little tidbits of extra info, sending me resources when I was swimming in Agile questions. He's an excellent speaker and I look forward to his blog entries, even if they're only going to come once a month (hey, he's a busy guy). Check it out and consider adding him to your blog roll as he's one of the key guys in Agile software today.

  • No iPhone for you Canada!

    I was informed by informed sources (and this is probably old news) that there'll be no iPhones for Canadians, unless you're willing to pay Cingular roaming charges. I was planning on getting an iPhone but found out that a) the plan is locked to Cingular, b) Cingular only services the U.S., and c) you cannot simply drop in a SIM card from another provider as iPhones are locked to Cingular.

    My personal opinion is that Apple should have unlocked the phone and let you use any carrier. Okay, so they wouldn't have got the big bucks they're obviously getting from Cingular but if you crunch the numbers (and I'm sure they did) I would think you would have more hardware sales than payola in the long run. Guess not, so until Steve Jobs calls me up and puts me in that position it's no iPhone for us Canucks.

    Update: I was doing a little blog sleuthing and came across various rumours about Rogers being a carrier for the iPhone. However a) it's about 6 months out at best b) there's no official word that I can find and c) more informed (non-official) sources tell me this is false. Gizmodo says it's "confirmed" but I have doubts. Every report though says "a customer service email" or "customer service representative". To me, that's not official in any capacity.

    Someone will obviously hack this and probably within 6 months (or sooner) you could use one up here, but otherwise the only way would be to get a Cingular plan then pay roaming fees all the time. I may have good consulting rates, but not that good.

    Anyways, now I'm looking at the HTCs as people are saying they're good. I'm looking for suggestions from anyone on models. There was an article in Forbes a few days ago on iPhone alternatives, and they look pretty good. Let me know what you recommend.

  • Load Testing Smart Clients

    It's a question, not a blog post. Anyone got some good tips, tricks, techniques, and tools for load testing Smart Clients? There's a plethora of info out for load testing web applications but little to nothing on Smart Clients. Just looking for ideas from the code monkeys out there.

  • An attempt at working with eScrum

    Okay, first off this tool wins the "Most Horrible Name Marketing Could Come Up With" award. I mean seriously, eScrum? Well, I guess when Scrum for Team System is taken what else do you do?

    I took a look at eScrum but after an hour of configuration and various error messages I gave up. I'm the type where, if I need to spend half a day trying out something I kind-of already have, that's half a day wasted. I personally think most of the people out there saying this tool is "pretty nice" haven't actually installed it (or tried to install it).

    So take this blog entry with a grain of salt, as I didn't make it to the finish line.

    What is eScrum?
    Anyways, eScrum is a web-based, end-to-end project management tool for Scrum built on top of TFS. It allows multiple ways to interact with your Scrum project:

    • eScrum web-based UI
    • TFS Team Explorer
    • Excel
    • MS Project

    Like any Scrum tool, it offers a one-stop place for all Scrum artifacts like product backlogs, sprint backlogs, retrospectives, and those oh-so-cool burndown charts.

    Installation is pretty painless. That is, until you realize that you need a bevy of Microsoft technologies and tools installed in order to run eScrum. eScrum uses a variety of web and back-end technologies and you need to install all of them before getting your eScrum site up and running (although you can install them before or after eScrum, your choice).

    You'll need to install:


    Once everything is installed, hang on a second kids, there's still configuration to be done! eScrum is a bit of a pain to configure. Configuring eScrum is like installing Linux: there are a lot of steps and at any point you can really screw things up.

    ASP.NET AJAX Control Toolkit Version Conflicts
    Since the release site of the AJAX Control Toolkit does not allow download of previous versions and eScrum is compiled with a specific version, you may need to update the web.config file to allow automatic usage of a newer version of the AJAX Control Toolkit.  eScrum has not been tested with newer versions, but may work well.

    Add following XML to the eScrum web.config file after the </configSections> close tag.  Afterward, update the newVersion attribute to the version of the control toolkit that you are using.

        <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
                <dependentAssembly>
                    <assemblyIdentity name="AjaxControlToolkit" publicKeyToken="..." />
                    <bindingRedirect oldVersion="1.0.10301.0" newVersion="1.0.CHANGEME.0"/>
                </dependentAssembly>
            </assemblyBinding>
        </runtime>

    Setting up SharePoint Templates
    Oh yeah, the fun still continues and we're still not finished. The eScrum TFS Template includes a team SharePoint portal template which gets installed when a new TFS Project is created with the eScrum template.  The SharePoint templates must be added to the server before creating a TFS Project with the eScrum Process Template.
    Follow these deployment steps to get it done:

    1. Log on to the target server
    2. Open a command prompt and change directory to: <SystemDrive>\Program Files\Common Files\Microsoft Shared\web server extensions\60\BIN
    3. Add the new templates using
      1. stsadm -o addtemplate -filename <path>\eScrum.stp -title "eScrum"
      2. stsadm -o addtemplate -filename <path>\eScrumFeaturesIdeas.stp -title "eScrum Features & Ideas"
      3. stsadm -o addtemplate -filename <path>\eScrumRiskLog.stp -title "eScrum Risk Log"
      4. stsadm -o addtemplate -filename <path>\eScrumStrategy.stp -title "eScrum Strategy & Issues"
    4. Type IISRESET to reset IIS

    Setting up an eScrum TFS Project
    eScrum uses eScrum TFS Projects as its back-end storage and management, so you won't be able to use it on existing projects. Once you have added the eScrum Process Template to your TFS server, you will need to create a new TFS Project using the eScrum Template.

    First you'll need to get the templates uploaded via Team Explorer (or inside Visual Studio). Make sure you don't even have the Word document open while you're uploading the template, or it will fail when it tries to create the zip file.

    Once you've uploaded the templates and they're available, you need to create a project using the eScrum template:

    1. In Team Explorer, right click your server and select "New Team Project…"
    2. Name your project and use the eScrum template
    3. Make sure you and your team members are all added to the Project Contributors (or Project Administrators, depending on your preference) security group.
      1. Right-click on your new Project and select "Team Project Settings > Group Membership…"
      2. Double-click either the Administrators or Contributors group
      3. Change the "Add member" selection to "Windows User or Group"
      4. Add your members
      5. Click OK

    There are some other installs they want you to do and I suggest you follow the various installation and configuration guides but for my test this was good enough to get something up and running.

    Now browse to where you installed it and you'll see something like this:


    Creating Projects
    eScrum is a little odd, but it seems to align to the Scrum process. Of course the thing with Scrum is that it's adaptable; there is no golden rule for how it works. There are guidelines and people generally follow them, but in eScrum, for example, you must have a product. The eScrum project you create isn't good enough, it needs something actually called a "Product" (using the concept that multiple products form a project). I don't personally do Scrum that way so found it a little frustrating. The other frustrating thing when setting up a project (oh sorry, "product") was that I couldn't save it until Product Contributors (team members) were added, and it wouldn't let me add team members until I created groups, and that's where I stopped before my brain exploded.

    Enough Configuring, I give up!
    Yes, I gave up installing and configuring the beast as it was just too much. I mean, I'm all for tools and setting up websites but after an hour of screwing around (even though I knew what I was doing) I said enough was enough. Realistically, give yourself a half day (if you rush) or a full day with some testing to get this puppy up and running.

    In fact, even after I had the template set up and a test project created, I had no idea how to create a product other than through the Web UI (which I couldn't do because of the security issues). It didn't look like I could create one in Team Explorer as all it would let me create was a bug, product details (but it needs a product first), sprint details, sprint retrospective, or a sprint task. WTF?

    Yeah, the SharePoint Scrum Master was lost so either I'm an idiot (possible) or this tool isn't very intuitive, even for someone who thinks he knows what he's doing.

    I wasn't going to go through the rest of the steps and who knows what else was needed, thus I wasn't able to get screenshots with projects configured and sprint backlog items, etc. I'll leave that for another soul to give up his day for.

    I do however have some images for the various tabs so you can get a feel for what eScrum has to offer:

    Product Page


    Sprint Page


    Daily Scrum Page


    Retrospective Page


    Bottom Line
    Was it worth it? Was it worth all the installing and configuring and configuring and installing?

    IMHO, no.

    I'm very happy with Conchango's Scrum for Team System and hey, to install that I just had to upload a new process template from Team Explorer. No mess no fuss.

    Once you do get the configuration and installation out of the way, eScrum looks interesting. It's got a nice dashboard for tracking your sprint, lets you keep on top of the daily Scrum electronically, and offers a bevy of Scrum reports like burndowns, metrics, and a product summary (none of which I have seen because I didn't take it that far when setting it up).

    There are problems with the setup (even though I didn't finish). For example, the SharePoint template contains entries in the Links list pointing to http://eScrum and http://eTools, neither of which is correct, so you have to fix this (and frankly, I don't even know what the eTools link is supposed to be). The SharePoint templates are just custom lists with a few extra fields, nothing special here. Even the logo for the site was broken in the template, so obviously this was either rushed or nobody cares about the quality of presentation of the tool (and I wouldn't call this a 1.0 release).

    Another immediate problem I had with this: you have to modify an XML config file every time you need to add a project (and it's called a "Group" inside the config file). Maybe you can do it through the web UI, but it looked to me like you had to modify this file for each project.

    I think for any kind of adoption, Microsoft needs to put together an installer for this as we don't all have a day to kill configuring a tool that should be seamless (after all, it's just a website and a TFS template remember). They also should have some documentation/guidance on this. From the looks of what I could get up and running there's very little actual "guidance" on using the tool and frankly, from the websites there's very little anything about this tool. Does MS think you install it (assuming you have the gumption to go through the entire process) and it'll just work and people will understand it? Even Scrum for Team System has nice documentation written on the process that goes along with the tool. Tools and technologies alone do not make for a good package.

    If you want to use Scrum with TFS, stick to Conchango's Scrum for Team System template. It has its own share of flaws but installs in about 5 minutes.

  • Coalescing with ReSharper 3.0

    The ReSharper 3.0 beta is out and I'm really digging it. It's little things that make my day a better place.

    For example I had a piece of code that looked like this:

    public ErrorReportProxy(IWebProxy proxy, bool needProcessEvents)
    {
        _errorReport.Proxy = proxy;
        if (proxy.Credentials != null) _errorReport.Proxy.Credentials = proxy.Credentials;
        else _errorReport.Proxy.Credentials = CredentialCache.DefaultCredentials;
        _needProcessEvents = needProcessEvents;
    }

    "if" statements are so 1990s so you can rewrite it like this using the conditional operator "?"  

    public ErrorReportProxy(IWebProxy proxy, bool needProcessEvents)
    {
        _errorReport.Proxy = proxy;
        _errorReport.Proxy.Credentials = (proxy.Credentials != null)
                                            ? proxy.Credentials
                                            : CredentialCache.DefaultCredentials;
        _needProcessEvents = needProcessEvents;
    }

    However, with nullable types in C# 2.0 you can use the "??" operator (called the null coalescing operator). The ?? operator returns its left-hand operand if it isn't null, otherwise it returns the right-hand operand, which is exactly the default-value pattern the if/else above implements. This shortens the code and, at least to me, makes it more readable.

    ReSharper to the rescue as it showed me squiggly lines in my code where I had the "?:" operator and said (in a nice ReSharper way, not an evil Clippy way):

    '?:' expression could be re-written as '??' expression.

    So a quick ALT+ENTER and the code becomes this:

    public ErrorReportProxy(IWebProxy proxy, bool needProcessEvents)
    {
        _errorReport.Proxy = proxy;
        _errorReport.Proxy.Credentials = proxy.Credentials ?? CredentialCache.DefaultCredentials;
        _needProcessEvents = needProcessEvents;
    }

    Okay, so maybe I'm easily amused but it's the little things that make my day (and maybe yours) a little brighter. The coding assistance feature of ReSharper 3.0 is shaping up to be very useful indeed.

  • Refactoring Dumb, Dumber, Dumbest away

    In my previous crazy post I had shown some code that was butt ugly. It was a series of if statements to determine some value (an enum) then use that value to switch on a case statement that would assign some values and audit the steps as it went along. It was ugly and the challenge before me was to refactor it away. I chose the Strategy Pattern as it seemed to make sense in this case, even if it did introduce a few more classes. So here we go.
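To give away the ending a bit, the Strategy shape I'm heading towards looks roughly like this (a sketch with invented names, not the final code):

```csharp
// Each DumbLevel becomes its own strategy class instead of a case label.
public interface ISegmentStrategy
{
    int GetCableCount(ISegment segment);
    decimal GetHeatSegmentVoltage(decimal supplyVoltage, decimal voltageDrop,
                                  decimal segmentPercentage, int passesCount);
}

public class DumbStrategy : ISegmentStrategy
{
    public int GetCableCount(ISegment segment)
    {
        return segment.GetCableCount(); // # of tracers drives the cable count
    }

    public decimal GetHeatSegmentVoltage(decimal supplyVoltage, decimal voltageDrop,
                                         decimal segmentPercentage, int passesCount)
    {
        return supplyVoltage * voltageDrop * segmentPercentage;
    }
}

// DumberStrategy and DumbestStrategy implement the same interface with their
// own cable-count and voltage formulas; the calling loop then just picks a
// strategy and calls it, so both the if-ladder and the switch go away.
```

The AuditStep calls would move into each strategy too, so each case carries its own audit messages along with its math.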

    First, let's look at the full code we're refactoring. Here's the silly enumerated value:

    enum DumbLevel
    {
        Dumb,
        Dumber,
        Dumbest,
        Not
    }

    And here's how it's used:

    for (int i = 0; i < _segments.Count; i++)
    {
        ISegment segment = _segments[i];

        #region determine cable count & heat segment voltage

        int cableCount;
        decimal heatSegmentVoltage;
        DumbLevel dumbLevel;

        //determine the specific segment configuration
        if (_segments.Count > 1)
        {
            cableCount = segment.GetCableCount(); //use # of tracers to determine cable count

            if (cableCount == 1)
            {
                if (PassesCount > 1)
                {
                    if (i <= (_segments.Count - 2)) //for all but last
                        dumbLevel = DumbLevel.Dumbest;
                    else
                        dumbLevel = DumbLevel.Dumb; //last segment
                }
                else
                    dumbLevel = DumbLevel.Dumb;
            }
            else
                dumbLevel = DumbLevel.Dumber;
        }
        else
            dumbLevel = DumbLevel.Not;

        //calculate cable count and heat segment voltage based on the segment configuration
        switch (dumbLevel)
        {
            case DumbLevel.Dumb:
                cableCount = segment.GetCableCount(); //use # of tracers to determine cable count
                AuditStep("Cable Count: {0} (based on count of Tracers identified)", cableCount);
                AuditStep("");

                heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage;
                AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2})", SupplyVoltage, Project.VoltageDrop, segmentPercentage);
                break;

            case DumbLevel.Dumber:
                cableCount = segment.GetCableCount(); //use # of tracers to determine cable count
                AuditStep("Cable Count: {0} (based on count of Tracers identified)", cableCount);
                AuditStep("");

                heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage / PassesCount;
                AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2}) / Passes # ({3})", SupplyVoltage, Project.VoltageDrop, segmentPercentage, PassesCount);
                break;

            case DumbLevel.Dumbest:
                cableCount = _passesCount;
                AuditStep("Cable Count: {0} (based on Passes #)", _passesCount);
                AuditStep("");

                heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage / PassesCount;

      756             AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2}) / Passes # ({3})", SupplyVoltage, Project.VoltageDrop, segmentPercentage, PassesCount);

      757             break;


      759         case DumbLevel.Not:

      760             cableCount = 1;

      761             AuditStep("Cable Count: 1");

      762             AuditStep("");


      764             heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop;

      765             AuditStep("Supply Voltage ({0}) * Voltage Drop ({1})", SupplyVoltage, Project.VoltageDrop);

      766             break;


      768         default:

      769             throw new ApplicationException("Could not determine a known segment configuration.");

      770     }

      771 }

    Basically it's going through a series of segments (parts of a cable on a line) and figuring out what the segment configuration should be. Once it figures that out (with our fabulous enum) it then goes through a case statement to update the number of cables in the segment and the voltage required to heat it. This is from an application where the domain is all about electrical heat on cables.

    Let's get started on the refactoring. Inside that tight loop of segments, we'll call out to get the context (our strategy container) with a method we extracted called DetermineCableCountAndHeatSegmentVoltage. Here's the replacement of the Dumb/Dumber if statement:

      710 SegmentConfigurationContext context = DetermineCableCountAndHeatSegmentVoltage(i, segmentCount, segment, segmentPercentage);

      711 int cableCount = context.Configuration.CalculateCableCount();

      712 decimal heatSegmentVoltage = context.Configuration.CalculateVoltage();

    And here's the actual extracted method:

      765 private SegmentConfigurationContext DetermineCableCountAndHeatSegmentVoltage(int i, int segmentCount, ISegment segment, decimal segmentPercentage)

      766 {

      767     SegmentConfigurationContext context = new SegmentConfigurationContext();


      769     int cableSegmentCount = segment.GetCableCount();


      771     if(segmentCount > 1)

      772     {

      773         if (cableSegmentCount == 1)

      774         {

      775             if (PassesCount > 1 && (i <= (segmentCount - 2)))

      776             {

      777                 context.Configuration = new MultiPassConfiguration(PassesCount, SupplyVoltage, Project.VoltageDrop, segmentPercentage);

      778             }

      779             else

      780             {

      781                 context.Configuration = new SinglePassConfiguration(cableSegmentCount, SupplyVoltage, Project.VoltageDrop, segmentPercentage);

      782             }

      783         }

      784         else

      785         {

      786             context.Configuration = new MultipleCableCountConfiguration(cableSegmentCount, SupplyVoltage, Project.VoltageDrop, segmentPercentage, PassesCount);   

      787         }

      788     }

      789     else

      790     {

      791         context.Configuration = new DefaultSegmentConfiguration(1, SupplyVoltage, Project.VoltageDrop);                       

      792     }


      794     return context;

      795 }

    The if statement is still there, but now it's a little easier to read in a method of its own and makes more sense overall. We can tell at a glance, for example, that when a segment has more than one cable we use the MultipleCableCountConfiguration strategy object. Okay, not as clean as I would like it but it's a step forward over a case statement.
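The SegmentConfigurationContext class itself isn't shown here; it's little more than a holder for whichever strategy was chosen. Something along these lines would do (a sketch only; the real class may carry more state):

```csharp
// Sketch of the context: just a slot for the chosen strategy.
// (Hypothetical; the actual class in the project isn't shown in this post.)
internal class SegmentConfigurationContext
{
    private ISegmentConfiguration _configuration;

    public ISegmentConfiguration Configuration
    {
        get { return _configuration; }
        set { _configuration = value; }
    }
}
```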

    The configuration classes are the strategy implementation. First here's the interface, ISegmentConfiguration:

        3 public interface ISegmentConfiguration

        4 {

        5     int CalculateCableCount();

        6     decimal CalculateVoltage();

        7 }

    This just gives us two methods to return the count of the cables and the voltage for a given segment.

    Then we create concrete implementations for each strategy. Here's the simplest one, the DefaultSegmentConfiguration:

        3 class DefaultSegmentConfiguration : SegmentConfiguration

        4 {

        5     public DefaultSegmentConfiguration(int cableCount, int supplyVoltage, decimal voltageDrop)

        6     {

        7         CableCount = cableCount;

        8         SupplyVoltage = supplyVoltage;

        9         VoltageDrop = voltageDrop;

       10     }


       12     public override int CalculateCableCount()

       13     {

       14         AuditStep("Cable Count: {0}", CableCount);

       15         AuditStep("");

       16         return CableCount;

       17     }


       19     public override decimal CalculateVoltage()

       20     {

       21         decimal heatSegmentVoltage = SupplyVoltage * VoltageDrop;

       22         AuditStep("Supply Voltage ({0}) * Voltage Drop ({1})", SupplyVoltage, VoltageDrop);

       23         return heatSegmentVoltage;

       24     }

       25 }

    Here it just implements those methods to return the values we want. DefaultSegmentConfiguration inherits from SegmentConfiguration which looks like this:

        5 internal abstract class SegmentConfiguration : DomainBase, ISegmentConfiguration

        6 {

        7     protected int CableCount;

        8     protected int SupplyVoltage;

        9     protected decimal VoltageDrop;

       10     protected decimal SegmentPercentage;

       11     protected int PassesCount;

       12     public abstract int CalculateCableCount();

       13     public abstract decimal CalculateVoltage();

       14 }

    This provides protected values for the sub-classes to use during the calculation and abstract methods to fulfil the ISegmentConfiguration contract. There's also a requirement to audit information in a log along the way so these classes derive from DomainBase where there's an AuditStep method (we're looking at using AOP to replace all the ugly "log the domain" code).

    Now we have multiple configuration classes that each handle one simple job: calculating the cable count and voltage. This lets us focus on the algorithm for each and return the value needed by the caller. Other implementations of ISegmentConfiguration will handle the CalculateVoltage method differently based on how many cables there are, the voltage, and so on.
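As an illustration, here's what one of the other strategies might look like. This is a guess at MultiPassConfiguration based on the original DumbLevel.Dumbest branch (the real class isn't shown in this post, so treat the formula and member names as assumptions):

```csharp
// Hypothetical sketch: mirrors the original DumbLevel.Dumbest branch.
// PassesCount, SupplyVoltage, VoltageDrop, SegmentPercentage and
// AuditStep all come from the SegmentConfiguration/DomainBase bases.
internal class MultiPassConfiguration : SegmentConfiguration
{
    public MultiPassConfiguration(int passesCount, int supplyVoltage,
                                  decimal voltageDrop, decimal segmentPercentage)
    {
        PassesCount = passesCount;
        SupplyVoltage = supplyVoltage;
        VoltageDrop = voltageDrop;
        SegmentPercentage = segmentPercentage;
    }

    public override int CalculateCableCount()
    {
        AuditStep("Cable Count: {0} (based on Passes #)", PassesCount);
        AuditStep("");
        return PassesCount;
    }

    public override decimal CalculateVoltage()
    {
        decimal heatSegmentVoltage =
            SupplyVoltage * VoltageDrop * SegmentPercentage / PassesCount;
        AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2}) / Passes # ({3})",
                  SupplyVoltage, VoltageDrop, SegmentPercentage, PassesCount);
        return heatSegmentVoltage;
    }
}
```

The point is that each branch of the old switch becomes one small, individually testable class.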

    Like I said, it's a start and puts us in a better place to test each configuration now rather than that ugly case statement (which was practically untestable). It's also clearer for new people coming onto the project as they [should] be able to pick up on what the code is doing. Tests will help strengthen this and make the use of the classes much more succinct. More refactorings that could be done here:

    • Get rid of that original if statement. However, this might be a bigger problem as it bubbles up to some of the parent classes. At the very least, it could be simplified and still meet the business problem.
    • Since all ISegmentConfiguration classes return the cable count, maybe this should just be a property as there's no real calculation involved here for the cable count.
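For what it's worth, the second suggestion would turn the interface into something like this (just a sketch of the idea):

```csharp
// Hypothetical variant of ISegmentConfiguration: cable count becomes a
// read-only property since there's no real calculation behind it.
public interface ISegmentConfiguration
{
    int CableCount { get; }
    decimal CalculateVoltage();
}
```

Each concrete configuration would then just set the count in its constructor.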

    Feel free to suggest even more improvements if you see them!

  • Acropolis, CAB, WPF, and the future

    "Acropolis, the future of Smart Client"

    So sayeth Glenn Block, product lead for the Smart Client Software Factory and CAB. Glenn's a good friend and he's just doing his job, but I felt a little shafted when Acropolis popped up on the scene. I mean, after the last few weeks of "CAB is complex" and CAB is this and CAB is that, the last thing we need is a CAB replacement, but here it comes and it's called Acropolis.

    There were many requests to ship a WPF version of SCSF/CAB and, well, we're actually doing it now with the SCSFContrib project up on CodePlex. Is Acropolis a WPF version of CAB? We'll see, but Glenn says "Acropolis takes the concepts of CAB to levels that folks in p&p might have never dreamed". From the initial reactions I'm seeing from people like Chris Holmes and Oren, Acropolis doesn't look all that impressive: another wrapper on top of WPF, a little orchestration thrown in to "wire up components and dependencies", and the promise of building apps without writing or generating any code. I've heard this story before with CASE and, like Oren, I see ugly XAML (or XML, or XML-like) code being behind all this, which doesn't give me a warm and fuzzy feeling.

    I have yet to set up Acropolis and take it for a real test drive, so I have to act like the movie reviewer who's never seen the movie but heard other reviews and has some initial reactions from the trailer. If CAB wasn't on the scene, this would be great. It's hard enough to get deep into XAML as it is, so layering more complexity on top of that requires something that will help a developer, not hinder him. True, you can still (and will) rip open the XAML to figure out what's going on and make those adjustments, but at least it's not that complex right now with POWPF (plain old WPF, if there is such a thing). It's 2007 and we've evolved (almost) to the point where we can trust designers and editors. I still have to tweak the .designer generated files [sometimes] to get the right object parenting in a WinForms app, but I consider that part of the territory. However, when I look at what is behind the Acropolis XAML it makes me shudder. There was a quote from another blog that really disturbed me: "Probably the best suggestion I can give to my customers, as I always do, is to take inspiration from all of these solutions and to build his own one". Wow. That's the last option I would ever suggest to someone, especially when there's something out there to do the heavy lifting for you.

    What bothers me about this whole thing is the MS statement of "we currently have no further plans for SCSF releases". I bought into Software Factories and thought the implementation Microsoft chose (the GAT and GAX) was a good option. Building my own factories or modifying others isn't that difficult and I can express what I really intend in a factory quite easily. No future releases means not only is CAB stopped in its tracks, so is SCSF. We just launched the SCSFContrib project, which was basically a way to extend the core without touching it; that restriction now becomes a bit of a roadblock, and we haven't really even got rolling on the project yet.

    Maybe we need to go one step further and allow the core of CAB to be modified/rewritten/extended and let the community evolve it. Is that something that would be useful? I mean, after the debate that raged on and Jeremy Miller banging out his own "roll your own CAB framework" maybe we need to open the heart of the beast and give it an implant that will let it live past the Acropolis phase. Some of us have already invested in one framework and I don't think there's a cost benefit to shifting to another, although that seems like the path we're being pushed down. Maybe the SCSFContrib project needs to be modified to support core changes and really divorce CAB from its over-architected implementation. A CAB where the guts are abstractions might help support a more popular community-driven adoption and get it past the dependency on using MS tools. How about a CAB where you can use log4net, or Windsor, or pico? If Oren can build his own Event Broker in hours and Jeremy can instruct people on building your own CAB over a dozen blog posts, I don't see why this isn't possible given some help from the world around us.

  • The Robot is in the House

    Errr, make that the yard.

    Quick. What am I doing right now? I mean, right now, this very instant. Yes, I'm typing this blog entry but what else am I doing? I'm cutting my lawn. Really.

    Late last year we had a rather talented chap come in, brave the coldness of Alberta in September, and proceed to lay out 10,000 square feet of sod (no easy feat). That's the size of our backyard. Of course, spring brings the weeds, and the grass, and the chore of cutting all this new grass. There I was, faced with a decision. Should I buy a ride-on mower and be a weekend Andretti, chewing up the grass and probably killing myself or at least one of the dogs in the process? Or is there a better option?

    Enter the RL1000 Robomow from Friendly Robotics. Yeah, I kid you not. I heard about it through the grapevine, did some research and lo and behold most people felt it was a good buy. It's the same cost as a ride-on mower so it was one or the other. The geek in me of course chose the robot. After all, how many people can claim to have a robot do their backyard lawn cutting for them?

    It arrived on Friday and I spent part of Saturday setting it up. I felt like Sam Neill in Bicentennial Man when the robot (in the guise of Robin Williams) gets delivered to the house, although the Robomow doesn't talk, do dishes, or take care of kids; it does a splendid job of cutting grass. It took me about an hour to lay out 500 feet of perimeter wire. This is a long piece of wire pegged into the ground that surrounds your lawn. The Robomow, on its first run, will travel around the perimeter following the wire and get a feel for how your backyard is laid out. Luckily ours, while huge, is pretty much square with a small section on either side of the house.

    After pegging everything in, I fired it up and had Sheila (RedvsBlue fans will know where this comes from) go around and figure stuff out. Once that worked (and it did on the first shot, I was impressed) I commanded her to do my bidding and mow that lawn. First it edges the lawn, cutting along where the perimeter is set up, then it criss-crosses over the middle part. It does this several times and as it hits each edge, it makes a small adjustment and goes back the way it came (in a slightly altered direction). After 3 or 4 passes it's done. It takes about 2-3 hours to do the lawn but I just sit back and don't worry about it.

    Works like a charm, although a) I had to go and remove some of the larger weeds in the middle of the ground as Sheila got stuck on them and b) you still have to trim the edges but it only takes about 20 minutes for that (and less for a smaller lawn). All in all, a good investment. I have it programmed now to go every Tuesday, Thursday and Saturday night at 7pm. This leaves me the chore of watering the beast on Sundays which is fine for me and keeps everything short and sweet. There's no bagging, it just mulches things down to a fine cut and is actually better for the lawn overall.

    If you're looking for a new geek toy and have a big, flat, square(ish) lawn I highly recommend the Robomow. It really does work as advertised.

  • My own Private WTF

    I've always wanted to submit something to The Daily WTF (come to think of it, I think I did but that was a long time ago) but today it just made me cry as I experienced my own private WTF. We had a developer leaving today and as I was doing a code sweep (looking at what was there, how the domain was shaping up, etc.) I came across this gem:

        5         private enum DumbLevel
        6         {
        7             Dumb,
        8             Dumber,
        9             Dumbest,
       10             Not
       11         }
    Okay, I said to myself. It's his last day, he's having some fun. Back when I worked with wood burning computers I wrote code with silly variable names too.

    Then of course my curiosity was piqued and I just had to know how this enum was being used. This led me to this snippet:

       19             if (_segments.Count > 1)
       20             {
       21                 if (cableCount == 1)
       22                 {
       23                     if (PassesCount > 1)
       24                     {
       25                         if (i <= (_segments.Count - 2)) //for all but last
       26                             dumbLevel = DumbLevel.Dumbest;
       27                         else dumbLevel = DumbLevel.Dumb; //last segment
       28                     }
       29                     else dumbLevel = DumbLevel.Dumb;
       30                 }
       31                 else dumbLevel = DumbLevel.Dumber;
       32             }
       33             else
       34             {
       35                 dumbLevel = DumbLevel.Not;
       36             }
       38             //calculate cable count and heat segment voltage based on the segment configuration
       39             switch (dumbLevel)
       40             {
       41                 case DumbLevel.Dumb:
       42                     cableCount = segment.GetCableCount(); //use # of tracers to determine cable count 
       43                     AuditStep("Cable Count: {0} (based on count of Tracers identified)", cableCount);
       44                     AuditStep("");
       45                     heatSegmentVoltage = SupplyVoltage*Project.VoltageDrop*segmentPercentage;
       46                     AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2})", SupplyVoltage,
       47                               Project.VoltageDrop, segmentPercentage);
       48                     break;
       50                 case DumbLevel.Dumber:
    	           // code omitted for sanity
       51                     break;
       52             }

    This was by far the ugliest code I've seen on this project. The if statements alone got my blood boiling, but when I started in on the case statements my brain turned to mush and that was my day done.

    Sigh. Oh well, next week I'll introduce the team to the concept of inheritance so they can see how to make the case statements (and the craziest set of Enum values I've ever seen) go away. And Donald thinks he has it rough hiring new guys?

    NOTE: sorry about the formatting, my VS settings are just hosed today.

  • The Red "X" of Death

    You've heard of the Blue Screen of Death (Windows). You've heard of the Yellow Screen of Death (ASP.NET). Now here's the Red "X" of Death.

    Our QA folks hit this in one of our apps the other day. It's a Smart Client app using the DevExpress Ribbon, CAB, and a host of other UI goodness. Needless to say, the error wasn't too useful to anyone trying to fix it.

    There's *supposed* to be a grid in the middle there with all kinds of useful information and calculations. DevExpress just decided that it really didn't want to do all that work and gave us a nice big red "X" as if we're missing an image from a website.

  • Old and busted or new hotness

    Roy Osherove posted a what's hot and what's not list, mainly aimed at this whole ALT.NET developer talk that's been going on. Unfortunately, I'm a little at odds with what Roy posted and don't agree with some (most?) of his comparisons. It's also hard to compare things here as he's grouped items that either overlap, are completely different, or are too vague to make sense together. I really don't care for the whole ALT.NET tagging (I think even the term ALT.NET is silly) but here's my spin on Roy's items.

    UPDATE: Roy updated his blog entry with a note that he didn't necessarily agree with the list, these were his observations of the world. I was a little confused because I thought he was emoting what he felt. Silly me. Still, I think the comparisons are a little strange as it mixes technology with concepts hence why I put my list at the end together. I also stand corrected on A# and that Castle can do both DI and AOP quite well. Thanks for the info!

    Hot: Castle, ActiveRecord, NHibernate
    Not: Datasets, Entity Framework, MS Application Blocks

    I'm not quite sure what he's talking about here. I don't feel ActiveRecord is "hot" and I try to avoid the pattern altogether. NHibernate for sure, and Castle is cool (over DataSets any day). Is he comparing Castle and its DI against the MS Application Blocks? More on that later.

    Hot: MVC, NUnit, MonoRail
    Not: Web Forms, SCSF, VSTS, MSTest

    Again it gets a little clouded here (at least with my glasses on). Definitely NUnit over MSTest, hands down. With the pain and suffering Oren's been going through with Web Forms, MonoRail looks like a good alternative (JP gave a presentation at the Calgary Code Camp and from what I saw it looked promising). MVC hot? A pattern? I guess. However it's a tough call here as SCSF implements MVC and it's not a horrible implementation of the pattern, so how can one be hot and the other not? Also, I'll agree that VSTS isn't necessarily hot (more like complex, expensive, etc.) but what are you comparing it to?

    Hot: XP, TDD, Scrum
    Not: MSF Agile, MSF for CMMI

    No argument here and right on the money. I wish MSF Agile was never created.

    Hot: OR/M, NHibernate, LLBLGen, etc.
    Not: DLinq, Data Access Block, Plain ADO.NET

    NHibernate for sure, but LLBLGen generates code that uses ADO.NET under the covers here. I guess the point is that it's not hot to write ADO.NET code directly but have a code generator do it for you? Personally that's fine because anyone that writes their own full DAL is just wasting brain cells.

    Hot: Open Source  (Mono, SourceForge)
    Not: Application Blocks, CodePlex

    This confuses me. Open Source is one thing, but it's being compared to... Open Source. The MS Application Blocks are all open source and every project on CodePlex is as well. If you're comparing SourceForge to CodePlex from an open source perspective, neither really is. You can't get the CodePlex code at all (there's an Open Source version someone wrote but it's far from complete) and SourceForge hasn't released their code for years, with the old code (Alexandria) barely able to install and configure. Better to go with GForge if you're looking to run your own SourceForge-style site, since its source code is provided.

    Hot: CVS, SVN
    Not: VSS, VSTS Source Control

    Agreed. VSS is the devil's spawn, although with all the crashing you can get I would suggest SVN is hot and CVS isn't. CVS is better than VSS stability wise, but it's still not all that hot.

    Hot: Subtext, DasBlog, WordPress, etc.
    Not: Microsoft MSN Spaces, Community Server

    Roy is comparing blog software, for those that haven't clued in. I do agree that Subtext, DasBlog, and WordPress are much more powerful and better blog engines. CS seems to be bloatware now (and I have a bad feeling from it after the last weblogs update). Microsoft does have SharePoint for blogs and it's getting better, but maybe still not ready for primetime to compete with something like DasBlog. The thing about blog software though is that there isn't going to be a giant shift. I mean, let's say the next guy (Google, MS, whoever) comes out with the be-all and end-all blog engine. Do you think thousands of DasBlog or WordPress users are going to migrate en masse? My blog is on CS, so is Roy's. So are we not hot because of this setup and Hanselman is?

    Okay, I'll stop there as there are some other weird deviations Roy makes. I'm totally on board when it comes to simplicity in design, but he compares it to the entire P&P (which is what started this whole thread the last couple of weeks). I'll bite that CAB is complex; we've done that discussion to death. However, where is the simpler version of CAB that gives us everything we need? Where's the HOT version of CAB? RYO WinForms? I don't think so. And I haven't worked at Google, but being at Microsoft is pretty fun, though I guess that depends on what team you're on.

    There's also a point he makes about Google Gears being hot and Smart Clients not. I haven't had an opportunity to really get into Gears and it sounds great, but everything sounds great in the package. Is this really the future of apps? I mean, with Silverlight we have super rich clients written in .NET managed code doing whatever they need to over the wire. Are we going back to writing crappy web apps (maybe with MonoRail to reduce the crappiness), plugging Gears in and voila, offline capabilities? Is a Silverlight/Gears combination the golden ticket here, while Smart Clients go the way of the big fat clients from VB6 days long past?

    Like I said, I do agree with some of his comparisons, but let's compare apples to apples here. Here's my modified list where it's just one product/technology/concept against the other. I've also omitted things from his list that Roy and I already agree on:

    Hot: NHibernate / Not: Entity Framework
    Hot: Windsor Container / Not: ObjectBuilder
    Hot: Aspect# / Not: Policy Injection Application Block
    Hot: CruiseControl.NET / Not: Visual Studio Team Build
    Hot: SharpDevelop / Not: Visual Studio
    Hot: MonoRail / Not: Web Forms
    Hot: NUnit/MBUnit / Not: MSTest
    Hot: Scrum / Not: MSF Agile
    Hot: NAnt / Not: MSBuild
    Hot: log4net / Not: Logging Application Block
    Hot: Silverlight / Not: Flash
  • Extending the Notification Pattern Example

    Recently I've been looking at implementing the Notification pattern as described by Martin Fowler here. The UI calls methods on the domain to validate the data, which is held in a Data Transfer Object (DTO). The DTO contains both the data needed by the class (for reconstituting domain objects) and a class for holding errors returned by the validation method. It's a nice way to keep error management out of your domain layer while still having the errors available to your presentation layer.
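If you haven't read the article, the Notification object itself is essentially a bag of errors that the domain fills in and the presentation layer inspects afterwards. A minimal sketch (based on Fowler's description, not necessarily this project's exact class):

```csharp
using System.Collections.Generic;

// Minimal Notification sketch: the domain records errors here instead
// of throwing, and callers check HasErrors when validation is done.
public class Notification
{
    private readonly IList<string> _errors = new List<string>();

    public void AddError(string message)
    {
        _errors.Add(message);
    }

    public bool HasErrors
    {
        get { return _errors.Count > 0; }
    }

    public IList<string> Errors
    {
        get { return _errors; }
    }
}
```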

    One thing I've always found with Fowler's posts is that the code is very thin (and it should be, since it's just an example) and sometimes not complete. So for you guys I decided to put together a working example based on his sample for checking a policy claim number. A couple of things I've done to go beyond the example on Martin's page are:

    • Working solution file that compiles and runs (C# 2.0 although with slight modifications [removal of generics] it could work under 1.1)
    • Implementation of the Model View Presenter pattern. Martin uses the Autonomous View approach in his sample because he's really focused on Notification, but I thought it would be nice to show it implemented with MVP. Autonomous View is a pattern that puts all presentation state and behavior for a window in a single class, which doesn't support the approach I prefer, namely MVP and separation of concerns, so the MVP pattern is here for completeness.
    • Added Rhino mock tests to show how to test the presenter with a mocked out view. I thought this was important as the example is all about UI validation and this would be a good example to mock out a view using Rhino.

    The Tests

    It starts with the tests (it always starts with the tests). As I was re-implementing an example, my tests were slanted a little bit towards how the sample worked. However I was focused on 3 main validations for the UI:

    • Policy Number is present
    • Claim Type is present
    • Incident Date is present and valid (cannot be set in the future)
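On the domain side, those rules boil down to checks that record errors into the claim's notification rather than throwing. A rough sketch of what the validation might look like (the method and DTO property names here are hypothetical, modeled on Fowler's example rather than lifted from this project; the date-range check against the policy start is shown later and omitted here):

```csharp
// Hypothetical sketch of the domain-side validation. RegisterClaimDTO is
// assumed to expose PolicyNumber, ClaimType, IncidentDate and a
// Notification, roughly as described in the post.
private void validateClaim(RegisterClaimDTO claim)
{
    if (string.IsNullOrEmpty(claim.PolicyNumber))
        claim.Notification.AddError("Policy number is missing");

    if (string.IsNullOrEmpty(claim.ClaimType))
        claim.Notification.AddError("Claim type is missing");

    if (claim.IncidentDate == DateTime.MinValue)
        claim.Notification.AddError("Incident date is missing");
}
```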

    With those tests in mind, here's the check for the missing policy number (with mock setup and teardown):

       20 [SetUp]

       21 public void SetUp()

       22 {

       23     _mockery = new MockRepository();

       24     _view = _mockery.CreateMock<IRegisterClaimView>();

       25 }


       27 [TearDown]

       28 public void TearDown()

       29 {

       30     _mockery.VerifyAll();

       31 }


       33 [Test]

       34 public void MissingPolicyNumber()

       35 {

       36     Expect.Call(_view.PolicyNumber).Return(INVALID_POLICY_NUMBER);

       37     Expect.Call(_view.IncidentDate).Return(VALID_INCIDENT_DATE);

       38     Expect.Call(_view.ClaimType).Return(VALID_CLAIM_TYPE);

       39     _view.ResponseMessage = "Not registered, see errors";

       40     _view.SetError("txtPolicyNumber", "Policy number is missing");

       41     _mockery.ReplayAll();


       43     RegisterClaimPresenter presenter = new RegisterClaimPresenter(_view);

       44     presenter.RegisterClaim();

       45 }

    The constants make the test easier to read and are defined like this:

       13 private const string INVALID_POLICY_NUMBER = "";

       14 private const string VALID_POLICY_NUMBER = "1";

       15 private const string INVALID_CLAIM_TYPE = "";

       16 private const string VALID_CLAIM_TYPE = "1";

       17 private static readonly DateTime INVALID_INCIDENT_DATE = DateTime.MinValue;

       18 private static readonly DateTime VALID_INCIDENT_DATE = DateTime.Now.AddDays(1);

    The view is mocked out (it's what we're testing against) so we expect calls to the 3 properties of the view (that match up to the UI). There's also a ResponseMessage property which reports whether there were errors or not. The SetError method needs a bit of explaining.

    In Martin's example, he uses Autonomous View, which makes it easy to wire up which control is causing which error; all the controls are right there for the picking. When I implemented the MVP pattern I had a bit of a problem. I wasn't about to pollute my presenter with controls from the UI (otherwise it would be easy), so how could I get the view to wire up the right error message to the right control? The only way I could do it (in the implementation of the view) was to pass in the control name as a string. Then in my view implementation I did this:

      115 public void SetError(string controlName, string errorMessage)

      116 {

      117     Control control = Controls[controlName];

      118     showError(control, errorMessage);

      119 }

    Then showError just handles setting the error via the built-in .NET error provider:

       37 void showError(Control arg, string message)

       38 {

       39     _errorProvider.SetError(arg, message);

       40 }

    Once I had the missing policy test working it was time to move on to the other requirements. MissingIncidentType and MissingIncidentDate are both much the same (except there's no such thing as a null DateTime, so I cheated a bit and returned DateTime.MinValue). The other check against the Incident Date is to ensure it's not set before the policy date. Since we don't have a policy screen I just stubbed it out in a Policy stub class and set it to the current date. So an invalid date would be something set in the past:

    [Test]
    public void CheckDateBeforePolicyStart()
    {
        Expect.Call(_view.PolicyNumber).Return(VALID_POLICY_NUMBER);
        Expect.Call(_view.ClaimType).Return(VALID_CLAIM_TYPE);
        Expect.Call(_view.IncidentDate).Return(VALID_INCIDENT_DATE.AddDays(-1));
        _view.ResponseMessage = "Not registered, see errors";
        _view.SetError("pkIncidentDate", "Incident Date is before we started doing this business");
        _mockery.ReplayAll();

        RegisterClaimPresenter presenter = new RegisterClaimPresenter(_view);
        presenter.RegisterClaim();
    }

    The presenter is pretty basic. In addition to just registering the view and talking to it, it has one main method, RegisterClaim, called by the view when the user clicks the submit button. Here it is:

    public void RegisterClaim()
    {
        saveToClaim();
        _service.RegisterClaim(_claim);
        if (_claim.Notification.HasErrors)
        {
            _view.ResponseMessage = "Not registered, see errors";
            indicateErrors();
        }
        else
        {
            _view.ResponseMessage = "Registration Succeeded";
        }
    }

    Basically it calls saveToClaim (below) and then calls a service layer method to register the claim. Information is stored in a Data Transfer Object which contains both the data from the view and any errors. The claim DTO has a Notification object which holds any errors, and the presenter will tell the view if there are problems, letting the view set the display accordingly.
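    As a rough sketch of that DTO (field and constant names here are inferred from the code elsewhere in this post, and the policy control name is purely a guess, so treat this as an approximation rather than the sample's actual code):

    ```csharp
    // Hypothetical sketch of the RegisterClaimDTO described above.
    public class RegisterClaimDTO
    {
        // DateTime can't be null, so a sentinel stands in for "not set".
        public static readonly DateTime BLANK_DATE = DateTime.MinValue;

        // Well-known errors pair a message with the control they belong to.
        // "pkPolicyNumber" is an assumed control name for illustration.
        public static readonly Error MISSING_POLICY_NUMBER =
            new Error("pkPolicyNumber", "Policy number is required");
        // MISSING_INCIDENT_TYPE, MISSING_INCIDENT_DATE, UNKNOWN_POLICY_NUMBER,
        // and DATE_BEFORE_POLICY_START would be defined the same way.

        public string PolicyId;
        public string Type;
        public DateTime IncidentDate;

        // The service layer records validation failures here.
        public readonly Notification Notification = new Notification();
    }
    ```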

    First here's the saveToClaim method in the presenter that will create a RegisterClaimDTO and populate it with information from the view:

    private void saveToClaim()
    {
        _claim = new RegisterClaimDTO();
        _claim.PolicyId = _view.PolicyNumber;
        _claim.IncidentDate = _view.IncidentDate;
        _claim.Type = _view.ClaimType;
    }

    The RegisterClaim method on the ClaimService object just runs its own command (which does the actual registration of the claim and checks for any errors). The core part of the validation is in the Validate method on the RegisterClaim object:

    private void Validate()
    {
        failIfNullOrBlank(((RegisterClaimDTO)Data).PolicyId, RegisterClaimDTO.MISSING_POLICY_NUMBER);
        failIfNullOrBlank(((RegisterClaimDTO)Data).Type, RegisterClaimDTO.MISSING_INCIDENT_TYPE);
        fail(((RegisterClaimDTO)Data).IncidentDate == RegisterClaimDTO.BLANK_DATE, RegisterClaimDTO.MISSING_INCIDENT_DATE);
        if (isNullOrBlank(((RegisterClaimDTO)Data).PolicyId))
            return;
        Policy policy = FindPolicy(((RegisterClaimDTO)Data).PolicyId);
        if (policy == null)
        {
            Notification.Errors.Add(RegisterClaimDTO.UNKNOWN_POLICY_NUMBER);
        }
        else
        {
            fail((((RegisterClaimDTO)Data).IncidentDate.CompareTo(policy.InceptionDate) < 0),
                RegisterClaimDTO.DATE_BEFORE_POLICY_START);
        }
    }

    Here it checks the various business rules and uses the Notification object to keep track of errors. The Notification object is embedded in the Data Transfer Object, which is passed into the service when it's created, so our service layer has access to the DTO to register errors as it does its validation.
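    For reference, here's a bare-bones sketch of the Notification and Error objects implied by the calls above (member names are inferred from usage in this post, so this is an approximation, not the sample's code):

    ```csharp
    using System.Collections.Generic;

    // Hypothetical sketch of the Notification pattern plumbing used in this sample.
    public class Error
    {
        public readonly string ControlName; // which UI control the error belongs to
        private readonly string _message;

        public Error(string controlName, string message)
        {
            ControlName = controlName;
            _message = message;
        }

        public override string ToString() { return _message; }
    }

    public class Notification
    {
        public readonly List<Error> Errors = new List<Error>();

        public bool HasErrors { get { return Errors.Count > 0; } }

        public bool IncludesError(Error error) { return Errors.Contains(error); }
    }
    ```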

    Finally, coming back from the service layer, the presenter checks to see if the DTO's Notification object HasErrors. If it does, it sets the response message (mapped to a textbox in the UI) and calls a method called indicateErrors, which runs each possible error in the DTO through a method that checks for it:

    private void indicateErrors()
    {
        checkError(RegisterClaimDTO.MISSING_POLICY_NUMBER);
        checkError(RegisterClaimDTO.MISSING_INCIDENT_TYPE);
        checkError(RegisterClaimDTO.DATE_BEFORE_POLICY_START);
        checkError(RegisterClaimDTO.MISSING_INCIDENT_DATE);
    }

    checkError is a method that takes in an Error object, which contains both the error message and the control it belongs to. If the Notification list contains the error it's checking, it calls that ugly SetError method on the view, which updates the UI with the appropriate error message attached to the correct control:

    private void checkError(Error error)
    {
        if (_claim.Notification.IncludesError(error))
        {
            _view.SetError(error.ControlName, error.ToString());
        }
    }

    And that's about it. It's fairly simple, but hopefully the sample has been broken down enough to help you understand the pattern better.

    Here's the class diagram for all the classes in the system:


    And here's how the classes break down in the solution. Each folder would be a layer in your application (or split across multiple assemblies):

    Application Layer

    Domain Layer

    Presentation Layer

    Service Layer

    So other than the ugly SetError method, which takes in a string for a control name, I think this isn't bad. Maybe someone out there has a suggestion for how to get rid of the control name string, but like I said, I didn't want to introduce UI controls into my presenter, so I'm not sure (other than maybe an enumeration) how to hook up the right error message with the right control. Feel free to offer a suggestion for improvement here.
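    For what it's worth, one direction along the enumeration line (this is a sketch of an alternative, not code from the sample, and every name in it is hypothetical): have the presenter speak in terms of an enum of logical fields and let the view own the mapping to concrete controls:

    ```csharp
    using System.Collections.Generic;
    using System.Windows.Forms;

    // Sketch of an enum-based alternative to the string control name.
    public enum ClaimField { PolicyNumber, IncidentType, IncidentDate }

    // The view implementation keeps the only knowledge of actual controls.
    public class RegisterClaimViewSketch
    {
        private readonly ErrorProvider _errorProvider = new ErrorProvider();
        private readonly Dictionary<ClaimField, Control> _fieldControls =
            new Dictionary<ClaimField, Control>();

        public void SetError(ClaimField field, string errorMessage)
        {
            // The presenter never sees a Control or a control name string;
            // the view translates the logical field to its own control.
            _errorProvider.SetError(_fieldControls[field], errorMessage);
        }
    }
    ```

    Each Error object would then carry a ClaimField instead of a control name, and the presenter stays UI-agnostic.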

    You can download the entire solution with source code here. Enjoy!

  • I keep Hugh in my back pocket

    I really do. Sort of.

    I had to get a new set of business cards. All those fish bowls and thrown-away cards have really dwindled my supply. Rather than going with traditional cards, I've been using ones with designs by Hugh MacLeod on them. I just ordered a new batch tonight, landing solidly on my favorite cartoon to date from Hugh (besides the Blue Monster). Here are my new cards:




    A little hard to read, but it's just the basic info: the tagline from my blog, my blog address, phone number, and email. The usual suspects. I'm really happy with the cards and they make for an interesting conversation piece at conferences, user groups, and general geek fests. I highly recommend them. You can check out all 72 gapingvoid designs here on Streetcards (with new ones always being added when Hugh comes up with a brilliant idea, which is quite often).


  • Introducing the New Vista iPod

    Finally, after almost 9 months of pre-production, she's arrived. Here she is:

    Vista Avalon Simser, ready for duty!

    Vista Avalon Simser was born May 18th at 18:10 MST, weighing in at 5 pounds and 11 ounces (well, 10.6 ounces but the hospital seems to round it up).

    Okay, first, I know that most of you reading this on the bus, at home, or at work are laughing. Some people are shocked and probably scratching their heads wondering why a nerd would put their child through the slings and arrows of being named after an operating system. Hopefully by the time she's old enough for someone to make fun of her name, nobody will remember where it came from.

    Her name came out of a discussion about what her name should be, as they always do. At first we didn't know what sex the baby was going to be, so we started with a boy's name. We rummaged around the Internet, baby name books, and our brains to finally arrive at Dev. Yeah, geek origins, but it had meaning to us. Dev (as in short for Developer) sounded like a good boy's name; it had its origins in Sanskrit, it was unique and interesting, and we liked the sound of it. Then came the process of finding a good middle name, and again after some time we liked Orion (as in the constellation).

    We stared at the piece of paper with his name written out:



    There it was, plain as day. Our son's initials would be DOS. We laughed and laughed, and then came the afterthought: well, if our son's initials are DOS, a daughter would have to be an upgrade. And thus the name Vista was born.

    Vista (the operating system) hadn't been released yet, but we looked at it on paper. Vista. I liked the sound of it. True, it was spawned from the name of Microsoft's next operating system, but it was also a word seeded in the Italian language (from visto) meaning a sight. Well, a daughter who is a sight. That works for us. Besides, above all (other than the glares we'll get from geeks and this blog entry), Vista is a pretty name for a girl. As for the middle name, it was not driven by the fact that Avalon was the codename for Windows Presentation Foundation. Again we turned to unique names and needed something that fit, something that sounded right to match Vista. Avalon: the paradise Arthur was carried to after his death; the Avalon Peninsula in Newfoundland, Canada; the Druidic site in Glastonbury, England. This just became the name we wanted for our daughter, and it stuck.

    There are two reactions we get to her name. The first, probably from everyone reading this blog: "You named your daughter after an operating system?". The other is "Oh, that's such a pretty name". We can separate the nerds from the norms by the reaction.

    Of course there are some advantages to being named after one of the most expensive operating systems in history (notice that I didn't say popular, good, or fast; let's not get into that holy war):

    • Her blog will contain the largest number of search hits with people looking for information about Vista
    • She has her very own carrying case (a laptop bag) and other personalized "logoware", most of which I can buy from the Microsoft store or any geek conference for the next 10 years
    • She'll be the only one at her school with a service pack (or two, or three, ...) named after her
    • If she's cute when she's older (and she will be), boys will make many crazy jokes about "starting her up" and "rebooting her", at which point I will pummel them upside the head with an XPS laptop that I'll carry around to "interview" any potential suitors.

    Bottom line, we think it's a pretty name and it's hers for life. We like it, and she's our daughter not yours so deal.

    Not every entry into this world is perfect, and there were complications. Needless to say, we were disappointed in the process (but not the result, not in the least). We had gone through obtaining a midwife. We're true believers in the natural way and were convinced that having a midwife and a home birth was what we wanted. No drug-induced delivery. No machines that go ping. No drip, drip, drip of some bag attached to the baby that's hooked up to a monitor. It was going to be natural, fun, and without stress.

    The best laid plans.

    As a result of a lot of factors (incompatible blood types between momma and poppa, go figure), it was difficult, and we ended up with everything we didn't want to happen: a hospital birth, machines that went ping, drugs used to induce, an emergency C-section, etc. Like I said, the best laid plans.

    Vista spent a week in the hospital, mostly for jaundice. However she seemed to enjoy her personal suntan studio as you can see below.

    Everyone is home now. Mommy and Baby are doing great. Vista's two weeks old now and is thriving. This was a life-changing experience for me. It will be long remembered, not only for the birth of my daughter but for the changes we went through and the journey ahead. I hope you've enjoyed my sharing of the experience; this is something I'll someday show her, so feel free to leave comments for her to read when she's old enough. So welcome, Vista, to the world, as she'll be a big part of it.

    Here's Vista's entire Flickr set which of course grows every day with new images.

  • Do as I say, not as I do

    More fallout from the TestDriven.NET vs. Microsoft department. I read your comments on my blog (no, I didn't moderate any of them) and read through Dan Fernandez's well-written and concise response here. Dan is the lead product manager for Visual Studio Express, so other than the legal guys, this is coming from the horse's mouth. Phil Haack has a great piece here with his take on it (which has an interesting spin, as he declares MS violated the TestDriven.NET agreement by reverse engineering it to determine how TD.NET works with Express, touché).

    After reading through everything out there (including all the comments on Jamie's Slashdotted blog), I do understand both sides of the story. I agree that MS is within their rights to put what they want in their EULA, and they're right that users of TestDriven.NET with the Express products are violating it. I don't agree that MS is playing by their own rules.

    Specifically, I have a real problem with Microsoft saying TestDriven.NET violates a EULA when they themselves do exactly the same thing with Popfly Explorer, XNA Game Studio Express, and the Reporting add-in. It's no different than saying cops are allowed to break the law because they're cops. No. You write the rules and you live by the rules (lead by example). Just because it's your product doesn't give you the right to violate your own agreement.

    There's also confusion in this issue because it's a EULA that TestDriven.NET violates. Let's look at that: End User License Agreement. IANAL, but in many cases these "agreements" have never held up in court. They're simply that, an agreement. You either agree or disagree, but in the end there's no legal ramification either way against you. Remember the Dell incident where the EULA for Windows was shrink-wrapped, yet you had to agree to it: if you opened the package to read the EULA, you were agreeing to it, even if you didn't agree after reading it.

    I do agree with the Blue Monster in that it's their right to put whatever they want in their EULA. It's theirs and they craft it. The thing here is that it's an End User agreement. So who's at fault here: Jamie for building a tool, or everyone who's using it? I believe it's the latter, and most Americans will agree (yeah, I'm going to get flak for this generalization), in the same vein as it's not the gun manufacturers that are at fault, it's the people using them. So everyone who's installed TestDriven.NET on an Express SKU and allowed it to run is in violation. Where are the cease and desist orders for all of you? Jamie certainly isn't in the wrong to create the software he did (and MS recognizes this), but users (including himself perhaps, assuming he tested it) are violating their agreement with Microsoft by using it.

    The general consensus I'm seeing from the community (via comments and blogs) is that MS should patch the Express SKUs to not even allow loading add-ins. Of course, there's still the issue of their own add-ons, but I'm sure they could get around that somehow. There's also still the question of what specifically Jamie is violating (or rather, what clause). Many people are asking that question, but I guess it's a legal-speak problem, as I can't find anything specific enough on Dan Fernandez's blog:

    "Jamie has also made available a version of his product that extends the Visual Studio Express Editions which is a direct violation of both the EULA and “ethos” of the Express product line."

    If it's a "direct violation", what's the specific clause it's violating? Again, I read it this afternoon and I can't see it. As for the violation of the "ethos" of the Express product line, ethos (meaning a distinguishing character, according to Merriam-Webster) seems very subjective to me depending on who's looking at it. There's a part of the EULA that states you may not "work around any technical limitations in the software". Again, subjective, as I'm not sure that adding new functionality that didn't exist is working around a technical limitation. Express does not have a technical limitation on running unit tests; it was just never designed with it in mind. Much like it can't edit images directly: do I violate the EULA if I build something that lets me manipulate image files in Paint.NET instead of Visual Studio Express? Another comment is on reverse engineering the product (VS, not TD), but I know how Jamie wrote his add-in and it never reverse engineered anything. It uses a documented and public API that's been there for years.

    I do like and agree with Frans' comment on Phil's blog entry:

    "MS should have disabled add-in support in the toolkit. OF course they were tool lazy to do so or technically unable to do so, so they thought they could hide behind a silly phrase in an EULA which isn't even applicable here (as the EULA has no right on what Jamie distributes to OTHERS). If Jamie compiles his code on teh command line the whole EULA argument is moot, just to illustrate the point."

    So a few options could be pursued at this point:

    • Jamie removes the Express support for TD.NET. Maybe end of story? He did it before, and only since it was re-enabled has this bear reared its ugly head.
    • MS issues a patch to the Express line to not load add-ins. Problem is their own add-ons won't load (unless they themselves circumvent that)
    • MS finds out who's running TD.NET and issues cease and desist letters to them because they're violating the EULA. Won't happen, and again, they would have to tell their own users of Popfly Explorer and other tools the same thing.
    • MS strong-arms Jamie to remove the product, support, or both. Jamie collapses under legal costs and gives up. Might happen as Microsoft has more than enough resources to just simply throw at this problem to make it go away.

    What a silly mess. Anyways, I'm done with this thread. Jamie has been Slashdotted, and life will find a way.

  • The pot calling the kettle black

    Who's the pot? Microsoft. Enough with the craziness over Jamie Cansdale's excellent (read: must install now) add-on for Visual Studio, TestDriven.NET. I'm a huge fan of the tool (I bought a copy to support Jamie and his excellent efforts, and I recommend any good developer do the same) and have supported Jamie in his efforts, especially after they booted him from the MVP program over some questionable tactics and reasoning. I followed him via emails and his blog posts discussing the matter and the silliness of it all. Now it's come to a head.

    The last few days it's been legal mayhem on his blog, with him posting the various letters and emails he's been getting from and sending to MS lawyer-types. What really peeves me the most is the clause in the EULA that they are griping over:

    " may use the software only as expressly permitted in this agreement. In doing so you must comply with any technical limitations in the software that only allow you to use it certain ways... You may not work around any technical limitations in the software."

    What a load of crap. I'm sorry, but let me rattle off two very big tools from Microsoft that violate this EULA: Popfly Explorer and XNA Game Studio. Both are "add-ons" that *only* work with the Visual Studio Express SKU.

    Since when does building a tool that simply automates running a unit test runner constitute working around a technical limitation? Is the technical limitation that VS Express doesn't have support for unit test frameworks? If that's true, then any macro that shells out and runs nunit-console.exe could be considered in violation. If they're willing to stretch TestDriven.NET to fall into this category, then I call foul on Popfly Explorer and XNA Game Studio. They are "manipulating" how Visual Studio Express works, and there's obviously a technical limitation in that Visual Studio Express, OOTB, does not support the XNA content pipeline or understand Popfly, so again, someone is in violation here.

    Unfortunately for Jamie, he's between a rock and a hard place. EULAs are just that: agreements. IANAL, but from what I know of past issues concerning EULAs, they're not always legally binding. However, with the Microsoft Man behind this, nobody is going to be able to stand up (legally) against them.

    So is Microsoft going to sue themselves? Might as well, since the lawyers are already doing a damn fine job at making an ass out of themselves.

    My advice for Jamie: 1) pull the support for the Express SKU (again) if that will appease the Blue Monster, and 2) contact the EFF. They have a good track record in these types of things and might be able to support you. I know I will, so just yell if you need me.

  • ScrewTurn Wiki

    Kind of a crazy name for a piece of software (in this politically correct world, the use of "screw" doesn't go over very well with some management), but it's a really great example of Open Source in action.

    I was hunting around for a wiki for our development documentation and standards. My first thought was SharePoint, but we haven't rolled out 2007 yet and I didn't want to bank on that. I wasn't quite sure what I was looking for, but I needed a wiki that covered the basics and had a few "must-have" features (like AD integration and content approval). A great site for checking out and comparing wikis is WikiMatrix. This site lets you compare *all* of the wiki software packages out there and even includes a wizard to step you through what you're looking for (OSS, platform, history, etc.) and gives you a nice side-by-side comparison page (much like comparing cars) to help you select a package.

    First I took a look at FlexWiki, which was fairly popular and easy to set up. I had set it up on my laptop before when I was toying around with using a wiki as my home page. FlexWiki was simple and, more importantly (for me anyways), it was C# and Windows based, so if I wanted to extend it, play around, or write extensions, that would be a bonus. Flex is nice, and if you don't look at anything else it probably suits the purpose (although CSS-style customization seems to be pretty complex). While I was leaning towards C# wikis, I knew that the best and most mature ones were PHP/MySQL based (like the one Wikipedia runs on, MediaWiki). However, I just didn't want to introduce another stack of technology at my client just for the purpose of documentation.

    Finally I stumbled across ScrewTurn Wiki. Like Flex, it was easy to set up, and like my favorite blogging software (dasBlog) it can be file based, so you can just set it up and go. I installed ScrewTurn and messed around with it, and it worked well. We handed the duties of really digging into it over to a co-op student we have for the summer, and he's really gone to town with it. AD integration was added (it was always there, I just hadn't enabled it), and he's found some plugins and even written some code to extend it. What's very cool about ScrewTurn is that the common pages are written in C# and live as .cs files on your server. You just edit them and override methods, introduce new ones, whatever. New functionality without having to recompile assemblies or anything (everything is just JIT'd on the fly).

    Anyways, ScrewTurn looks like a very good IIS-based wiki if that's your thing. I find it more mature than Flex; it's written in C# 2.0 and has a lot of great features. Like I said, if you have a LAMP environment in your world then you might want to look at something like MediaWiki, but for a Microsoft world, ScrewTurn is da bomb. The plugin support is great and I'm hoping that the community will step up and build new plugins for the system so it can grow into something special.

    So you might want to give ScrewTurn a try if you're looking for a simple documentation system for your team.

  • ReSharper 3.0 Beta Available

    Great stuff from the guys that make the cool tools: JetBrains announces the release of the latest beta of ReSharper 3.0.

    This release includes refactoring for both C# and Visual Basic .NET. The C# side has been beefed up so it gives you code suggestions that you may or may not choose to implement. Also in this release are XML and XAML support (handy when working with Silverlight), a neat feature called "Go to Symbol" navigation (which I'm preferring over "Go to Declaration"), a smart TODO list, and a reworked Unit Test Runner (although I still prefer TestDriven.NET).

    You can grab the beta from here. I'll see if I can find some time to put together some screenshots or (gasp) a webcast on the features, as just talking about them is rather boring. Enjoy!

  • Efficiency vs. Effectiveness, the CAB debate continues

    There have been two great posts on the CAB debate recently that were interesting. Jeremy Miller had an excellent post on the brouhaha, citing that he really isn't going to be building a better CAB but supports the new project we recently launched, SCSFContrib. I think Jeremy's excellent "Roll your own CAB" series is good, but you need to take it in context and look at it not as "how to replace CAB" but rather "how to learn what it takes to build CAB". Chris Holmes posted a response called Tools Are Not Evil to Oren's blog entry about CAB and EJB (in response to Glenn Block's entry; yeah, you really do need a roadmap to follow this series of blog posts).

    Oren's response to Chris Holmes' post got me to write this entry. In it he made a statement that bugged me:

    "you require SCSF to be effective with CAB"

    Since this morning, it looks like he might have updated the entry saying he stands corrected on that statement, but I wanted to highlight the difference between being efficient with a tool and being effective with the technology the tool is supporting.

    Long before SCSF appeared, I was grokking CAB, as I wanted to see what it was all about and whether it was useful for my application. That took some time (as any new API does) and there were some concepts that were alien, but after some pain and suffering I got through it. Then SCSF came along and it enabled me to be more efficient with CAB, in that I no longer had to write my own controller or implement an MVP pattern myself. This could be done by running a single recipe. Even the entire solution could be started for me with a short wizard, saving me the few hours it would have taken otherwise. Did it create things I don't need? Probably. There are a lot of services there that I simply don't use, but I'm not bothered by that and ignore them (sometimes just deleting them from the project afterwards).

    The point is that SCSF made me more efficient in how I could leverage CAB, just like ReSharper makes me a more efficient developer when I do refactorings. Does it teach me why I need to extract an interface from a class? No, but it does it in less time than it would take manually. When I mentor people on refactoring, I teach them why we do the refactoring (using the old-school manual approach, going step by step, much like how the refactorings are documented in Martin Fowler's book). We talk about why we do it and what we're doing each step of the way. After doing a few this way, they're comfortable with what they're doing; then we yank out ReSharper and accomplish 10 minutes of coding in 10 seconds and a few keystrokes. Had the person not known why they're doing the refactoring (and what it is), just right-clicking and selecting Refactor from the menu would mean nothing.

    ReSharper (and other tools) make me a more efficient developer, but you still need to know the what and the why behind the scenes in order to use the tools well. I compare it to race car driving. You can give someone the best car on the planet, but if they just floor it they'll burn the engine out, and any good driver worth his salt in any vehicle could drive circles around them. Same with development. I can code circles around guys that use wizards when they don't know what the wizard produces or why. Knowing what is happening behind the scenes, and the reason behind it, makes using tools like ReSharper that much more value-added.

    SCSF does for CAB what ReSharper does for C# in my world, and I'll take someone who knows what they're doing over a guy with a big toolbox and no clue why he's using it any day.


  • Whoever Has the Most Toys, Wins

    Or do they? In the past couple of years Google, Microsoft, and Yahoo have been buying up little niche products and technologies and absorbing them into their collectives like mad little scientists working in a lab. It reads like a who's who of the IT world.


    Google

    Feedburner - Very recent purchase and something many of us bloggers use. Good move, Google!
    Blogger - Awesome concept, but I think it went by the wayside as Blogger became the MySpace for blogger wannabes. Some good bloggers still use it, but I don't think it's panned out the way Google wanted it to.
    Picasa - Neat desktop app that I tried out a few times and well done. Hopefully they'll do something with this in the near future.
    YouTube - The biggest acquisition that I know of (who has this much money besides Microsoft?), but with thousands of videos being pulled every day by Viacom or someone else threatening to sue, I wonder what the future holds.
    DoubleClick - Have no idea what this is all about as DoubleClick is the evil of the Internet. Maybe they bought it to kill it off (doubt it).


    Yahoo

    Flickr - Probably the best photo site out there, with many new features being added all the time and nobody nearly as interesting in this space.
    Konfabulator - Never really caught on, and too many people compared it to the Mac desktop (which already had this capability OOTB). Windows Gadgets tries to be like this, but again no huge community has swelled up around it.
    del.icio.us - Next to Flickr, one of the best buys for Yahoo and the best social bookmarking system out there.


    Microsoft

    Connectix - Huge boost in the virtual space, although I think it still trails behind VMWare
    Groove - What put Ray Ozzie on the map is now part of Office 2007 and still growing.
    Winternals - Best move MS made in a long time, awesome tools from these guys who know the inner workings of Windows better than Microsoft in some cases
    FolderShare - Great peer-to-peer file sharing system, but it hasn't really taken off, has it?

    There's a bunch more but I didn't want to get too obscure here. There's a very cool graph here that will show you the acquisitions and timelines.

    Who's Left?

    And here's the hot ticket items these days that are still blowing in the wind. It's anyone's guess who goes up on the block next and who walks away with the prize.

    Facebook - Whoever gets this gets gold at 100,000 new members a day (!). My money is on MS to pull out the checkbook any day now.
    Digg - Kevin Rose, who's already probably laughing his way to the bank will cash in big time on this if someone grabs it. Maybe Google to offset the Yahoo purchase?
    Slashdot - Yeah, like anyone would want this except to hear Cowboy Neal talk about himself (don't worry, Slashdotters don't read my blog - I hope)

    Any others?
    (SharePointKicks... yeah I wish)

    Maybe it's good, maybe it's bad; my question is who will end up with the most toys? Or maybe once all the little ducks are bought up the three top dogs will duke it out with one winner walking away. UFC at the Enterprise level, kids. Should be a fun match.

  • SCSFContrib is Alive!

    I'm pleased to announce the startup of a new project on CodePlex. I'm very happy to be part of the team to bring you SCSFContrib.

    What is SCSFContrib? If you're familiar with NAntContrib, where members of the community contribute extensions to NAnt (specific NAnt tasks), then you're on the same wavelength. SCSFContrib is very similar: extra goodness for CAB/SCSF. A few points:

    • It is based on the Smart Client Software Factory (SCSF) and the Composite Application UI Block (CAB)
    • It allows you, the community, to contribute to an effort that extends patterns and practices deliverables
    • It shortens the time it takes for contributions/changes/extensions to SCSF/CAB to make it out to the public. Rather than waiting for a drop from the patterns and practices team, our team will help manage these and make them available through the CodePlex site
    • Provides guidance to the patterns and practices team as to where gaps exist in the current factory and how they can make improvements in the core

    There are three Contrib projects in motion, SCSFContrib being one of them. The other two are EntLibContrib and WCSFContrib (Web Client Software Factory) which allow contributions to each of those projects.

    Note that this project does not allow you to contribute code directly to the core application blocks. We're talking about extensions here (for example there's an Outlook Bar extension that will be one of the first ones we release under the SCSFContrib project) but that doesn't preclude you from creating your own version of a core block. For example you could replace the Dependency Injection block completely if you wanted, but we won't be replacing it directly in the Factory. It could be enabled as part of a recipe (Use ObjectBuilder or XYZ for Dependency Injection). Of course everything generated has to work with the changes, but that would be up to you. In addition, the one small thing we ask for is full unit tests with any new development (although Alpha/Beta projects won't require these).

    While this was initiated by Glenn Block and the Patterns and Practices team (thanks guys!), the SCSFContrib project is run by community folks (myself included). Here's who's on the team:

    Kent Boogaart

    Author of WPFCAB. Kent has done some great work in the CAB space by developing a WPF version of the Windows CAB extensions. He's graciously created an unmodded version that will be included in the contrib. This product is in production in several environments today.

    Ward Bell

    Ward is the product manager for IdeaBlade, one of Microsoft's key partners out there spreading CAB adoption through their DevForce solution. IdeaBlade was one of the pioneers of smart client software development in the .NET space. Ward is also extremely seasoned in the industry with 20+ years of experience.

    Matias Woloski

    Matias works at Southworks. Matias and the whole Southworks gang are truly gurus at everything related to CAB, SCSF and WCSF, as they helped Microsoft write it. Matias is also the author of the Outlook bar extensions for CAB.

    And me, Bil Simser, major geek and general all-around nice guy.

    Here's to the successful launch of another P&P Contrib project and hopefully you'll find a use for SCSFContrib in your own solutions. You can check out the CodePlex site here and be sure to voice your opinion via the discussion forums or issue tracker as to what you're looking for (or contribute something you've built with CAB/SCSF if that's your thing).

  • Dude, where's my alarm clock?

    I just realized, after running Windows XP for 5+ years, that there's no built-in alarm clock. I needed one as I was just dozing on the couch and couldn't be bothered to go to the bedroom and I had no other way to wake up. I figured I would just use XP. I mean, it must have an alarm clock after all. Nope. Nothing that I can find. Had to download a cheap freebie. Does Windows Vista have one? Does nobody really need one except for me? Hmmm... maybe another WPF weekend project I could do to pass the time.

  • An API for my crack addiction

    I'm addicted to crack. That crack is called Facebook.

    At first it was a silly thing. A social networking site with very little geek factor. It's fun to connect with old friends, make new ones, and generally keep on top of where people are and what they're doing. However I felt empty. A site like Facebook is just ripe for tearing into it and presenting and using the information you want the way you want to. The REST-like access to it seemed kind of clunky and you had to log in via a web page to obtain a session (there's a bit of a hack to do an infinite session, but it's just that, a hack). So I wasn't too interested in what it could provide.

    Now my crack addiction has a proper API and a developer toolkit. Finally I can actually do something with my addiction rather than just admire it. The toolkit requires a developer key (which you can get from Facebook for free) and the .NET 2.0 framework. You can grab the toolkit here. There's also a developer wiki you can check out with lots of QuickStarts, videos, walkthroughs, tutorials, and discussions. Is it just me, or is everything here very MS centric? Maybe MS should just buy Facebook (as everyone else is buying everything else out there) and call it a day. Of course they would have to rewrite it since it seems to run on PHP, but with dynamic languages and the .NET framework in the pipeline it could probably just be converted on the fly.

    I'm still waiting for my invite to come through for Popfly, but in the meantime this will keep me happy as I write up some cool new Silverlight/Facebook apps on SharePoint. Yeah, nothing like mashing up all kinds of new stuff together to see how it works.

  • Reusability vs. RYO

    Every so often, a topic brushes by my RSS feeds that I have to jump into and comment on. The latest foray is a conversation between Chris Holmes, Jeremy Miller, and Oren Eini. It started with Oren and a post about not particularly caring for what the Microsoft Patterns & Practices guys are producing (EntLib, CAB, SCSF, etc.) and ballooned here, here, and here. Oren started down the path that CAB (and other components produced by P&P) was overly complex and unnecessary. I'll focus on CAB but there are other smatterings of things from EntLib here. The main points Oren was getting across (if I read him correctly) were the lack of real world applications backing what P&P is producing and overly complex solutions for simple(r) problems. Oren put together his version of the policy injection block (a recent addition to EntLib) in 40 minutes. Last night I was reading Jeremy Miller's response and needed to chime in as I'm very passionate about a few things, namely Agile software development and CAB.

    I'll be the first to admit that CAB is complex. EntLib is large. There is a lot there. As Chris said this morning in what I think was an excellent response to the entire discussion, CAB for example is not just about building maintainable WinForm apps. I like CAB as it gives me a bunch of things and they all work together in a fairly harmonious way. EventBroker is a nice way to message between views while keeping the views separate; CommandHandlers allow me to hook up UI elements indirectly to code to execute them; the ActionCatalog lets me security trim my commands (and in turn my UI); and the implementation of the MVP pattern using views lets me write presenter tests and keep my UI thin. This all makes me feel good. Did it take me a while to get here? Absolutely. I've spent the better part of a year learning CAB, EntLib, ObjectBuilder, WorkItems, and all that jargon, but it's no different than learning a dozen different 3rd party libraries. I simply chose the MS path because it was there and everything was in one neat package. If you packaged up Castle, NHibernate, StructureMap, and others together in a single package maybe I would have chosen that path (and are there really two different paths here? I use both sets of tools together anyway).

    Oren's defense is around the fact that he (and Jeremy) follow the guideline of evolving a framework from your application needs, not building one up front (like what the P&P guys have done). Okay, that's fair, but at some point you have to stop building things over and over again. So when does your own work become a framework that you reuse? Is it as lean and mean as you want it to be? Sure, you can put together the basic needs of an IoC in half a day (half a day Bil time, 40 minutes Oren time) but that's just the beginning. It serves the need you have today and the problems you might be facing right now. I would argue that if you took something like StructureMap and evolved it to handle scenarios that you're not dealing with today, you would be starting to build your own implementation of EntLib.

    We all want lean software that does the job, however I subscribe to the mentality that if you can leverage something else (aka not reinventing the wheel) then do so, as long as it doesn't come at a cost higher than doing it yourself. That's a hard decision to make as you don't want to get too predictive on what the future may hold (do we need logging, security, etc. in the future?) but you gauge your response based on current affairs and what feels best. It's more of an art than a science. When I first looked at CAB I thought it was huge, but once I sat down to grok the pieces and how it all fit together, it made sense. EntLib and CAB do include everything and the kitchen sink and you do need to get past the learning curve, but in the end it's a good collection of tools that you can have in your toolbox. Unfortunately it's not something I could introduce at a conference or User Group session and describe the entire stack in an hour, so I tend to avoid showing off applications and concepts using it as it just turns into a discussion of what [SmartPart] means instead of the main goal, like describing MVP, which I can do with my own code.

    Is EntLib/CAB/etc. doing too much maybe? Perhaps, but then if I choose the 3rd party elements I want and wire them together to suit my needs, what kind of Frankenstein have I built in the process? When I look at CAB holistically, there's a lot there but it's not a bad implementation. I don't think Oren or Jeremy are saying the P&P guys did a bad job on it, they just choose to evolve their own solutions using a minimalist approach. I'm all for that. It's very TDD-like. When I build systems I start by writing single tests against my domain and only doing what I need at the time (the YAGNI principle). However at some point you end up with a very rich domain, hundreds (or thousands) of unit tests, dozens (hundreds) of classes and methods, and a lot of functionality. I argue that is in fact what EntLib and CAB have become. They're rich, re-usable tools that do a lot, but frankly you can still use just what you need. Maybe you'll deploy all the EntLib assemblies with your application and only use the logging feature, but so what? As an example, I had to implement NHibernate in an application recently to apply persistence to my domain. When I ran some db unit tests, I found out that I needed the NHibernate assemblies, log4net, and an assembly from Castle to make it work. Disk space is dirt cheap so having the extra there means nothing (except a few extra seconds of download time).

    I'll cite Rocky and his excellent CSLA.NET as an example. It's a large framework, lots of classes, lots of functionality. That's what frameworks are about. However while I like what Rocky's done and he's had great success at it, I don't subscribe to the approach he took. I'm not a fan of the ActiveRecord pattern and don't like how business objects are tied to data implementation (even if there's a level of abstraction there). I simply cannot use CSLA with DDD. Is the framework a bad product? No way. Would I recommend it to others? Absolutely. Would I use it myself? Nope, but it's a good piece of software and I wouldn't discount it.

    CAB follows similar concepts as it's big and ugly in some places (like ObjectBuilder). Sure I could use Castle to do better (real) dependency injection, but if I don't buy into the MS song and use CAB and EntLib to their full extent I end up with bits and pieces of goo all over the place. Like I mentioned with NHibernate, I needed to deploy log4net as NHibernate needs it, even if I didn't turn on that feature. At least with EntLib, if I'm not using security for example I don't need to deploy the security module. In my case now, I have EntLib logging deployed and I've got a second logging system deployed because NHibernate dragged it along for the ride. Eventually I could have a really ugly monster on my hands with copies of Castle, StructureMap, CAB, EntLib, NHibernate, log4net, and who knows what else all living (hopefully) together in happy existence. I don't want that.

    CAB gives me most of what I need (except O/R mapping and persistence) so I leverage as much as I can from CAB and EntLib and fill in the gaps with things like NHibernate for persistence. I could use EntLib's database factory but then I'm rolling my own DAL and that's not a path I want to take, so I choose to ignore the EntLib database component. The nice thing is that I don't have to deploy it, so as long as my code doesn't call it, I'm golden.

    As Jeremy put it, the P&P guys are a good thing as they're out there getting the Agile word out to many more people than we can. While they do produce large(r) tools, frameworks, and components that include perhaps more complexity than you need at the time, at the end of the day you'll probably end up using it. IMHO I'm happy with what CAB and EntLib provide. Could I get the same functionality from the other alternatives? For sure, however I would probably be writing more code to wire things together than I would with CAB. For that reason, I like what the P&P guys do and look forward to how they'll evolve, hoping these kinds of discussions will help adjust their path towards a better end game for all of us.

  • ASP.NET Weblogs, the saga continues

    Hmmm, more odd things happening at ASP.NET Weblogs, even after playing Halo 3 for 4 hours (and boy are my thumbs killing me).

    I noticed I was getting trackbacks. Nothing new here, I get them all the time. Except I was getting them from myself. Huh?

    Yeah, the last two posts I made created a trackback, to itself. Sigh. More email, more notifications, less sleep...

    Update: Now my RSS feeds are only partial. Grrr. Argh.

  • Woohoo! Finally...

    This makes the ASP.NET Weblogs upgrade and me not being at DevTeach all that much better.

    Sleep, I knew you well...

  • Scrum for SharePoint

    Agile teams are all about co-location and communication. We have a wall where tasks are posted. The wall is life. It is the source of truth. From the wall, the ScrumMaster (me generally) enters in the hours remaining for tasks and updates some backend system (in our case, VSTS with the Scrum For Team System templates).

    There are many tools out there to do Scrum, XP, etc. and keep track of your items. I think I did a round-up of the tools out there, but I missed one: SharePoint. Yup, my two favorite topics, SharePoint and Agile, come together.

    A friend pointed me to an article on Redmond Developer News (a new feed I didn't even know about and one that looks pretty cool) by David Christiansen called Building a Virtual Bullpen with Microsoft SharePoint. Basically he walks you through creating a digital bullpen, complete with product backlogs and sprint backlogs, all powered by SharePoint. And it's easy to do, with a few custom views and all standard web parts and controls.

    I remember Scott Hanselman mentioning that they used SharePoint for Scrum a while back on an episode of Hanselminutes. He said it worked well for them. I've set up clients using standard out-of-the-box lists to track Product Backlog items and such. The only thing 2003 won't give you is burndown charts. With Excel Services, a little bit of magic, and MOSS 2007 behind the scenes this now becomes a simple reality.

    Check out the article to get your virtual bullpen setup and drop me a line if you need a hand (or just want to share with the rest of the class).

  • What the heck happened to ASP.NET Weblogs?

    I see (or guess) that ASP.NET Weblogs (where this blog is hosted) upgraded to a newer version of Community Server, but boy it doesn't look good. Besides the change in the control panel and things, the look is pretty different on my blog, and the sidebar has a few broken things now that I had to remove but, most importantly, isn't showing any content. Hopefully they'll have this fixed soon. Normally they announce major upgrades and such, but I guess you get what you pay for (free) so I can't complain too much.

    Update: Seems a lot of people are complaining about the upgrade. Things are a little messed up here as the CSS has changed. I use the stock Marvin3 from the old .Text blog but it changed (or something around it) so there's additional white space and padding everywhere on the site. Other blogs that are using custom skins/css are really messed up. I noticed Frans' tags are just plain ugly and unreadable.

    In addition, JavaScript is disabled for the sidebar so I had to remove a few links I had, and uploading images is disabled (or some kind of security problem is afoot). Another problem is that the tag filtering doesn't seem to be working. On Weblogs, if we tag entries with certain tags they show up on the main page. Now it seems everything is getting up there.

    I caught a comment by Rob Howard on another blog saying that emails had been sent out regarding the upgrade, but only 115 went out then suddenly stopped, as if thousands of processes suddenly cried out in terror and were suddenly silenced. What a mess.

  • A Scrum by any other name...

    I'm not getting it. I'm seeing a lot of posts about "Feature Driven Development" (or FDD for short) but I'm just not getting it. All I see is Scrum with different terminology. I was reading the Igloo Boy's blog where he's off at DevTeach 2007 (man I'm so jealous, Montreal in the summer time with geeks) and he posted his review of an FDD session with Joel Semeniuk, and I just don't see the brouhaha about FDD.

    FDD is defined as a process defined and proven to deliver frequent, tangible, working results repeatedly. In other words, what we try to achieve when using Scrum in software development.

    FDD's characteristics include minimum overhead and disruption; delivering frequent, tangible, working results; emphasizing quality at each step; and being highly iterative. Again, Scrum on all fronts.

    FDD centers around working on features (Product Backlog Items in Scrum) which have a naming convention like:

    <action> the <result> <by|for|of|to> a/an <object>

    Like user stories where:

    As a/an <role> I would like to <action> so that <business benefit> 

    Feature Sets 
    An FDD Feature Set is a grouping of features that are combined in a business sense. In Scrum we've called those Themes.

    So am I way off base here or are we just putting lipstick on a pig? Are we just packaging up Scrum with a different name in order to sell it better? Wikipedia lists FDD as an iterative and incremental software development process and a member of the Agile methods for software delivery (which includes Scrum, XP, etc.).

    There are differences here between Scrum and FDD, like reports being more detailed than a burndown chart (however for me, a burndown chart was more than enough information to know where we were and where we were headed). Practices include Domain Object Modelling (DDD?) and teams centered around Features, but again this is (to me) just Scrum organized a certain way. I would hazard to say I already do FDD because to me it's all about the domain and business value.

    Or maybe this is a more refined take on Scrum. Scrum with some more rigor around focusing on the goal? A rose by any other name... I must be missing something here.

  • Read it, live it, love it!

    If you're struggling with getting in touch to deliver what your customers really want, try this. To me, this is what Agile is all about.


    Print out the big version of this (available here), put it up on your wall (in your face) and read it every morning before you start. Really.

    Hugh is brilliant.

  • Have you tried out Planning Poker?

    I'm a big fan of the Planning Poker technique for estimating. It's basically a process where everyone in the room gets together with cards and estimates effort for user stories. Each card has a number on it (from a modified Fibonacci sequence): 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100. Then everyone reveals their estimate for a given story at the same time. Any estimates on the fringe are challenged and justified, an estimate is arrived at, and then the process is repeated for the next user story.
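
    Just to make the deck concrete, here's a little sketch of my own (an illustration only, nothing to do with the site's internals) of snapping a raw gut-feel number to the nearest card in the deck:

    ```csharp
    using System;

    public class PlanningPokerDeck
    {
        // The modified Fibonacci deck used for estimates.
        static readonly int[] Cards = { 0, 1, 2, 3, 5, 8, 13, 20, 40, 100 };

        // Snap a raw gut-feel estimate to the nearest card in the deck.
        public static int NearestCard(double rawEstimate)
        {
            int best = Cards[0];
            foreach (int card in Cards)
            {
                if (Math.Abs(card - rawEstimate) < Math.Abs(best - rawEstimate))
                {
                    best = card;
                }
            }
            return best;
        }

        public static void Main()
        {
            Console.WriteLine(NearestCard(11));   // 13
            Console.WriteLine(NearestCard(30));   // 20
        }
    }
    ```

    The point of the coarse deck is exactly that: you don't haggle over an 11 versus a 12, you pick 13 and move on, and anything on the fringe gets talked out rather than averaged.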

    Mike Cohn and the Mountain Goat Software people have put together a fabulous website to solve a problem with planning poker, namely remote users. It doesn't help planning poker if some users are off in another city, so Planning Poker, the site, solves that. You create an account (free, and it only requires 4 fields of information) and log in. Then someone creates a game and everyone participates online. It's a great way of doing this, and you can export the results to HTML and CSV at the end of the session. There's even a 2 minute timer that you can invoke once you've discussed enough about a story and are ready to estimate. Some people have even used it internally, with everyone bringing their laptops to the session rather than using physical cards.

    So check out Planning Poker for yourself as it might be useful in your next planning session. Here are some screenshots that walk you through creating user stories and estimating them with the planning poker site.

    When you log into the site you can view the active or complete games. Complete games can be re-opened if you need to do them across a few days:

    To create a new game, click on the link and fill in the name and description. If you have stories already ready to estimate, you can paste them into the field here from a spreadsheet. The first row should contain the field names.

    To add a story for estimating, just enter it in the form As a/an <role>, I would like to <function> so that <business value>. There's also room for notes that might be something you want to capture, but keep it light; this isn't the place for requirements gathering details.

    Once you've added a story, the estimating game begins. Select a card from the screen for that story.

    Then you can accept or play again with that estimate. Your estimate shows up along with others (if they're logged into your game).

    If you were wrong with your original estimate or there's debate on something and you really do agree it's larger/smaller, click play again and select a different estimate.

    When all the estimates are done and the game is complete you can view all of the estimates online.

    Finally, if you want to take this information elsewhere, you can export it to HTML for viewing/publishing/display or to a CSV file for importing into another tool.

    Note that Planning Poker doesn't work very well under IE7 on Windows XP but the guys are working on it. I flipped over to use Firefox for the screenshots and use FF when I do my sessions using the tool.

  • Feeds not working on

    Not sure what the problem is, but if you subscribe to my feed (via my feedburner URL) you may have noticed that the feeds around here haven't been updated. In fact there's been no new feed content since May 1st.

    The feedburner feed is stagnant, but what's more disturbing is that feedburner is working correctly; it's the source feed from Community Server that isn't updating. I checked the private feed (sans feedburner) and it also shows May 1st as the last post.

    I sent a note off to the guys but haven't heard back. I'm posting this here in the hopes that someone else is seeing the same problem (and it's not just me) and maybe something gets done about it.

    Maybe I forgot to pay the rent on my site? ;)

    Update: I've been clicking on other people's RSS links and not seeing items from the past week for many blogs. So either the feeds are not getting through on my end (DNS problem? I doubt it) or something is messed up on the server.

  • Source Code Posted for SharePoint Forums Web Part

    As I continue to cleanup my projects on CodePlex, I've posted the latest version of the source code for my SharePoint Forums Web Part. This is version 1.2 that was released in August (better late than never?).

    If you're interested in contributing to or enhancing the project please do so. Right now I'm juggling a bunch of projects with lots of team members so it might take some time to get you added to the team. If you're interested in modifying the codebase, Scott Hanselman has a great blog entry here on creating patches. You can simply submit a patch file to me (via email) and I'll add it to the codebase. This way you don't have to sign up for a CodePlex account and go through setting up all those tools. Your choice, but please consider contributing to the project.

    The source code does not include the 2007 version, as that will be released under the Community Kit for SharePoint (CKS) project which also lives on CodePlex (surprise surprise). I'm donating the 2007 version to CKS, but in reality it's just being hosted under that project. It'll be the same Web Part, however hopefully we'll have some more bodies working on it under CKS.

    You can download the source code directly from here (sort of direct since direct file links don't work anymore on CodePlex) or through a TFS client (Teamprise, Team Explorer, etc.) if you're signed up on the site via the latest change set here.

  • Change Sets Restored on CodePlex for Tree Surgeon

    For those that have been playing along, CodePlex suffered a bit of a hiccup awhile ago and some data was lost.

    The Tree Surgeon project was one of the casualties (along with some of my other projects). The CodePlex team got the work items restored, but the change sets and source tree were lost. Luckily we had two backups: the release zip file, plus several local copies on various hard drives. I've rebuilt the latest change set on CodePlex so you can hook up and grab the source if you're a member of the team (or just grab the latest change sets as we get more work done).

    I'm just spending some time tonight to update other projects and get new or updated source code uploaded to CodePlex so watch for more announcements over the next few days.


  • Generic List Converter Snippet

    A useful little class that you might find you need someday. It converts a weakly-typed list (like a non-generic IList of items) into a strongly typed one based on the type you feed it.

    It's super simple to use. For example, let's say you get back a plain IList of People objects from some service (a data service, web service, etc.) but really need it as a List<People>. You can just use this class to convert it for you.

    I know, silly little example but just something for your code snippets that you can squirrel away for a rainy day.

    public class ListToGenericListConverter<T>
    {
        /// <summary>
        /// Converts a non-typed collection into a strongly typed collection.  This will fail if
        /// the non-typed collection contains anything that cannot be cast to type T.
        /// </summary>
        /// <param name="listOfObjects">An <see cref="IList"/> of objects that will
        /// be converted to a strongly typed collection.</param>
        /// <returns>Always returns a valid collection - never returns null.</returns>
        public List<T> ConvertToGenericList(IList listOfObjects)
        {
            ArrayList notStronglyTypedList = new ArrayList(listOfObjects);
            return new List<T>(notStronglyTypedList.ToArray(typeof(T)) as T[]);
        }
    }
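
    And a quick usage sketch to go with it. The names and list contents here are made up; the ArrayList just stands in for whatever weakly-typed list your service hands back:

    ```csharp
    using System;
    using System.Collections;
    using System.Collections.Generic;

    public class Example
    {
        public static void Main()
        {
            // Pretend this untyped list came back from a legacy service call.
            IList fromService = new ArrayList();
            fromService.Add("Alice");
            fromService.Add("Bob");

            ListToGenericListConverter<string> converter =
                new ListToGenericListConverter<string>();
            List<string> names = converter.ConvertToGenericList(fromService);

            Console.WriteLine(names.Count);   // 2
            Console.WriteLine(names[0]);      // Alice
        }
    }
    ```

    If you're on the new LINQ bits you could get the same effect with a Cast<T>() call, but in .NET 2.0-land this little helper does the trick.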

  • 6 Months of Sprints - A Visual Record

    I thought I would start off the week by airing my dirty laundry, that laundry being one of the projects I'm Scrum Master and Architect on.

    It's been 6 months of month-long iterations and the project is currently on hold as we shifted resources around to a few "higher" priority ones. I'm looking back at the iterations and the burndown charts that we pulled out (via Scrum for Team System).

    It's not pretty but it's real and it's interesting to see the progress (or sometimes lack of it) along the way. Here we go...

    Sprint 1



    The sprint starts and we're not off to a bad start. In fact, it's one of the better sprints we had, even if it was the first. A little bit of churn the first few days, but that's normal. In fact it was a collision between myself and the PM, who decided to enter all the tasks again his way. After a few days we got that part straightened out and the rest of the sprint seemed to go pretty well. I was pretty happy so far.

    Sprint 2


    Another sprint that didn't go too badly. The team (myself and one developer) had some momentum and we were moving along at a nice pace. However technical debt built up already and we ended the sprint with about 40 hours of work left. Still, overall I was pretty happy and we seemed to have hit our stride. We also picked up a new team member, so there was that integration that had to happen, but it worked well for the team.

    Sprint 3


    Third sprint, 3 months into the project, and we were moving along. Sometime around the middle of the sprint we were going like gangbusters and realized we were going to end early. That's the big dip around the 17th-20th of November 2006. Once we got back on the Monday (the 20th) we decided to add more work to the sprint, otherwise we were going to be twiddling our thumbs for 2 weeks. It worked out well for this sprint as we finished without too much overhead (about 12 hours, but some of that was BA or QA work which didn't directly affect the project).

    Sprint 4


    Ugh. This is ugly, but bonus points for the first person to know why (other than who was on the team). The main cause for the burndown to go flatline here is Christmas. Yup, with people on holidays and not really wanting to end the sprint in early January right when everyone got back, we decided to push the sprint out a little to make up for the lost time over the Christmas season. In addition, in the first week of this sprint one of the main developers came down with the flu and was out of commission for almost a whole week. That crippled us. By the 22nd or 23rd of January we decided we had to drop a whack of scope from the sprint (which is the sudden drop at the end you see) and we would have to make it up the next sprint, somehow. Even with that adjustment we were still running about 225 hours over at the end of the sprint. Not a good place to be to start your next iteration.

    Sprint 5


    Doesn't look good for the team that was doing so well. This sprint started off with a couple of hundred hours of deferred backlog items, then ballooned up with more technical debt and decomposition of new tasks. The team was larger now, but we obviously bit off more than we could chew. In fact I remember going in saying that, but I was shot down by certain PTB who said "it'll work itself out". Don't believe them! If your burndown chart looks like this the first week in (and you can tell the first week in), you're certain not to succeed on the iteration. Hands down. I like to take a CSI approach to iterations: let the facts show what's going on, not people's opinions. If your burndown is burning up, you need to make adjustments and not "ride it out", because unless you have magical coding elves come in late at night (and I'm not talking about geeky coders who like to keep odd hours) you're not going to make it, and it's pretty obvious.

    Sprint 6

    This sprint was just a death march: 800 hours of work, which included 80 hours for a task we outsourced to a different group (and which really turned into 200 hours, as that person never gave any real estimate of how long it would take) and probably 200 hours of technical debt that had been building for 4 months. We actually got a lot done this sprint, about 200 hours worth of work, which isn't bad for 3 developers, 1 QA, and 1 BA, but it looks bad here.

    This is how we ended the project until it went stealth. No, we didn't shut the project down because the progress was horrible. As I said, it slipped down the priority chain and we, as an organization, felt it was better to staff a project with 4-6 developers and bring it home rather than 2-3 developers keeping it on life support.

    Hopefully this reality trip was fun for you and might seed a few ideas for your own iterations. Overall, a few things to keep in mind on your own projects following Scrum:

    • No matter what tool you use, try to get some kind of burndown out of the progress (even if it's being drawn on a whiteboard). It's invaluable to know early on in a sprint what is going on and where things are headed.
    • If you finish a sprint with backlog items, make sure you're killing them off the first thing next sprint. Don't let them linger.
    • Likewise with technical debt: treat it like real debt. The longer you go without paying it down, the more interest and the less principal you end up paying, and it will cost you several times over.
    • If you're watching your sprint and by the end of the first week (say on a 2-3 week iteration) you're heading uphill, put some feelers out as to why. Don't just admire the problem and hope it will go away. It might be okay to start a sprint not knowing all your tasks (I don't agree with this, but reality sometimes doesn't afford you the choice), but if you're still adding tasks mid-sprint and you're already not looking like you're going to finish, don't. It doesn't take a genius to figure out that if you can't finish what you've got on your plate, you shouldn't be going back to the buffet.
    • Be the team. Your team is responsible for the progress of the sprint, not one individual, so you succeed as a team and fail as a team. Don't let one individual dictate what is right or wrong in the world. As a team, if the sprint is going out of control, fix it. If a PM says "don't worry" when you can see the iceberg coming, don't sit back and wait for it to hit; steer clear, because you know it's coming.


  • Welcome to Magrathea

    Our 11th episode of the best damn podcast in the greater Calgary area, Plumbers @ Work, is now online.

    In this episode, we talk about Silverlight, the Calgary Code Camp, Silverlight, GoDaddy refunds, Silverlight, Rhino Mocks, Silverlight, the Entity Framework, Silverlight, and Halo 2. We finally wrap up the show by talking about Silverlight.

    You can view the details and links for this podcast here or directly download the podcast into your favorite ad-ridden, battery draining, lightweight, podcast player here.

    Magrathea? Maybe I should have called this post "09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0" just to get in the game.

  • INVEST in your stories with SMART tasks

    An oldie but a goodie that I thought I would share with the class.

    For user stories:


    Independent: Stories are easiest to work with if they are independent. That is, we'd like them to not overlap in concept, and we'd like to be able to schedule and implement them in any order.


    Negotiable: A good story is negotiable. It is not an explicit contract for features; rather, details will be co-created by the customer and programmer during development. A good story captures the essence, not the details. Over time, the story may acquire notes, test ideas, and so on, but we don't need these to prioritize or schedule stories.


    Valuable: A story needs to be valuable. We don't care about value to just anybody; it needs to be valuable to the customer. Developers may have (legitimate) concerns, but these should be framed in a way that makes the customer perceive them as important.


    Estimable: A good story can be estimated. We don't need an exact estimate, but just enough to help the customer rank and schedule the story's implementation. Being estimable is partly a function of being negotiated, as it's hard to estimate a story we don't understand. It is also a function of size: bigger stories are harder to estimate. Finally, it's a function of the team: what's easy to estimate will vary depending on the team's experience.


    Small: Good stories tend to be small. Stories typically represent at most a few person-weeks worth of work. (Some teams restrict them to a few person-days of work.) Above this size, it seems to be too hard to know what's in the story's scope. Saying "it would take me more than a month" often implicitly adds "as I don't understand what-all it would entail." Smaller stories tend to get more accurate estimates.


    Testable: A good story is testable. Writing a story card carries an implicit promise: "I understand what I want well enough that I could write a test for it." "Testability" has always been a characteristic of good requirements; actually writing the tests early helps us know whether this goal is met. If a customer doesn't know how to test something, this may indicate that the story isn't clear enough, or that it doesn't reflect something valuable to them, or that the customer just needs help in testing.

    For tasks:


    Specific: A task needs to be specific enough that everyone can understand what's involved in it. This helps keep other tasks from overlapping, and helps people understand whether the tasks add up to the full story.


    Measurable: The key measure is, "can we mark it as done?" The team needs to agree on what that means, but it should include "does what it is intended to," "tests are included," and "the code has been refactored."


    Achievable: The task owner should expect to be able to achieve a task. Anybody can ask for help whenever they need it; this certainly includes ensuring that task owners are up to the job.


    Relevant: Every task should be relevant, contributing to the story at hand. Stories are broken into tasks for the benefit of developers, but a customer should still be able to expect that every task can be explained and justified.


    Time-boxed: A task should be time-boxed: limited to a specific duration. This doesn't need to be a formal estimate in hours or days, but there should be an expectation so people know when they should seek help. If a task is harder than expected, the team needs to know it must split the task, change players, or do something to help the task (and story) get done.

    From Bill Wake

  • Reconstituting Domain Collections with NHibernate

    We ran into a problem using NHibernate to persist our domain. Here's an example of a domain object: an Order class with a collection of OrderLine objects representing the lines in each order placed in the system. We want to be able to check if an order exists, so we use an anonymous delegate as a predicate on the OrderLine collection:

        class Order
        {
            private IList<OrderLine> Lines;

            public Order()
            {
                Lines = new List<OrderLine>();
            }

            public bool DoesOrderExist(string OrderNumber)
            {
                return ((List<OrderLine>)Lines).Exists(
                    delegate(OrderLine line)
                    {
                        if (line.OrderNumber == OrderNumber)
                            return true;
                        return false;
                    });
            }
        }

        class OrderLine
        {
            public string OrderNumber;
            public int Quantity;
            public string Item;
            public double Cost;
        }

    This is all fine and dandy, but when we reconstitute the object from the back-end data store using NHibernate, it blows its head off with an exception saying it can't cast the collection to a list. Internally NHibernate creates a PersistentBag object (which implements IList) that can't be directly cast to a List, so we can't use our predicate.

    There's a quick fix we came up with which is to modify the DoesOrderExist method to look like this instead:

        public bool DoesOrderExist(string OrderNumber)
        {
            List<OrderLine> list = new List<OrderLine>(Lines);
            return list.Exists(
                delegate(OrderLine line)
                {
                    if (line.OrderNumber == OrderNumber)
                        return true;
                    return false;
                });
        }

    This feels dirty and smells like a hack to me. Rebuilding the list from the original one every time we want to find something? Sure, we could cache it (so we're not recreating it every time), but that just seems ugly.

    Any other ideas about how to keep our predicates intact when reconstituting collections?
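    One direction we've been kicking around (just a sketch, not battle-tested): since the search only needs to enumerate the lines, write it against the IList<OrderLine> interface and skip the cast entirely. NHibernate's persistent bag implements the interface, so a plain foreach works no matter what concrete collection gets injected. Note the AddLine helper and the trimmed-down classes below are invented for the demo; they're not part of the real domain model.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Demo usage (AddLine is a helper invented for this sketch)
    var order = new Order();
    order.AddLine(new OrderLine { OrderNumber = "PO-100" });
    Console.WriteLine(order.DoesOrderExist("PO-100")); // True
    Console.WriteLine(order.DoesOrderExist("PO-999")); // False

    class Order
    {
        // NHibernate would hand us its persistent bag here; List<T> stands in for the demo
        private IList<OrderLine> Lines = new List<OrderLine>();

        public void AddLine(OrderLine line) { Lines.Add(line); }

        public bool DoesOrderExist(string OrderNumber)
        {
            // Enumerate through the interface -- no cast to List<T>, so the bag is happy
            foreach (OrderLine line in Lines)
            {
                if (line.OrderNumber == OrderNumber)
                    return true;
            }
            return false;
        }
    }

    class OrderLine
    {
        public string OrderNumber;
    }
    ```

    It loses the predicate syntax, but it survives whatever collection NHibernate decides to inject.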

  • GoDaddy and their crazy accounting system

    I got an email from GoDaddy today, where most of my domains are hosted, about a reduction of ICANN fees. I must say that GoDaddy is absolutely brilliant in crediting me the overpayment I've made as a result of the reduction of the fees:

    Dear Bil Simser,

    The Internet Corporation for Assigned Names and Numbers (ICANN®) recently agreed to reduce their Registrar Transaction Fee from $.25 to $.22. What does this mean for you?

    Good news. You have been credited $.03/yr for each domain name you registered or renewed dating back to July 1, 2006* -- $.15 has been placed into your Go Daddy® account with this customer number: 9999999.

    Your in-store credit will be applied to your purchases at GoDaddy.com® until it's gone or for up to 12 months, whichever comes sooner. If you have any questions, please contact a customer service representative at 480-505-8877.

    As always, thank you for being a Go Daddy customer.

    Bob Parsons
    CEO and Founder

    Wow. 15 cents for all my domains. What should I buy with this windfall first? A new laptop? A 42" LCD TV? I understand that it's the law, and that without sending this note out I would probably be complaining about them stealing my $0.15, but it's akin to a bank sending you a cheque for $0.02 in interest, and it's somewhat funny (even if others think it's not).

  • Silverlight is here

    I watched the keynotes from the MIX07 conference and I have to say I'm rather impressed with Silverlight. I've been on the fence with regards to WPF/E (and WPF for that matter), but after watching ScottGu talk about it and demo some pretty kick-ass apps, I've gone over to the dark side.

    Maybe Microsoft won't say it out loud, but this is the Flash killer. I was always impressed with Flash because of its rich nature. Watching Strong Bad cartoons in Flash was always fun. However, Flash left a bad taste in my mouth. It was a giant tag on a web page, required a completely different language to learn, and was never really browser or search engine friendly. Sure, it looked good, but that was about the extent of it.

    A couple of things have come out of MIX07 so far: not only the availability of Expression Studio, the web authoring tools that are sure to take the place of Flash Studio and Dreamweaver (each MIX attendee gets a copy), but also the Silverlight Streaming service. Basically you can upload 4GB of content to Microsoft's data services and use it in a Silverlight-enabled app (web, desktop, or otherwise). Not bad for free as in beer.

    A few other cool things came out of the keynote (once ScottGu got on stage): remote debugging (he debugged a Silverlight app running on a Mac from a PC, way cool) and the full .NET framework embedded in the Silverlight runtime. So rather than a trimmed-down compact framework, it's the full meal deal. There's also some great compatibility between Expression and Visual Studio Next (Orcas), as they use the same XAML files and same solution formats. You can basically build the UI you want in XAML using Expression, then just save it and reload it in Visual Studio to use in your app. I was highly impressed with the Silverlight Airline app (hopefully source code will be available for this), which let you choose a set of dates, drag and drop an origin and destination, and, with 3 lines of code, Scott was able to show animation between routes and alternates when you selected each itinerary. Way cool, and the way the web should be compared to the archaic way things are done now.

    Finally there was the question of performance. Sure, you can build awesome apps that look great, but what about bandwidth? Apparently Microsoft has solved this problem, so you'll start to see very rich web sites using Silverlight, video enabled (even HD up to 720p), downloading in as little as 50k. This might be a bit of a bait-and-switch game, as the app will be 50k but obviously the HD content will be larger. The point is that Silverlight seems to know how to download content in a smart fashion so it's there when you need it, without you staring at long loading screens. One example was NetFlix streaming an HD movie, with pretty good response time, from a Silverlight app. Impressive. Most impressive.

    You can install Silverlight here and download the SDK and other tools here. There's the 1.0 beta version, and for the really on-the-edge guys, you can grab the 1.1 alpha releases. I'm spending the rest of the night playing with Silverlight, finally boning up on WPF, and putting together my own mashups, so maybe watch for a few SL apps popping up here and there from me (as I try to Frankenstein Silverlight and SharePoint together).

  • Calgary Code Camp 2007, A Visual Journey

    Just got the pics from James from the Calgary Code Camp we had this weekend, so I figured I'd take you on a visual journey. The Code Camp was a blast and we had a great turnout, with about 100 people showing up. Hope everyone had a great time. I had the best time co-presenting with John on XNA as we plowed through 2 sessions covering everything from starter kits, debugging, deploying, textures, meshes, and audio to everything else XNA.


    What a great crowd, and hey, there's JP poking his head in the door!


    The first prizes are given away. It's going to be a full day of schwag!


    Donald Belcham (aka the Igloo Coder) points to the word "changes". Maybe he's about to initiate some?


    Steven Rockarts bangs out his presentation on the Windsor Container.


    David Woods reflects on his work.


    Bruce Johnson from ObjectSharp stopped in on his way to Mix to talk about LINQ.


    Jean-Paul Boodhoo rides the monorail.


    Terry Thibodeau and his walk through mocks.


    John Bristowe gets some help from our XBox 360s during his DataDude presentation.


    James Kovacs takes a lap around TDD, mocks, and dependency injection.


    Daniel Carbajal takes us through WCF.


    D'Arcy Lussier can't be photographed with traditional equipment. I believe we need to get a special camera that can take pictures of the undead to be able to get a clear shot.

    We also had a kick-ass display for our XNA sessions. We brought 3 XBox 360s, 3 laptops, and a backup projector just in case.


    We had two projectors set up, one with the code on it and the other with the Xbox output. We also had two laptops set up: I was doing the code on mine and John had his set up with the browser and PowerPoint slides. All in all, we had about 18 cores running in our tiny room, but it was a blast. Our setup was so large we couldn't capture it in one photo!



    You can view the entire Flickr set here. Sorry for the quality of the pics but things were rushed and a lot of times I didn't turn the flash on (so as not to disturb the presentations).

    I'll be following up with a post in a day or so with a wrap-up of the event and download links for all the source code from our XNA sessions.

  • TFS vs. Open Source, the battle rages on

    Caught a thread between Roy Osherove and Oren Eini that has gone back and forth a few times, all about Team Foundation Server and open source tools (and the deficiencies Oren points out in TFS).

    The discussion has (mostly) centered around source control; however, I think they're both missing the main feature of TFS. That is extensibility.

    Yes, other packages out there are extensible by nature (Subversion, for example) but require coding, architectural changes, or hooking into events, all of which are nice, but these systems were not designed for it. Was Subversion really designed from the start to be extensible, so I could have my storage in a relational database rather than the file system?

    TFS by its very nature was built around extensions. The Work Item itself is just a container, defined by the fields, rules, and relationships you create in the process template. There is no "default" Work Item. There is no concept of a "bug" or "feature" in TFS. It's just a Work Item that can be morphed into whatever you need it to be. Just look at the Conchango guys with Scrum for Team System and how they turned Work Items into sprint and product backlog items, and even threw in a sprint retrospective item.

    I was asked recently if we could modify the bug template so we could track steps to reproduce. I said we could use the description field, but decided to build a new type of Work Item (using Joel's very excellent Process Template Editor) so that you could create a work item (a bug) and add as many steps to reproduce as you want, plus an expected behavior field for kickers. The entire process took me an hour (and that included about 15 minutes to grok the templates, as I'd never done this before).
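    I won't reproduce the whole template here, but to give a flavor of what that hour of work looks like, a work item type definition is just XML in the process template, along these lines (the field names and refnames below are invented for illustration, and a real definition also needs FORM layout entries for the new fields):

    ```xml
    <!-- Illustrative fragment only; refnames and field names are made up for this example -->
    <WORKITEMTYPE name="Bug">
      <FIELDS>
        <FIELD name="Steps To Reproduce" refname="MyCompany.Bug.StepsToReproduce" type="PlainText">
          <HELPTEXT>Numbered steps a tester can follow to reproduce the bug</HELPTEXT>
        </FIELD>
        <FIELD name="Expected Behavior" refname="MyCompany.Bug.ExpectedBehavior" type="PlainText" />
      </FIELDS>
    </WORKITEMTYPE>
    ```

    Add the fields, drop them onto the form, upload the template, and the "new concept" is live for the whole team.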

    I've tried these types of things on other systems and they're painful. Sure, some systems are quite helpful for adding new fields, changing the layout, etc., but in none of them can you add entirely new concepts, not just net-new fields (there is a difference). Could I crack open Subversion to support a way to link checkins to an external feature list? Sure. But why would I, when TFS has this already?

    As for modifying open source systems to do your bidding, you enter into a fork scenario. Unless the system supports a plug-in pattern and you can just add a new assembly (like, say, the TFS policy sub-system), I really can't do much with a tool even if I have the source code, unless I want to run the risk of a maintenance-nightmare-from-Hell scenario. Do I really want to diff new releases of NUnit against my own code to support new extensions?

    Luckily, there are some open source systems that are built around an extensibility model, but so is TFS. And while it might have deficiencies in various places, I can plug in new features or introduce entirely new concepts to the repository so that I can make it match whatever business process I use.

    Is the source control system in Team Foundation Server extensible or replaceable? No, but I'm willing to live with a few problems while we get to version 3.0.

  • Calgary Code Camp 2007, no hotdogs but plenty of schwag!

    If you're around or about the Calgary area this weekend what should you be doing?

    1. Meeting up with Calgarians to talk about Al Gore's presentation this week of his global warming show, An Inconvenient Truth?
    2. Checking out a semi-naked Rose McGowan being beat up by Kurt Russell at 120 miles per hour in Grindhouse?
    3. Enjoying the fresh air and sunshine with your family?


    You should be getting your geek-butts down to the Calgary Code Camp. It's this Saturday and we're ready to roll, so you should be too.

    What can you do at the code camp? Not only see over a dozen speakers and sessions, but perhaps walk away with some truly awesome door prizes.

    We're giving away about $20,000 (holy crap, that's a lot of stuff!) worth of software, hardware, books, and goodies from over a dozen sponsors including ComponentArt, Dundas, JetBrains and others.

    I'm giving two fun-filled, uber-cool sessions on XNA programming, so we'll be building games, doing demos, and taking a deep dive into the XNA content pipeline, as well as checking out some cool tools for you to start writing games for the Xbox 360 (and we'll have our 360s there for some between-session gaming, Gears of War anyone?).

    Oh yeah, did I mention this was... all... free! Free as in beer (or speech if you prefer).

    Be there!

  • Making up your mind about a name

    I'm highly confused by Microsoft sometimes (like that's a surprise).

    Originally, we had some very cool names for new products/technologies: Indigo and Avalon. These became WCF and WPF (Windows Communication Foundation and Windows Presentation Foundation, respectively). Not only are these names un-cool, they're a mouthful to say three times fast during a podcast.

    Then we had WPF/E (Windows Presentation Foundation/Everywhere) which is now identified by the much cooler name Silverlight.

    Wish they would make up their minds. Cool names or not. Which is it?

  • Being Kimberly Tripp

    This week the Calgary Code Camp is coming up, where I'm giving two sessions on XNA development (Xbox 360 debugging from a laptop, baby!), and I've been reflecting on my own presentation skills for a while now. Being the busy-body that I am, I've been heads-down building projects, so my conference attendance has been down. The last conference I spoke at was DevConnections in Vegas back in November, and with personal and professional commitments I'm not looking to speak until next year at DevConnections again (save the odd gig here and there, like the Code Camp and various user group presentations or webcasts). This has given me some time to look at my presentation skills and ask, "Do I really have what it takes?"

    I've only been presenting at code camps, TechEd, DevConnections, and user groups for a couple of years now, and I feel that I get pretty good scores: 80% of them are in the 7-8 range, with a few (10-15%) being a 9 or 10 and some (5-10%) being a 3-4. I consider this pretty good but not great. Certainly not anywhere near a Kimberly Tripp, a Dino Esposito, or a Scott Guthrie. Is there a way for mere mortals to evolve from mediocre to great? My main goal is that if someone pays top dollar for a conference, I want to give them as much bang for their buck as I can when they're taking time out of their schedule to sit down and wade through what I have to say. It's only fair.

    Getting to the subject line of this post: Kimberly Tripp is, IMHO, by far the best presenter I've seen. Ever. She even tops people like Scott Guthrie, Scott Hanselman, and others (sorry guys), all of whom I have the utmost respect and admiration for. The question I've been mulling over in my noodle is how does one become a Kimberly Tripp? No, not how do I look good in a dress and pumps, but how does one become a speaker who consistently gets flawless scores and really gives you, the community, the value-add that you pay for at a conference? What is the secret?

    I did a bit of a poll of various speakers I know to get their opinions on what it takes to reach that upper echelon of the presentation platform. The answers I got were pretty on par with what I'd been thinking, so no surprise there. In short, there's no cookbook here and no magic pill you can take; as Richard Campbell said, "it's a complicated subject".

    One thing I want to mention here is that the very best speakers out there are making a living as speakers. I have my day job and I speak as often as I can, but that equates to a few conferences a year at best. The top dogs are doing 60 shows a year with 4-8 talks per show. This has two effects: they're extremely well practiced in their art, and they're familiar to their audience. Thanks to Richard Campbell for pointing this out to me.

    Here are some tidbits I picked up from speakers, combined with my own cup of reason: some ideas on how to hopefully improve your presentation skills.

    • Spend time on your presentations. Rushing at the last minute is the last thing you should do, as per Murphy's law, anything that can go wrong will. Screwing around rebuilding VMs (it happens sometimes) is not something you want to be doing hours before you're about to go on with your audience. I think it's okay to tweak things (see below on the difference between your presentation and the show notes), but complete overhauls or doing things on the fly are a no-no.
    • Follow your own rules. Julie Lerman has some safety nets and rules that she follows (as do I), like "thou shalt not code in public in a language that thou dost not dream in". I think this is pretty key, as you have to know your stuff inside and out in order to really be there for someone when they need an answer (or at least know where to look). Learn it inside and out, and look at questions posted by the audience as pointers to the areas they're interested in. After all, we're there to present and share knowledge that is important to the audience, not the other way around.
    • Have passion for what you do. Passion and knowledge go hand in hand with presenting. If you're not passionate about your subject, you won't project that to your peeps. On new topics I spend a few days before my presentations just getting deep into something, not only to grok the topic but to really find the elegance (and ugliness) buried deep in the subject. This really helps me get excited about new topics, and hopefully that shows in my work.
    • Watch everyone and build your own style. A disadvantage of being an MVP is that we're generally ahead of the curve when it comes to technology. At the last PDC I was rather bored, as everything there was old news to me. At TechEd it was the same. However, I try to get in to see other people present, not for the content but for the style and techniques they use. Learning from the best is the best way to learn, and you can't get that from a book. So if you're planning on getting into the speaking world, spend some time watching the top dogs and seeing what makes their sessions that much better. What do you like about them? Then take it home, twist it, and make it your own.
    • Give yourself time and use a little patience. I cook from time to time, and it's rarely a good idea to just toss something in the pan and fry it on high heat (sometimes, but not always). Many times you cook on medium heat, stirring and letting the flavor seep in to add to the taste. Presenting is like that, so give it time. The more you do it, the better you should get; you'll learn from your own mistakes, tweak a few things, and come out next time with something even better.
    • Learn, learn, and then learn some more. If you speak at TechEd then you have access to the speaker coaching provided by Microsoft. Rocky Lhotka highly recommends taking advantage of this resource as these guys know all the tips and tricks and have the experience. I'm definitely going to see if I can do this next time round as it sounds like a great resource to tap into.
    • Take advantage of opportunity that presents itself, and make the opportunity happen if it doesn't. I've only been publicly speaking for a few years (and it shows) but others have been out there forever. Get involved with groups like Toastmasters (and similar groups) and look for opportunities to speak. Even if it's standing up in your own development group or department and talking about a cool new technology you see benefit from, it's a way to dust out those cobwebs and get the nervous bug out of your system. The more you do it, the more comfortable you'll feel.

    I wanted to mention one thing about doing last minute changes. I'm one of the worst people for that and usually update my presentations right up to the last day. Julie Lerman had a tip about this in a blog entry:

    With most conferences, speakers need to submit their PowerPoints way in advance of the conference. Attendees are provided with books filled with the printouts of the decks so that they can take notes during the conference. It is not uncommon with a new talk to fine-tune it between that early preparation and the actual time of your presentation.

    Though this has only happened once, it struck me (and stuck in my brain) when an attendee wrote on an eval that it was a pain that the slides in my talk were different than the book.

    So this time around, rather than hoping that I'm going to remember in the middle of a talk and say "oh, I changed this slide a little (for your benefit)" I am just putting tiny little notes on the bottom of modified slides: "This slide is slightly modified from the original printed version".

    I found a follow-up on this by Billy Hollis (another awesome speaker) that was a great tip:

    I solved this one long ago. Any text that's changed from the printed version I format with light green color. And I tell the audience that anything they see in light green is new or changed. With that visual cue, they don't seem to mind minor changes at all. What's confusing is knowing there is some change, but not being sure exactly how much.

    Hope these tips and ideas get something sparked for you and hope to see you out there speaking some day! 

    Many thanks to those that I bugged and pestered especially Julie Lerman, Rocky Lhotka, and Richard Campbell.

    P.S. Julie Lerman has a category on her blog here with lots of tips and her presentations mixed in. It's a good resource and a great read so check it out!

  • Taking a look at DotNetNuke 4.5

    While I'm a SharePoint guy thru-and-thru, I still like DotNetNuke and think Shawn Walker and the community have done a bang-up job getting it to where it is today. It's now reached version 4.5 and includes some new features like support for Microsoft AJAX. I still struggle with module development for DNN and find writing SharePoint Web Parts (especially 2007) much easier, but PAs rule the planet for extending functionality where you don't have the server access you need for SharePoint.

    Here's a quick rundown on things I've noticed in the new version of DotNetNuke.

    The immediate thing you'll see is the improved header when logging in as admin or host.

    It's a lot cleaner and takes up a lot less real estate. Don't worry, the old header is there via an AJAX collapsible panel so it's just a click away:

    All the regular tasks are there, including a new Design view. This allows you to see the page but without the content in each module. The View vs. Edit mode is just like the old Preview mode so that's still there.

    What's new is the Solutions Explorer. This is a DotNetNuke marketplace type module that comes built-in via the Host menu:

    Looks very interesting and it loads the content each time you view it, so it's a portal to the live site to always keep you up to date.

    If you're like me, you use the default accounts (host/dnnhost and admin/dnnadmin); if you've left those passwords intact, you'll see something like this when you log in:

    It's a nice little feature to let you know your Host or Admin account could be compromised. I've actually stumbled over a few DNN sites out there and, for kicks, would log in as admin, then send the admin an email telling him to change the password.

    Other small but nice changes I've noticed include the host settings screen displaying the relative and physical paths where DNN is installed (sometimes nice to know in a hosting environment).

    Overall it's a great release. It supports AJAX (Atlas, or whatever it is we're calling it these days) and looks like another solid version to build your community sites on.

  • Priority is a sequence, not a single number!

    This morning I opened up an email (and associated spreadsheet) that just made me cringe. Which of course made me blog about it, so here we are. Welcome to my world.

    Time and time again I get a list of stories that a team has put together for me to either review, estimate, build, filter, whatever. Time and time again I keep seeing this:

    As a user I can ...             5    1
    As an administrator I can ...   3    1
    As an application I can ...     4    1

    No. No. No!

    Priority is not "make everything 1 so the team will do it". When you're standing in front of a task wall with dozens of tasks relating to various stories, which one do you pick? For me, I take on the tasks that are the highest priority based on what the user wants. If he wants Feature A to be first, then so be it and I grab the tasks related to Feature A.

    However I can't do this (read: will not do this) when someone puts *everything* to be Priority 1. It's like asking someone to give more than 100%. You simply cannot do it. Priority is there to organize stories so the most important one gets done first. How you define "most important" is up to you, whether it's technical risk, business value, etc. and what value that gives you.

    This is more like what priority should be:

    As a user I can ...             5    1
    As an administrator I can ...   3    2
    As an application I can ...     4    3

    Another thing I see is priority like this:

    As a user I can ...             5    High
    As an administrator I can ...   3    Medium
    As an application I can ...     4    High

    Another no-no. I can't tell from all the "High" features which one is the "Highest"; we're basically back to everything being a 1, except now we're calling it "High". Unless you've had a multiple-core upgrade in your brain (or are someone like Hanselman, JP, or Ayende who don't sleep), you do things one at a time and then move on. As developers and architects, we need to know the most important thing to start with based on what the business need is.

    With customers everything is important, but for planning's sake it just makes life easier to have a unique list of priorities rather than everything being #1. When all is said and done and I have to choose between 3 different #1s in a list, I'll pick one randomly based on how I feel that day. And that doesn't do your customer any good.
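    To make this concrete, here's a tiny C# 2.0 sketch (the stories and numbers are made up for illustration) of why a unique priority sequence pays off: sorting the backlog gives exactly one answer for "what do I grab next", with no coin-flipping.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical story card: a title, a size estimate, and a unique priority
public class Story
{
    public string Title;
    public int Size;
    public int Priority;   // 1 = most important; no ties allowed

    public Story(string title, int size, int priority)
    {
        Title = title;
        Size = size;
        Priority = priority;
    }
}

public class Backlog
{
    public static void Main()
    {
        List<Story> stories = new List<Story>();
        stories.Add(new Story("As a user I can ...", 5, 1));
        stories.Add(new Story("As an administrator I can ...", 3, 2));
        stories.Add(new Story("As an application I can ...", 4, 3));

        // With unique priorities the sort is deterministic; with
        // everything set to 1 the comparison below returns 0 for
        // every pair and the "top" story is arbitrary
        stories.Sort(delegate(Story a, Story b)
        {
            return a.Priority.CompareTo(b.Priority);
        });

        Console.WriteLine("Next up: " + stories[0].Title);
    }
}
```

    When everything is marked Priority 1, that comparison ties on every pair and the "first" story is whatever the sort happened to leave on top, which is exactly the coin-flip problem.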

    Okay, enough ranting this morning. I still haven't finished my first coffee and I still have a few dozen emails to go through.

  • Sessions and speakers for the Calgary Code Camp

    The speakers and some of the sessions are up on the Calgary Code Camp site.

    I'm presenting two sessions on XNA programming. John Bristowe and I will double-team at the camp going over the basics then we crank up the volume to 11 and write some games as we go deep into the XNA framework and tools.

    Last year was fun but this year is going to be even funner. Yes, funner. Funest. Fun to the nth degree. The mosted fun you'll ever have for $0.

    Be there.

  • DataTable vs. BindingList<T>

    We were having a discussion today about the merits of using DataTables vs. BindingList<T> (a generic collection in .NET 2.0) for loading up domain objects into the UI layer (say, to display in a grid). My gut is telling me DataTables are evil and wrong, but I don't have a lot of hard evidence to choose one over the other. We brainstormed some pros and cons for each; they're listed below.

    • DataTable
      • Pros
        • Simple to implement, not much code needed
        • Can use select statements to retrieve values
        • Change events are managed within the object automagically
        • Display handling for errors built-in (when binding to grids)
      • Cons
        • Human errors (typos on column names for example) can be exposed at runtime and can't be easily tested
        • Need to implement DataViews in order to sort or filter
        • Can't test for type safety (or as Scott says DataTables are bowls of fruit)
        • Difficult to implement business rules or logic
        • Bloatware, lot of extra baggage for features you don't need
    • BindingList<T>
      • Pros
        • No mapping needed
        • Strongly typed and type-safe
        • Loosely coupled compared to a data table -> mapping columns -> domain object chain
        • More extensible via superclasses, interfaces, etc.
      • Cons
        • Must iterate through list to find items
        • Must implement some mechanism (like a decorator pattern) to respond to change events
        • More short term investment to learn and implement

    Is our list wrong, or is there something else you've come across in your dealings with the two approaches?
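    For what it's worth, here's a minimal C# 2.0 sketch (the Project class is hypothetical) showing the BindingList<T> side of the trade-off from the list above: strongly typed access, change events for free via ListChanged, but no Select() so you iterate to find things.

```csharp
using System;
using System.ComponentModel;

// Hypothetical domain object for illustration
public class Project
{
    private string name;
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

public class Demo
{
    public static void Main()
    {
        BindingList<Project> projects = new BindingList<Project>();

        // Pro: change events are raised for us, much like a DataTable's
        projects.ListChanged += delegate(object sender, ListChangedEventArgs e)
        {
            Console.WriteLine("List changed: " + e.ListChangedType);
        };

        Project p = new Project();
        p.Name = "Tree Surgeon";   // Pro: a typo here fails at compile time,
                                   // not at runtime like row["Nmae"] would
        projects.Add(p);

        // Con: no Select() like a DataTable; we iterate to find items
        foreach (Project item in projects)
            Console.WriteLine(item.Name);
    }
}
```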

  • What do you do to get motivated for the day?

    I live about an hour south of Calgary in a town called High River. Each day I was driving into the downtown core and parking. It wasn't the expense that was the issue, but the fact I was on the road for about 2 hours a day, every day and my routine felt like I was driving from one place to the next (which I was). I felt unproductive and like I was switching context from one state to another with nothing in between.

    Recently I decided to stop driving all the way into the downtown core and instead drive to the commuter rail station at the south end of the city (about a 30 minute drive). From there I take the train into the core (about a 45 minute ride) and walk 2 or 3 blocks to the office. This affords me the opportunity twice a day to wind down (or wind up, as it were) as I ease into the day. Everyone is always go, go, go these days, but my attitude is to relax and enjoy the ride. This supports that.

    With my spare time I either crack open a book (currently re-reading Jimmy Nilsson's Applying Domain-Driven Design and Patterns) or get 30 minutes or so on the laptop coding or something. And of course I plug in my iPod and whittle away the time with tunes. I think everyone needs a little downtime, not only between projects but between days. If you're constantly moving from bed -> work -> dinner -> bed, you'll burn out.

    I'm not a music aficionado or anything (I don't even have a geeked-out home theatre system) but I do enjoy music. The top 5 items on my iPod that I listen to each day are:

    • Mike Oldfield. When it comes to music, I live, breathe, and eat Mike Oldfield. Most people have no clue who Mike is, but I'm sure you'll recognize his work. It's the music played at the end of The Exorcist (aka Tubular Bells). If you heard it, you'd probably go "oh yeah". Mike is a musical genius in my books and anything he does is gold (especially when he pairs up with Maggie Reilly on vocals). Easy listening and relaxing.
    • Bond. 4 drop dead gorgeous girls playing classical instruments to a boppy beat. What more can you say?
    • Enya. The celtic sound does it for me, and her voice is soothing, especially at 6 AM on a crowded train.
    • Sarah Brightman. Another muse that lets me ease into the day without frying my brain.
    • Loreena McKennitt. More celtic goodness from someone with a heavenly voice and Canadian to boot!

    Other stuff I'll listen to:

    • Podcasts. It's always good to catch up on the train with an episode of DNR or Hanselminutes. I never grow tired of listening to those guys.
    • Peaches. A NSFW punk-girl band that kicks things up a notch. Another band like this is Evanescence. Good music with a little raunch and kick-ass attitude. I'll throw Siouxsie and the Banshees in here too.
    • Classic rock. Yeah, I'm a 70s rock guy so you'll find things like Queen, Kiss, and Electric Light Orchestra happily playing alongside New Age goodness. This also includes ABBA that I'll admit I listen to. Who doesn't like these guys?
    • Old TV themes. They're short and cute and take me back to my younger days and break up the emotional ride you might get from a deep instrumental piece. Fun and peppy.
    • Movie soundtracks. Some music just works at any time of the day, like both volumes of Kill Bill. Awesome music and really gets the neurons firing early in the morning.
    • Chemical Brothers, Moby and trance music. While I do like easing into the day sometimes it's good to mix in something spicy to kick it up a notch. These guys do it for me.
    • Jonathan Coulton. Jonathan who? I only listen to one song from this guy, Code Monkey, as it bridges the music world and my geek life together in 3 minutes. Brilliant!

    To kick off the morning I'll either down a coffee or two (or three or four...) but prefer a Red Bull or Rockstar (sugar free). Silly and probably damaging to some part of my internal organs (including losing about 10,000 brain cells each day), but it's that kickstart that I use to get going. 

    When I get to work I'm ready to face the day. I'll kick into work mode about halfway on my way there, thinking about the morning scrum, what I need to prepare for it, projects I'm working on, and accomplishments I intend to commit to for the team. This gets me into a mindset that lets me slip into business-mode rather than a quick shocking start like I used to have and I find it makes me more productive as a whole.

    So what's on your iPod? How do you start your day (or finish it) so you're not burned out and ready to face any challenge you might have?

  • Breathing new life into Tree Surgeon

    Since taking over the maintenance of Tree Surgeon from Mike Roberts, there have been millions (well, dozens maybe) of ideas buzzing around in my head. We're tracking these ideas using the Issue Tracker on CodePlex, which allows you to add new ideas as well as vote on existing ones. The strategy here (as with all my CodePlex projects) is to have the team (which is forming right now) focus development on the most popular ones.

    I've added a few of my own to start (VB.NET support, VS2005 support, ClickOnce) and others have jumped in too (CIFactory structure generation [great idea], and MbUnit support).

    Please feel free to keep the discussions and ideas (via comments or work items) flowing as we evolve the Tree Surgeon tool to the next level.


  • Scrum Room = Fun Room!

    I like to have fun at work. Whether it's just messing around (by changing developers' desktops to pictures of the Simpsons) or just focusing on a task, I think software development should be a fun thing and thus be conducted in a fun atmosphere. After all, if you're not having fun then what's the point?

    A few months ago we shifted things around and finally got a large enough room to set up our projects in what we call the Scrum Room (I like to call it the center of the universe, but that's a little egocentric as it doubles as my office). Unfortunately our environment isn't what I would prefer (a large team room where all members are co-located), but this isn't bad: a single "War Room" where we hold the daily scrums each morning for each project. I personally grind code, print out burndowns, and keep the ship steered in some direction (the direction changes all the time) here every day, but besides hosting daily scrums the area is generally regarded as the "drop by and let's talk about [insert design pattern here]" room.

    First off, we have the daily scrum rules posted on the wall for everyone to see and for us to refer to from time to time as people forget. As I mentioned last week, if anyone strays from the rules during the daily scrum, the Scrum Witch lets out a Wilhelm scream (and we don't want that to happen). This wall also holds the burndown charts (created from Team System) for each iteration. I forgot to snap a picture of the entire burndown wall, but you can catch one of them here. I'll post a more detailed breakdown of the burndowns from one project later; we're in the 6th month and there's been a lot of great activity and challenges on that project.


    Next up is what starts the daily scrum. This is a Japanese character a friend of mine got and I had to barter with him to obtain it (I traded a broken iMate Jas Jar for it). He's an alarm clock of sorts: press the button on his chest and music starts up and he tells you to wake up (in Japanese, of course). Press the "snooze" button again and he tells you "thank you for waking up" (or something like that; my Japanese is non-existent) and the music stops. We got into the habit of starting this guy up just to get everyone pumped for the day, although I haven't activated him for a while now.


    Now we come to "The Wall". This is a large (about 20ft wide) wall in my office that houses 3 projects and their tasks. Each project has a 8-10ft section of the wall and we print out the splash screen for each project to identify it (of course the splash screen is the first thing done for the project, screw the requirements, we need a cool logo!).

    Each project's section of the wall contains 3 columns of tasks: "Not Started", "In Progress", and "Done". The "In Progress" column is broken down with small post-its marking 0% - 100%, the completeness of the task. Under the "Not Started" column we break down tasks in an organized fashion so it's easier to find them when you're grabbing a task during your 5 minutes of fame at the morning scrum. Each team groups the tasks differently: one team groups by role (BA, QA, Dev, etc.), another by feature, and still another uses a line of business to collect tasks. Whatever works for each team is fine.

    It's the usual routine for task boards (in our case the entire wall is the board). Tasks move from the "Not Started" area to 0% in the "In Progress" area. As team members work on tasks, they move across the wall to 100% then drop into the "Done" area. It really does give a quick overview of where things are at. It's the easiest way to describe and display progress and the entire state of the iteration for anyone looking at it. This is how much we have to do, this is how much we've done. Highly effective and easy to understand.


    The banner at the top of the wall is labeled "The New Goodness" and was a phrase someone mentioned a while back (more than likely it was over a few shooters at the bar, but that's okay too). We liked it as it represented the new approach to software delivery (agile vs. waterfall) and seemed to reflect the atmosphere we wanted: something new and something good. Sure the grammar is off but again, we're focused on fun here, not English. The development manager also printed out a picture of a traditional rugby scrum, so we felt it was appropriate for the wall (and some mornings it feels like this).


    There's a section over my desk where we've got some fun and motivational type stuff. A printout of a cartoon that Hugh MacLeod created (I'm not cool enough like Scoble to have a hand signed lithograph) that is a bit of a motto when you're in my office. We have the Scrum cartoon about the chicken and pig wanting to open a restaurant, courtesy of Implementing Scrum. Finally, there's the fish. This will take a little explaining.


    This fish is the award a team member gets for breaking the build. Originally we were thinking about bringing in a real (as in taxidermy) fish. The dev manager has 3 of these, so we would pass one out and the team member who broke the build would get to display the fish on his desk all day. This wasn't quite practical (and we felt we would run out of fish, and then what? Goats? Sheep? Cats?) so he hunted down a goofy picture of a kid holding a fish. If you break the build, he emails the fish to you and prints out a copy for you to pin up in your cube.

    Today we discovered some people were actually collecting their fish so maybe we'll have a contest at the end of the project as to who gathered the most fish. The idea is to spice up your team with ideas and promote fun within the team even when they do something bad like breaking the build.


    Well, that's the whirlwind tour of our Scrum Room. It doesn't have to be a stuffy board room or a boring task board, and if you're going to live in it every day (like I do) make it enjoyable!

    The message here is to have fun with your projects. Don't get so stressed out. So what, you lost a resource or two. So what, the requirements changed a day before you were supposed to deliver that feature.

    Breathe, relax, pick up, and carry on. It's just software.

  • Tree Surgeon has a new home... on CodePlex!

    About a week or so ago Mike Roberts posted a note that he would no longer be working in the .NET world as the Java world was taking over at his company. Mike is the author of many blog posts on setting up a development tree in .NET, and those posts spawned a tool called Tree Surgeon. As Mike was no longer going to be working in the .NET space, he threw down the gauntlet for someone to pick up maintenance of the tool.

    I picked it up as I think it's a great tool and can only improve with time. You can find the new home for Tree Surgeon here on CodePlex.

    I've set up all the documentation the same as the original site along with putting out version 1.1.1 (the last release). You can grab it in various flavours:

    Source code is checked into CodePlex so you can grab the latest changesets from here.

    There are plenty of ideas for Tree Surgeon in the coming year, so I encourage you to visit the Discussion Forums to talk about what you're interested in seeing, and to keep track of (and vote on) new features and bugs via the Issue Tracker.

    I've also changed the license for Tree Surgeon so it's now released under the Creative Commons Attribution-Share Alike 3.0 License (I'm not only a CodePlex junkie, I'm a CC one too).

  • Beware the Scrum Witch!

    I'm always happy when I see the Internet spread goodness, or at least what I perceive to be goodness. Last week or so I talked about the Scrum Witch, a horrific little screamer that we introduced to our daily scrums as things were getting out of hand. If a team member strays from our standard routine, we unleash the Scrum Witch on them, screaming and howling for everyone in the office to hear.

    The witch has been pretty successful as we're sticking closer to the routine and there's less gossip-talk during the scrum. Scrums end in under 10 minutes now (where they previously would linger for 15, 20 and even 30 minutes) and everyone is focused. I've only had to activate the Witch twice, and both times were with the same PM who charged off asking all kinds of questions (not even related to the current sprint!).

    Mike Vizdos over at Implementing Scrum has a cartoon this week called Work Naked about dysfunctional things he sees at the daily scrum, and yes, the Scrum Witch made an appearance (although it would have been awesome if she'd made it into the cartoon itself; ahh, someday...)

    The message here is simple. The daily scrum is your commitment to the team, not a report to the PM or Scrum Master on what you did. Be kind and respectful and the pain will last only a few minutes.

    Watch later this week for more scrum fun as I have a few posts on the go right now.

  • Another year, another MVP renewal

    I was a little nervous the last few days as everyone and his brother (Sahil, Frans, etc.) had all received their MVP renewal notices. I got mine this morning, but apparently it was sent out yesterday; my pathetic mail client seems to have toasted it. Anyways, MOSS MVP for another year. This is my 4th year, so look for more SharePoint goodness the remainder of this year. Lots to come from this tired brain of mine. Congrats to all the new and renewing MVPs this round too. See you at the next summit!

  • Comparing expectations with Constraints using Rhino Mocks

    I've been doing more work with Rhino Mocks lately (and really digging it) and wanted to pass on a tidbit that might be useful to some of you.

    Here's the setup. You're implementing the MVP pattern and your view will contain a list of items (probably displayed in a grid, listbox, etc.). You start with a test where your presenter will initialize the view with a list of items from a backend system. You want your test to verify that the view gets updated (via the presenter) with the correct list.

    Let's set up our test using mocks. We'll mock out the view since its implementation isn't important to us right now. We only want to ensure that when the presenter is initialized (via a call to OnViewReady) it will:

    1. Call a backend system to get a list of items
    2. Set the view with that list of items

    so that we can then verify the view contains those items.

    Here's the test for that, with excessive comments so it should be self-explanatory:

        [Test]
        public void OnViewReady_ViewShouldContainItemsFromRepository()
        {
            // Create the mock objects
            MockRepository mocks = new MockRepository();
            IProjectListView listView = mocks.CreateMock<IProjectListView>();

            // Create a real presenter and assign the mock view to it
            ProjectListViewPresenter listViewPresenter = new ProjectListViewPresenter();
            listViewPresenter.View = listView;

            // Setup our expectations for the view by getting
            // a list of projects from a repository (created via a factory)
            RepositoryFactory factory = new RepositoryFactory();
            ProjectRepository repository = (ProjectRepository)factory.GetRepository<Project>();
            List<Project> fakeProjectList = repository.GetAll();
            listView.Projects = fakeProjectList;
            LastCall.IgnoreArguments();
            Expect.Call(listView.Projects).Return(fakeProjectList);

            // Invoke the mock engine
            mocks.ReplayAll();

            // Call the real presenter that we want to test
            // This will call the repository to retrieve a list of
            // projects and update the view to include them
            listViewPresenter.OnViewReady();

            // Check the view to ensure it's been updated
            // with the list of projects from the repository
            Assert.AreEqual(3, listView.Projects.Count);

            // Teardown the mocks
            mocks.VerifyAll();
        }

    Other than the setup code (creating the mocks, presenter, etc.), which could all be done once in a Setup method, there are a few problems with checking the list of projects from the view. While we did mock out the view, we're checking the count only. If we want to ensure that the list contains the projects we expect, we might have to iterate through the list and compare the Project objects that are returned, and that's a lot of work.

    There's an easier way to check lists with Rhino that works out to about the same amount of code but is much more effective at testing the contents of the list: we can use constraints on the expected call. Constraints are objects attached to an expectation on your mock that verify the actual arguments match the conditions you define.

    Instead of assigning our fake project list to the view and then setting the expectation via an Expect.Call method, we'll add a constraint to the mock. We don't need to assign the listView.Projects value, so we can set it to null; the next line, LastCall.Constraints(...), attaches the constraints we want to that expectation:

        listView.Projects = null;
        LastCall.Constraints(new AbstractConstraint[] { List.Equal(fakeProjectList) });

    We still have to create the expectation, but now we use List.Equal(...) and pass in the fakeProjectList. This means when the mock runs it will ensure that the listView.Projects property not only contains the same number of items (like our original test) but that it actually is the same list, item for item (I think Rhino internally uses the Equals method on each item to compare them). So now our test looks like this:

        [Test]
        public void OnViewReadyM_ViewShouldContainItemsFromRepository()
        {
            // Create the mock objects
            MockRepository mocks = new MockRepository();
            IProjectListView listView = mocks.CreateMock<IProjectListView>();

            // Create a real presenter and assign the mock view to it
            ProjectListViewPresenter listViewPresenter = new ProjectListViewPresenter();
            listViewPresenter.View = listView;

            // Set the expectations of the mock via a constraint
            RepositoryFactory factory = new RepositoryFactory();
            ProjectRepository repository = (ProjectRepository)factory.GetRepository<Project>();
            List<Project> fakeProjectList = repository.GetAll();
            listView.Projects = null;
            LastCall.Constraints(new AbstractConstraint[] { List.Equal(fakeProjectList) });

            // Invoke the mock engine
            mocks.ReplayAll();

            // Call the real presenter that we want to test
            // This will call the repository to retrieve a list of
            // projects and update the view to include them
            listViewPresenter.OnViewReady();

            // Teardown the mocks
            mocks.VerifyAll();
        }

    Note that we also don't have to do the Assert.AreEqual check at the end. Our call to VerifyAll will handle that when the mock is torn down.

    Like I said, the amount of code isn't that much different; you're just using a different call when setting up the mock. However, you can add as many constraints as you like. For example, you can ensure the list is of the right type, that it contains a certain number of items, and even that the first item has some pre-determined value. Constraints are stacked up and evaluated when you tear the mock down, so make sure you put in the VerifyAll call.

    This is just an example of changing the way you might check a list of items using constraints, but you can use constraints for any type. For example, if you had a single property on a mocked object, you could add a constraint that it's greater than or equal to a certain value, or that it's of a certain type if you were, say, testing a sub-class. Check out the "Is" class to do things like Is.GreaterThan, Is.LessThanOrEqual, Is.TypeOf, etc.
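    As a rough sketch (the view, property, and presenter here are hypothetical, not from the test above, and I believe Rhino's AbstractConstraint overloads & to combine conditions), stacking two "Is" constraints on a single property expectation might look like this:

```csharp
// Hypothetical mocked view with an integer Progress property
IStatusView statusView = mocks.CreateMock<IStatusView>();

// The value assigned here is ignored; the constraints do the checking
statusView.Progress = 0;
LastCall.Constraints(Is.GreaterThan(0) & Is.LessThanOrEqual(100));

mocks.ReplayAll();
statusPresenter.ReportProgress();   // should set Progress somewhere in 1..100
mocks.VerifyAll();                  // fails if the constraint wasn't satisfied
```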

    Lots of possibilities here so dig in and play with constraints on your mocks. You might find something interesting and learn something new along the way!


  • Fool me once, fool me twice, April Fools Day has come and gone (again)

    No, this blog is not being sued. As some observant readers figured out yesterday, the cease and desist letter to shut down my blog was a hoax. Astute readers pointed out that Milton, Chadwick, and Waters was the law firm from the movie The Devil's Advocate and that John Milton was the character played by Al Pacino. It was the first movie with lawyers that popped into my head and seemed appropriate, although the law firm from John Grisham's novel "The Firm" would have sufficed.

    What most people didn't pick up on was that the hoax was actually set up by me. I was not the person on the receiving end of a prank, as some people thought; you were. There was a news item on Saturday about someone having to change the name of their product, and the idea for my blog post popped into my head. I did a quick search to find an example letter and just filled in the blanks.

    For those of you expecting the blog to shut down by April 15, I'm sad to say I'm still here and will be for some time. For those who started the "Save the Bil" fund, I thank you, but please send your money to people who club seals for a living or some other deserving group. I don't need it.

    I don't know if I'm very, very good at doing these April Fools blog posts or really, really bad at them. Last year's post about open sourcing Windows SharePoint Services got some people riled up (including unnamed sources deep inside the hallowed halls of Microsoft itself).

    We'll have to hang around til next year to see what my augmented mind can dream up.

  • CSS Reference Chart for SharePoint 2007 - PDF Version

    Heather Solomon, the MVP queen of SharePoint style, has put together a fabulous CSS reference for SharePoint 2007. This covers both Microsoft Office SharePoint Server (MOSS) and Windows SharePoint Services (WSS) and displays the name of the style along with an example and where the style is defined in what file. Brilliant.

    I've put together a PDF version of the reference guide if you're into that. You can download the 12 page reference guide from here. Heather's original online version can be found here.

  • Cease and desist - Have to find a new name for my blog

    It seems I am no longer able to keep this blog name. I received this email today:

    From Milton, John S. Apr 1 08:29:30 2007

    Subject: Cease and Desist Letter for your site "Fear and Loathing"

    From: "Milton, John S."


    Dear Mr Simser:

    I am the senior partner with the law firm of Milton, Chadwick, and Waters based out of New York City, NY and represent the estate of Hunter S. Thompson. It has come to my attention that you have made an unauthorized use of my client's copyrighted work entitled "Fear and Loathing" (the "Work") as the name for your website. The estate reserves all rights in the Work, first published in 1970. Your work entitled "Fear and Loathing" is essentially identical to the Work and clearly used the Work as its basis.

    As you neither asked for nor received permission to use the Work as the basis for your website nor to make or distribute copies, including electronic copies, of same, I believe you have willfully infringed my rights under 17 U.S.C. Section 101 et seq. and could be liable for statutory damages as high as $150,000 as set forth in Section 504(c)(2) therein.

    I demand that you immediately cease the use and distribution of all infringing works derived from the Work, and all copies, including electronic copies, of same, that you deliver to me, if applicable, all unused, undistributed copies of same, or destroy such copies immediately and that you desist from this or any other infringement of my rights in the future. If I have not received an affirmative response from you by April 15, 2007 indicating that you have fully complied with these requirements, I shall take further action against you.

    Very truly yours,

    Milton, Chadwick, and Waters

    Attorneys at Law

    While I don't consider myself an "A" class blogger, I guess too many people who were looking for content from the late Mr. Thompson keep arriving at my site and scratching their heads about this whole SharePoint thing.

    So I need to find a new name for my blog, and I turn to you, happy reader, on this fine day for your help. What should I rename my blog? I'm looking for any suggestions, no matter how wild, so feel free to leave them in the comments section below. If I don't come up with a good name by April 15th I'll shut this blog down.


  • Dolphins, Humans, and Digital Wristwatches

    Yeah, the titles are getting sillier and sillier, but that's a direct result of being locked up in a room with John Bristowe for 3 hours consuming pizza, filtered Calgary water, and podcasts. James has taken over the all-important but often overlooked duty of splicing together the mess that we call a podcast, Plumbers @ Work, and has put episode 10 online.

    Lots of great discussion about O/R mapping tools like NHibernate and the ADO.NET Entity Framework (they really need a cool, catchy codename), plus game development and stuff. JP is MIA but he'll be back.

    You can catch the episode here on our podcasting site Plumbers @ Work and download it directly here to your favorite MP3 device, like an Etch-A-Sketch.

    10 whole episodes of greatly outdated material and bits that nobody will ever listen to again. Maybe someday we'll get big and strong and start wearing the big-boy pants and have something useful to say; in the meantime, be sure to kill an hour listening to this.

  • Grokking Rhino Mocks

    If you've been in a cave for a while, there's an uber-cool framework out there called Rhino Mocks. Mocks are tools that allow you to create mock implementations of custom objects, services, and even databases, to enable you to focus on testing what you're really interested in. After all, do you really want your unit tests bogged down opening a communication channel to a web service that might not be there, just to test a business entity that happens to need a value from the service?
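    To give a flavour of what that looks like, here's a hypothetical sketch using the Rhino Mocks record/replay style. ICustomerService, CustomerEntity, and the values below are all made-up names for illustration, not from any real project:

```csharp
// Hypothetical sketch of record/replay with Rhino Mocks.
// ICustomerService, CustomerEntity, and the values are made up for illustration.
using Rhino.Mocks;

public interface ICustomerService
{
    string GetCustomerName(int id);
}

public class CustomerEntity
{
    private readonly ICustomerService _service;
    public CustomerEntity(ICustomerService service) { _service = service; }
    public string DisplayName(int id) { return _service.GetCustomerName(id); }
}

public class CustomerEntityTests
{
    public void DisplayNameComesFromTheService()
    {
        MockRepository mocks = new MockRepository();
        ICustomerService service = mocks.CreateMock<ICustomerService>();

        // Record: tell the mock what call to expect and what to return,
        // so no real web service is ever touched.
        Expect.Call(service.GetCustomerName(42)).Return("Acme Corp");
        mocks.ReplayAll();

        CustomerEntity entity = new CustomerEntity(service);
        string name = entity.DisplayName(42);

        // Assert name == "Acme Corp" with your test framework of choice.
        mocks.VerifyAll();   // fails if the expected call never happened
    }
}
```

    The mock stands in for the service, so the test runs fast and never leaves the process.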

    Rhino is a very cool framework that Oren Eini (aka Ayende Rahien, aka the busiest blogger in the universe next to Hanselman and ScottGu) put together. It recently hit version 3.0 and is still going strong. If you don't know anything about mocks or are interested in Rhino, now is your opportunity. Oren has put together an hour-long video screencast (his first) that walks through everything you need to know about mocks (using Rhino of course). I strongly recommend that any geek worth his or her salt sit down for an hour, grab some popcorn, and check out the screencast. You won't be disappointed.

    You can download the screencast here (35mb, which includes the video and the source code for the demo). You'll need the Camtasia TSCC codec to watch the video which is available here. More info and a copy of Rhino itself can be found here.


  • Introducing... the Scrum Witch!

    My co-worker came by today and dropped off something at my desk. He calls it the Scrum Witch. Here she is:

    The Scrum Witch

    She's a crazy one alright (and loud). We've had some challenges getting everyone to follow the daily Scrums: side discussions start happening, water cooler gossip breaks out, etc., and 10-minute standups turn into 30-minute meetings. We're also all about fun and trying to enjoy our jobs.

    So when anyone starts to stray from the Scrum rules (a watered-down version from various places like here that we've printed out and put up in the Scrum Room), we unleash the "Scrum Witch" on them. A small button on her back lets out a screaming wail to let the speaker know we're off on a tangent and it's time to get back to business.

    We'll see how it goes over the next few weeks (or until the entire team gangs up on the Scrum Master and beats him to death with her, in which case this might be my last post).

  • The old age home for saved Groove account files

    I was digging Groove (Microsoft's acquisition from Ray Ozzie and co. before he joined the evil empire, and now part of Office 2007) way back in 2005 when I got a copy of the latest version. I also took a copy of my license file (the .grv file) and, as a good citizen, backed it up. It sat on an external drive and there it stayed for months. The last saved copy was from October of 2006.

    When I finally got around to trying to use the new Groove 2007 (we have a workspace on there for the Plumbers@Work gang), I was rather disappointed with this friendly error message:

    Hmmm. Okay, so my license file is too old. Hate to tell you, but I don't have a newer one, so this message is about as useless as it gets. "Please import a more recently saved version of this account." Besides perhaps being grammatically wrong, what if you don't have a "more recently saved version"? That's the pickle I appear to be in.

    A few questions in the MVP groups yielded only one unflattering and hopeless response.

    "Your screwed"

    So I'm asking any Groovers (Groovers? Groovies?) out there. Am I? Is there a way out of this madness? I can't seem to find any contact info on the Groove site and even with all my ungodly MVP powers (read:none) I can't seem to find a way to get Groovin' again.

  • 3 Team System Goodies

    Stumbled over 3 different goodies for Team System and CodePlex that will be very valuable to me. Hopefully these will be useful to you too.

    CodePlex Source Control Client

    This is a command line tool which allows users to work offline (in the "edit-merge-commit" fashion). It's not as full featured as what you can get from the web experience or Visual Studio, but it will help with quick edits. This is the initial release so expect a few issues and bugs out of the gate, but it does give us something that we can't get from Visual Studio: anonymous access. So non-developers on a project can pull down the source code with the command line client. Next up should be a Tortoise-like interface that will make it even easier to use (but I can dream, can't I?).

    Team System Prescriptive Guides

    This is a plethora of whitepapers, best practices, videos and standards that you can just use in your organization if you're using Team System. No mess, no fuss. Contains guidance on structuring your solutions, custom check-in policies, branching, and migrating from VSS. Very handy!

    Web Access for Team System

    This is probably the biggest surprise as I didn't see it coming (but then I'm asleep half the time). MS acquired a company (TeamPrise) that makes a web-based client for accessing Team System, so now they've got a web client for it (it should work with CodePlex but I haven't tried it yet). Best of all, it's free! Great for non-Windows clients accessing Team System if that's useful for you.

    And I haven't even got through all my feeds yet!

  • Returning Generic Lists from NHibernate

    This was an "aha" moment for me this morning as I was building a small spike project to figure out how we can incorporate NHibernate as our data access layer and still be a happy loosely coupled Smart Client.

    Beyond the fact that I have to create a class library to hold all my interfaces for my domain (as I need to inject my data provider into the domain repository and can't have circular references to NHibernate, but that's another story), I was finding it frustrating that I was getting ArrayLists back from NHibernate.

    Here's a method that gets me back a list of customer objects from the NHibernate session:

        1 public List<ICustomer> GetCustomers()
        2 {
        3     List<ICustomer> customers = new List<ICustomer>();
        4     ITransaction tx = null;
        5
        6     try
        7     {
        8         tx = Session.BeginTransaction();
        9         customers = (List<ICustomer>)Session.CreateCriteria(typeof(ICustomer)).List();
       10     }
       11     catch (HibernateException)
       12     {
       13         if (null != tx)
       14         {
       15             tx.Rollback();
       16         }
       17     }
       18
       19     return customers;
       20 }

    Typical stuff and the example you see everywhere. However I like dealing with a generic List<ICustomer> object rather than an IList (some will argue it's the same thing). The code above compiles fine, but when you run it you get an error, as it cannot convert an ArrayList to a List<ICustomer> no matter how hard it tries. A quick check on the Session class in Reflector revealed that there was a generic List<T> method. One line of code change was all that was needed and voila:

        1 public List<ICustomer> GetCustomers()
        2 {
        3     List<ICustomer> customers = new List<ICustomer>();
        4     ITransaction tx = null;
        5
        6     try
        7     {
        8         tx = Session.BeginTransaction();
        9         customers = (List<ICustomer>)Session.CreateCriteria(typeof(ICustomer)).List<ICustomer>();
       10     }
       11     catch (HibernateException)
       12     {
       13         if (null != tx)
       14         {
       15             tx.Rollback();
       16         }
       17     }
       18
       19     return customers;
       20 }

    Subtle, but the change in line 9 from List() to List<ICustomer>() gets me a List<ICustomer> collection back from NHibernate. Silly, I know, but something to watch out for: all the examples out there say to call List() to get an IList back, but this way you can get a generic list of whatever objects you want.

  • Fixing Product Backlog items in Scrum for Team System

    For those of us who use Conchango's process guidance package for Visual Studio Team System, Scrum for Team System, life is good. Scrums are easy to track, burndown charts are cool to put up on the wall, and it's a breeze to see immediately where you are, anywhere, anytime. However there have been some rumblings about the tool for awhile, and Conchango has come to the rescue with a fix.

    The problem reared its ugly head for me a month or two ago. I would look at the reports on how much work remaining (in hours) there was on our Product Backlog Items (PBIs) for a given sprint. In our case, we had 600 hours left (which seemed about right). However when I looked at all the Sprint Backlog Items (SBIs) and totalled the hours remaining on the tasks, it only came to about 450 hours. This wasn't right and was really affecting our burndown charts and general feel of how much work was left before the sprint ended. Basically the work remaining in a PBI was inconsistent with the total work remaining for all the linked SBIs.

    It is possible in “normal operation” for this to happen. The following are the most likely causes of this behaviour:

    1. Where an SBI is unlinked from a PBI and the SBI still had work remaining prior to being unlinked. Please reduce the Work Remaining to 0 and save prior to unlinking.
    2. Where an SBI is marked as deleted (v1.0, v1.1). This would not cause resynchronisation, and this can be resolved by installing Scrum for Team System 1.2. 
    3. Where load conditions on the server cause the “eventing service” to timeout, the work remaining in this case would not be updated.
    4. Where a PBI Work Remaining is edited from within Excel (unfortunately using the Excel add-in for TFS has the potential to overwrite a “read only” field).

    After some discussion in the forum and a bit of testing of their tool, my problem was fixed. The command line tool is called the "Scrum Consistency Check" and will identify and resynchronize the work remaining on any project in your Team System that you think might be "out of sync".

    It's a command line tool that you can run from anywhere. The syntax for the tool is:

    ScrumConsistencyCheck.exe [server] [project] [refresh (true|false)]

    The server is your TFS server name. The project is the project you want to interrogate/update. The refresh flag controls whether the tool performs the update or just reports the current status.

    Here's a sample output from one of my projects that was suffering:

    Check.exe myserver "BRMS" false
    Server: myserver
    Project: BRMS
    Refresh out-of-sync PBI's: false

    Number of PBI's in Team Project: 228


    Inconsistent backlog items:
    PBI 273, Work Remaining: 8 hours, Children: 13
            Inconsistent work remaining, difference: -8 hours
            SBI: 772, Sprint: 5 State: Done, Work Remaining: 0 hours
            SBI: 756, Sprint: 4 State: Done, Work Remaining: 0 hours
            SBI: 748, Sprint: 0 State: Not Done, Work Remaining: 0 hours
            SBI: 346, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 282, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 281, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 280, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 279, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 278, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 277, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 276, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 275, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 274, Sprint: 2 State: Done, Work Remaining: 0 hours
    PBI 678, Work Remaining: 12 hours, Children: 2
            Inconsistent work remaining, difference: -12 hours
            SBI: 869, Sprint: 4 State: Done, Work Remaining: 0 hours
            SBI: 709, Sprint: 4 State: Done, Work Remaining: 0 hours
    PBI 235, Work Remaining: 2 hours, Children: 6
            Inconsistent work remaining, difference: -2 hours
            SBI: 396, Sprint: 3 State: Done, Work Remaining: 0 hours
            SBI: 321, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 320, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 319, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 318, Sprint: 2 State: Done, Work Remaining: 0 hours
            SBI: 317, Sprint: 2 State: Done, Work Remaining: 0 hours
    PBI 676, Work Remaining: 16 hours, Children: 0
            Inconsistent work remaining, difference: -16 hours
    PBI 998, Work Remaining: 4 hours, Children: 0
            Inconsistent work remaining, difference: -4 hours
    PBI 220, Work Remaining: 59 hours, Children: 0
            Inconsistent work remaining, difference: -59 hours


    Hours lost (less in PBI than SBI): 0
    Hours lost (less in SBI than PBI): 106


    Finished!  Press <enter> to continue.

    Once you've run the tool with "true" to refresh the out-of-sync PBIs, a follow-up check should come back clean:

    Check.exe myserver "BRMS" false

    Server: myserver
    Project: BRMS
    Refresh out-of-sync PBI's: false

    Number of PBI's in Team Project: 228

    Inconsistent backlog items:

    Hours lost (less in PBI than SBI): 0
    Hours lost (less in SBI than PBI): 0

    Finished!  Press <enter> to continue.

    You can download the tool from here. For support of the tool, please use the forum here. Hope this helps!

  • Return of the Plumbers... finally

    Well, 3 of us anyways. Yes, after 4 months of delays and vacations and concealed lizards of many kinds we're back with episode 9 of Plumbers at Work, our .NET community podcast that I do with John Bristowe, James Kovacs, and Jean Paul Boodhoo. JP is away in Edmonton during this podcast, but James, John and I blast out another episode. You can listen to the episode in your MP3 player directly from here. We're taping episode 10 this Wednesday with the full suite of plumbers, which hopefully will be online shortly.

    P.S. I'm generally a loudmouth and barking out my stuff on the podcast, but this episode I seem to have been muted down quite a bit. Need to yell a little louder into the mic next episode.

  • Adding a splash screen to a CAB application

    Been awhile since I blogged as I've been sort of out of it, missing the MVP Summit and all. Here's a simple way to add a splash screen to your Composite UI Application Block (CAB) based applications. A splash screen is just a basic form that pops up with a logo or whatever of your choosing while the application loads. The code below is based on applications generated with the June 2006 version of the Smart Client Software Factory, but the idea is the same and can be applied to any CAB application.

    First, create the splash screen. This will just be a simple WinForm you add to your Shell project. Call it SplashForm and give it a splash image to display. It helps if you change a few properties to make it more "splashy":

    • Change the StartPosition property to CenterScreen
    • Change the ShowInTaskbar property to False
    • Change the FormBorderStyle property to None

    Now drop a picture box on the form and load up your image. Any image will do, but you'll probably want to size the splash screen to match the size of the image (otherwise some shearing might occur).

    Now we need to modify two files: ShellApplication.cs and SmartClientApplication.cs. In ShellApplication.cs all you need to do is change the declaration so the SmartClientApplication base class also accepts your SplashForm. Change the declaration from this:

       class ShellApplication : SmartClientApplication<WorkItem, ShellForm>

    to this:

       class ShellApplication : SmartClientApplication<WorkItem, ShellForm, SplashForm>

    SplashForm is the name of the class you created for the new form. Finally we get down to the meat of the splash screen. In SmartClientApplication.cs we need to do two things: recognize the new type parameter being passed into the class and get the splash screen going.

    First add a generic type parameter, TSplash, to the declaration of the SmartClientApplication class:

       public abstract class SmartClientApplication<TWorkItem, TShell, TSplash> : FormShellApplication<TWorkItem, TShell>

    Then constrain it like the existing WorkItem and Shell type parameters:

       public abstract class SmartClientApplication<TWorkItem, TShell, TSplash> : FormShellApplication<TWorkItem, TShell>
           where TWorkItem : WorkItem, new()
           where TShell : Form
           where TSplash : Form, new()

    Add a private member variable to hold the splash screen (using the generic type "TSplash"):

       private TSplash _splash;

    Create the object in the constructor:

       public SmartClientApplication()
       {
           _splash = new TSplash();
           _splash.Show();
           _splash.Update();
       }

    After the shell gets created, we want to kill off the splash screen. We'll do this in the AfterShellCreated method of the SmartClientApplication class by adding an event handler when the Shell gets activated. Change your AfterShellCreated method to look like this:

       protected override void AfterShellCreated()
       {
           base.AfterShellCreated();
           Shell.Activated += new EventHandler(Shell_Activated);
       }

    And create the event handler. The handler will remove the Shell.Activated event and dispose of the Splash form:

       private void Shell_Activated(object sender, EventArgs e)
       {
           Shell.Activated -= new EventHandler(Shell_Activated);
           _splash.Hide();
           _splash.Dispose();
           _splash = null;
       }

    That's it! A cool looking splash screen for your CAB application in about 10 minutes.

    Note: There was a long thread here on the GDN forums (moved to CodePlex) on doing this. That technique works as well, and gives you the ability to intercept the "loading" of the application as it goes through its paces. We're using it for one app, but the technique above is a simpler approach that just gets the job done, so you might find it easier to implement.

  • On being a "SharePoint" expert

    Recently Rocky had one of his many pearls of wisdom, that of the software development world becoming specialized due to the complexity of the industry. Let me tell you that a) I agree with Rocky 100% (and more, if anyone could agree more than 100%) and b) this is especially true for SharePoint developers and 2007 (the version, not the year).

    Actually, let me qualify that. What is a SharePoint developer? Is it an ASP.NET developer who knows a lot about the SharePoint API, or is it a SharePoint developer who knows a lot about the ASP.NET API? The answer is yes.

    Take me for example. I really didn't do a lot of ASP.NET development (about a year or so since B1), so other than little apps, the odd web service, etc., I didn't have in-depth experience as an ASP.NET developer. When I got the SharePoint itch I scratched it with what little COM+, ASP, and structured development techniques I knew (we're talking back in STS and SPS 2001 days, before the .NET version). With v2 and 2003 came .NET and more knowledge of how IIS worked, ASP.NET server controls, and all that goodness. Now here's 2007 and we're dealing with ASP.NET 2.0, security and membership providers, master pages, user controls, workflow, and a million other little tips, tricks and gadgets that would drive anyone batty.

    It's just too much for any one brain to handle (except maybe Hanselman, but we all know he's not human anyways). And there's more to come! Upgrading from ASP.NET 1.1 to 2.0 was a huge shift for SharePoint (in fact a complete flip of the architecture, literally) but moving to 3.0/3.5 isn't going to be as big a deal. It's just a different version of the API, a set of new DLLs, some Atlas thrown in. Basically a service pack, not a full-blown release. With that will come all kinds of things. How about LINQ for SharePoint? We already have people writing PowerShell providers that treat SharePoint sites as folders you can navigate, so querying a SharePoint list shouldn't be any more difficult than using LINQ to query a list of objects.

    The future is here and moving fast. Being a SharePoint expert isn't just about knowing all the technologies, layers, and tiers that encompass SharePoint, because frankly that's not realistic. SharePoint is just another layer in the stack, another tool in the toolbox, for us "developers" to work with. Whether we choose to lean more heavily on the SharePoint API or the ASP.NET one is just a matter of what we're trying to accomplish. Being a SharePoint expert is about knowing what's available and making use of it, and getting the guys who really know this stuff inside and out to build it for you (or if you're that guy, building that part yourself).

    I think gone are the days where being a SharePoint expert meant you knew every nut and bolt of every piece of the machine. Today, you're lucky if you can get your head wrapped around one part of it.

  • Lies TFS Told Me

    This morning we suffered an air-conditioner failure in one of the server rooms. As our TFS server was deployed to a box with the word "dev" in its name, naturally they thought they could just shut it down. After all, dev means development, so it's not critical. I don't blame them; however, it caused a few issues. This one was shared with me by one of my developers, so hopefully it'll help some of you out there in TFS land.

    So apparently if the TFS server goes down while you're working on a file, Visual Studio will allow you to work on the file locally. Makes sense. However, while you're editing the file, the server does not know that you have updated it. Thus, when you reconnect to the server (or in this case the server comes back from the dead) TFS doesn't realize the file has been modified and will not save the changes to the repository when you complete a check-in.

    To get TFS to check in the modified file you must specifically go to each of the modified files and select Check Out for Edit. Then, select None - Allow shared checkout and click the Check Out button from the Check Out dialog that appears. Finally, check in (commit) the files back to the source repository.
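    If you'd rather do this from the command line, the rough tf.exe equivalent of the steps above should look something like this (I'm assuming the Team Foundation command-line client here, and the file name is just an example):

```
rem Re-pend an edit so TFS knows the locally modified file has changed
tf edit /lock:none Program.cs

rem Commit the file back to the repository
tf checkin Program.cs /comment:"Recommitting edits made while TFS was down"
```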

    The one other symptom is that your local files are not set to read-only like all other files when you try to update from the repository (Get Latest Version).

    Something to watch out for in case your server goes bye-bye in mid-development.

  • A refactoring that escapes me

    We had a mini-code review on Friday for one of the projects. This is a standard thing that we're doing, code reviews basically every week taking no more than an hour (and hopefully half an hour once we get into the swing of things). Code reviews are something you do to check the sanity of what someone did. Really. You wrote some code 6 months ago and now is the time to have some fresh eyes looking at it and pointing out stupid things you did.

    Like this.

    Here's the method from the review:

       protected virtual bool ValidateStateChangeToActive()
       {
           if ((Job.Key == INVALID_VALUE) ||
               (Category.Key == INVALID_VALUE) ||
               (StartDate == DateTime.MinValue) ||
               (EndDate == DateTime.MinValue) ||
               (Status.Key == INVALID_VALUE) ||
               (Entity.Key == INVALID_VALUE) ||
               (String.IsNullOrEmpty(ProjectManagerId.ToString()) || ProjectManagerId == INVALID_VALUE) ||
               String.IsNullOrEmpty(ClientName) ||
               String.IsNullOrEmpty(Title))
           {
               return false;
           }
           return true;
       }

    It's horrible code. It's smelly code. Unfortunately, it's code that I can't think of a better way to refactor. The business rule is that in order for this object (a Project) to become active (a project can be active or inactive), it needs to meet a set of criteria:

    • It has to have a valid job number
    • It has to have a valid category
    • It has to have a valid start and end date (valid being non-null and in DateTime terms this means not DateTime.MinValue)
    • It has to have a valid status
    • It has to have a valid entity
    • It has to have a valid non-empty project manager
    • It has to have a client name
    • It has to have a title

    That's a lot of "has to have" but hey, I don't make up business rules. So now the dilemma is how to refactor this to not be ugly. Or is it really ugly?

    The only thing I was tossing around in my brain was using a State pattern to implement the active and inactive states, but even with that I still have these validations to deal with when the state transition happens, and I'm back at a big if statement. Sure, it could be broken up into smaller if statements, but that's really not doing anything except formatting (and perhaps readability?).
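    For what it's worth, one purely illustrative shape (a sketch against the same members as the snippet above, not our actual code) is to express each "has to have" rule as a small named delegate, so the validation reads like the rule list:

```csharp
// Purely an illustrative sketch: same checks, just named and enumerable.
// Assumes the same members (Job, Category, etc.) as the snippet above exist;
// shown as a partial class only so the sketch stands on its own.
using System;
using System.Collections.Generic;

public partial class Project
{
    private delegate bool Rule();

    private IEnumerable<Rule> ActivationRules()
    {
        yield return delegate { return Job.Key != INVALID_VALUE; };
        yield return delegate { return Category.Key != INVALID_VALUE; };
        yield return delegate { return StartDate != DateTime.MinValue; };
        yield return delegate { return EndDate != DateTime.MinValue; };
        yield return delegate { return Status.Key != INVALID_VALUE; };
        yield return delegate { return Entity.Key != INVALID_VALUE; };
        yield return delegate { return ProjectManagerId != INVALID_VALUE; };
        yield return delegate { return !String.IsNullOrEmpty(ClientName); };
        yield return delegate { return !String.IsNullOrEmpty(Title); };
    }

    protected virtual bool ValidateStateChangeToActive()
    {
        // The project may go active only when every rule passes.
        foreach (Rule rule in ActivationRules())
        {
            if (!rule()) return false;
        }
        return true;
    }
}
```

    It's the same checks in the end, so I'm not convinced it buys much beyond readability.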

    Anyone out there got a better suggestion?

  • Well, that was a short trip

    Unfortunately, due to an emergency I have to cancel my trip to the MVP Summit. Bummer. A week of frolicking with other MVPs, missing out (again) on Jeff's party (220+ people, so you know it just has to be good, or at least crowded), a week of in-depth sessions with Microsoft product teams, and generally having a grand old time. Ah well. There's always 2009.

    To my fellow MVPs and plumbers who are having a good time right about now, and the softies I was going to maim with some up close and personal paintball damage: have a great time and feel free to make fun of me or talk about me behind my back. I know you wanna.

    In the meantime I guess I'll just blog about SharePoint and whatnot this week seeing that everyone else will be talking about parties and things they can't blog about. Guess I'll be the odd man out. More audience for me, and you know, it is all about me!

  • MVP Summit - Day 0 - Lift off!

    And so it begins. Another conference, another series of wordy photo blog entries from my twisted sense of humor over the next week. This week 1700 MVPs are gathering in Seattle/Redmond for our kind-of-annual Summit (it happens every 18 months or so). This time round big Bill Gates is giving the keynote on Tuesday and there's going to be plenty of fun, networking, and surprises this week so stay tuned.

    With that I have all I need. My laptop, a moose, and my passport to prove that I'm not a terrorist.


    It's a few hours before I leave for the airport. Stupid me, I decided to go cheap with the flight and pick a stop-over in Vancouver. So a short flight to Vancouver (90 minutes), then a 3-hour layover, then an hour flight to Seattle. What was I thinking?

    For those of you hunting me down, I'll be at Jeff's party tonight as soon as I drop my bags off at the hotel (I touch down at 7 and the party kicks off at the same time). I'll be snapping shots of all your favorite MVPs in various states of debauchery (Rory, this means you!) and posting later tonight. I'm staying at the Grand Hyatt Seattle and will post info about what room you can charge your mini-bar to, etc. later when I can catch a breath.

    The week is all NDA content so us MVPs won't be posting much info; however, last summit we were allowed to talk about something in Office 2007 (the save-to-PDF feature) which got nixed anyways. Maybe there'll be something we can post this time round, but I doubt it since the next version of Office/SharePoint is too far off.

    My Flickr site will contain all the goodies I snap over the week. We've also created a tag for the SharePoint MVPs to swap photos so Flickr pics should be tagged with mvp07 for the summit. I also have a Flickr set for the summit here which will grow through the week. Expect at least one blog entry a day, but I'll try to squeeze in maybe 2 or 3 with pics as the conference is near our hotels so we can head back to recharge during the day (or night).

    So feel free to live vicariously through my blog entries and pics and we'll talk to you later!

  • 2007 time zone update, will it be worth it?

    March 11 is almost here and the crazy time zone updates are upon us. If you're looking for info on Windows operating system updates, check out the KB article here. Personally I can't follow it. At this point I'm not sure if my XP SP2 box is updated or not, or if it will update. I'm a pretty techy guy but some things I just don't give a rat's ass about, and this is one of them. Normally it just updates, but tonight I don't know if it will, so I might have to do it manually. Then in 4 weeks when the automatic DST adjustment kicks in, I'll have to manually adjust the time again? (And twice more in October/November.) What bothers me is that if Microsoft has the time zone adjustment already in the operating system, why can't they issue an update to handle the new one (or maybe they did; again, I'm not paying attention here)?

    I've seen so much hype about this and have to wonder: is this going to be worth it? I mean, the 3-week adjustment now and 1-week adjustment in October is apparently going to save gobs of cash. Is it? Is someone going to measure this? Is this even measurable? 6 months from now I'm sure someone will come out and say it is, but how true will it really be? Give me a bunch of stats and I can spin them any way to appear value-added. That's what statistics and numbers do. They're the truth, but as Obi-wan said, from a certain point of view.

    Personally I don't think it's going to save the billions of dollars anyone thinks it will, but maybe that's just one geek's opinion.

  • Expand your horizons, get introduced to C# 3.0

    Scott Guthrie has an excellent summary post of some key features in C# 3.0 that are coming in the next release of Visual Studio (codename Orcas). His post is short, sweet, and shows how you do something today and how it can change in the future.

    This covers automatic properties and object and collection initializers (all of which I've been really digging in my test environment). Even with mocks, writing test code is that much easier in 3.0 (or is it 3.5 now? I always lose track) and the syntax isn't as cryptic as LINQ (which I'm still wrapping my noggin' around).
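    For the curious, here's a tiny sketch of those two features together (the Customer type and the sample values are made up for illustration):

```csharp
// C# 3.0 sketch: automatic properties plus object and collection initializers.
// Customer and the sample values are made-up examples.
using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }        // automatic property: no backing field to write
    public string Name { get; set; }
}

public class Demo
{
    public static List<Customer> SampleData()
    {
        // Object and collection initializers in one expression;
        // no constructor overloads or Add() calls needed.
        return new List<Customer>
        {
            new Customer { Id = 1, Name = "Northwind" },
            new Customer { Id = 2, Name = "Contoso" }
        };
    }
}
```

    Setting up test fixtures like this is where the new syntax really cuts down the noise.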

    Check out Scott's post here to get introduced to the new C# features.

  • Communities need care and feeding

    A long time ago in an office far, far away I started up the SharePointKicks site. It was a bright idea spurred on by the popularity of DotNetKicks: community-driven content like Digg, etc. but focused on SharePoint. In the early days I was submitting a lot of entries (mostly my own) and hunting down the choice ones out there written by you. As with any community site it needed something. A community. One that could contribute and nurture and make it grow.

    That really hasn't happened.

    SharePointKicks is a great site, or can be. Lately it's been pretty barren of new content and it hasn't shared the popularity of its sister site DotNetKicks. Sure, people still post, but I know there are tons of new SharePoint 2007 blog entries that are gems. That's what the kicks site is there for: pulling those gems out of the quagmire called the net and letting them bubble up to the surface through your "kick" votes. The more popular something is, the higher up the charts it goes and there it stays.

    It's sad to see a community wither and fade away like this. I'll admit I haven't spent a lot of time lately seeding it myself as I felt the SharePoint community would pick up on the concept and run with it. That's not happening and I'm not sure why. Maybe I'm wrong in my premise that SharePointKicks was a good idea, or maybe everyone is too busy (like I've been) to contribute.

    In any case, SharePointKicks will live as long as its host keeps it alive. Like I said, there are people contributing, but it's a trickle compared to the SharePoint content I read each day. If you can, please keep it in mind to contribute something (content people, not money): either your own blog entries or someone else's. Hopefully with your input the site can continue to grow instead of stagnating like it's been doing for a few months now.


  • What's in your OSS box?

    Following on the heels of Jeremy Miller and JP, here's the list of open source tools in my tool chest. My list may surprise some.

    • Enterprise Libraries. Some people hate 'em and there are people that blast them for being "too big", but they work. Free logging, exception handling (via policies I dictate through an XML file), and other goodies. Version 3.0 adds a business validation framework and even more stuff.
    • Composite Application UI Block (CAB). A library that provides a framework for building composite applications. It lets me modularize things and not worry about the plumbing to make them talk to each other (thanks to an easy-to-use event broker system). It also includes ObjectBuilder to boot, which is a framework for building dependency injection systems.
    • Smart Client Software Factory. Another framework (and collection of guidance packages) that jumpstarts building Smart Client applications. Basically provides a hunk of code I would normally have to write to locate services, load modules, and generally be a good Smart Client citizen.
    • NAnt. Can't stand MSBuild and wouldn't give it the time of day. NAnt is my savior when I need to automate a quick task.
    • NUnit. Again, MSTest just doesn't measure up when it comes to integration with my other tools, and most everything is written these days with NUnit examples. Okay, so NUnit isn't as powerful as, say, MbUnit, but I just love the classics.
    • CruiseControl.NET. The ThoughtWorkers are awesome dudes of power and CC.NET is just plain simple (see my struggle with Team City recently).
    • TestDriven.NET. A great tool that I can't live without.
    • NCover/NCoverExplorer. I just love firing up TestDriven.NET with coverage and seeing 100% in all my code. Where I'm lacking, it points it out easily and I just write a new test to get my coverage up.
    • Subversion. I have a local copy of Subversion running so I can just do quick little spikes and maybe file the code away for a rainy day on an external drive.
    • TortoiseSVN. And TortoiseCVS, I guess, when I have to access CVS repositories out there in internet land. I'm still waiting for a TortoiseTFS.
    • RhinoMocks. I started writing mocks last year and haven't looked back since. While it does take some doing to set things up, if you start writing mocks and doing TDD with them you'll end up with a better-looking system (rather than trying to mock things out after the fact). Rhino is the way to go for mocking, IMHO.
    • Notepad++. Lots of people have various notepad replacements, but I prefer this puppy.
    • WinMerge. Great tool for doing diffs of source code or entire directories.

    This is what's in my toolbox today and what I use on a daily basis (and there's probably more, but I haven't had enough Jolt tonight to bring them out from the depths of my grey matter). I've glossed over and checked out various other tools like NHibernate, Windsor, iBatis, and even Boo, but they're not something I use all the time.

    While I don't have as many as the boys, I think I have what I need right now. What's surprising looking at the list is that some of my stuff is from Microsoft, which flies in the face of the "we only use Microsoft" comments: you *can* use OSS tools alongside ones from the evil empire. Even with the MS items I use, I'm covered for things like dependency injection and separation of concerns, and although the MS tools don't fare nearly as well as, say, Windsor or StructureMap, they still do what I need them to.

  • Tuesday Night Downloads

    Two downloads I thought I would toss out there for your feed readers to consume.

    Visual Studio 2005 Service Pack 1 Update for Windows Vista

    During the development of Windows Vista, several key investments were made to vastly improve overall quality, security, and reliability from previous versions of Windows. While we have made tremendous investments in Windows Vista to ensure backwards compatibility, some of the system enhancements, such as User Account Control, changes to the networking stack, and the new graphics model, make Windows Vista behave differently from previous versions of Windows. These investments impact Visual Studio 2005. The Visual Studio 2005 Service Pack 1 Update for Windows Vista addresses areas of Visual Studio impacted by Vista enhancements.

    Sandcastle - March 2007 Community Technology Preview (CTP)

    Sandcastle produces accurate, MSDN style, comprehensive documentation by reflecting over the source assemblies and optionally integrating XML Documentation Comments. No idea what's changed in this CTP but now that NDoc is dead and buried, like Obi-Wan, this is our only hope.

  • NetTiers gets a Wiki (a real one)

    One of my favorite tools is NetTiers. Calling a template a tool might be a stretch, but it's my primary reason for using CodeSmith. NetTiers is basically a set of CodeSmith templates that, when pointed at a database, gives you a nicely separated, layered set of .NET projects for accessing your data. Basically an auto-generated data access layer: one class for each table in your schema, one property for each column, methods for stored procs, wrappers for accessing aggregates (like getting all records by primary or foreign keys), and other good stuff. I've used it for my data access layer in a few projects and even used the auto-generated website, either as a starter for sites or for entire sites where all I needed was a set of pages for data maintenance.
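    To give a feel for the shape of what gets generated (this is a purely hypothetical sketch with invented names, not actual NetTiers output), one table ends up producing something like an entity class and a data access surface:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical generated entity: one class per table, one property per column.
public class Customer
{
    private int _customerId;
    private string _name;

    // Maps to the primary key column.
    public int CustomerId
    {
        get { return _customerId; }
        set { _customerId = value; }
    }

    // Maps to a Name column.
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}

// Hypothetical generated access surface: key-based lookups, "get all" wrappers, CRUD.
public interface ICustomerProvider
{
    Customer GetByCustomerId(int customerId);
    IList<Customer> GetAll();
    void Insert(Customer entity);
    void Update(Customer entity);
    void Delete(int customerId);
}

class Demo
{
    static void Main()
    {
        // Entities are plain objects you can use anywhere in the layered stack.
        Customer c = new Customer();
        c.Name = "Acme";
        Console.WriteLine(c.Name);
    }
}
```

    Multiply that by every table in your schema and you can see both the appeal (no hand-written plumbing) and why the sheer volume of generated code takes some getting used to.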

    Documentation is always a bear for any tool (free or otherwise) and NetTiers is no different. The templates generate 10,000 lines of code and hundreds of classes (depending on how many tables you have), and it's a lot to take in. The forums they have are good, but it's hard to find information (even with search), and information quickly becomes out of date without care and feeding. Wikis can help (but not solve) that by turning to the community for contributions. I'm a firm believer that it's easier to have 10 monkeys each write 10 pages than 1 person write 100 pages of documentation (and we all know who the monkey is here).

    So now the NetTiers team has set up a real wiki here just for that purpose. Sign up for an account and start contributing if that's your thing. This is an improvement over the previous "wiki" they had, which wasn't quite optimal: it was closed to the public and only contained a handful of canned pages. In any case, the new one is out in the open and ready to go. So if you're a consumer or contributor (or both, like me), check it out!

  • The "need" for Scrum Tools

    I got an email from Mike Vizdos about a blog post he was writing up. Mike is the author of a blog called Implementing Scrum. It's a great blog where he posts a cartoon to describe a Scrum concept in a very accessible way (see below for an example). It's an excellent communication mechanism. I find myself printing out Mike's cartoons all the time and putting them up around my office (aka the Scrum room) to drive home the Scrum concepts to people.

    Mike recently blogged about Scrum tools and mentioned my own Scrum Tools Roundup post (thanks Mike!). His post talks about the value (or lack thereof) of Scrum tools, and I partially agree with him. There are times I've said exactly: "Please make sure you update tool X so that we can report our burndown to [someone who is not even in the room]."

    I personally just print out the burndowns (we use the Conchango Scrum Plugin for VSTS) and put them on the wall along with a splash screen for the project and the post-it notes that represent the tasks (our task board). We still use VSTS for tracking, but the wall is the conversation piece. Each morning (currently 2 Scrums, soon to be 3) we get together and do our daily standup with everyone sort of huddled around the "wall" (a giant 14-foot wall I have in my office that holds all the post-its). People talk about what they did and what they're working on, move stickies around, and all is well.

    I agree with Mike in that if you're burning 50% of your time on a project maintaining a tool or a backlog in a tool (VSTS, spreadsheet, or otherwise), then you're spending about 49% too much time. However, as the Scrum Master I keep the tool up to date, the PM uses it (constantly) to report progress to a heavenly body, and I personally try to keep the team from getting hit too much with administrivia tasks. The most any team member should do is a) change the state of a task to In Progress or Done and b) knock down the work remaining on tasks as they're doing them, say from 8 hours to 4 (or 0).

    That shouldn't take more than a few minutes a day before the daily standup. Really.

    If you're doing more than that, you're for sure spending too much time and you should read Implementing Scrum on a more frequent basis (and maybe change how you work).

  • A Plugin for a Plugin

    Isn't life amazing? These days you can download a plugin (ReSharper) and write a plugin for it (VstsUnit). That's what my fellow plumber James Kovacs did. He was using VSTS for his unit testing, but unfortunately the built-in test runner for ReSharper didn't support VSTS unit tests, only csUnit/NUnit. So armed with the ReSharper SDK and a few brain cells to kill, he went off to build a plugin for a plugin. Basically it allows you, with ReSharper installed, to run Visual Studio Team System based tests via the test runner. I personally use TestDriven.NET, which supports anything you throw at it, but if you're a ReSharper guy and are stuck with VSTS rather than NUnit for your testing tools, take a look at this freebie. You can download it here.

  • What a craptastic day

    And here I was going to blow away a few holes in some drug lords in Crackdown this morning but was faced with this:

    Guess it's a call to 1-800-4MY-XBOX and an exchange (or buying a new one, which I might have to do). The catch? I think I'm about a month past my warranty. I swear to god they must have a timer chip in the Xbox, as it's been a little flakey the last few weeks and now this.

    Sucks to be me.

  • Team City and Windows, no documentation and not a happy combination

    Nothing like Friday morning to inflict more pain and suffering from our continuous integration process. Recently we purchased a gaggle of ReSharper licenses from JetBrains. For whatever reason, the person making the deal decided to buy the bundle that included Team City (apparently it's cheaper to buy the bundle with TC than to buy individual ReSharper licenses, but the logic behind that move escapes me). I've seen Team City in the past and it looked interesting but without a demo you can get your hands on, I didn't give it much thought.

    Since it's touted as the "Most Intelligent Integrated Team Environment" I went to try this out as a potential replacement for our CruiseControl.NET setup. Boy, was that a mistake.

    The install went pretty smoothly and I tried it out on my laptop with a local setup to start. I have to admit that TC has a nice setup. Install software, open site in browser, start configuring projects. That was kind of the first problem. After the install the system just sat there. I wasn't sure if it was done or not or what I was supposed to do next. I took a look in my Start menu for perhaps a Team City menu with at least a README or something but that was a barren sea of nothingness. Finally I just launched my browser and pointed it to the port I installed TC on and was presented with a screen to enter license information. Aha.

    Next was setting up a test project. I didn't want to get into a full-blown download-the-source-code task just yet, so I created a new project that I had a solution file for and found that Team City supported MSBuild. So I just pointed it at the .SLN file hoping it would work. Nope. It complained about some node it didn't support or some such silly thing. I know .SLN files can be launched with MSBuild (we do that now with CC.NET), but looking at the errors, it appears that TC has some kind of wrapper around MSBuild, so I guess it can only support specific types of MSBuild files.

    This is where the next problem came up: documentation. There was no documentation shipped with the product, and going online there's only a FAQ with a few gems of wisdom available. There is a forum, but you know what that's like. Post a message, wait a few days, etc. There are places to submit bugs and questions and the response can be pretty good, but again, why isn't there any documentation for getting set up? The only documentation there is seems very primitive; it replicates what you see on the screens but really doesn't do a good job on what to do when something goes wrong.

    Giving up on the MSBuild configuration to launch a solution file, I changed my test project to something that uses NAnt. That seemed okay, but then the agent didn't understand what NAnt was, and again I went digging to find out what was needed (besides picking it in the list of build configurations). I stumbled across (on another FAQ page) the fact that you need to create an environment variable (in TC) called NAntHome and point it to the root where NAnt is installed. Fair enough. Once I figured out where to put this information (in a config file; hey, I thought we didn't have to set up config files with TC?) it was able to recognize where NAnt was and could launch my project. Looked good, but I didn't set it up to run any tests so it was just doing the build for me.

    Next was to get rid of the userid/passwords. Again, the FAQ says you can change over to NT authentication and, sure enough, in the TC install there's an option to flip it over. You do it and it tells you that you're about to be logged off. No problem. When the page reloads, it's now telling me to create a new Administrator account (which I had already done when it was first set up). However, I can't seem to create an account using DOMAIN\USER now, using any combination of entries.

    I submitted a "HELP ME" message to try to revert back to non-NT authentication and got a response, but now I'm stuck. Maybe my concept of "using NT authentication" is flawed (at least for this product). I assume I can turn it on and users won't have to enter their usernames or passwords to access the system. If I try to add a user to the system while it's in non-NT mode, I could try entering DOMAIN\USER for the name, but password is a mandatory field and I'm certainly not going to enter my password here nor ask other people to give me theirs (or get them to create users for themselves, which seems like a silly thing). If I flip it over to NT authentication I'll be at the same place again.

    Support wasn't very helpful here. The first response was:

    "NT auth enables authentication using users registered on the machine where TeamCity is installed (system users)."

    Reading between the lines, that means I have to manually add a user from my domain to the machine? I tried to clarify what I was looking to accomplish, namely creating new users in TC with NT names. The next response was even less useful:

    "Users are created in the user management on your machine, please refer to your OS documentation for details."

    Yeah, right. Of course that's what I'm asking. I, like, totally asked you how to create users in Windows. Again, no documentation on how this mechanism works or what to do here.

    Finally I decided to hook it up to source control and pull down a copy of the system. Another problem: the current release (1.2) only supports CVS, Subversion, Perforce, and VSS. No support for TFS. More Googling and I found an EAP release with TFS support and downloaded it. Now, I'm one to just go in and run an installer, but if there's a README file or something (especially one telling me about upgrades) I read it in case I need to do anything special. Nope. Not in this case, so I just ran the installer.

    Now I'm looking at a Tomcat screen (their web server for TC) producing an error and lots of Java gobbledy-gook in the log file saying a bean can't be created or something. Not a happy camper with the "upgrade". I'm sure I could just blow away the previous version with a full uninstall and install the TFS build, but what if I had real projects configured here? I've submitted a message to the guys with my details and I'm hoping they'll get back to me, but it doesn't look very good for this product. Here's the initial response to my submission:

    "Passed on to the developers, support team doesn't deal with the EAP versions."

    Okay, I can understand that. EAP versions are new(er) and maybe not supported by the main people. Still, the response is basically useless to me, along with the tool at this point. I'm not even going to bother submitting a question about how to use MSBuild with a solution file. I'm sure the answer will be something like "Configure MSBuild on your server" or some other nonsense.

    Team City looks like a great product, and I'm sure in the Java world on a Linux or Mac setup it might be the cat's meow. I personally just found it painful to get things working with Windows and .NET projects. CC.NET isn't a picnic for setting up and adding new projects (compared to Team City), but at least it always works, and there is documentation out there and a pattern to doing things. Once you set up one project you can practically set up any other.

    Basically, someday I would hope to see CC.NET implement some of the things TC does, although I'm sure that's not doable from a competition perspective (free vs. $$$). Setting up CC.NET and its projects can be a pain. I have the process down to a few hours for a new CC.NET setup from scratch (that includes setting up all the tools, downloading, etc.) and new projects can be added in under an hour (less if there's a smart build file in the project). Pulling from Subversion, VSS, and TFS is a no-brainer in CC.NET these days and works flawlessly. There are some GUI tools out there that will edit the CC.NET config file for you, but they're in the embryonic stage and it's easier to just edit the files with Notepad++.

    Maybe someone will come up with a modification to the CC.NET web dashboard to make it a little smarter like TC and allow you to at least create projects from it, but until that happens we're relegated to editing XML files and such (which isn't a horrible thing).

  • Is there a cloning machine in the house?

    What a crazy month it's been and I'm still trying to catch up here. Work is insane: I'm playing lead architect on 3 projects, ScrumMaster on those same 3 projects, and planning for 2 new projects (including putting on my infrastructure hat to do a 2000-user Vista upgrade). All this while trying to juggle the MOSS 2007 release of my SharePoint Forums, the first release of the Knowledgebase, SharePoint Builder, a SharePoint community site, and getting ready for the MVP Summit coming up in a couple of weeks (plus a couple of other projects in the wings that are just starting). I seem to have burnt my candle at both ends, went to the cupboard and used up all the other candles there, and still haven't finished. Not sure when I'm going to sleep this weekend, but it'll be an interesting few days.

  • VB.NET Version of Web Service Software Factory now available

    In addition to the 800,000 RSS feeds I subscribe to I also keep an eye on downloads from Microsoft. This is so I don't miss anything that happens to come out and become the new SharePoint or something. However monitoring downloads from Microsoft is like herding cats. They come in spurts and don't make much sense to me.

    For example, recently the following downloads got updates: Guidance Automation Toolkit, Guidance Automation Extensions, and the Web Service Software Factory. What's odd is that they were updated in the last day or so, but are still the versions from back in June 2006. So what am I getting updates for? Because someone edited an entry on a site somewhere? Or is there a real update here or not?

    You'd think something like this would be easy: notify people when a "new" version is available, not just when the page that contains the old version gets edited. But then I'm prolly asking too much.

    BTW, the Visual Basic.NET flavor of the Web Service Software Factory (the title of this blog post) is new and available here.

  • A few good men (or women)

    I'm looking for a few good men (or women, but no lizards unless you're that cool one from the Geico commercial). I need a few good, experienced intermediate to senior .NET developers in the Calgary area for projects. Short term, long term, or full time: it's flexible, but availability needs to be immediate.

    As for qualifications, a good, solid working base of .NET 2.0 is required along with good development practices (layered applications, separation of concerns, thin UI, that kind of stuff). We use the following tools, technologies, patterns, and practices, so you need a good knowledge of some (or all) of these:

    • Scrum
    • .NET 2.0
    • Composite Application UI Block (CAB)
    • Enterprise Libraries
    • Smart Client Software Factories (SCSF)
    • Test Driven Development (TDD)
    • Visual Studio Team System
    • Design Patterns/Software Factories
    • Domain Driven Design (DDD)

    If you're qualified, interested, available, and in the Calgary area, feel free to email me with a note about yourself (attaching your CV is optional) and we can chat. Thanks!

    This blog entry was brought to you by the letters N, H, L and the number 7.

  • SharePoint Code Camp this Saturday

    Please join Robert Holmes and other presenters at an all-day event in Waltham, MA on Saturday, February 24th, from 8:30 AM to 6:00 PM, covering all aspects of the new SharePoint products, WSS 3.0 and MOSS 2007. The main focus will be on the new aspects of the product line; however, we will start with an introduction to Team Services (WSS) and work up from there. Our goal is for anyone present for the whole day to leave with the knowledge of what is possible with the product and the ability to sit down and get started on a new implementation of a SharePoint solution, or extend an existing one. You can review the schedule to see what specific areas are of interest to you, or else spend the whole day and learn all there is to learn.

    You can find the code and slides here:

    You can see the content-only site (no registration) at:


  • ASP.NET 2.0:1, Bil:0

    I just spent the better part of the evening chasing down the dumbest error I've ever seen. ASP.NET decided to litter the web.config file of a web app I'm setting up with a reference to stdole.dll (not a web DLL, nor anything I reference directly or even need). I finally tracked down someone who came up with a resolution and explanation here.

    Of course I couldn't figure out which assembly was causing the problem, so I just threw my GAC'd version of stdole.dll into the bin folder of the website. I'm using a few DLLs, including Enterprise Libraries, but nothing fancy. In any case, if you ever see this crazy message when deploying a website:

    Parser Error Message: Could not load file or assembly 'stdole, Version=7.0.3300.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

    You might want to check out the link above. Either a) manually edit the web.config file (which I'm sure I'll forget to do one of these days during deployment), b) toss the file into your bin directory, or c) hunt down the culprit and set "Specific Version" to false.
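    For reference, the reference ASP.NET injects lives in the compilation assemblies list of web.config. The assembly identity below is taken from the parser error above, and the surrounding structure is the standard ASP.NET 2.0 layout; your entry may differ. Option a) amounts to deleting this <add> line:

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- The auto-added entry to remove (identity from the error message above): -->
        <add assembly="stdole, Version=7.0.3300.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```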

    Man, what a @!#!!!%!&! PITA this stupid, stupid, stupid thing is. I'm not sure if I'm more peeved at MS for coming up with such a wonderful design, or at myself for killing my brain all night over it. Now I feel as foolish as this and this.

    Sigh, I need a hug (or a beer, not sure which).

  • Versioning the Microsoft way... with NAnt

    I was inspired by a recent blog entry by Jeff Atwood here about how Microsoft versions their products and how the build number is significant. I thought it would be good to post a walkthrough of how to build your own versioning system à la Microsoft, but using NAnt. I'm sure some budding geek out there could convert this to MSBuild, but you know my love of that tool, so NAnt it is.

    First off, NAnt has a great facility for generating the AssemblyInfo.cs file that every project has. It's the asminfo task and it basically looks like this:

    <?xml version="1.0"?>
    <project name="Test" default="UpdateAssemblyInfo">
        <target name="UpdateAssemblyInfo">
            <asminfo output="AssemblyInfo.cs" language="CSharp">
                <imports>
                    <import namespace="System.Reflection" />
                    <import namespace="System.Runtime.InteropServices" />
                </imports>
                <attributes>
                    <attribute type="AssemblyTitleAttribute" value="ClassLibrary1" />
                    <attribute type="AssemblyDescriptionAttribute" value="" />
                    <attribute type="AssemblyConfigurationAttribute" value="" />
                    <attribute type="AssemblyCompanyAttribute" value="" />
                    <attribute type="AssemblyProductAttribute" value="ClassLibrary1" />
                    <attribute type="AssemblyCopyrightAttribute" value="Copyright (c) 2007" />
                    <attribute type="AssemblyTrademarkAttribute" value="" />
                    <attribute type="AssemblyCultureAttribute" value="" />
                    <attribute type="ComVisibleAttribute" value="false" />
                    <attribute type="GuidAttribute" value="f98c8021-fbf1-44ff-a484-946152cefdb8" />
                    <attribute type="AssemblyVersionAttribute" value="" />
                    <attribute type="AssemblyFileVersionAttribute" value="" />
                </attributes>
            </asminfo>
        </target>
    </project>

    This will produce a default AssemblyInfo.cs file that looks like this:

    using System.Reflection;
    using System.Runtime.InteropServices;
    // <auto-generated>
    //     This code was generated by a tool.
    //     Runtime Version:2.0.50727.42
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    [assembly: AssemblyTitleAttribute("ClassLibrary1")]
    [assembly: AssemblyDescriptionAttribute("")]
    [assembly: AssemblyConfigurationAttribute("")]
    [assembly: AssemblyCompanyAttribute("")]
    [assembly: AssemblyProductAttribute("ClassLibrary1")]
    [assembly: AssemblyCopyrightAttribute("Copyright (c) 2007")]
    [assembly: AssemblyTrademarkAttribute("")]
    [assembly: AssemblyCultureAttribute("")]
    [assembly: ComVisibleAttribute(false)]
    [assembly: GuidAttribute("f98c8021-fbf1-44ff-a484-946152cefdb8")]
    [assembly: AssemblyVersionAttribute("")]
    [assembly: AssemblyFileVersionAttribute("")]

    Notice, however, a few things. First is the Guid. We had to hard-code it, which might be okay, but let's dig into NAnt scripting by replacing it with a freshly generated one. NAnt also lets you write embedded code (C#, VB.NET, etc.) via the <script> task, so let's write a small task to do that. We'll have it generate a new Guid and set a custom property in the NAnt script that we'll then use in our asminfo task. First, create a property in the NAnt script to hold our Guid:

    <property name="project.guid" value="f98c8021-fbf1-44ff-a484-946152cefdb8" />

    Then use that property in our GuidAttribute:

    <attribute type="GuidAttribute" value="${project.guid}" />

    Finally, here's the target to generate a Guid via NAnt (make the default UpdateAssemblyInfo target depend on this one):

    <target name="CreateUniqueGuid">
        <script language="C#">
            <code>
                <![CDATA[
                    public static void ScriptMain(Project project) {
                        project.Properties["project.guid"] = Guid.NewGuid().ToString();
                    }
                ]]>
            </code>
        </script>
    </target>

    Great. We now have a NAnt script that will generate a version file with a unique Guid every time. Next we want to tackle the versioning issue.

    As described by Jensen Harris here, the Microsoft Office scheme is pretty simple:

    • Take the year in which a project started. For Office "12", that was 2003.
    • Call January of that year "Month 1."
    • The first two digits of the build number are the number of months since "Month 1."
    • The last two digits are the day of that month.

    Using this, we'll need to set up a couple of properties: one to hold the year the project started, the other the build version we want to manipulate:

    <property name="project.year" value="2003" />
    <property name="build.version" value="1.0.0.0" />

    Now, we could write a lot of NAnt code, as there are functions to manipulate dates, but it's much easier using the <script> task and some C#. Here's the NAnt target to generate the build number using the Microsoft Office approach:

    <target name="GenerateBuildNumber">
        <script language="C#">
            <imports>
                <import namespace="System.Globalization" />
                <import namespace="System.Threading" />
            </imports>
            <code>
                <![CDATA[
                    public static void ScriptMain(Project project) {
                        Version version = new Version(project.Properties["build.version"]);
                        int major = version.Major;
                        int minor = version.Minor;
                        int build = version.Build;
                        int revision = version.Revision;
                        int startYear = Convert.ToInt32(project.Properties["project.year"]);
                        DateTime start = new DateTime(startYear, 1, 1);
                        Calendar calendar = Thread.CurrentThread.CurrentCulture.Calendar;
                        int months = ((calendar.GetYear(DateTime.Today)
                            - calendar.GetYear(start)) * 12)
                            + calendar.GetMonth(DateTime.Today)
                            - calendar.GetMonth(start);
                        int day = DateTime.Now.Day;
                        build = (months * 100) + day;
                        version = new Version(major, minor, build, revision);
                        project.Properties["build.version"] = version.ToString();
                    }
                ]]>
            </code>
        </script>
    </target>
    We read the version from the NAnt property as a starting point (since we're only replacing the build number), compute the new build number, and construct a new Version with it (the fields of a Version are read-only in .NET, so you can't assign to them directly). The result is then written back out to the property as a string.

    If this is run today (February 17, 2007), it's been 49 months since the start of 2003 and today is the 17th day, so the build number is 4917.
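    As a quick sanity check, here's the same month/day arithmetic from the <script> block pulled out into a standalone C# program (the class and method names are mine, not part of the NAnt script):

```csharp
using System;

class BuildNumberCheck
{
    // Office-style build number: months since January of the start year in the
    // high digits, day of the month in the low two digits.
    static int OfficeBuild(int startYear, DateTime date)
    {
        int months = ((date.Year - startYear) * 12) + date.Month - 1;
        return (months * 100) + date.Day;
    }

    static void Main()
    {
        // February 17, 2007 with a 2003 start year: 49 months, day 17 -> 4917.
        Console.WriteLine(OfficeBuild(2003, new DateTime(2007, 2, 17)));
    }
}
```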

    Here's the final output from this NAnt script:

    using System.Reflection;
    using System.Runtime.InteropServices;
    // <auto-generated>
    //     This code was generated by a tool.
    //     Runtime Version:2.0.50727.42
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    [assembly: AssemblyTitleAttribute("ClassLibrary1")]
    [assembly: AssemblyDescriptionAttribute("")]
    [assembly: AssemblyConfigurationAttribute("")]
    [assembly: AssemblyCompanyAttribute("")]
    [assembly: AssemblyProductAttribute("ClassLibrary1")]
    [assembly: AssemblyCopyrightAttribute("Copyright (c) 2007")]
    [assembly: AssemblyTrademarkAttribute("")]
    [assembly: AssemblyCultureAttribute("")]
    [assembly: ComVisibleAttribute(false)]
    [assembly: GuidAttribute("a6e7ff79-63ba-443f-8bc3-0c4b43f43ffe")]
    [assembly: AssemblyVersionAttribute("1.0.4917.0")]
    [assembly: AssemblyFileVersionAttribute("1.0.4917.0")]

    And here's the full NAnt script:

        <?xml version="1.0"?>
        <project name="Test" default="UpdateAssemblyInfo">
            <property name="project.guid" value="f98c8021-fbf1-44ff-a484-946152cefdb8" />
            <property name="project.year" value="2003" />
            <property name="build.version" value="1.0.0.0" />
            <target name="UpdateAssemblyInfo" depends="CreateUniqueGuid, GenerateBuildNumber">
                <asminfo output="AssemblyInfo.cs" language="CSharp">
                    <imports>
                        <import namespace="System.Reflection" />
                        <import namespace="System.Runtime.InteropServices" />
                    </imports>
                    <attributes>
                        <attribute type="AssemblyTitleAttribute" value="ClassLibrary1" />
                        <attribute type="AssemblyDescriptionAttribute" value="" />
                        <attribute type="AssemblyConfigurationAttribute" value="" />
                        <attribute type="AssemblyCompanyAttribute" value="" />
                        <attribute type="AssemblyProductAttribute" value="ClassLibrary1" />
                        <attribute type="AssemblyCopyrightAttribute" value="Copyright (c) 2007" />
                        <attribute type="AssemblyTrademarkAttribute" value="" />
                        <attribute type="AssemblyCultureAttribute" value="" />
                        <attribute type="ComVisibleAttribute" value="false" />
                        <attribute type="GuidAttribute" value="${project.guid}" />
                        <attribute type="AssemblyVersionAttribute" value="${build.version}" />
                        <attribute type="AssemblyFileVersionAttribute" value="${build.version}" />
                    </attributes>
                </asminfo>
            </target>
            <target name="CreateUniqueGuid">
                <script language="C#">
                    <code>
                        <![CDATA[
                            public static void ScriptMain(Project project) {
                                project.Properties["project.guid"] = Guid.NewGuid().ToString();
                            }
                        ]]>
                    </code>
                </script>
            </target>
            <target name="GenerateBuildNumber">
                <script language="C#">
                    <imports>
                        <import namespace="System.Globalization" />
                        <import namespace="System.Threading" />
                    </imports>
                    <code>
                        <![CDATA[
                            public static void ScriptMain(Project project) {
                                Version version = new Version(project.Properties["build.version"]);
                                int major = version.Major;
                                int minor = version.Minor;
                                int build = version.Build;
                                int revision = version.Revision;
                                int startYear = Convert.ToInt32(project.Properties["project.year"]);
                                DateTime start = new DateTime(startYear, 1, 1);
                                Calendar calendar = Thread.CurrentThread.CurrentCulture.Calendar;
                                int months = ((calendar.GetYear(DateTime.Today)
                                    - calendar.GetYear(start)) * 12)
                                    + calendar.GetMonth(DateTime.Today)
                                    - calendar.GetMonth(start);
                                int day = DateTime.Now.Day;
                                build = (months * 100) + day;
                                version = new Version(major, minor, build, revision);
                                project.Properties["build.version"] = version.ToString();
                            }
                        ]]>
                    </code>
                </script>
            </target>
        </project>


  • How to drive a developer insane

    Further to my post about the two dozen add-ins I have loading into Visual Studio (wonder what Scott's startup looks like), I have no fewer than *3* Refactor menus available to me.

    First there's the built-in refactorings from Visual Studio (including an additional one, Refactor to Resource, which I added via the VS PowerToys):

    Refactor menu from Visual Studio

    Next there's ReSharper's Refactor menu, which has a million options:

    Refactor menu from ReSharper

    Finally there's Developer Express' Refactor! Pro menu which is context sensitive so the options change based on what element you pick in the Editor (but I'm sure it's got as many as ReSharper or more):

    Refactor menu from Refactor! Pro

    Round that out with GhostDoc (Document This), the test options (TestDriven), a bunch of other options ReSharper and who knows what else threw in there, plus an add-in that lets me jump to a method in Reflector, and all those silly "Create Unit Tests..." options that Team Suite added, which I swore I would never use.

    It's enough to drive a guy batty (which really does explain my state of mind after a day working in this environment). Never really noticed it before but Holy Options Batman, that's a crapload of stuff to deal with.

    Excuse me while I reboot my brain.

    P.S. there is an answer to this madness which makes complete sense. Collapse all "Refactor" menu options into a single menu and filter the refactorings to only show what I can do (like how DevExpress does it, good job guys!). Don't show me stuff that I can't do (even if it is grayed out). It's useless to me if I can't click on it. 

    In fact, do that to the entire context menu system in Visual Studio.

    Orcas anyone? Can you hear me guys!

    Wake up and give us code monkeys some screen real estate back please!

  • What's in your Visual Studio?

    I noticed tonight my splash screen for Visual Studio was getting a little out of control (in addition to taking some time to load):

    Visual Studio Team Suite Splash Screen

    I run Team Suite (just because I can) and I've got a lot loaded (including a new experiment, running ReSharper *and* CodeRush/Refactor Pro together) but I think I'm at the limit of my splash screen length and the add-ins it can display.

    Actually some add-ins (like GhostDoc and TestDriven.NET) don't show up on the splash screen so here's a complete list of what's in my Visual Studio:

    • Microsoft Visual Studio 2005
    • Microsoft .NET Framework
    • Microsoft Visual Basic 2005
    • Microsoft Visual C# 2005
    • Microsoft Visual C++ 2005
    • Microsoft Visual J# 2005
    • Microsoft Visual Studio 2005 Tools for Applications
    • Microsoft Visual Studio Tools for the Microsoft Office System
    • Microsoft Visual Web Developer 2005
    • Microsoft Web Application Projects 2005
    • Microsoft Visual Studio 2005 Team Edition for Software Architects
    • Microsoft Visual Studio Team Edition for Database Professionals Version 2.0.50727.251
    • Microsoft Visual Studio 2005 Team Edition for Software Developers
    • Microsoft Visual Studio 2005 Team Edition for Software Testers
    • Microsoft Visual Studio 2005 Team Explorer
    • CodeSmith 4.0
    • Crystal Reports for Visual Studio 2005
    • DXCore for Visual Studio 2.0
    • Enterprise Library Configuration Editor 3.0 Jan 2007 CTP
    • Extensions for Windows WF
    • Microsoft Recipe Framework Package 8.0
    • Microsoft Visual Studio 2005 Team Explorer - ENU Service Pack 1 (KB926601)
    • Refactor! for Visual Studio 1.0
    • ReSharper 2.5 (build #326)
    • SQL Server Analysis Services
    • SQL Server Integration Services
    • SQL Server Reporting Services
    • DevExpress Tools
    • GhostDoc
    • TestDriven.NET 2.0.1948 Professional

    Anyone else have this problem? Wonder what will happen if I install more add-ins? Maybe the makers of laptops need to focus more on vertical space rather than horizontal (or I could turn mine on its side, but it's hard to type that way).

  • Being a responsible aggregate blogger

    Sometimes things make my blood boil. Other times I just let it pass and move on. As I was trying to catch up on my feeds tonight (only 6182 to go for this month) I stumbled across this post by Andrew Stopford:

    Commercial UI components in OSS

    The use of commercial UI components in an OSS project is a topic of debate and contention. The standard winforms/webforms components are limited, and using commercial UI components allows for a far greater degree of UI functionality in an eighth of the time. It's such a minefield that, certainly in the .NET space, they are not used. You run the risk of an ever-changing UI; a quote from this post on the use of commercial UI components in RSS Bandit.

    I read the quote and it seemed to be very familiar. Wait a minute, it was mine from this post on being a responsible open source developer I wrote back in December.

    The problem I have with the post that Andrew links to is that it goes to some aggregate site (the World of ASP.NET), but if you visit the page from the link above, you have no idea who wrote it. In fact there's no link back to my original content.

    My blog content is licensed under the Creative Commons license and you're free to use it, but there needs to be some professionalism about "borrowing" someone else's content. I mean, at least include a link, credit, or a mention of where it came from if it's not your own thoughts.

    I've come across other sites where people just downright steal content and even claim it as their own, which is a whole 'nuther problem. Just as a friendly reminder for anyone who wants to link to or quote my content (and you're quite welcome to), please direct people back to the source. It helps me understand if anyone is interested in my message and provides me with a conduit to the outside world in case I'm mistaken (hey, it can happen) or someone wants to leave me for dead in a dumpster.

    Just be a little responsible when you blog, that's all I ask.

  • Shane Perran is a wiener!

    Congrats to Shane Perran with his splash screen entry for SharePoint Builder as it was the fav (both mine and people who emailed me about the entries).


    Shane will receive a special package from me at this year's MVP Summit, including the fabulous home version of SharePoint Builder along with Rice-A-Roni, the San Francisco Treat.

    Thanks to everyone who entered!

  • Stuck between a Ribbon and a hard place

    I'm stuck. While some people will disagree with me, I like the Ribbon. It's a cool idea and looks sharp. As a side note, I convinced a customer we would incorporate it into their business app as the corporation was moving to Office 2007 this year anyways, and they would be stuck with it. We'll be the first app using it and the customer is quite happy after seeing what it does and how they can use it. Anyways, back to my problem.

    SharePoint Builder is a free, Smart Client app that is coming out shortly (I think the alpha release is a couple of days away). The Ribbon concept kind of works with it: I can envision Ribbon buttons for the different types of site definitions the editor handles (sites, lists, features, etc.), with Ribbon groups inside for items in those files, and the ability to add, edit, and delete them. This paradigm fits the Ribbon well; the alternative is traditional menus and toolbars (which work too, but aren't as sexy looking or functional).

    The trouble is finding a suitable library to implement the Ribbon control. There are tons of commercial libraries available now that provide the Ribbon control (DotNetBar, DevExpress, etc.) and they all work equally well. If I had a choice of a commercial package I would probably go with DevExpress: I have a copy of it, it follows the Microsoft guidelines for the Ribbon pretty closely, works well, and is easy to implement. Therein lies my trouble. SharePoint Builder is a free project, and open source to boot. I'm not about to offer up a free, open source tool that relies on an expensive (or inexpensive, for that matter), closed, commercial control. I simply won't do it.

    The only free, open source Ribbon I found out there was a GotDotNet project, but it was pretty far off from the MS guidelines for Ribbon apps. This might spark a debate, as many people don't agree that control authors should follow the guidelines put out by Microsoft; however, they're published and out there, and if I'm going to use a Ribbon control in an app of mine, I'm going to do so with one that's as close to the guidelines as possible. The GDN project is too far off from being compatible, and while it sort of looks like the Ribbon, it certainly doesn't behave like it (which I think is the main point of the Ribbon concept anyways).

    So it doesn't look like a Ribbon is going to work here unless someone else has some thoughts. I'm already swamped with work so I really don't have time to build my own open source Ribbon to offer this project (and everyone else once it was built), and the ones out there don't measure up to the guidelines. A free copy of a commercial package isn't the answer, as I don't want others to be unable to build the application, and a trimmed-down "donation" copy of a commercial package isn't going to work either, as it's only as good as long as the company is around (and there's the problem of what happens if/when it breaks or needs an update). I guess menus and toolbars it is for this project.

  • Crash Recovery

    A couple of weeks ago I had my first car accident in 24 years of driving. The car was a write-off and we've gone out and bought a replacement, as one does in this situation.

    Here's what the old car (a 2004 Suzuki XL7) looks like (not mine but an incredible simulation):


    Here's what I did to it:

    Rear end smash up

    What's left of the front of my car

    And here's what replaced it:


    A 2006 Pacifica. Leather interior, heated seats, dual climate control, lotsa buttons. Works for me.

  • Scrum tasks, tasks, and more tasks

    I find that I'm often letting my teams know some norms around Scrum and the process of a task-oriented system, so I thought I would throw these out to the world. We sometimes lose sight of these simple things and get wrapped up in bigger, more grandiose ideas, so consider this a Wednesday morning reminder.

    • Don't assign tasks to other people. You don't want people telling you what to do because frankly I don't function that way and I don't think people work well in this manner (in general). One of the key principles of Scrum is self-organized teams meaning teams that decide what they need to do to get the job done, not others telling them.
    • Don't claim tasks as your own but don't do work on them. I often see people stake claim to tasks in Team System (our Scrum tool) and set them to In Progress but they remain that way the entire sprint. It deflates the value of the system and just makes it look like the team is busy (to outsiders) but we're not really doing anything (or you can't tell what you're doing).
    • Don't create huge tasks that take days to implement. My rule is generally under 4 hours. If you have an 8 hour task, what you're saying is that, uninterrupted, it will take you the entire day to work on. In reality you'll be interrupted; there'll be meetings, phone calls, coffee breaks, and 8 hours of effort will probably take 12 hours of duration, which means you have one task that spans 1 1/2 days and maybe even 2. So as a team member, you're saying to your teammates, leave me alone for 2 days while I do this in isolation. That's not conducive to a good team effort, nor is it helping anyone. If you see a task that's more than 4 hours, ask yourself if it's really 1 big task or 2 small ones. The other advantage to smaller tasks is that you might break something up into two 4 hour tasks that can be done by two different people, meaning you actually deliver value faster, and isn't that what we're trying to accomplish here?
    • If you already have something "In Progress" don't pick up a new task. I've been told that I have ADD, and while I might (I do have many projects on the go, both at work and home), I try to manage it. Picking something and saying "I'm working on this", then turning around the next day and saying the same thing about something else, just looks like you're trying to look busy. It's okay to grab a couple of tasks at a time, or grab a new one if a task is waiting on something or needs a small collaboration with, say, the customer. But knowing that a task has 2 hours of work left, yet grabbing a new one and starting it without finishing the first, is just plain dumb. Why would you do that? Would you wash half your car, then go and cut the lawn knowing your car is covered with soap? Even with ADD I wouldn't.

    How do you eat an elephant? Easy, one bite at a time.

  • SharePoint Builder Splash Screen Entries

    Here's the first round of splash screen entries for SharePoint Builder. I won't tag who did what yet.





    The last one is my fav so far as it shows innovation and is different (that's a hard hat in case you didn't get it). Note that you don't have to include the "Version 1.0" stuff in your entries as that will change so it's better to leave it blank. Remember to submit your entry soon via email and you can comment on the entries on the blog here.

    Keep 'em coming!

  • Splash me baby, splash me!

    I'm looking for a good graphic design that can beat this:

    SharePoint Builder Splash Screen

    Specifically a splash screen for my SharePoint Builder tool. Requirements are simple: an image of some size (it's a splash screen remember so 1024x768 might be a bit large) that represents what the program does, namely helps you build SharePoint configuration files.

    SharePoint Builder

    Think you're up to the challenge? The reward is my undying gratitute, your name in lights, and the warm fuzzy feeling that you contributed to something useful in the universe. Not bad for a Sunday afternoon huh.

    So break out your crayons and send me your ideas (final or otherwise) via email. Let's give this 2 weeks from today and see if anyone comes up with something cool. I'll post the entries here and maybe we'll hold a public vote to pick the best.

    Thanks for helping out!

  • Best Practices with FitNesse

    Google doesn't seem to show me any best practices that anyone has put together with FitNesse, which is odd (or my Google skills suck) so I thought I would capture a bunch here that I've come up with to start.

    • Keep your wiki test pages small
      • Having a wiki test page with 20,000 tests doesn't do anyone any good. First off, it'll take days to load. Second, it's a sea of gobbledygook in edit mode and next to impossible to find that one line you need to edit. Do your team a favour and keep them small. Like development practices, I like to keep my FIT pages down to a screen or two at most.
    • Organize your tests so they make sense
      • FIT supports test suites and they're useful, especially for organizing things. Having one table with hundreds of scenarios might be good (more tests are always good), but if they're difficult to read or navigate they'll be cumbersome to maintain. Apply the single responsibility principle when it comes to organizing your tests, and try to group them into some kind of organizational category or unit that makes sense to the tests and users. For example, if you have FIT tests that are testing accounts, consider writing a table for each type of account; a table for account fail conditions; a table for special conditions; etc.
    • Use "Friendly Names"
      • Rather than seeing complete namespaces and fixture names, use friendly names with spaces. So a fully qualified fixture name like:
        "|MyNamespace.MyFixture|"
        can become:
        "|My Fixture|"
        You can also use the !import table to remove the fully qualified namespaces. This makes the tests more appealing to look at when you're showing their value to the customer.
    • Seed your test pages and engage the authors
      • Getting started with FIT and actually having tests can be a big hurdle. To be set up for success, have the entire team not only go through creating the fixtures and tests but also work with the authors on what scenarios they can test. For example, not everyone thinks about the failure conditions that should be tested (using the fail[] markup), so make sure the authors of the tests know all the different things they can try. The best scenario you can ask for is a perfect test author who knows all the things to test, but when adopting FIT you might need to do a FitNesse 101 with them and introduce them to different ideas around testing, not just entering tables over and over again. Quality, not quantity, is what you're looking for in the end.
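    To sketch the !import and friendly-names tips above, here's roughly what a test page might contain (the namespace and fixture names are hypothetical, just for illustration):

    ```
    |import|
    |MyApp.AcceptanceTests|

    |My Fixture|
    |numerator|denominator|quotient?|
    |10       |2          |5        |
    ```

    With the import table at the top of the page, the friendly name "My Fixture" can resolve to a MyFixture class in the imported namespace, so readers never see the fully qualified name.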

    Feel free to add your own!

  • Where did my SDK go?

    If you've downloaded the recent release of the MOSS and WSS SDKs, you may be scratching your head wondering where they installed to? After all, they're just some compiled help files and samples right?

    Seems that in their infinite wisdom, some young and bright engineer at Microsoft broke Phil Haack's first rule of hardware design (which applies just as well to software design): The computer belongs to me, not you!

    It seems the SDK installers will hunt down and find the last writable drive in your system and install there. No questions asked. So if you have a removable drive or something (even a USB key) plugged in, it'll be there. The install paths for each SDK are as follows:

    • WSS: \Windows SharePoint Services Developer Resources
    • MOSS: \2007 Office System Developer Resources

    Check your highest drive letter for these paths to see if the files ended up there. When I first installed mine, I had a USB drive plugged into F: so it ended up there; after pulling all my external drives, it ended up on C:. I don't have a network drive available right now, but I would assume it might install there too if you have write privileges.
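    If you want to hunt for the files programmatically, the search order can be sketched like this (a hypothetical helper written in Python for illustration, not part of any SDK tooling):

    ```python
    import string

    # Folder names from the post; the installer drops the SDK at the root
    # of the last (highest-lettered) writable drive, so check Z: downward.
    SDK_FOLDERS = [
        r"\Windows SharePoint Services Developer Resources",  # WSS
        r"\2007 Office System Developer Resources",           # MOSS
    ]

    def candidate_locations():
        # Yield every drive-letter/folder combination, highest drive first.
        for letter in reversed(string.ascii_uppercase):
            for folder in SDK_FOLDERS:
                yield letter + ":" + folder
    ```

    On a real machine you'd filter the candidates with os.path.isdir to see which one actually exists.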

  • MOSS and WSS RTM SDKs available now

    How's that for posts with the most acronyms? It might be old news to you by now, but MS has finally released the RTM versions of the SDK documentation for both MOSS and WSS. Randall's blog has all the nitty gritty details here. He's also got a great resource called the top 10 resources for SharePoint developers (which is actually 14, who said softies could count?) so check that out here.

    As for the SDKs they're final now but wait, there's more... these are the first ones outside of the original core set to be Community Content enabled. Yup, the SDKs allow you (with a passport account and the sacrifice of a willing virgin) to contribute content to any entry, any class, any method, and well... anything in the SDK. So watch for this MVP to bide his time updating the documentation as I can find time, as will others I'm sure. Very cool indeed.

    You can download the SDKs directly here:

  • Domain Driven Design for C# 3.0

    Or maybe 4.0? Paul Gielens jump-started my brain this morning with a post about DDD support in the C# language. Domain Driven Design can be hard to grasp and even harder to implement. Conceptually it's easy and makes sense (at least to me). We all read the books and get the theory, but when the rubber hits the road the biggest question anyone has is "how do I write code that reflects this?". Everyone has ideas, and if you take a quick search of the DDD mailing list you'll see dozens of code snippets all talking about the same thing but implementing it differently. Sure, software development and patterns are like that. Take any one pattern and give it to 3 people and you'll get 5 different interpretations. However, as we continue to dance around implementation, we get confused by the terms and sometimes miss the ball.

    Here's a DDD-style implementation using plain C# constructs, where the intent isn't all that clear:

    namespace Car
    {
        class Wheel { ... }
        class Position { ... }
        class CarRepository { ... }
        class FuelTransfer { ... }
    }

    Now given the spark that Paul mentioned, here's the same code written in DDD style:

    domain Transportation
    {
        aggregate Car
        {
            entity Wheel { ... }
            valueobject Position { ... }
            repository CarRepository { ... }
            service FuelTransfer { ... }
        }
    }

    If you've read DDD and internalized Eric Evans' theory around things like aggregates, boundaries, entity and value objects you'll get this. You'll see the domain visually (or at least I do) and understand it. No longer would you have questions like "is x an aggregate of y and where should service z live?". The code tells all. Brilliant.

    I love C#, but it has its limitations in that it simply regurgitates the language it was based on, with typical constructs like class, namespace, etc. I'm sure (though I haven't thought through how you could do it) that a language like Ruby could support this, but that's a question for smart types like John Lam to answer (although with John at the mothership now and working on the CLR team, anything is possible).

    I think it's an excellent idea, my only wish is that something like this could come true someday!

  • Lambda? Llama? We don't need no stinkin' llamas...

    Actually this post has nothing to do with llamas but Flickr never fails me:

    Llama en Buenos Aires

    I've been personally putting off trying to get my head around Lambda expressions in C# 3.0 for a while as I'm trying to get everything wrapped up with what came in 2.0, but a little gem has come out of the quagmire we call the internet. My good friend Howard Dierking, not a llama but a certification guy and all-around cool developer over at the evil empire, has put together an awesome explanation of Lambda expressions and C# 3.0 by walking us through what we do in 2.0 and how it changes in 3.0.

    His explanation and example make perfect sense, cut down the clutter of delegate code, and are easy to understand; if you don't get it now you'll really be in trouble when the .NET 3.5 framework hits. Check out Howard's blog entry here and judge for yourself.

  • Today could be the first day of the rest of your life

    It's been a week since I blogged; mostly it's been staying busy with work, trying to get on top of all the projects I have and, most importantly, remembering the passwords for all my VMs, which I seem to have now forgotten. However, today was an experience I'd never had before. I got into a car accident. 24 years of driving and I've never actually had an accident. Call me a wuss, but I drive the speed limit, and when the road conditions are bad I slow down and try to keep my cool. Or so I thought.

    As I was heading into work this morning I hit a patch of black ice and slush and away I went. My Suzuki XL7 wiggled along the highway and slammed head first into the concrete barrier that separates the northbound and southbound traffic. It was a hard enough hit to deploy the airbag and it shook me up for a second or two. Then I realized I was perpendicular to oncoming traffic and all I could see was headlights. No, it wasn't a "My life flashed before my eyes" moment. It was just a "This sucks" moment as I saw the oncoming traffic. Lucky for me it was 5:30am, so traffic was light, and the roads were bad (with a light dusting of snow), so everyone was travelling pretty slow (I was only doing 70-80 km/h when I hit the barrier, as opposed to the 110 km/h speed limit). I got the vehicle off the road and discovered there were two other vehicles involved, as they had swerved to avoid me and ended up hitting each other.

    No injuries, although my thumb hurts like the dickens right now (I think I clenched the steering wheel when I hit the barrier) and my chest is sore from hitting the airbag/steering wheel. The other guys had no injuries and damage was pretty minor to their vehicles (back bumper torn off and a dent on the side panel of one truck). My damage was a little more severe, as the front end was all crushed in (hood buckled, bumper, lights, grill, etc.) and the back bumper was ripped off when one of the trucks swerved to avoid me, hit me on the side, and caught my bumper with his. All in all, I have a pretty good suspicion that my truck is a write-off, but they might be able to fix it.

    So a sucky way to start the morning and the rest of the day was being mad at myself for what happened and sorting out all the insurance crap and whatnot (including trying to track down a rental car which is increasingly difficult in this city these days). On the positive side, besides being alive and kicking it opened up the idea of buying a new vehicle so we're now looking at something like a Jeep Compass which looks pretty sexy.

    Like I said, this wasn't a life-altering event for me. It was an accident (and my first). Yes, with a slight twist of events in the timeline of what went down I could have been writing this from the great blog space in the sky or a hospital bed, but I'm not, so you move on. However, it does remind me to live life to its fullest each and every day. Slow down, enjoy what you have and what's going on around you, and relax. There are probably bigger things in the universe to worry about, so don't get your panties in a knot over something silly like property damage.

  • Web Service Software Factory update released

    I'm a factory kinda guy. Ever since I read Keith Short's book I was enamoured with the concept of how factories work. Microsoft took the ball and got things going with the various software factories they're releasing and now they've updated their Web Service Software Factory.

    The Service Factory is a cohesive collection of various forms of guidance that have been built with the primary goal of helping you build high quality connected solutions in a more consistent way with less effort. The Web Service Software Factory comes in the form of a guidance package that allows guidance to be automated from inside Visual Studio 2005 through the use of a wizard-based dialog that can be modified (by an architect perhaps) to fit the needs of a specific solution.

    You can grab the latest release here. As with other factories, you'll need the Guidance Automation Extensions installed to run the factory (and it only runs on the professional SKUs of Visual Studio, not Express).

  • Using Dependency Injection with CAB

    I was working through a problem tonight regarding dependency injection and CAB. CAB provides a facility to inject services and whatnot into other classes using ObjectBuilder, Microsoft's DI framework. ObjectBuilder isn't the same as a DI/IoC container like Windsor Container or Spring.NET (or Jeremy Miller's excellent StructureMap) but more like a framework for building containers. However, in CAB it serves the purpose we need.

    Let's say I have a service that performs lookups and returns me lists of items from some backend system. I would like to use this LookupService in various modules but I don't want the modules responsible for creating the service (especially since I only want one of them and don't want to deal with singletons) and I want an easy way to ensure the service is loaded and ready to go when I need it. Here's where CAB will help you with this.

    First let's look at our service implementation:

    public class LookupService : ILookupService
    {
        public List<KeyValuePair<int, string>> Items
        {
            get
            {
                List<KeyValuePair<int, string>> items = new List<KeyValuePair<int, string>>();
                items.Add(new KeyValuePair<int, string>(1, "Item 1"));
                items.Add(new KeyValuePair<int, string>(2, "Item 2"));
                return items;
            }
        }
    }
    This is a straightforward service that returns a generic List<> of KeyValuePair<>s. I might use this in my UI in a combo box or whatever, but it's just a lookup of items. The implementation here is hard-coded, but you could just as easily have this call out to a database, make an asynchronous web service call, whatever you need.

    To share the service, I'll use the Infrastructure.Module project in my SCSF-generated solution. This module gets loaded first; using SCSF I have it set up as a dependency, so whenever the system loads any module this one is loaded first, ensuring my service is there. Here's my ProfileCatalog.xml showing the dependency:

    <SolutionProfile xmlns="">
        <Section Name="Services">
            <Modules>
                <ModuleInfo AssemblyFile="Infrastructure.Module.dll" />
            </Modules>
        </Section>
        <Section Name="Apps">
            <Dependencies>
                <Dependency Name="Services" />
            </Dependencies>
            <Modules>
                <ModuleInfo AssemblyFile="Project.dll" />
            </Modules>
        </Section>
    </SolutionProfile>
    The module dependency is part of SCSF, so it won't exist if you're just using CAB. In my profile catalog, the moment the Project.dll module loads, it will first load its dependency module(s) from the Services section of the XML file. You can have as many services as you want here and they'll load in the reverse of the order they're listed in the file.

    To instantiate the service and make it available, I have to load it up and add it to the RootWorkItem and its list of services. This is done in ModuleController.cs in the Infrastructure.Module project:

    public class ModuleController : WorkItemController
    {
        public override void Run()
        {
            AddServices();
            ExtendMenu();
            ExtendToolStrip();
            AddViews();
        }

        private void AddServices()
        {
            WorkItem.RootWorkItem.Services.AddNew<LookupService, ILookupService>();
        }

        private void ExtendMenu()
        {
        }

        private void ExtendToolStrip()
        {
        }

        private void AddViews()
        {
        }
    }
    If I were to load it up like a regular WorkItem and only use this code:

    private void AddServices()
    {
        WorkItem.Services.AddNew<LookupService, ILookupService>();
    }

    Then I would be loading it into the services for this module only, which is great, but I want all modules to be able to use it, so I add it to my RootWorkItem. RootWorkItem is a property of any WorkItem that refers to the one and only root WorkItem created by the Shell. This way I know there's only one and I can access it from any module anywhere.

    Once it's been added to the RootWorkItem's list of services, I can inject it into any module I need. I'll inject it into my presenter class as that's where I'll use it. The presenter will call the service to get its values and set the View with those values to update some GUI element (the implementation of the View isn't shown, but it just takes the values and binds them to a listbox or whatever you would use them for). I can inject it into the Presenter class in two different ways. First, I can use the [ServiceDependency] attribute on a parameter passed to the constructor of the Presenter:

    public class ProjectListViewPresenter : Presenter<IProjectListView>
    {
        private ILookupService _lookupService;

        public ProjectListViewPresenter([ServiceDependency] ILookupService lookupService)
        {
            _lookupService = lookupService;
        }
    }

    Note that nowhere do I have to call the constructor myself; this is done via the AddViews method in the ModuleController, which knows it needs a type of ILookupService to inject during construction. The constructor sets a private member variable of type ILookupService to the value passed in. ObjectBuilder knows it needs to get an object of that type and will find it using the ServiceLocator service, which is constructed by the Shell. The second way is to set a property and decorate it with the [ServiceDependency] attribute like so:

    public class ProjectListViewPresenter : Presenter<IProjectListView>
    {
        private ILookupService _lookupService;

        [ServiceDependency]
        public ILookupService LookupService
        {
            get { return _lookupService; }
            set { _lookupService = value; }
        }
    }

    This has the same effect and is done whenever the object is created. Use one technique, not both, as they'll both be called; even though it's the same service object, it's just a waste to do it twice. Finally, I just use the service in a method in my presenter when it's ready to update the view:

    public class ProjectListViewPresenter : Presenter<IProjectListView>
    {
        private ILookupService _lookupService;

        [ServiceDependency]
        public ILookupService LookupService
        {
            get { return _lookupService; }
            set { _lookupService = value; }
        }

        public override void OnViewReady()
        {
            View.Items = LookupService.Items;
        }
    }


    The end result is that I have a loosely coupled service that's injected into my presenter and provides my view with the data it needs. You can use either technique to set the service in the presenter, and the great thing is that using something like Rhino Mocks you don't need to create the real implementation of the service, so writing presenter tests is a breeze: you can set up whatever conditions you want for your tests.
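    As a sketch of what such a test might look like (hand-rolled stubs here instead of Rhino Mocks; the [Test] attribute is NUnit-style, and it assumes the generated Presenter<TView> base exposes a settable View property, so check your own Presenter.cs):

    ```csharp
    // Hypothetical stubs for testing the presenter without any real backend;
    // a mocking tool like Rhino Mocks could generate these for you instead.
    public class StubLookupService : ILookupService
    {
        public List<KeyValuePair<int, string>> Items
        {
            get
            {
                List<KeyValuePair<int, string>> items = new List<KeyValuePair<int, string>>();
                items.Add(new KeyValuePair<int, string>(1, "Stubbed item"));
                return items;
            }
        }
    }

    public class StubProjectListView : IProjectListView
    {
        private List<KeyValuePair<int, string>> _items;

        public List<KeyValuePair<int, string>> Items
        {
            get { return _items; }
            set { _items = value; }
        }
    }

    [Test]
    public void OnViewReadyPushesLookupItemsToTheView()
    {
        // Constructor injection, exactly as CAB would do it at runtime
        StubProjectListView view = new StubProjectListView();
        ProjectListViewPresenter presenter = new ProjectListViewPresenter(new StubLookupService());
        presenter.View = view;

        presenter.OnViewReady();

        Assert.AreEqual(1, view.Items.Count);
    }
    ```

    The point is that neither the stub service nor the stub view touches CAB, ObjectBuilder, or a database; the presenter's behaviour is tested in complete isolation.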

  • 52 Game Ideas from a Game Designer

    I don't know how I missed this. Granted, being the Renaissance developer that I am, I dabble in everything. SharePoint, Agile, Scrum, CAB, Mobile, Games. You name it and I've either written software for it, or want to. It's been a long time since I did full-time game programming, but I do follow it these days and recently have been haunting the halls of the XNA world since I moved my XBox to my office so I could try out some console programming.

    One of my favorite games of all time (at least of those created in the last 5 years) is Stubbs the Zombie. It's brilliant in that it lets you be the zombie and go off and eat brains. I'm all for shooting zombies in the head à la George A. Romero and going all Tom Savini on the screen, but to be the zombie. Man, that stuff just writes itself.

    So it was a pleasant surprise to find that the lead designer of the game, Patrick Curry, has a blog. What was even more surprising is that last year he posted a new game idea. Every week. Yup, 52 new game ideas that he just threw out there for all to see. The list is quite impressive and shows this guy thinks out of the XBox. It's a fun read, with each entry giving the high-level concept, the platform it's intended for, a description, and some thoughts. The community responses are interesting as well, as people come up with extensions to Patrick's ideas. Anyway, if you're into this stuff or have nothing better to do on your Saturday afternoon, then check out the list here.

  • Using MSBuild with Smart Client Software Factories

    The Smart Client Software Factory (SCSF) is an awesome tool. It comes in the form of a guidance package from the patterns and practices guys and kicks off your initial Smart Client app with various services, several projects, and a shell application, all built on top of the Composite UI Application Block (CAB).

    I have found one problem with the current version of SCSF and that's when you generate the initial solution and try to build it using MSBuild. Create a solution using the factory and try building the .sln file with MSBuild. You'll get a host of errors about projects referencing projects that don't exist. Here's some sample output:

    SmartClientSolution1.sln : Solution file warning MSB4051: Project {90BC9A2E-DF32-4D50-AB7A-2967B8F5D8D9} is referencing a project with GUID {BE39A9ED-D4C6-42E7-91D6-63D9B1D185C6}, but a project with this GUID was not found in the .SLN file.

    I believe this might be because the .csproj/.sln files are generated before the GUIDs are. It isn't a problem in the IDE because the IDE references projects by relative file path, but when you try to build the solution using MSBuild (like via an automated build server) the build fails.

    Just to clarify: the GUIDs in the solution file and .csproj files are correct. However, where each project references another, the .csproj contains both a reference location and a GUID, and it's that GUID that's incorrect.

    Here's the section in each .csproj I'm referring to:

    <ProjectReference Include="..\Infrastructure.Interface\Infrastructure.Interface.csproj">
    <ProjectReference Include="..\Infrastructure.Library\Infrastructure.Library.csproj">

    You can fix this without a problem. Just open up each .csproj file and, in the ItemGroup section, paste in the correct GUID for each project it refers to, taken from the original .sln file. The references that need to be fixed are:

    • Infrastructure.Library referencing Infrastructure.Interface
    • Infrastructure.Module referencing Infrastructure.Interface
    • Shell referencing Infrastructure.Interface
    • Shell referencing Infrastructure.Library
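    For example, a fixed-up reference block might look like this (the GUID below is a placeholder; paste in the actual one the .sln file assigns to Infrastructure.Interface):

    ```xml
    <ItemGroup>
      <ProjectReference Include="..\Infrastructure.Interface\Infrastructure.Interface.csproj">
        <!-- Must match the GUID the .sln assigns to Infrastructure.Interface -->
        <Project>{00000000-0000-0000-0000-000000000000}</Project>
        <Name>Infrastructure.Interface</Name>
      </ProjectReference>
    </ItemGroup>
    ```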

    Once you've updated the GUIDs in the .csproj files, you'll be good to go for automated builds of your CAB projects. I've logged this as an issue here on the new CodePlex site so hopefully they'll get to fixing this as it was a real pain to find.

  • My Top 10 Films of 2006

    Seems like everyone does this every year, and a friend asked me what mine were. Here they are:

    1. The Departed
    2. El Laberinto del Fauno (Pan’s Labyrinth)
    3. Cars
    4. Pirates of the Caribbean
    5. Letters from Iwo Jima
    6. United 93
    7. The Illusionist
    8. The Prestige
    9. The Da Vinci Code
    10. Superman Returns

    You could question Superman Returns, but I really liked the homages Bryan Singer did as a nod to Richard Donner and his amazing version. Cars really did make me feel good, Pirates was a good escape, and despite complaints about how off The Da Vinci Code was, it was still fun. However, Scorsese really nailed it with The Departed. Oscar nods to him and some of the cast. These are only films I've seen; if I had gotten out to see A Scanner Darkly, it might have bumped something like The Illusionist or The Prestige.

    So what's your 10?

  • Development mottos to keep the team spirit alive

    Today we were talking over some things as we start up the next sprint (sprint #4, 1 maybe 2 to go!). The conversation turned to the fact that one guy on the team was away on holiday, but he was the "report guy" and currently one report was causing problems. Unfortunately we've run afoul of the code ownership problem that I really wanted to avoid, namely being dependent on one resource for one slice of the application. Not a good place to be in and something you want to avoid. As we were chatting, this set of syllables came streaming out of my mouth:

    "If all you do is what you know, you'll never grow."

    Plain, simple, and short. Now maybe I heard this somewhere before, but after saying it, it made me think (and laugh) and we all had a good chuckle over it. However, it really does ring true. Sure, everyone has individual skills, and some people are better at database programming vs. UI design vs. domain modeling, but in the end a well-oiled team needs to be able to respond to any problem. In any module. In any area of the code. While you might be great at writing the same thing over and over again, it isn't challenging, let alone pushing any kind of thought-provoking edge, so how do you expect to move with the times?

    We wrote the quote on the whiteboard and it resonated. It's one of the mantras we're adopting to always remember not to paint ourselves into a corner, not to be dependent on any one resource, and to tread into those uncharted areas of your capacity. So get out a little bit, explore your code, and if you've never written a view using MVP, now's as good a time as any to learn.

    Later in the conversation we came up with another one (we thought we were on a roll). It started with the U.S. Army's "Be all that you can be" motto and became this:

    "Test all that you can test."

    Yeah, a little corny but then my co-worker Dale came up with this widget:

    "If you don't write tests... you're dead."

    It was getting later in the day and things started to go downhill but walk with me on this one. We laughed at it but then combined the two statements:

    "Test all that you can test and if you don't write tests you're dead."

    While it may sound a little chilling and morbid for a software development project, after we wrote it on the whiteboard it just clicked. The team would be dead in the water if we didn't write tests. We're currently facing new functionality, dealing with some bugs coming up, and doing lots of fairly aggressive refactoring. You can't successfully refactor parts of your codebase if you don't know what it does or what the downstream impact is, so when you come upon muddy waters where there are no tests, our mantra is to write one, then move on. While we do have about 70 good tests (after 3 months of development) we could have more, and we've been procrastinating on writing more tests (especially for the presenters) for a long time. Pressures from the users, not enough time, blah, blah, blah, blah, blah. So with this little motto and some team reinforcement, at the Scrum when someone marks something as "Done" we'll ask if they have tests for it. If not, we're not going to consider it done. Hopefully that'll get us back on track and be productive at the same time.

    Finally, later today we had a technical overview and discussion of a new Smart Client project I'm leading up. It was the usual wallow through all the tools, techniques, technologies, process, and standards that we're going to do (with a load of the usual acronyms [CAB, SCSF, TDD, DDD, BDD, FIT, etc.] all thrown up on the whiteboard for discussion). About halfway through as we were talking about testing and specialization of code, I threw the two statements up on the whiteboard again. 

    "If all you do is what you know, you'll never grow."

    "Test all that you can test and if you don't write tests you're dead."

    A completely different team with varying levels of experience on the various tools and processes, yet everyone agreed with what we had written. Yup, that's what we have to do for this project to be successful (well that plus deliver a quality solution to the customer).

    Anyways, just something to think about. Even if something sounds corny it's a morale builder and an item to talk about. Remember to keep these little team building exercises fun and light and hey, who knows, you might come up with a little motto for your team to keep things interesting!

  • Handling Multiple Environment Configurations with .NET 2.0

    Dealing with connection strings and configuration information is something developers do every day. The problem is compounded when you have to worry about local database connections, QA/Test deployments, and finally your release into production. Luckily with .NET 2.0 configuration is easier and the ability to externalize configurations combined with a little craftiness on your part will make builds easier to deal with.

    Let's say you have 4 environments to deal with. Your local machine, an automated build server, a test deployment for QA to do user acceptance testing, and the production release. Each one of them has different needs and will access information like database connections in a variety of places. By default when you create a new project (any project) in Visual Studio you get two configurations, Debug and Release. Debug has symbolic debugging turned on, Release doesn't (plus there are some other optimizations but we don't care about those right now).

    In 2.0 application configuration is easier with a little something like this:

    <connectionStrings configSource="ConnectionStrings.config"/>

    Rather than having our connection string all spelled out in App.config or Web.config, we can redirect it to a completely separate file. As the connection strings may change from environment to environment, you don't want to have to edit this file manually, especially if it's being checked into source control and you're always changing it. That means the next time your buddy pulls down the file he'll have whatever change you made (maybe after a deployment to Test) and have to change it back. Worse yet, you might forget to edit it during a deployment (there's no way to force you to) and the next thing you know you've deployed your Smart Client app with a connection to "localhost". Not cool.

    So here's a strategy that might work for you. Create a config file for each environment you have then during a pre or post build step (doesn't matter which) in your main forms build you can copy the appropriate file as needed. This way your App.Config or Web.Config file doesn't change but the right settings are picked up from the file(s) you need.

    First create the configurations you need. This is done by right-clicking on the Solution and selecting Configuration Manager. Then, in the "Active solution configuration" drop-down, select "<New...>" to create them. You can stick with the default Debug and Release configurations if you want, but I find "Debug" isn't very informative, as I might want a debug version both on my local machine and on my automated test or shared dev server. Here's a configuration I use that might suggest something that works for you:

    • localhost - Based on Debug but indicates to me I'm building and running locally
    • AutomatedBuild - Based on Debug or Release (your choice) but used for the automated build server. For example you may not want the automated build to deploy any web or report projects so it's good to have a new configuration here.
    • Test - Based on Debug and used for your testers. You might want to base this on the Release configuration (depending on how savvy your testers are) and you might want to name this something more appropriate to your environment (like QA or something)
    • Production - Based on Release and meant to be the build you deploy to your customer

    Note these are all new configurations rather than renamed ones. Why? If you simply rename a configuration that's done at the Solution level. The project configuration names in each project will still be Debug and Release so it's confusing when you're setting up what configuration of a project is matched against the build. Trust me, just create new configurations and you'll be fine.

    Now that you have your configurations set up, create a new file for each configuration. This will hold the connection strings for that environment. Name them using the following convention:

    <ConfigurationName>.ConnectionStrings.config
    For example if you had localhost, Test, and Production as your configurations you would have the following new files in your main project:

    • localhost.ConnectionStrings.config
    • Test.ConnectionStrings.config
    • Production.ConnectionStrings.config

    The contents of each of these files is dependent on the environment but will look like this:

    <connectionStrings>
        <add name="LocalSqlServer"
            connectionString="Data Source=localhost;Initial Catalog=DatabaseName;User Id=sa;Password=password;"
            providerName="System.Data.SqlClient" />
    </connectionStrings>

    Finally to make it all work, go into your main project and setup a pre or post build event that looks like this:

    copy "$(ProjectDir)$(ConfigurationName).ConnectionStrings.config" "$(ProjectDir)$(OutDir)ConnectionStrings.config" /Y

    What does this do? It grabs the configuration file for the appropriate build and copies it to your output directory with a common name. That name is what's referenced in your App.Config or Web.Config as shown above.

    Now when you build your system, it will copy the appropriate file with the correct settings for each configuration, but nobody needs to edit the main configuration file as it always references a common name (ConnectionStrings.config). Cool huh?
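    At runtime your code reads the common file transparently through the configSource redirect. A minimal sketch (the section and key names match the sample files above; you'll need a reference to System.Configuration.dll):

    ```csharp
    using System.Configuration;

    public static class Database
    {
        public static string GetConnectionString()
        {
            // The runtime follows the configSource redirect to
            // ConnectionStrings.config automatically; code never needs
            // to know which environment's file was copied in.
            ConnectionStringSettings settings =
                ConfigurationManager.ConnectionStrings["LocalSqlServer"];
            return settings.ConnectionString;
        }
    }
    ```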

    Note you can use this for a variety of sections in config (there are restrictions). Check the API documentation on what you can and can't externalize, but connection strings and app settings are among them, which is good enough for 90% of the universe.
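    For instance, app settings can be redirected the same way (the file name here is arbitrary):

    ```xml
    <appSettings configSource="AppSettings.config" />
    ```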

    Of course, after all this is said and done there is another way to handle your environment issues. Just use NAnt or MSBuild, but then that's a whole 'nuther blog post isn't it?


  • Back to the Future

    Many bloggers these days are writing up their New Year's plans: what they're going to accomplish this year. Sounds like a good idea, but I wanted to put a bit of a spin on it. Rather than write out a detailed, technical list of tasks I want to do, I thought it would be more effective to write a future letter to myself.

    Here's how it works. You write up an informal letter to yourself, to be read by yourself this time next year. In it, you state in past tense what you accomplished the previous year (2007). This time next year, you open the letter, read the email, visit the blog entry, whatever and see how you did. So rather than have a sticky list of tasks, you've got a friendly view of what you accomplished (or set out to accomplish) which hopefully brings more meaning to the meat of things.

    As this is my first venture of this type, next year's letter will be more detailed and have more meaning than the first time around, but overall I think it's a better approach to the whole blogging New Year's Resolution thing that's going around now.

    Okay, so here's mine.

    Dear Bil,

    Congratulations on finally getting the MOSS 2007 version of the SharePoint Forums out there (and killing off those nasty bugs people have been reporting). It's your most popular SharePoint tool to date. Following up with the forums, you did a good job with the Knowledgebase as many people found value in that (and you were able to reuse a lot of code and concepts between the two). And the trifecta was releasing SharePoint Builder, which had been lingering for a while but proved to be a valuable tool for the community.

    You finally got that new SharePoint community out there that needed to exist, and it was a great launch, with updates every few months to keep the community's needs met and fresh. All this work was not without benefits, as in April you were re-awarded the MVP award in SharePoint for the 4th year running.

    It was a busy year of travelling as you made your way to the MVP Summit in March, got at least one conference in during the year, and then there was PDC. While you didn't present or speak there, it was an important event nonetheless, and you hooked up with a lot of people and showed off what was to come in 2008 with SharePoint. You did manage to round out the year with at least one webcast or user group presentation each month on something new in SharePoint, Team System, Agile, or Scrum.

    Finally, while you were a blogging monster creating many entries all with some content to offer the community, you did find time for the family as that always does come first. Congrats and here's to a great 2008!

    Until next year, we'll see how close to the mark I came.

  • SharePoint Builder now setup on CodePlex

    Yeah, I'm a CodePlex junkie. First the forums, then the knowledgebase, now SharePoint Builder. SharePoint Builder, my red-headed step-child of a project that spawned from a cigarette conversation with AC (and others, they were doing the smoking) somewhere in Redmond.

    SharePoint Builder

    I've set up SharePoint Builder on CodePlex and it will be released under the same license as the other projects, namely the Creative Commons Attribution-ShareAlike 2.5 license. Basically, do what you will with the tools. Credit me for any derivative works and feel free to use it commercially. I'm pretty easy.

    As mentioned before, SharePoint Builder (very different than the image above right now) is built on top of .NET 2.0 and C# (code will be available at release) and uses the Composite UI Application Block (CAB) as well as the Enterprise Library and a few other Microsoft goodies. You'll need the Guidance Automation Extensions and Smart Client Software Factory installed for the source code release.

    No commercial libraries will be used, but I would really like to try building it using a Ribbon control as the paradigm fits well, however I don't want to tie anyone to a library that they have to pay for. Not sure what I'll do there as there are currently no free, open source Ribbon implementations and I'm not about to build one myself. 

    No date on release yet as I'm wrapping up the forums and knowledgebase now along with another project I promised someone but will be doing small commits of the source over the next few weeks to get things into the system.

    Also I'll be setting up the release on a public web server for ClickOnce installs in case you don't want to download and build it yourself but rather just use it.

    Hey, I told you it was going to be a busy year. Second day and I've already done 8 posts, tagged 5 people, and updated 3 ongoing projects. Just wait until tomorrow :)

  • SharePoint Forums and Knowledgebase Release Date Updates

    Just wanted to let everyone know I've updated the plan for the release of the SharePoint Forums and SharePoint Knowledgebase Web Parts. These are the dates as I can commit to them now (crazy holiday schedule and stuff that just prevented me from getting the job done):

    SharePoint Forums v2.0.0.0

    • Prod: Feb-12-2007

    SharePoint Knowledgebase v1.0.0.0

    • Beta: Jan-29-2007
    • Prod: Feb-26-2007

    Not a lot of information on the Knowledgebase Web Part yet (other than this post). Feel free to log your own features/enhancements in the Issue Tracker yourself and vote to shape the feature list!

  • Vote for your most wanted issue on the SharePoint Forums Web Part

    The guys at CodePlex updated the site some time ago, implementing a "digg"-like feature where you can vote for an issue and bump it up in priority. It's really slick and helps keep what's important to you in sight. So if you get a chance to visit the SharePoint Forums Web Part site on CodePlex here over the next while, take a look through the Issue Tracker and vote on your favorites. The tracker is for bugs but also includes new features and enhancements.

    Now I just need to bust my butt to get this (and other things) done asap (like the long overdue 2007 update and some bug fixing that has to be done).

  • Five degrees of separation

    There's a tagging game going on in the blog-o-world. I traced the origins (I think) back to Jeff Pulver here. Basically someone tags you and you have to come up with 5 things about yourself that relatively few people know. You then go and annoy the crap out of 5 of your friends who blog to do the same. Glenn Block tagged me. Yeah, it's 2007 and we're practicing digital chain-letters, but oh what the heck, it's almost January 2nd and I still haven't finished my blogging day just yet.

    Here's 5 completely silly things you may (or may not) know about me (or care):

    1. I used to be big in graphic design, doing movie posters (for movies that would barely show up on IFC), animation (for films I'd rather not admit I was part of), and drawing comic books (indy back when indy was cool; never did the big Marvel or DC titles).
    2. I was part of a special effects studio based out of Oakville, Ontario where I did matte paintings and special effects. We won an award one year from Much Music for the effects on a music video I did work for.
    3. I drive the biggest, gas-guzzling vehicle known to man (a Dodge RAM 1500 Megacab) and I'm proud of doing my part for the environment (I also have a Suzuki SUV, so I generally take up the slack for the gas quotient per person in Alberta). Go ahead naturalists, have at me!
    4. I've never broken a bone in my body but did have my appendix removed a few years ago (just before it was about to explode in a rather Stubbs the Zombie like move).
    5. My name "BIL" is spelled this way because back in the 80's when video games and quarters were all the rage, the games only supported entering three letters when you got a high score. I got a lot of high scores and would only be able to spell "BIL" instead of "BILL" (curse those Atari programmers!). So it just stuck.

    There. Tag. You're it. I ceremoniously tag the following people that have crossed my path in the past and will now pay for it, or face the wrath of the broken tagging game:

    • Jean-Paul Boodhoo - Developer craftsman extraordinaire, in any language or tool
    • James Kovacs - Intense 64-bit .NET guy and recent Architecture MVP awardee
    • Kate Gregory - Awesome C++ skills and a fellow Canuck
    • Joel Semeniuk - My Winnipeg Team System guru
    • Keith Richie - Former Microsoft SharePoint guy, new Mindsharp SharePoint guy

    Go forth you uber-geeks and thou shalt write 5 things about yourself, not 3, and definitely no more than 5, and then tag 5 other people with the same fate.

    Who knows, maybe someone can make some money from this silly game then we can talk again. Here's a thought, someone could build a diagram of everyone who tagged everyone else. Then we'll see how close me and Mr. Gates really are.

  • SCSF Community has moved

    One of my favorite geek communities (if you want to call it that) is the group out of the Patterns and Practices guys who make the Smart Client Software Factory. This is a great package built on top of the Composite UI Application Block and created using the Guidance Automation Toolkit, providing an excellent way to get your Smart Client projects off the ground (I've started 3 enterprise applications with it so far).

    They've moved their site over to CodePlex (hurray!). I really detest (read: hate, scorn, abhor, loathe, spite) GotDotNet and its workspaces. I mean, every time someone sends me a link it's a crapshoot whether or not I'll even get to the site by following it. Links on GDN are tied to Windows Live IDs or something, but when I go to them I end up at some basic page and have to reload it to actually get to the real destination after Windows Live logs me in (or something silly like that). The search navigation is confusing, and half the time I get lost and end up somewhere I shouldn't be. Yeah, in a word, I hate GDN and am very happy these guys are moving to CodePlex.

    The new site actually covers the SCSF, CAB, and a few other blocks like the Updater block (which needs to be updated to work with SCSF, for example). It's nice to have everything together in one spot and I'm sure they'll add more as they grow. They have locked the forums on the old GDN site (which explains why I'm not getting any new feeds) and have committed to moving all the content over (code, forums, etc.), which is a nice touch for finding information (the search on CodePlex actually works).

    You can find them on their new CodePlex home here.

  • Keith, not Dennis

    Ritchie. It's a name in the geek world that has deep-rooted meaning for those of us old enough. Dennis Ritchie created the C programming language way back when (and wrote the book on it with Brian Kernighan).

    Anyways, long time SharePoint guy Keith Richie, not to be confused with Dennis (but it can be confusing because Keith; Kernighan; Ritchie; Richie; get it?)  announced he would be leaving Microsoft and heading over to the Mindsharp world. Wow, that group just keeps getting bigger and better with Todd, Bill, AC, and now Keith.

    Keith is the most excellent author of the SharePoint Utility Suite which I'm sure all of you use on a daily basis (I do).

    Congrats to Keith. Microsoft is losing a great resource, but there's balance in the SharePoint universe as he's still out there and will hopefully be producing great tools as usual.

    You can find his new blog here, with his announcement of his old-busted MS departure and new-hotness Mindsharp arrival here.

  • Stats, stats, stats and more stats

    What else would you do on a blog in the new year than post stats? Here's what this blog's 2006 roundup looked like:

    Number of posts in 2006: 262
    Number of views on all posts: 1,184,018

    Over a million views for the year. Not too shabby. 

    Note: Views are both aggregate views (RSS readers, etc.) and web page views so there might be some duplication but I'm not posting stats for correctness here, just popularity of posts.

    Top 10 Posts:

    1. Folders bad, metadata good! (24,112 views)
    2. DotNetNuke vs. SharePoint, the big showdown (20,544 views)
    3. The Big Dummies Guide to setting your SharePoint Virtual Environment (17,560 views)
    4. Tired of SharePoint Discussions? (announcement of SharePoint Forums Web Part - 11,253 views)
    5. SharePoint Forums, go get 'em (the release of the Forums Web Part - 8,751 views)
    6. 3-tier Architecture with ASP.NET 2.0 (8,509 views)
    7. Composite UI Application Block - Soup to Nuts - Getting Started (8,293 views)
    8. SharePoint Forums Web Part (8,141 views)
    9. The Lighter Side of being an Architect (8,051 views)
    10. SharePoint Forums Language Pack (7,342 views)

    While I'm not versed in the fine art of statistical analysis, it doesn't take an Acme anvil falling on my head to show me the SharePoint Forums Web Part was quite a popular topic. It's something (along with other SharePoint Web Parts) that I need to focus on early in the year so I can finish off the remaining work (that 2006 baggage I was talking about earlier).

    So not a bad year for me anyways. I'm no Oren Eini (aka Ayende, this guy is a blogging engine unto himself), Hanselman, or Osherove who post like gangbusters and every post is a gem, but I'm happy with the activity I've had and I hope you've had fun reading it all. Here's to a new year of posts that will hopefully enlighten, entertain, and enrage you to no end.

  • Anonymous delegates for event handling

    Let's start off the new year with a question. Traditionally, here's how you would code up an event handler for, say, a button on a Windows form (ignore the fact there are presenters and such and how they got initialized).

        protected override void OnLoad(EventArgs e)
        {
            btnClose.Click += new EventHandler(btnClose_Click);
            _presenter.OnViewReady();
        }

        void btnClose_Click(object sender, EventArgs e)
        {
            _presenter.OnCloseView();
        }

    That's all well and fine. The form loads, the event handlers are registered, and the view is called. There's a method for each event (named ControlName_EventName by default), so you might have dozens of additional methods in your view code depending on how complex the form is. With .NET 2.0 we can use anonymous delegates, so the code above can become this:

        protected override void OnLoad(EventArgs e)
        {
            btnClose.Click += delegate { _presenter.OnCloseView(); };
            _presenter.OnViewReady();
        }

    Just set up the delegate in the load method for the form. The same rules apply: the presenter method will get called when the button is clicked. I like the second approach in that it's cleaner and I don't have to clutter up my view code with lots of methods that only pass through to a presenter. Even with dozens of events on dozens of controls I would only have one line for each in my OnLoad method. It's not just the reduced line count but the readability that's key here.
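    One caveat worth noting (my own aside, not something from the code above): an anonymous delegate has no name, so you can't later unsubscribe it with -= unless you keep a reference to it. If your view ever needs to detach its handlers, a minimal sketch looks like this (EditView, EditPresenter, and DetachHandlers are hypothetical names for illustration):

```csharp
using System;
using System.Windows.Forms;

// Hypothetical presenter, standing in for whatever MVP presenter the view uses.
public class EditPresenter
{
    public void OnViewReady() { }
    public void OnCloseView() { }
}

public class EditView : Form
{
    private readonly Button btnClose = new Button();
    private readonly EditPresenter _presenter;
    private EventHandler _closeHandler; // keep the delegate so -= can find it later

    public EditView(EditPresenter presenter)
    {
        _presenter = presenter;
        Controls.Add(btnClose);
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Store the anonymous delegate in a field; a bare
        // "btnClose.Click += delegate { ... };" could never be removed afterwards.
        _closeHandler = delegate { _presenter.OnCloseView(); };
        btnClose.Click += _closeHandler;
        _presenter.OnViewReady();
    }

    private void DetachHandlers()
    {
        btnClose.Click -= _closeHandler;
    }
}
```

    If a handler is fire-and-forget for the lifetime of the form, the inline delegate is fine as-is; the field is only needed when you care about unhooking.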

    So any thoughts on these two approaches? Preferences? Ideas? Comments? Small unregistered marsupials?

  • First Post!

    It's officially 2007 here in Calgary, so here's my first post for the year. It's kind of weird, as I've been thinking it was 2007 for the last few weeks, but the end of the year always slows down into a Christmas, shopping, family, and partying focus. No partying for me this year (maybe I've been around the Sun too many times for that), and I'm starting off the year with a backlog of things I need to get done from last year. On top of that I'm headed to the office this morning so I don't get slammed tomorrow with everyone sitting on their hands waiting for the ScrumMaster to finish organizing the tasks, as I have 3 Scrums on the go, 2 projects where I'm serving as lead Architect, and a plethora of other tasks in my inbox.

    I'm looking at it as a challenge and something that I'm going to share with you guys every step of the way by moaning and complaining about every little thing. It'll be fun, there'll be a lot of great discoveries along the way, and there are plenty of awesome technologies, tools, and projects coming from yours truly, so sit back, relax, and watch the screen.

    Yeah, it's going to be a busy year.