Archives

  • Pair Programming

    Today was a great day! After spending most of yesterday and this morning chasing down a deployment bug that turned out to be a mis-registered COM object (it worked in DEV, but not in QA), I spent the rest of the day pair programming with a new employee. We decided, as part of his orientation, that we should develop an extension to ConfigurationSettings.AppSettings that is aware of the environment the code is running in. This has been a desired feature for some time now, as we spend too much time twiddling config files after deployment. After consulting with a couple of other developers on the design we plowed ahead, test-first.
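    Roughly the shape we have in mind is a thin wrapper over AppSettings that resolves keys per environment. The sketch below is only an illustration of the idea, not the code we wrote; the class name, the "Environment" key and the key-prefix convention are all placeholders.

        using System;
        using System.Configuration;

        // Hypothetical wrapper around ConfigurationSettings.AppSettings that
        // resolves settings per environment (DEV, QA, PROD). Names are illustrative.
        public class EnvironmentSettings
        {
            // Which environment we are running in is itself read from one well-known key.
            private static readonly string CurrentEnvironment =
                ConfigurationSettings.AppSettings["Environment"]; // e.g. "DEV", "QA", "PROD"

            public static string Get(string key)
            {
                // Prefer an environment-specific entry such as "QA.ConnectionString",
                // falling back to the plain key if none is defined.
                string value = ConfigurationSettings.AppSettings[CurrentEnvironment + "." + key];
                return value != null ? value : ConfigurationSettings.AppSettings[key];
            }
        }

    A call site then just asks for EnvironmentSettings.Get("ConnectionString") and gets the right value whether it is running in DEV, QA or production, with no post-deployment twiddling.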

  • Shuttle

    Like many I am saddened by the loss of the crew of Columbia. It is one of those defining tragedies in the collective memory. I still remember the exact place I heard about President Reagan being shot, the Challenger explosion, 9/11 and now the Columbia.

  • More nAnt

    I was playing with nAnt today to try to auto-generate version/build numbers so I wouldn't have to keep track of them. Luckily (via Google) I came across John Lam's nifty version task. Just an aside for those wanting to use John's task or write your own: the naming convention for task assemblies is "*Tasks.dll" - note the plural. If you don't name your task assembly correctly (and place it in the bin folder), nAnt won't recognize it.
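    For anyone who hasn't written one, a custom task is just a small class; the sketch below is from memory, so treat the namespaces and base-class details as assumptions (they have shifted between nAnt releases). The point is the [TaskName] attribute plus the "*Tasks.dll" assembly name.

        using NAnt.Core;            // may be SourceForge.NAnt in older releases
        using NAnt.Core.Attributes;

        // Minimal custom task. Compile into an assembly whose name ends in "Tasks.dll"
        // (e.g. MyCompanyTasks.dll) and drop it in nAnt's bin folder so it gets found.
        [TaskName("hello")]
        public class HelloTask : Task
        {
            protected override void ExecuteTask()
            {
                // A real task would use nAnt's own logging; Console is enough for a sketch.
                System.Console.WriteLine("Hello from a custom task");
            }
        }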

  • A New News Aggregator

    NewsGator is a great tool for keeping up with all the interesting blogs out there. It is a News Aggregator that integrates into MS Outlook which is my primary communications tool. I was using Aggie - a nice tool in itself, but it didn't fit into my day as seamlessly as NewsGator. Many thanks to Greg Reinacker for creating this great tool.

  • Better Builds

    Did a complete deployment in 40 minutes today. World record!

  • Learning nAnt

    I've volunteered to be the build master/coordinator for the last 2 iterations of our web product. As we've gotten more sophisticated we've pushed some of the functionality off to Windows Service-based applications. This has the net effect of quadrupling the deployment headache we've been fighting for the last few months.
    To address this headache with a little automation aspirin, I turned to nAnt based on tons of positive feedback from others who use it and on the little I used Ant in the past. My first frustration is that the documentation is nearly useless from a beginner's perspective. It assumes you know everything and just need to look up the attribute for this or that. Once I figured out what was happening from the source code (the blessings of open source), things began to fall into place fairly quickly.

  • Abandoning the blow by blow

    I've decided to abandon the day-by-day account (obviously) since the teams are in a nice flow now. Instead, as interesting things or events happen, I'll write about them.

  • Iteration #2 - Day #3 Tuesday 1/7/2003

    The continuous integration issue raised its head again in the form of deployment issues. Simply put, our current process stinks: it takes over an hour to create the build and 3-4 hours to deploy to Dev and QA, and that is a simple website + database deployment - it doesn't include any other services. The first order of business was to automate the website build process and dedicate a build machine and build master. After fiddling with nAnt for a while I discovered that the devenv.exe command lets you build a solution without having to redefine it in nAnt, a big bonus. I then created an installation (deployment?) project for Dev and QA. I only created the QA install project to support our current shared Dev/QA web server; that will be changing soon so that Dev and QA have completely separate environments. The end result is we can build a Dev installer for the website in about 5 minutes now. The tasks remaining are automatically getting the latest source from VSS and labeling the source. I'd like to automate the VS IDE since it has its own ideas about which files belong to a project and how that is integrated into VSS. Eventually I'd love to have automated version control, builds, unit tests, functional tests (FIT) and deployment. We'll see.
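    For the record, the devenv invocation is a single command (the solution name and configuration here are placeholders for ours), which nAnt can simply shell out to via its exec task:

        devenv MySolution.sln /build Debug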

  • Iteration #2 - Day #2 Friday 1/3/2003

    I had a very interesting conversation with our customer and the QA team today. Most of it centered around communications between the customer, developers and QA. The issue that started the whole conversation was that the customer and a developer discussed a story and agreed to build it a certain way. At the completion of the iteration one of the features was not implemented the way the customer remembers specifying it. However, since there are no formal requirements documents, he doesn't have anything to point at to say "you missed this." To further complicate matters, the omission wasn't noticed until the next iteration had started, causing the customer to wonder if he could insist that the omission be corrected even though it wasn't officially part of the current iteration.

  • Iteration #2 - Day #1 Thursday 1/2/2003

    We slipped into our second iteration with ease today. The processes are starting to feel normal for the team and we are getting the daily heartbeat. Soon we should get into the iteration heartbeat as well. We spent some serious time getting our DBA up to speed today (vacations and other tasks have kept him out of the loop until now, something that we should have addressed sooner but didn't). We also had a good discussion on how we are going to incrementally add unit tests to our existing code base and add the necessary helper functions to test code that currently requires lots of database setup because of the existing architecture (to be changed as we touch that code some day).

  • Iteration #2 - Day #-1 Tuesday 12/31/2002

    Today we had a major deployment issue. It took 2 of us all day to find a strange set of "bugs" that were a result of not getting a couple of stored procedures into our dev and QA builds. Afterwards we all sat down to see how we could avoid this kind of thing in the future, especially for production releases. Here is what we came up with:

  • Iteration #2 - Day #-2 Monday 12/30/2002

    Another interesting fact came out of our discussion with our customer. He was feeling nervous about progress since our integration was late and he couldn't determine the true state or quality of the project. We determined that our binary approach to story completion didn't reflect what people needed to know. So we changed our approach to list the number of test cases that passed and failed as a stacked graph with time on the X axis. This should give the stakeholders a better feel for how complete the iteration is.

  • Programming Fun

    I was talking to one of our developers last week and he made a comment that didn't strike me until yesterday. He said he was actually having fun again! This made me realize that by placing responsibility for tasks in the correct hands, everyone can better enjoy their jobs. I recall other projects where the development team was given high-level guidelines and left to it. Everything was fine until we got done with the "fun" stuff and were trying to make decisions about features and the like, where we didn't have the slightest clue. The developers were working on whatever they wanted, and not much was getting accomplished.

  • Iteration #1 – Day #10 Friday 12/20/2002

    Today was the last official day of the iteration. We didn't get everything signed off by our customer representative (QA), but we did complete all the technical tasks generated from the story. Additionally we got a few bonus features as people noticed that an hour or two would pull things together. I'm not sure this is the best thing to do since it wasn't on our official list, but since most of the developers know what features are desired and it wasn't blatant gold plating, we can let it slide. Our director and PM decided to slide the end of the iteration to 12/31/2002 to line up with another related project and because of the difficulty of having a 2-week iteration over the Christmas holidays.

  • Iteration #1 – Day #9 Thursday 12/19/2002

    Today was a regular working day. We came, we coded, we went home. We did make a couple of design decisions, and the design of a couple of components is changing as we better understand what we are building. I keep waiting for someone to complain about constantly changing interfaces, but people are just stubbing out the return results for now. I should check in to see what the tests look like. Mock objects are something we have on tap for a training brownbag and seem like a technique that could be useful.

  • Iteration #1 – Day #8 Wednesday 12/18/2002

    Yesterday our customer was (rightly) getting nervous about not having seen a build yet, so the team committed to producing a build today and followed through with flying colors. I'm still getting some questions along the lines of "why build when we know all the functionality isn't there yet?" If we can get the build process somewhat more automated I think these questions will go away. Our customer really likes being able to see incomplete functionality so he can make changes before we've committed a lot of time. QA also likes being able to see the product earlier in the cycle so they know their tests are accurate and complete, as well as giving earlier feedback. All in all, just the way agile should be working; we just need to straighten out some kinks.

  • Iteration #1 – Day #7 Tuesday 12/17/2002

    Brief status: bullpen space is ready, desktops are installed, servers built. Hooray! A couple of us actually worked in the "pod" as it is called, but no pairing yet. At this point it looks like this is not going to happen this iteration.

  • Self configuration of objects

    Mark Strawmyer has a nice article on using .NET attributes to assist the self-configuration of objects. It's an interesting approach, and one I'm sure I can apply in many areas as I think more about how to leverage attributes. In my current thinking (revolving around writing testable code) I'm having difficulties with the whole area of configuration. We are trying to write code that can have automated unit and acceptance tests (customer tests), but since the dev, QA, integration and production environments all have different configuration settings, one can't just grab everything out of VSS and run the tests (the goal) - the tests will break without the correct configuration files.
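    The general shape of attribute-driven configuration - sketched from memory rather than lifted from the article, with made-up names throughout - is to tag properties with the config key that should feed them and let a reflection helper do the rest:

        using System;
        using System.Configuration;
        using System.Reflection;

        // Marks a (string) property with the appSettings key it should be populated from.
        [AttributeUsage(AttributeTargets.Property)]
        public class ConfigKeyAttribute : Attribute
        {
            public readonly string Key;
            public ConfigKeyAttribute(string key) { Key = key; }
        }

        public class SelfConfiguring
        {
            // Walks the target's properties and fills any tagged one from the config file.
            public static void Configure(object target)
            {
                foreach (PropertyInfo prop in target.GetType().GetProperties())
                {
                    object[] attrs = prop.GetCustomAttributes(typeof(ConfigKeyAttribute), true);
                    if (attrs.Length > 0)
                    {
                        ConfigKeyAttribute attr = (ConfigKeyAttribute)attrs[0];
                        prop.SetValue(target, ConfigurationSettings.AppSettings[attr.Key], null);
                    }
                }
            }
        }

    A class would then declare something like [ConfigKey("SmtpServer")] public string SmtpServer {...} and call SelfConfiguring.Configure(this) in its constructor. It doesn't solve the multi-environment problem by itself, but it does pull the configuration plumbing out of the code under test.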

  • Iteration #1 – Day #4 Thursday 12/12/2002

    Brief status: no bullpen yet, hardware arrived - everything should be ready EOD Monday. Today I spent time addressing the need to share information without resorting to MS Word files in VSS (which never works since there are too many layers to go through to get the information).

  • Iteration #1 – Day #3 Wednesday 12/11/2002

    Another standup and all is still well on day #3. We still don't have our bullpen and the hardware hasn't been ordered. But in the tradition of "do the simplest thing..." we are moving forward.

  • Iteration #1 – Day #2 Tuesday 12/10/2002

    Another standup and all is still well on day #2. We still don't have our bullpen yet, but I am assured it will happen by tomorrow. Hardware has been approved and ordered and should be here by the end of the week barring any supplier shortages. The hardware we can work around until we get serious about processing our test data. In fact we are temporarily borrowing some servers from the dev lab and QA until the dedicated stuff arrives.

  • Iteration #1 – Day #1 Monday 12/9/2002

    We had our first daily standup today and everything went smoothly. Everybody has well-defined tasks to do to get the project started. Our bullpen facilities are still up in the air, but should be resolved soon. In any case not much coding is happening today; there should be more tomorrow. I am interested to see how the first rounds of pairing go.

  • Iteration #0 Friday 12/6/2002

    With everyone in place, we scheduled the first ever XP-style planning game. I should mention that I decided to use XP as a concrete place to start since much has been written about XP and the team could read up on it offline.

  • An Agile Preface

    The company had already touted the virtues of iterative development and automated unit testing. However, the implementation of these practices was sporadic at best and nonexistent at worst. Iterations were thinly veiled waterfall practices, with documentation updates starting to occupy large chunks of the iteration. Unit testing fared no better, with only a couple of developers actually writing tests.

  • Building an Agile Team

    Part of the reason I changed jobs recently was to be able to build an IT department based on agile principles. In order to better recreate the experience, I've decided to post my impressions: what worked, what didn't, and things I'd like to adjust on our next iteration.
    The company I work for now has an internal IT department that creates internal and customer-facing products. When it started, the company was full of cowboy coders and heroics. This gave way fairly quickly to a waterfall-ish style of process which, by the time I was brought on, was starting to overwhelm the department to the point of jeopardizing important projects, mitigated only by individual heroics.
    The Director of Engineering (a friend of mine) understood what was happening, but didn't have the experience or the time to personally lead the implementation of agile processes to bring his shop back on track. I did have some experience in leading agile teams and was ready to take on the bigger challenge this represented. After many hours of after-hours discussions, circumstances made it possible for me to come on full time and enact the proposed changes. One of the benefits of having the director work this change ahead of time was that by the time I came on board most of the political battles (executive-wise) had been fought. Now all that was left were the rank and file - a somewhat more challenging effort, but one I had dealt with before.
    With the background out of the way, on with the show!

  • Unexpected Data

    While evaluating a product that massages my data (very large quantities - think multiple terabytes) I was becoming frustrated with the fact that the product couldn't handle my sample data set. Even with the vendor rep on-site we never got the entire data set processed so we could evaluate what the product actually did. I didn't think this should be a big deal - the data set wasn't that large (a couple of megabytes of ASCII text) - but it was proprietary, so I couldn't send it off to have their techies pore over it to see why their product kept choking on it.
    I do remember my consulting days when applications would suddenly stop working and, after hours of debugging, we would find out that there was some unexpected data sending things out of kilter. In fact after a while I started looking for "bad" data first when presented with certain scenarios.
    It was all really brought home when evaluating a competitor's product, where everything worked right the first time on the same data set. The first product is more mature, has more features, more tuning options, etc, etc. But it doesn't work on my data, even with their assistance. Because they assumed that all the incoming data would be cleansed and perfect and they skimped on the ability of their product to handle unexpected data, they are going to lose out on licensing revenue (a lot of revenue).
    Defensive programming wins out over features.
    One of my peers today was pointing out a manual process that is a chokepoint. The manual process is there because some of our programs don't handle unexpected data very well. His now famous comment was "If we wrote better programs, we wouldn't have to do that". Amen.
    How does one go about writing "better programs"? We are embarking on a crusade to bring automated unit testing to all our projects. Part of this will include having the QA engineers teach our developers about writing good tests, including basics such as boundary testing, so that unexpected data problems tend to go away as a class of problems.
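    To make the idea concrete, here is the flavor of boundary test I have in mind, written NUnit-style against a made-up example class (none of these names come from our code base):

        using NUnit.Framework;

        // Made-up example class standing in for any code with a limit worth probing.
        public class OrderValidator
        {
            public const int MaxQuantity = 100;

            public bool IsValidQuantity(int quantity)
            {
                return quantity > 0 && quantity <= MaxQuantity;
            }
        }

        [TestFixture]
        public class OrderValidatorBoundaryTests
        {
            private OrderValidator validator = new OrderValidator();

            // Probe both sides of each boundary: 0/1 at the bottom, max/max+1 at the top.
            [Test] public void ZeroIsRejected()           { Assert.IsFalse(validator.IsValidQuantity(0)); }
            [Test] public void OneIsAccepted()            { Assert.IsTrue(validator.IsValidQuantity(1)); }
            [Test] public void MaximumIsAccepted()        { Assert.IsTrue(validator.IsValidQuantity(OrderValidator.MaxQuantity)); }
            [Test] public void OneOverMaximumIsRejected() { Assert.IsFalse(validator.IsValidQuantity(OrderValidator.MaxQuantity + 1)); }
        }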

  • More SOAP

    I relearned one of the reasons why we do RPC style architecture. I am working on a solution spike with an expensive (license and hardware requirements) piece of software and wanted someone else to play with it to get a sense of the data that was being manipulated. So I created an installer and used it to put the software on another machine, easy as pie. Of course it didn't work right away because all the 3rd party dependencies weren’t captured correctly.
    After futzing for about an hour, running back and forth between machines, everything was working fine. Except now another person wanted to play with it as well. I didn't want to kill an hour or more every time I revised the spike so my users could see what had been done.
    Luckily Thanksgiving came upon us and I was forced away from my machine for a couple of days, and clarity returned. My first thought was to bag it all and redo everything as a web page, but it seemed like wasted time to recreate all my previous GUI work and run into who knows how many snags just to save a couple of hours. Then I remembered I could have my cake and eat it too: deploy only my rich GUI (via .NET no-touch deployment) to keep my customers up to date, and put all the problem API calls and configuration files behind a SOAP interface on my development box where I could keep an eye on them.
    Now granted I haven't yet made this change, and I haven't decided if SOAP or .NET remoting is the best choice. Nevertheless I can see several advantages, not the least of which is that I can abstract my interface to the service I am building and (with SOAP) can trivially create proxy classes to manage the communications. Anything that saves me coding time is tops in my book.
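    To give a feel for how thin the service layer could be, here is a sketch of an ASP.NET (.asmx) web service; the class, method and behavior are placeholders for whatever the spike actually ends up exposing, and the real file also needs the usual .asmx directive pointing at the class.

        using System.Web.Services;

        // Thin SOAP wrapper that keeps the fragile third-party calls (and their
        // configuration) on my development box; the rich client only sees this interface.
        public class SpikeService : WebService
        {
            [WebMethod]
            public string GetProcessedData(string query)
            {
                return LookupFromThirdPartyApi(query);
            }

            private string LookupFromThirdPartyApi(string query)
            {
                // Placeholder for the real dependency-laden call.
                return "result for " + query;
            }
        }

    The client side stays cheap because wsdl.exe (or "Add Web Reference" in VS.NET) generates the proxy class, which is exactly the kind of coding time I don't want to spend.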

  • Integration

    The company where I currently work uses a large number of third-party components, which is not unusual for someone in our space. In fact it would be far too expensive to develop and maintain many of these components ourselves. The one unifying aspect of these components is that they are all ActiveX/COM based and play well with our development environments.
    However, we are looking at some new technology that is dependent on Java RMI, where the COM integration solution is to use the "java" moniker with the CreateObject method, e.g. Set o = CreateObject("java:Java.Util.Date"). Unfortunately this requires using the Microsoft JVM, which doesn't support RMI out of the box (there is a patch available from IBM). Also, MS is not maintaining the JVM, and Java 1.4 and above is not supported. The final nail is that object creation with monikers doesn't seem to be supported in .NET.
    Another product in the same space supports COM directly, but in testing with .NET the Interop seems to be leaking memory at an astounding rate, making it non-viable.
    Where to go from here? Our next solution spike is to investigate the "greatest application integration technology ever invented by man" - otherwise known as SOAP. The hope is to isolate the peculiarities of each product in an environment that it is compatible with, and additionally insulate ourselves from change a little.

  • Leaving blogger

    Blogger introduced me to weblogging, but I am ready to move on. Scott Watermasysk has kindly provided space on his .NET hosted solution. I should be moving most of my content from blogger in the next week.