Wayne Allen's Weblog

pragmatic agility

  • Iteration #1 – Day #3 Wednesday 12/11/2002

    Another standup and all is still well on day #3. We still don't have our bullpen and the hardware hasn't been ordered. But in the tradition of "do the simplest thing..." we are moving forward.

  • Iteration #1 – Day #2 Tuesday 12/10/2002

Another standup and all is still well on day #2. We still don't have our bullpen yet, but I am assured it will happen by tomorrow. Hardware has been approved and ordered and should be here by the end of the week, barring any supplier shortages. We can work around the missing hardware until we get serious about processing our test data. In fact we are temporarily borrowing some servers from the dev lab and QA until the dedicated machines arrive.

  • Iteration #1 – Day #1 Monday 12/9/2002

We had our first daily standup today and everything went smoothly. Everybody has well-defined tasks to do to get the project started. Our bullpen facilities are still up in the air, but should be resolved soon. In any case not much coding is happening today; there should be more tomorrow. I am interested to see how the first rounds of pairing go.

  • Iteration #0 Friday 12/6/2002

    With everyone in place we scheduled the first-ever XP-style planning game. I should mention that I decided to use XP as a concrete place to start, since much has been written about XP and the team could read up on it offline.

  • An Agile Preface

    The company had already touted the virtues of iterative development and automated unit testing. However, the implementation of these practices was sporadic at best and nonexistent at worst. Iterations were thinly veiled waterfall practices, with documentation updates starting to occupy large chunks of each iteration. Unit testing fared no better, with only a couple of developers actually writing tests.

  • Building an Agile Team

    Part of the reason I changed jobs recently was to be able to build an IT department based on agile principles. In order to better recreate the experience I've decided to post my impressions: what worked, what didn't, and things I'd like to adjust on our next iteration.
    The company I work for now has an internal IT department that creates internal and customer-facing products. When it started, the company was full of cowboy coders and heroics. This gave way fairly quickly to a waterfall-ish process, which by the time I was brought on was starting to overwhelm the department to the point of jeopardizing important projects, mitigated only by individual heroics.
    The Director of Engineering (a friend of mine) understood what was happening, but didn't have the experience or the time to personally lead the implementation of agile processes to bring his shop back on track. I did have some experience leading agile teams and was ready to take on the bigger challenge this represented. After many hours of after-hours discussions, circumstances made it possible for me to come on full time and enact the proposed changes. One of the benefits of having the director work this change ahead of time was that by the time I came on board most of the political battles (executive-wise) had already been fought. All that was left were the rank and file, a somewhat more challenging effort, but one I had dealt with before.
    With the background out of the way, on with the show!

  • Unexpected Data

    While evaluating a product that massages my data (very large quantities - think multiple terabytes) I was becoming frustrated that the product couldn't handle my sample data set. Even with the vendor rep on-site we never got the entire data set processed, so we could never evaluate what the product actually did. I didn't think this should be a big deal; the data set wasn't that large (a couple of megabytes of ASCII text), but it was proprietary, so I couldn't send it off to have their techies pore over it to see why their product kept choking on it.
    I do remember my consulting days, when applications would suddenly stop working and, after hours of debugging, we would find some unexpected data sending things out of kilter. In fact, after a while I started looking for "bad" data first when presented with certain scenarios.
    Everything was really brought home when I evaluated a competitor's product and everything worked right the first time on the same data set. The first product is more mature, with more features, more tuning options, and so on. But it doesn't work on my data, even with their assistance. Because they assumed that all incoming data would be cleansed and perfect, and skimped on their product's ability to handle unexpected data, they are going to lose out on licensing revenue (a lot of revenue).
    Defensive programming wins out over features.
    One of my peers today was pointing out a manual process that is a chokepoint. The manual process is there because some of our programs don't handle unexpected data very well. His now-famous comment was "If we wrote better programs, we wouldn't have to do that". Amen.
    How does one go about writing "better programs"? We are embarking on a crusade to bring automated unit testing to all our projects. Part of this will include having the QA engineers teach our developers about writing good tests, including basics such as boundary testing, so that unexpected-data problems tend to go away as a class of problems.
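    To make the idea concrete, here is a minimal sketch of the kind of defensive parsing and boundary testing described above. The `parse_record` function and its comma-separated format are invented for illustration; the point is that the parser rejects unexpected data explicitly, and the tests exercise the boundaries (empty input, missing fields, non-numeric values) rather than only the happy path.

    ```python
    import unittest

    def parse_record(line):
        """Hypothetical record parser: "name,count" -> (name, int_count).

        Written defensively: malformed input raises ValueError at the
        boundary instead of propagating bad data downstream.
        """
        if not isinstance(line, str) or not line.strip():
            raise ValueError("empty or non-string record")
        parts = line.strip().split(",")
        if len(parts) != 2:
            raise ValueError("expected exactly two fields: %r" % line)
        name, count = parts[0].strip(), parts[1].strip()
        if not name:
            raise ValueError("missing name field: %r" % line)
        if not count.lstrip("-").isdigit():
            raise ValueError("count is not an integer: %r" % line)
        return name, int(count)

    class BoundaryTests(unittest.TestCase):
        # The happy path...
        def test_wellformed(self):
            self.assertEqual(parse_record("widgets, 42"), ("widgets", 42))

        # ...and the unexpected-data cases: the class of problems that
        # choked the vendor's product.
        def test_empty_line(self):
            self.assertRaises(ValueError, parse_record, "")

        def test_missing_field(self):
            self.assertRaises(ValueError, parse_record, "widgets")

        def test_extra_field(self):
            self.assertRaises(ValueError, parse_record, "a,b,c")

        def test_non_numeric_count(self):
            self.assertRaises(ValueError, parse_record, "widgets, many")
    ```

    Run with `python -m unittest` against the module. Once every parser in the shop has a suite like this, the manual clean-up chokepoint has much less to do.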

  • More SOAP

    I relearned one of the reasons why we do RPC style architecture. I am working on a solution spike with an expensive (license and hardware requirements) piece of software and wanted someone else to play with it to get a sense of the data that was being manipulated. So I created an installer and used it to put the software on another machine, easy as pie. Of course it didn't work right away because all the 3rd party dependencies weren’t captured correctly.
    After futzing for about an hour, running back and forth between machines everything was working fine. Except now another person wanted to play with it as well. I didn't want to kill an hour or more every time I revised the spike so my users could see what had been done.
    Luckily Thanksgiving came upon us and I was forced away from my machine for a couple of days and clarity returned. My first thought was to bag it all and redo everything as a web page, but it seemed like wasted time to recreate all my previous GUI work and run into who knows how many snags to save a couple of hours. Then I remembered I could have my cake and eat it too by only deploying my rich GUI (via .NET no-touch deployment) keeping my customers up-to-date and putting all the problem API calls and configuration files on my development box where I could keep an eye on them behind a SOAP interface.
    Now granted I haven't yet made this change, and I haven't decided if SOAP or .NET remoting is the best choice. Nevertheless I can see several advantages, not the least of which is that I can abstract my interface to the service I am building and (with SOAP) can trivially create proxy classes to manage the communications. Anything that saves me coding time is tops in my book.

  • Integration

    The company where I currently work uses a large number of third-party components, which is not unusual for someone in our space. In fact it would be far too expensive to develop and maintain many of these components ourselves. The one unifying aspect of these components is that they are all ActiveX/COM based and play well with our development environments.
    However, we are looking at some new technology that is dependent on Java RMI, where the COM integration solution uses the "java" moniker with the CreateObject method, e.g. Set o = CreateObject("java:Java.Util.Date"). Unfortunately this requires using the Microsoft JVM, which doesn't support RMI out of the box (there is a patch available from IBM). Also MS is not maintaining the JVM, and Java 1.4 and above is not supported. The final nail is that object creation with monikers doesn't seem to be supported in .NET.
    Another product in the same space supports COM directly, but in testing with .NET the Interop seems to be leaking memory at an astounding rate, making it non-viable.
    Where to go from here? Our next solution spike is to investigate the "greatest application integration technology ever invented by man" - otherwise known as SOAP. The hope is to isolate the peculiarities of each product in an environment that it is compatible with, and additionally insulate ourselves from change a little.