April 2010 - Posts
With the launch of VS2010 this week it seems like a good time to talk about some of the work that has been going on with Gallio to integrate with VS2010. This work will be a feature of the next release; there is no beta yet, but you are welcome to try the nightly builds (all the normal risks apply, etc.). Just like VS2008, you can use the VS Test Runner to run Gallio tests (such as MbUnit tests) in the same way you can run MSTest tests.
With Gallio installed, the Test View window shows a Gallio (in this case MbUnit) test loaded (note the icon). If I go ahead and run this test, I can see it working in the Test Results window.
In VS2010 you can collect additional data that a test run can include (system information, IntelliTrace data, etc.). If I set VS to collect system information and run the test, I can click the 'Test run completed' link and see that it is included.
If I then right-click in the Test Results window I can select "View Test Results Details" and a Gallio test results window will load up.
Note that Gallio treats the collector data as attachments so you can go ahead and view the attachment data right from the report.
The first port of call in changing legacy code is a safety net; without one your fingers will get burnt. Make your safety net a high-level functional test over the major areas of the application. Automate the test, plug it into your CI builds and run it every night. The test should act as a final fail-safe as you work.
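To make the idea concrete, here is a minimal sketch of such a safety-net test in Python (the post's stack is .NET, where the equivalent would be an MbUnit or NUnit test). The `process_order` entry point is entirely hypothetical; in a real codebase it would be whatever high-level seam you can drive end to end.

```python
# Hypothetical entry point into the application under test; in a real
# codebase this is a high-level seam you can drive end to end
# (a service facade, a batch job, an HTTP endpoint behind a client).
def process_order(customer_id, items):
    total = sum(price for _, price in items)
    return {"customer": customer_id, "total": total, "status": "accepted"}

def test_order_happy_path():
    # Safety-net test: asserts only coarse, externally visible outcomes
    # on a major path, so it keeps passing while internals are refactored.
    result = process_order("c-42", [("widget", 10.0), ("gadget", 5.0)])
    assert result["status"] == "accepted"
    assert result["total"] == 15.0

if __name__ == "__main__":
    test_order_happy_path()
    print("smoke test passed")
```

The key design point is that the test pins down behaviour, not structure: it should fail when a refactoring breaks something a user would notice, and stay green otherwise.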
In my last post I suggested drawing up a plan. The plan might be based on what you and the team already know of the code and your experiences so far in coding, fixing and refactoring attempts on the code base. Your plan might need further detail when you start to get into the finer points of the codebase and into areas you and the team may never have been into before.
There are a great many ways you can do this, simple debugging and breakpoints for example, but the detail learned can be lost if that is the only technique you ever use. Another approach is logging intercepts to log method calls as and when they occur. Something like log4net allows you to log to whatever medium you want with relative ease; if you don't want to leave logging calls scattered around, a clean solution is to combine it with PostSharp. A third approach is a static analysis tool such as NDepend, which allows you to really drill into the codebase. A fourth approach is to use code profiling tools like ANTS or dotTrace.
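The logging-intercept idea can be sketched in a few lines. The post's combination is log4net plus a PostSharp aspect woven around methods; the Python analogue below uses a decorator, with `apply_discount` standing in for a method deep inside the legacy codebase.

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("trace")

def traced(func):
    """Log entry and exit of a call without touching its body -- the same
    idea as weaving a log4net call around methods with a PostSharp aspect."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("-> %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        log.debug("<- %s returned %r", func.__name__, result)
        return result
    return wrapper

@traced
def apply_discount(total, rate):
    # Stand-in for a method deep inside the legacy codebase.
    return total * (1 - rate)

apply_discount(200.0, 0.5)
```

Running a user scenario with tracing switched on gives you a call trail you can keep, which is exactly the detail that evaporates when you only ever step through in the debugger.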
Combining the above can be a very powerful way of learning what the code is doing, for example tracing the code to understand the paths taken through it, then using static analysis tools to explore those classes' and methods' dependencies further.
The final step is documenting what you find; the more you find, the more may go onto your plan. Your findings may also change the order of your plan if you discover objects are worse than you first thought.
Blackfield applications are a minefield, reeking of smells and awash with technical debt. The codebase is a living hell.
Your first line of attack is a plan. Your boss (be that you, your manager, your client or whoever) needs to understand what you are trying to achieve and in what timeframe. Your team needs to know what the plan of attack will be and where.
Start with the greatest pain points: what are the biggest areas of technical debt, what takes the most time to work with or change, and where are the areas with the highest number of defects? Work out which classes and functions are mud balls and where all the hard dependencies are. In working out the pain points you will begin to understand the structure (or lack of it) and where the fundamentals are. If no one in the team knows an area then profile it and understand what lengths the code is going to.
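One cheap, rough proxy for pain points is churn: the files that change most are usually the ones that hurt most. A sketch, assuming your version control tool can emit a list of changed files per commit (e.g. `git log --name-only` or `svn log -v`); the sample log, the `.cs`/`.vb` filter and the file names are all hypothetical.

```python
from collections import Counter

def churn_hotspots(log_lines, top_n=5):
    """Tally how often each source file appears in a version-control log
    to surface the areas that change most often."""
    counts = Counter(
        line.strip() for line in log_lines
        # Assumption: a .NET codebase, so we only count C#/VB source files.
        if line.strip().endswith((".cs", ".vb"))
    )
    return counts.most_common(top_n)

# Hypothetical excerpt of a per-commit changed-files listing.
sample_log = [
    "Billing/InvoiceCalculator.cs",
    "Billing/InvoiceCalculator.cs",
    "Orders/OrderService.cs",
    "Billing/InvoiceCalculator.cs",
    "readme.txt",
]
print(churn_hotspots(sample_log))
# -> [('Billing/InvoiceCalculator.cs', 3), ('Orders/OrderService.cs', 1)]
```

Cross-referencing the churn list against your defect tracker (which files appear in the most bug fixes?) gives a first, evidence-based ordering for the plan.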
When you're done drawing up the list, work out what the common problems are: is the code hard-tied to the database, file system or some other hard dependency? Is the code repeating itself in structure and form over and over? From the list, work out which areas have the biggest number of problems and make those your starting point.
Now that you have a plan of what needs to change and where, you can work out how it fits into your development plan. Manage your plan: put it into a defect tracker or work item tracker, or use Notepad or Excel. Mark off the items on your plan as and when you have attacked them; if you find more items then get them onto the plan. Keep the momentum going and slowly the codebase will become better and better.
I'm going to start a series on working with legacy code based on some of the things I have learnt over the years. First, let me define my terms for 'legacy': I define legacy code as (as someone on Twitter called it) not brownfield but blackfield. Brownfield can be code you wrote yesterday, last week or last month. Blackfield tends to be a great deal older (think years old) and worked on by a great many people. Sure, brownfield can also be legacy code, but it often has far fewer smells and far less technical debt; because of its age, a blackfield codebase's problems are often far worse and far harder to treat. I'm not sure how many posts I'll write for the series or how long it will run for, but I'll add them as and when they occur to me. Finally, if you are working with the kind of codebase I describe then Michael Feathers' 'Working Effectively with Legacy Code' is a great resource.