Extreme JS

JS Greenwood's WebLog on architecture, .NET, processes, and life...
XP StoryStudio v0.99b released

Having gathered initial feedback from the v0.99 release, v0.99b is now available, with the following minor improvements:

  • Iterations can now be deleted.  In v0.99, this feature just linked to the Story Delete page and wasn't actually implemented
  • Date fields now have pop-up calendars
  • Installer now supports choice of whether to run/not run tests following installation
  • Installer now supports not configuring the database
  • Installer will now work without SQL Server being installed on the local machine

v0.99 is no longer available for download.  However, v0.99b is available from www.xpstorystudio.com.

To perform an upgrade from v0.99, first uninstall the existing version (this will NOT remove your existing database), then run the v0.99b MSI installer and select the database upgrade option.

Posted: Jan 18 2005, 09:42 PM by jsgreenwood | with 14 comment(s)
Active and passive risk

Attitudes towards risk are something that I've been meaning to write about for some time now.  To me, there are basically two types of risk - active and passive (I'm no risk expert in the "risk analyst"/academic sense, so ignore all of this if it's obvious)...

  • Active risk is that which is deliberately taken on - for instance the choice to develop a new product that may (in theory) fail in the market.  Or the rewrite of a piece of software due to burgeoning support costs.
  • Passive risk is that which is inherent in inaction - for instance, the choice not to update an existing product to compete with others in the marketplace.  Or the decision not to rewrite a piece of software, despite burgeoning support costs.

Both these types of risk can be measured in the same way - the cost, and the potential return/loss.  Yet people seem to have very different attitudes towards them.  Passive risk is seen as a necessary evil, and is often ignored; active risk is seen as something to be avoided, regardless of the potential payback (and the likelihood thereof).
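As a rough illustration of that symmetry, both kinds of risk reduce to the same expected-cost calculation.  A minimal sketch in C#, with every name and number invented for the purpose:

  // Hypothetical sketch: active and passive risk score identically -
  // the likelihood of an outcome multiplied by its cost if it happens
  public class Risk
  {
      public string Description;
      public double Probability; // likelihood of the outcome occurring (0..1)
      public double Cost;        // impact if it does, in pounds

      public Risk(string description, double probability, double cost)
      {
          Description = description;
          Probability = probability;
          Cost = cost;
      }

      public double ExpectedCost()
      {
          return Probability * Cost;
      }
  }

  // The rewrite we take on (active) and the one we defer (passive):
  Risk active  = new Risk("Rewrite fails", 0.2, 500000);         // expected cost 100,000
  Risk passive = new Risk("Support costs balloon", 0.6, 750000); // expected cost 450,000

On those (invented) numbers, the inaction is by far the bigger gamble - yet it's the one that rarely gets scrutinised.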

I think, in corporate life, the problem lies in what people are measured and judged on - the decisions that they DO make (active risk), rather than the ones they DON'T make (passive risk).  It's easier to blame the consequences of indecision on someone else than it is the consequences of a choice you made yourself.  Unfortunately, I've seen many cases where the passive risk is huge - sometimes easily enough to cause a company to go under (and actually causing it to, in at least one company I've worked with).

In theory, this all comes down to a corporate risk register, and ensuring that it's complete and has well-defined accountabilities for each item.  Unfortunately, I've rarely seen this really work, with numerous passive risk items dropping out of sight due to an unwillingness to take on responsibility.

Posted: Jan 08 2005, 07:05 PM by jsgreenwood | with 1 comment(s)
Agile planning tool - XP StoryStudio - available for download

Whilst I've not had time to do all the documentation I was planning to over Christmas, I have got round to bundling up the installer and updating www.XPStoryStudio.com.

So, available for download now (from the aforementioned URL), is v0.99 of XP StoryStudio - the freely distributable XP planning tool created by myself and Dave Fellows, with the help and support of several other kind souls at Egg.

From the documentation enclosed within the installer:

This is a functionally complete, pre-release version of XP StoryStudio, intended for limited distribution to test installation and integration issues prior to the public release.  As such, should issues be found, please forward them to: support {at} xpstorystudio.com.

Posted: Jan 04 2005, 12:49 AM by jsgreenwood | with 4 comment(s)
When TDD Goes Bad #2

I've been debating whether to post another entry on this for the last couple of weeks due to another bad smell I've spotted around Agile practices.  However, it's my blog, and I think it's worth saying.  But first, the bad smell...

Zealotry.  Many advocates of Agile practices have had to fight their corner hard against supporters of traditional methodologies.  Having had several such confrontations myself, it does become tiring, and you do begin to expect fairly incongruent arguments against it.  There is a danger, though, that the habit of dealing with arguments against Agile that don't stack up will lead to a blinkered view where it's assumed that any negative statement about the principles and approach should be dismissed.  No methodology is perfect - different circumstances call for different approaches, etc.  Also, just because a methodology is appropriate and should result in success when applied correctly, it doesn't mean that it will be applied correctly and appropriately.  People can, and will, get things wrong - especially when lacking experience.

So, TDD Gone Bad #2...  Mock oriented development.  Mocks are a good thing where you really want to extract a dependency and ensure you're testing the right thing.  But there are a couple of issues that can arise:

  1. Poor integration tests, as everything is being tested in isolation - we can end up with a system where the constituent parts are clean, isolated, well tested, and known to be correct.  But how they fit together is a greyer (or even blacker) area unless mocking is accompanied by a complement of integration tests.  Dave describes this well in his blog, so there's no need for me to go on about it here: http://www.twelve71.org/blogs/dave/archives/000616.html
  2. Mock oriented design.  Whilst mocking is good at removing dependencies, its introduction regularly alters the design of the system.  I've seen numerous occasions where the introduction of mocks has added a large amount of complexity to an otherwise simple design (see the sketch below).  This complexity leads to higher implementation costs, a higher cognitive load on the developers working on the system, and higher maintenance costs (as there's more code to maintain) - all of which go against the principle of "the simplest thing".  The irony is that the introduction of mocking can, sometimes, make completing a system far more time-consuming due to the different levels of granularity, and the additional code required to implement interfaces, etc.  Again, that's not to say mocks aren't very useful things, just that they're a tool to be used where appropriate, not a pattern to base the foundations of your entire implementation on...

Whenever mocking is used, the value that the mock gives should be weighed against the cost it will introduce over the lifetime of the system - or at least estimated and considered.
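To make the second point concrete, here's a hedged before-and-after sketch - all of the type names are invented, and the "after" class is renamed only so both versions can sit side by side:

  // Hypothetical "before": the simplest thing that works - one class, one direct call
  public class Order { }

  public class EmailSender
  {
      public void SendConfirmation(Order order) { /* send the mail */ }
  }

  public class OrderProcessor
  {
      public void Process(Order order)
      {
          new EmailSender().SendConfirmation(order);
      }
  }

  // Hypothetical "after": an interface and constructor injection exist purely
  // so a mock can be substituted in tests - more types, more wiring, more to maintain
  public interface IEmailSender
  {
      void SendConfirmation(Order order);
  }

  public class MockableOrderProcessor
  {
      private IEmailSender sender;

      public MockableOrderProcessor(IEmailSender sender)
      {
          this.sender = sender;
      }

      public void Process(Order order)
      {
          sender.SendConfirmation(order);
      }
  }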

Posted: Dec 29 2004, 10:35 PM by jsgreenwood | with 7 comment(s)
When TDD Goes Bad #1.2

So, #1.1 was all about the "business" - people defining requirements, and how these can cause issues.  #1.2 is just a short entry about the underlying statement I was trying to make in the original post:

Anti-pattern: Where inexperienced/misguided developers keep on testing where it's not realistic or financially astute to do so, thinking that's what TDD is, and thinking they're "adding value".  As we all know, that's not what TDD is.  And it's something we should try to avoid by coaching, mentoring, and working closely with development teams; helping to give them ways of judging the value of any piece of work.

It's all too easy to work with a group of savvy technical people that get TDD, and not be able to see how it could go wrong.  If you try to scale this across an enterprise where some people just don't get it, and the support isn't there to keep things moving in the right direction, things can, and do, go wrong.

Posted: Dec 08 2004, 04:06 PM by jsgreenwood | with 2 comment(s)
When TDD Goes Bad #1.1

OK, so people don't seem to get what I was trying to say in my previous TDD post, to the point of claiming (on certain newsgroups) that I "don't get TDD".  There were two points I was trying to make.  So, I'll try to lay out the first of these now, in a simpler manner:

  • Every test you write has a cost associated with it - the development time/cost: C
  • Every test you write has a value associated with it - the time/money it'll save over the lifetime of the system: V
  • At its simplest level, V must be greater than or equal to C for it to have been worth implementing the test (I'll probably post something separately on the mathematical modelling of this in the future)

The reason that V isn't always greater than C can arise from several situations, viewed from a "story" (business) perspective:

  1. Developers have been provided with open-ended requirements - where there is no bound on the tests that have to be written.  At its most extreme, this could be something like "Test: An unexpected physical occurrence takes place outside the system. Result: The data will be backed up correctly".  That's an open-ended test, though.  What if someone cuts the power?  Or a lightning bolt hits the server rack?  Or a tiny black hole forms over the CPU?  "C" would become infinite if all scenarios were dealt with (just because I've used an extreme example here doesn't mean it couldn't be something much more subtle, either).  Some of these scenarios are more likely than others, and there's a diminishing return on each test being implemented as the likelihood of the scenario occurring drops.  To stop this situation from occurring, requirements need to be closed and finite.  If they're not, you arrive at the situation in my previous post, where a developer can simply "judge" how much testing is enough.  Unless developers are calculating the C:V ratio of each test in a situation with any open-endedness, TDD can go bad.
  2. Developers have been provided with requirements with negative value - where the cost of implementing the test + code is inherently greater than the value it will deliver.  Accurate estimation of tasks and comparison against the value of any given story will avoid this situation.
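To restate the first situation numerically - a minimal sketch, with every figure invented:

  // Hypothetical back-of-envelope check: is this test worth writing?
  double c = 2 * 50;                    // C: 2 hours of development at 50/hour
  double likelihoodOfScenario = 0.3;    // chance the failure ever occurs
  double costIfUncaught = 2000;         // downtime, support, rework if it does
  double v = likelihoodOfScenario * costIfUncaught; // V = 600

  bool worthWriting = v >= c;           // V >= C here, so implement the test
  // As the scenario gets less likely (0.3 -> 0.03 -> 0.003), V collapses
  // towards zero while C stays fixed - the open-ended requirement problem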

[In part #1.2, I'll discuss the second part of this]

Posted: Dec 07 2004, 05:43 PM by jsgreenwood | with 5 comment(s)
XPStoryStudio site up: www.xpstorystudio.com

Having registered the site a couple of weeks ago, I've got round to putting together and uploading a couple of web-pages for XPStoryStudio, in preparation for its release in the very near future.  All that's left to do is put some installation documentation together and generate the finished installer package.  In addition to posting major news about its release, etc. here, all the details will be published on XPStoryStudio's site:

www.xpstorystudio.com

Posted: Nov 28 2004, 08:36 PM by jsgreenwood | with no comments
When TDD Goes Bad #1

Although Test Driven Development (TDD) is one of the greatest steps forward in software engineering, especially when combined with modern languages and testing frameworks (e.g. the xUnit family), there's a definite anti-pattern lurking in there - Test Oriented Development.

At its most basic level, writing tests is done for one reason and one reason only.  It's not about ensuring you meet requirements, it's not to have a repeatable means of knowing the code functions properly after changes, and it's not to aid design.  The key reason for writing tests is the same as for everything else in software development - to make the company commissioning the software more profitable.  So, the justification for TDD is based upon the premise that, in the long run, writing tests will save the company money on downtime, effort in re-working, and so on - but these are just implementation details of the underlying principle.  Now, although most developers that do TDD understand the long-term payback view of the approach, I'd question how many actually consider the value of each test or set of tests that are being developed.

Using an agile methodology, tying the (acceptance) tests to the stories is fairly simple, seemingly giving visibility of the value of tests.  But, given that many verbal tests are quite open ended, how do you know how much testing is enough?  The test may be stated as "When I enter the correct data, but there is a connectivity issue, I will see error message X".  A verbally described test such as this can have a hundred different programmatic tests implemented - simulating standard firewall issues, XML-firewall issues, web servers being down, latency in connections, time-outs, corruption of data, etc.  There is no immediately visible answer to "how much testing is enough?" in such a scenario.  This is even worse with unit tests, where they aren't likely to be defined formally (verbally) at all, and are largely up to how meticulous a developer is.
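To illustrate the explosion, here's a sketch of how that one verbal test fans out programmatically - it assumes NUnit's [TestCase] support, and every other name is invented:

  using NUnit.Framework;

  [TestFixture]
  public class ConnectivityErrorTests
  {
      public enum Failure { FirewallBlock, XmlFirewallBlock, WebServerDown, Latency, Timeout, CorruptData }

      [TestCase(Failure.FirewallBlock)]
      [TestCase(Failure.XmlFirewallBlock)]
      [TestCase(Failure.WebServerDown)]
      [TestCase(Failure.Latency)]
      [TestCase(Failure.Timeout)]
      [TestCase(Failure.CorruptData)]
      // ...and as many more variants as a meticulous developer cares to add
      public void CorrectDataWithConnectivityIssueShowsErrorX(Failure failure)
      {
          string message = SubmitValidDataUnder(failure);
          Assert.AreEqual("X", message);
      }

      private string SubmitValidDataUnder(Failure failure)
      {
          // A real suite would simulate the failure and drive the page;
          // stubbed here so the sketch stands alone
          return "X";
      }
  }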

Added to the lack of a metric is the fact that the developer is concentrating on testing before "coding" - the primary function is hence the development of tests, leading to a situation of "test-oriented development".  This is where the production of tests becomes the overriding concern and output of a team, with the development of required functionality being secondary.  When this occurs, the design, quality, and scope of the tests all become greater than those of the system actually being created.  As the development of tests follows the law of diminishing returns, beyond a certain point each additional test you write for a given circumstance adds an ever more marginal increase in quality.  Eventually, you will reach a point where the development of the tests will never achieve ROI within the lifetime of the system.  Basically, the development of tests becomes counter-productive to the aims of the business.  This is compounded by the Heisenberg-Uncertainty-Principle-like facet of TDD - its alteration of the design of the code it's testing.  But that's a subject for another post in the next few days...

Avoiding test-oriented development can be quite difficult - changing a developer's mindset to take on TDD is difficult enough to start with, and trying to then amend that to temper the creation of tests goes back on that principle to some extent.  My current thinking on this is that, along with acceptance tests and development tasks, a separate list of requirements needs creating for each story - a list of required unit tests.  Rather than just estimating the effort required to complete a story point in its entirety, the effort required for each unit test (and its value in pound-notes/dollar-bills) should also be calculated to some extent.  Each test is then subject to the same cost-benefit analysis and prioritisation as stories at a higher level.  The task of doing this should also help developers to consider the reason for creating a test: how realistic or contrived the situation it's testing is, and whether the effort could be better expended elsewhere; and finally, it should remind them what they're actually trying to achieve at a less programmatic level.  If this seems like too much effort, then an alternative may be to look at the ratio of the time spent writing tests to that spent on the production code itself for a given story point, to ensure that the focus isn't shifting from the system to the tests.
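That final ratio check is simple enough to sketch - the threshold here is invented and would be a per-team judgement call:

  // Hypothetical guard for a story point: flag when test effort starts
  // to dwarf the production code it exists to protect
  double hoursOnTests = 30;
  double hoursOnProductionCode = 10;

  if (hoursOnTests / hoursOnProductionCode > 2.0)
  {
      Console.WriteLine("Warning: focus may be shifting from the system to the tests");
  }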

Posted: Nov 26 2004, 12:58 AM by jsgreenwood | with 12 comment(s)
Coordinating Enterprise Website Development in .NET

One of my favourite enterprise development strategies for .NET websites is splitting the website into logical areas along functional (and, naturally, change) boundaries; having separate areas of the site developed as separate ASP.NET controls.  I'm not talking small-scale controls like "Address entry" here, I'm talking entire 5+ page application processes as a single control (that might itself comprise further controls).  This model works well for all kinds of large organisations - an online e-commerce site could have separate basket, checkout-process, and product-search controls, for instance, each of which would be aligned against a set of services that they're consuming (in an SOA).  The current enterprise-in-question is a bank, which fits this model better than most other sites I've come across due to the sheer amount and diversity of functionality necessary on the website.  Application forms alone constitute a raft of controls:

  • Mortgage applications
  • Credit card applications
  • Personal loan applications
  • Motor insurance applications
  • Etc.

My view on coordinating this has been, for some time, to have separate teams working on each "control", basically becoming product teams that deliver a black-box control to one overall "site" team.  It is then this team's responsibility to plug all of the controls into the main site, maintain all the links between areas, etc.  There are several benefits to implementing an enterprise-class site this way:

  • It gives a simple way of logically breaking-down/grouping a large number of development staff
  • It supports concurrent development on multiple parts of the site.  Because the controls don't know about each other, there's no risk of them linking to one another and dependencies being created.  This allows for varying rates of change in different areas, and doesn't necessitate large-scale upgrades when a new version of some base functionality becomes available
  • It provides a black-box component that allows for alternative internal implementations, for instance migrating from an HTML based UI to a Macromedia Flash one, and eventually to a Longhorn one.
  • It matches a model of having multiple levels of continuous integration, rather than having a single-level uber-integration that becomes unwieldy
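In practice, the handover is intended to look something like this - a hypothetical sketch, with the control and namespace names invented:

  // Site team's page: compose the black-box controls shipped as compiled
  // assemblies by the basket and checkout product teams.
  // "contentArea" is assumed to be a PlaceHolder defined on the page.
  protected override void OnInit(EventArgs e)
  {
      base.OnInit(e);
      contentArea.Controls.Add(new Bank.Web.Basket.BasketControl());
      contentArea.Controls.Add(new Bank.Web.Checkout.CheckoutControl());
  }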

That's the good news - a paradigm for developing enterprise websites that theoretically has few drawbacks.  Moving on to the implementation is where the bad news begins...

In .NET, these controls could either be developed as user controls (.ascx files), or as custom controls (.cs files).  The advantage of the former is the speed of development, and close integration with UI designers - developers can work in "HTML view".  The latter involves writing all content as controls.  A short example of this is as follows:

User control (.ascx) implementation:
  <h1>Hello World</h1>
  <asp:TextBox id="txt" runat="server" />

Custom control implementation:
  // Emit the static markup as a Literal control...
  Literal literal = new Literal();
  literal.Text = "<h1>Hello World</h1>";
  Page.Controls.Add(literal);

  // ...then build each server control programmatically
  TextBox txt = new TextBox();
  txt.ID = "txt";
  Page.Controls.Add(txt);

Although possible, developing a graphically compelling site using the second method is torturous, so I've discounted it as an option.  I also don't want control teams to deliver an editable ASCX control to the site team - it would be prone to editing, etc. and not as much of a "boxed product".  There is an alternative, however.  Whenever .NET renders a web page that makes use of a user control, it converts the first code example into the second internally.  In Visual Studio .NET 2002 and Visual Studio .NET 2003, this process is largely invisible, and only happens when the page is requested.  Visual Studio .NET 2005 ("Whidbey") changes this, though, allowing for pre-compilation of websites into Assemblies (DLLs).
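For reference, the precompilation step itself is a single command - the paths here are purely illustrative:

  aspnet_compiler -v /BasketControls -p C:\src\BasketControls C:\build\BasketControls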

So, my long-term plan was to have each team develop a control in a separate VS.NET 2005 project, pre-compile the output when they're ready to release, and then just hand the Assembly over to the "site" team to use just like any other class library.  Having spent many, many hours over the last week poking and prodding the aspnet_compiler.exe tool, rewriting it from scratch, attempting reflection on the System.Web.Compilation namespace, and even disassembling most of the System.Web assembly, I'm getting close to giving up.  Rather than creating one sensibly named assembly for the whole project, the compiler creates multiple randomly named assemblies - usually one for each control/page (although it sometimes puts several together).  After recreating the aspnet_compiler tool from scratch, and extending it to sift through these files, pull out just the assemblies, give them new file names based on the controls they contain, etc. I thought I was getting somewhere.  Now the problem is that if one of these controls uses another from the same project, it will be given a reference to the random 8-letter assembly name (and matching internal module name).  I've yet to see if any internal assembly hacking can rectify this.

None of this would be necessary if Microsoft hadn't specifically made the majority of the System.Web.Compilation namespace private/internal, presumably in an attempt to make it difficult for other people to develop competing tools, etc.  The most infuriating part of this is that, in theory, you can access all of the System.* functionality through reflection, whether it's supposedly private or publicly visible.  Additionally, tools like Salamander and Reflector make it possible to decompile vast swathes of the system classes back into C#.  So Microsoft aren't really "protecting" anything, they're just making it painful to come up with enterprise-class solutions.
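For what it's worth, the reflection route looks like this - a hedged sketch where the type and member names are placeholders, not the real System.Web.Compilation API:

  using System.Reflection;

  // Reflection reaches internal types and members regardless of visibility;
  // "SomeInternalCompiler" and "Precompile" are placeholders, not real names
  Assembly webAssembly = typeof(System.Web.UI.Page).Assembly;
  Type compilerType = webAssembly.GetType("System.Web.Compilation.SomeInternalCompiler");
  MethodInfo precompile = compilerType.GetMethod(
      "Precompile", BindingFlags.NonPublic | BindingFlags.Static);
  precompile.Invoke(null, new object[] { "/site" });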

The point to this post?

  • Here's a model for enterprise web-site development that works, even if it DOES mean handing over controls that aren't totally black-boxed.
  • If anyone is a System.Web.Compilation guru and fancies helping out on creating a tool to enable this, please mail me
  • If anyone thinks that Microsoft should make it easier to compile source code, please mail them

SOA Design with Agile methodologies

Service Oriented Architecture (SOA) and Agile - two of the current "hot topics" in IT.  The interesting thing is that even though they're fundamentally different - one being a set of architectural principles, the other a set of methodology principles - one of the key goals and benefits of both is enabling change.

Now, there are clearly differences in the goals (enterprise integration vs. fail early, etc.), but that's beyond the scope of this entry.  What this is about is that these are two sets of principles that are both useful and appropriate in today's technology landscape.  So, a natural situation to arrive at is where you want to transform a company to use both Agile and SOA.  This raises an interesting issue, though...

  • Agile methodologies promote an incremental, iterative approach to the development of functionality (including method signatures), with visibility of the impact of change given through test coverage.  Basically, they work on the premise that change is cheap if supported correctly
  • SOA promotes a well-defined service interface through contracts - these contracts are aligned against business processes, not implementation details.  The rigidity of the service interface, and its non-technical alignment, allows for internal change without impacting consumers of the service.  Changing a service interface is an involved process, though; rather than an outright change, a versioning mechanism is needed due to the immutability of contracts, and therefore a migration process is required.  The visibility of service usage is also very difficult to monitor, as consumers could be external organisations rather than internally controlled systems.  The upshot of these two factors is that changes to the interfaces of a service are relatively expensive

So, whilst the internal development of services and service-consumers is in line with Agile, the creation of service interfaces isn't - an incremental, iterative approach to those would cause serious versioning and compatibility problems, especially with remote (3rd party) consumers.  Similarly, imagine you're on the receiving end of another company that takes an Agile approach to a service interface you're consuming; the contract you're adhering to would shift under your feet continually, introducing additional cost.

Thankfully, there is not only a solution to this problem, but it is also one that actually benefits Agile methodologies.  The solution is to make use of the incremental aspects of Agile, without the iterative approach, when designing service interfaces.  Basically, the service interfaces should be defined just-in-time; when the first requirement for a certain piece of functionality arises, the definitive interface for that particular service method - and no more - is derived.  The removal of an iterative approach to interface design is important not just for cost reasons - one of the key factors in the success of an SOA is how accurately the service interface mirrors the real-world process.  If a clear definition of this interface cannot be derived, it generally means that the underlying process isn't well understood.
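A hypothetical sketch of what that looks like in code - the service, namespaces, and message types are all invented:

  // The contract is pinned down just-in-time, then treated as immutable;
  // PaymentRequest, PaymentReceipt, and Currency are illustrative types
  namespace Bank.Services.Payments.V1
  {
      public interface IPaymentService
      {
          // Only the operation the first requirement demanded - nothing speculative
          PaymentReceipt TakePayment(PaymentRequest request);
      }
  }

  namespace Bank.Services.Payments.V2
  {
      public interface IPaymentService
      {
          // An incompatible change ships as V2 alongside V1;
          // existing consumers migrate on their own schedule
          PaymentReceipt TakePayment(PaymentRequest request, Currency currency);
      }
  }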

As for the benefits of this to Agile, it's long been my opinion that the largest problem with such approaches is their introspective facet - the virtual disregard for whatever isn't clearly defined and within the primary context.  Service interfaces act as change boundaries for these introspective "pockets", containing them from unintentional impact on other such "pockets".  This allows iterative approaches to be taken to the component- and object-oriented internals of services, without incurring the cost of repeatedly integrating with potentially external, non-visible systems.
