Agile QA: Iterations & Release Testing

In discussions with Ward and others in our QA group, I'm starting to see another possible arrangement for an agile QA team.

One of the frustrations QA has expressed to me is the lack of time for testing they consider necessary (such as exploratory, volume, stress, and security testing) that is not directly accounted for in agile/XP.

We attempted to make these things stories and tasks within an iteration, but they are incredibly difficult to estimate. Volume, scalability, and other large-scale tests are also difficult to complete within our two-week iteration.

Some of these ideas were derived from a discussion about what makes a story "done". Currently we have a moderately complex two-page document (on a wiki) that describes the attributes of doneness. What it essentially says is: "The story is done when the customer is satisfied. The customer may choose to defer some defects/enhancements to another iteration."

This led to the idea that maybe the story is done when the story's positive tests pass. For example, if the story says the context menu will only allow items X, Y, and Z when the user right-clicks the widget, then that acceptance test passes when the context menu does exactly that. This greatly simplifies the criteria for marking a story as done. What it leaves out, of course, is all the other traditional testing a QA engineer would do to ensure that the quality level for the story has been measured.
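To make that concrete, here is a minimal sketch of what such a positive acceptance test might look like. WidgetContextMenu is a hypothetical stand-in for the application code under test, not anything from our actual codebase:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a positive acceptance test for the context-menu story.
// The story is "done" when this passes: right-clicking the widget
// offers exactly X, Y, and Z -- no more, no fewer.
public class ContextMenuStoryTest {

    // Hypothetical stub for the production object the story describes.
    static class WidgetContextMenu {
        List<String> itemLabels() {
            return Arrays.asList("X", "Y", "Z"); // what right-click shows
        }
    }

    public static void main(String[] args) {
        WidgetContextMenu menu = new WidgetContextMenu();
        List<String> expected = Arrays.asList("X", "Y", "Z");
        if (!expected.equals(menu.itemLabels())) {
            throw new AssertionError(
                "Context menu differs from the story: " + menu.itemLabels());
        }
        System.out.println("Story acceptance test passed.");
    }
}
```

Note what this test deliberately does not cover: boundary conditions, odd input, stress, and so on. That is exactly the point; those checks still happen, they just don't gate "done".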

The intent here is to make sure all the functionality the customer asked for is available. If the developer is turning over code that isn't really complete (according to the story), this can have a significant impact on the iteration: either the developer is working too hard and adding too many features the customer didn't ask for (gold plating), or they misunderstood the requirements and didn't complete things the way the customer intended. In either case, the goal is to deliver the software the customer desires.

This is not to say that black box, boundary, and other such tests are not valuable. They just don't feed into the "doneness" of a story. These types of tests absolutely do need to be run so that the customer knows the level of quality in the software he/she is getting.

Defects found during this testing are typically smaller in scale, have less overall impact on the iteration, and can usually be fixed during the current iteration without becoming part of a later story. Sometimes deferral is required, though, and it happens naturally through a discussion with the customer and QA person as the developer explains the nature of the defect and why it is so expensive to fix.

What this allows is for the QA engineers to allocate iteration time to the other types of testing. This is not tracked on cards any more than refactoring is tracked; it is just assumed to be a necessary part of the process and is accounted for in the velocity.

How will this work in real life? I don't know yet. We are planning further discussion today to determine how realistic this is. More later.