July 2003 - Posts

Version Control: Decision

Regarding my previous post on version control, I thought I'd give a little update on what we finally decided to do.

Because we are so used to VSS and have a huge amount of code and history, one of the things we needed was the ability to migrate all that history to our new repository. We finally decided on SourceGear's Vault.

Eric Sink (SourceGear's CEO) was very helpful in answering several questions honestly without a lot of marketing spin, which I appreciate as a technician.

Posted by Wayne Allen | 2 comment(s)

Agile Process: Iteration Planning Tweak

On Ward's suggestion we made a change to our iteration planning around the velocity & task sign-up process.

Previously we (loosely) tracked individual velocities, and people could sign up for tasks until their tasks consumed the number of ideal days allocated to them. In our case 1 ideal day = a velocity of 1, 2 ideal days = a velocity of 2, and so on. This was working well in that everyone always had responsibility for one or more tasks. The undercurrent of fear that always ran, though, was that management would get hold of these individual numbers and use them as performance measures ("we need to get your velocity up"), equating velocity with productivity.

Ward's suggestion was to track team velocity instead: pool everyone's individual velocity and have individuals sign up for tasks until the pool was gone. A minor change, but with several potential benefits. First was the removal of the fear that individual velocities would be misused. Second was that some people would be able to sign up for no tasks at all. This didn't seem like a benefit initially; however, it allows people who are natural pairers, or bug slayers, to do what they do best and still keep the team on track. The team velocity also accounts for an individual having a bad iteration and only being able to sign up for a third of the work they would normally do.
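The arithmetic of the change is simple enough to sketch. This is a minimal illustration with hypothetical names, numbers, and tasks, not our actual plan:

```python
# Pooled team-velocity sign-up, sketched (hypothetical names/numbers).
# Each person's individual velocity goes into one shared pool; people
# then claim tasks until the pool is exhausted, with no per-person cap.

individual_velocities = {"alice": 2, "bob": 1, "carol": 2}  # ideal days
team_velocity = sum(individual_velocities.values())         # pooled: 5

# (task name, cost in ideal days)
tasks = [("story-1 task A", 2), ("story-1 task B", 1), ("story-2 task A", 2)]

signed_up = []
remaining = team_velocity
for name, cost in tasks:
    if cost <= remaining:       # anyone may claim this task
        signed_up.append(name)
        remaining -= cost

print(signed_up, remaining)  # all three tasks fit; the pool drops to 0
```

The point of the sketch is that nothing ties a task to a particular person's number; only the shared pool constrains sign-up, so someone can legitimately claim zero tasks.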

At our last iteration planning meeting we made this change without much initial comment from the team. The interesting net results were that some people did not sign up for anything, all the stories were accounted for, and about half the team velocity is dedicated to testing. This last fact is really interesting, since QA does not make up half the engineering team. So either QA added padding or there really is more testing needed than we previously tracked. What will the result be? I'm guessing either the developers will have to pitch in to make sure QA gets all the help they need, or we'll have a testing crisis at the end of the iteration.

XP/Agile Universe

Looks like I'll be attending XP/Agile Universe in New Orleans Aug. 10-13. Any other bloggers going to be there?

Agile QA: Iterations & Release Testing

In discussions with Ward and others in our QA group I'm starting to see another possible arrangement for an agile QA team.

One of the frustrations QA has expressed to me is the lack of time to do some of the testing they believe is necessary (such as exploratory, volume, stress, and security testing), which is not directly accounted for in agile/XP.

We attempted to make these things stories and tasks within an iteration, but they are incredibly difficult to estimate. Plus, volume, scalability, and other large-scale tests are difficult to complete within our 2-week iterations.

Some of these ideas were derived from a discussion about what makes a story "done". Currently we have a moderately complex 2-page document (on a wiki) that describes the attributes of doneness. What it essentially says is: "The story is done when the customer is satisfied. The customer may choose to defer some defects/enhancements to another iteration."

This led to the idea that maybe the story is done when the story's positive tests pass. For example, if the story says the context menu will only allow X, Y and Z items when the user right-clicks the widget, then the acceptance test passes when the context menu does exactly that. This greatly simplifies the criteria for marking a story as done. What this leaves out, of course, is all the other traditional testing a QA engineer would do to ensure that the quality level for the story has been measured.
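As an illustration, a positive acceptance test for that context-menu story might look like the sketch below. The widget representation and function names are invented for the example; any real test would go through whatever the application actually exposes.

```python
# Hypothetical positive acceptance test for the context-menu story:
# the story is "done" when right-clicking the widget shows only X, Y, Z.

def context_menu_items(widget):
    """Stand-in for whatever returns the menu shown on right-click."""
    return widget["context_menu"]

def test_context_menu_shows_only_x_y_z():
    # Simulated application state in place of a real UI.
    widget = {"context_menu": ["X", "Y", "Z"]}
    assert context_menu_items(widget) == ["X", "Y", "Z"]

test_context_menu_shows_only_x_y_z()
print("story acceptance test passed")
```

Note what the test deliberately omits: no boundary cases, no stress, no exploratory probing. It checks only that the behavior the story names is present, which is exactly the narrow "doneness" criterion being proposed.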

The intent here is to make sure all the functionality the customer asked for is available. If the developer is turning over code that isn't really complete (according to the story), this can have a significant impact on the iteration, either because they are working too hard and adding too many features the customer didn't ask for (gold plating), or because they misunderstood the requirements and didn't complete things the way the customer intended. In either case the goal is to deliver the software the customer desires.

This is not to say that black box, boundary, and other such tests are not valuable. They just don't feed into the "doneness" of a story. These types of tests absolutely do need to be run so that the customer knows the level of quality in the software he/she is getting.

Defects found during this phase are typically smaller in scale, have less overall impact on the iteration, and can usually be fixed during the current iteration without becoming part of a later story. Sometimes a later story is required, though, and that happens naturally as a discussion between the customer and the QA person while the developer explains the nature of the defect and why it is so expensive to fix.

Now what this allows is for the QA engineers to allocate iteration time to do the other types of testing. This is not tracked on cards any more than refactoring is tracked. It is just assumed that it is a necessary part of the process and is accounted for in the velocity.

How will this work in real life? I don't know yet. We are planning further discussion today to determine how realistic this is. More later.


Ward in the house

We have the great pleasure of having Ward Cunningham on-site this week to help us fine tune our implementation of XP/agile process.

Agile QA: Customer Advocate

One of the roles we have defined in our agile process is the "Customer Advocate". This is distinct from the Customer or Customer Proxy role in that the advocate looks after the customer's needs from a quality perspective. The need for an advocate is seen in many agile implementations; the usual symptoms are a customer who needs to do their "real" job, or a customer who needs assistance writing appropriate stories and acceptance tests. The advocate him/herself may or may not have any specific domain knowledge.

There are four primary activities the advocate participates in:

  • Story development
  • Release/Iteration planning
  • Acceptance testing
  • End game

Story Development
The advocate works with the customer to ensure the stories are as clear as possible (from the point of view of the developer and the customer) so that they can be more easily turned into working code. This includes clarifying ambiguity, ensuring non-functional/indirect requirements are captured and making sure the acceptance tests are specified. Since typically the advocate has quality assurance background they are well suited to these tasks.

Release and Iteration Planning
The advocate helps facilitate iteration planning by substituting for the customer when he/she is not available, answering developer questions with more technical specifics when needed, and ensuring that non-present stakeholders are well represented.

Acceptance Testing
The advocate takes over responsibility for executing the tests on behalf of the customers. They will also develop additional tests to run as the code is developed, to ensure the story stays true to its original intent. The advocate can also represent the customer when technical questions about the story arise. Ultimately the customer will rely on the advocate's advice when trying to assess the completeness of a story.

End Game
The end game is where the customer, advocate and developers finalize the stories as complete, assess the readiness of the release for production, develop risk assessments, and help measure the quality of the product once it is placed in production, pronouncing it ready for consumption.

I want to express many thanks to Dal Marsters for his help in developing this role.


What are you currently reading?

ScottW asked "What are you reading?" For all the reasons he mentioned, I think it's a good idea.

I find myself reading blog entries and online articles far more often than paper books recently.

Agile QA: 1 Team or 2?

We've been having an interesting internal discussion about whether our development and QA groups should be part of the same agile team, or separate teams with a half-iteration offset. So far we've developed a list of pros and cons to help us decide where to go.

1 Team Advantages

  • Development and QA are always in sync (i.e. working on the same stuff at the same time).
  • QA can play the customer proxy/advocate role to help define stories and acceptance tests ahead of development, essentially increasing the customer's ability to get more done (especially helpful when the customer is busy doing "real" work).
  • Developers can assist QA in executing tests, effectively increasing the QA staff on demand.
  • When the iteration is over, all the code works and the acceptance tests run.
  • Singular focus for the entire team.

1 Team Disadvantages

  • QA typically gets releases late in the iteration, making it difficult to run all acceptance tests before the end of the iteration.
  • Some QA engineers are not interested in being the customer advocate and doing requirements gathering, etc.
  • Some traditional QA activities such as exploratory testing are not well supported.
  • Some types of testing (volume, capacity, and other stress tests) do not fit within the iteration.
  • Estimating QA activities is difficult at best: the time spent exposing issues depends directly on the specific developer writing the code, and replicating data-dependent issues is often time intensive.

2 Team Advantages

  • Development has produced most of the code when the QA iteration starts.
  • QA can focus on more traditional tasks.
  • QA tasks for the iteration are better understood.

2 Team Disadvantages

  • QA cannot represent the customer, since they don't see the stories until after the developers have started them. Thus the customer needs good acceptance-test-writing skills.
  • QA and development are working on different tasks causing lots of context switching for developers when bugs are found.
  • 2 teams have slightly different focus causing friction when "old" code has issues.
  • Development needs to branch the code base at the end of each iteration, apply bug fixes against the branch, and then merge the results back at the end of the QA iteration.

I am generally in favor of the 1-team approach, but issues such as late build delivery make me feel for the QA folks trying to get their job done. Perhaps as we get better at continuous integration and no-touch deployment this will improve. Others have suggested using both models to get the best of both worlds, with a separate QA team focusing on testing other than acceptance tests; we will probably try something like this.
