Gunnar Kudrjavets

Paranoia is a virtue

Having the majority of the tests implemented before a feature is added to the code base

One of the things we're currently trying to do is to align development and test processes and schedules more closely. The fuzzy description of the initial problem statement is the following: "Whenever the implementation of some feature is added to the code base, the majority of tests for this feature must be implemented at the same time, and specific criteria about the quality of the feature must be met."

However unclear this requirement may be, it immediately raises a number of questions and implications for the entire software development process. Let's look at some of them:

  1. Testability. For now let's use a simplistic model where we define a software product as just a set of features. To meet the requirement above we need to focus more than ever on the testability of every individual feature. Features need to be designed so that there's a way to test them in isolation. That definitely won't be possible in every single case, but the alternative, with lots of interdependencies, is very frightening. No longer can you check in the interface IFoo, say that everyone can use it, and admit that it won't be possible to properly test it before IBar is implemented two weeks from now. Of course, mock objects will help us tremendously here (see the sketch after this list).
  2. Psychological impact. Well-known software development models like the waterfall model and the spiral model draw a very clear line between development and testing. Now development and testing will blend into a gray area where it's very hard to formally determine when some feature is actually done. As everything unknown and new is usually frightening for people, there'll probably be a lot of skepticism about this approach.
  3. Collective responsibility. No longer is it "Feature F is coded, so development is done and the ball is now in the test organization's court; go and test it." The completeness of the feature set becomes the responsibility of the entire team, and the development vs. test attitude should fade.
  4. Exact metrics. How do we explicitly specify that the desired amount of testing is done? Is it the number of test cases (an extremely bad measurement)? Is it a specific percentage of code coverage (I don't like this one either)? Is it some percentage of use cases automated? Is it all priority 1 and priority 2 test cases automated (and how many priorities should we have?)? Is it some collective gut feeling that tells us the feature is tested and "good enough"?
  5. Impact on morale. If getting both product and test code implemented is a collective responsibility, then Alice and Bob as developers may think that automating these API test cases isn't what they signed up for, and a collective mutiny against this approach may take place ;-)
  6. Office moves. Very frequent interaction between the development and test teams will be required to accomplish the end goal properly. Should we start planning for office moves and try to make sure that people working on the same feature set have their offices located nearby? Please, no communal workspace ;-)
  7. Impact on the entire testing process. Are test plans even necessary anymore, or should the test code be the only documentation? What about manual test cases? Should we try to automate everything that can be automated, no matter the cost?
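
To make the testability point from item 1 concrete, here's a minimal sketch in Java. The post mentions IFoo and IBar only by name, so the methods below (computeValue, doubledValue) are invented purely for illustration; the idea is that Foo can be tested today against a hand-rolled mock of IBar, even though the real IBar won't exist for another two weeks.

```java
// Hypothetical shapes for IFoo and IBar; the post names the interfaces
// but not their members, so these methods are made up for illustration.
interface IBar {
    int computeValue(String key);
}

interface IFoo {
    int doubledValue(String key);
}

class Foo implements IFoo {
    private final IBar bar;

    // The IBar dependency is injected, so a test can substitute a mock.
    Foo(IBar bar) {
        this.bar = bar;
    }

    @Override
    public int doubledValue(String key) {
        return 2 * bar.computeValue(key);
    }
}

public class FooTest {
    public static void main(String[] args) {
        // Hand-rolled mock: a canned IBar that always returns 21.
        IBar mockBar = key -> 21;
        IFoo foo = new Foo(mockBar);
        if (foo.doubledValue("any-key") != 42) {
            throw new AssertionError("Foo misbehaves against the mock IBar");
        }
        System.out.println("Foo tested in isolation; the real IBar isn't needed yet.");
    }
}
```

The only design constraint this imposes on the feature is that dependencies arrive through the constructor instead of being created internally, which is exactly the kind of testability requirement the list above is talking about.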

This list could go on with a number of additional questions, but it's wise to stop here because right now I have no answers, just questions. Now I need to go and talk with some smart people and either figure out how to solve these problems or get confirmation that I'm just overreacting ;-)

Now playing: Depeche Mode, "Behind the Wheel [Remix]".
