When Tests Aren't Tests
There has been a great discussion on the XP and XP-Testing mailing lists about what “tests” are and how Exploratory Testing relates to XP/Agile development.
According to James Bach the mission of a tester is “to learn enough important information about the product so that our clients (developers and managers, mainly) can make informed decisions about it.”
Does this sound anything like the purpose of what XP calls “tests”?
My assertion is that “tests” in the XP sense (i.e. unit/programmer & acceptance/customer tests) are really about design and communication, not testing. Others are starting to see this as well.
Unit tests (or, more precisely, TDD) are for fleshing out micro-design during initial code creation. They then support later design iterations by acting as a refactoring safety net. Finally, they show the intent of how the code is to be used (communication).
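As a minimal sketch of that communication role, consider a test-first unit test. The `Cart` class and its methods here are hypothetical, invented only to illustrate the point: the test reads as a usage example of the code, and it keeps guarding behavior when the class is later refactored.

```python
import unittest

# Hypothetical class under test, used only to illustrate how a
# test written first documents the intended usage of the code.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CartTest(unittest.TestCase):
    # The test doubles as documentation: create a cart, add items,
    # expect a total. It also acts as a refactoring safety net for Cart.
    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 10)
        cart.add("pen", 2)
        self.assertEqual(cart.total(), 12)

if __name__ == "__main__":
    unittest.main()
```

A newcomer can read the test body and immediately see how `Cart` is meant to be driven, which is exactly the "communication" role described above.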
Acceptance tests are for communicating to the developers what needs to be built and communicating back to the customers how much has been correctly built.
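One way to make that two-way communication concrete is a table-driven acceptance test, in the spirit of FIT-style tables: the customer supplies rows of examples, and developers wire them to the system. The `discount` function and the 10%-off rule below are assumptions for illustration, not from the original post.

```python
# Hypothetical business rule for illustration: 10% off orders of 100 or more.
def discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

# Each row: (order total, expected amount charged).
# The customer can read and author these rows without reading code.
examples = [
    (50, 50),
    (100, 90),
    (200, 180),
]

for order_total, expected in examples:
    actual = discount(order_total)
    assert actual == expected, (
        f"order {order_total}: expected {expected}, got {actual}"
    )
```

Passing rows report back to the customer how much of the requested behavior has been correctly built; failing rows tell the developers what remains.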
One of the results of this viewpoint is that developers are responsible not only for creating XP unit/programmer tests, but also for creating XP acceptance/customer tests. Testers will have significant input into these tests (especially the customer tests), but their job is more than just automating acceptance test cases. In fact, most testers look at customer tests and say, "that's a good start; where are the rest of the tests?"
Has your concept of programmer/customer tests changed since starting XP? Have you talked to testers about what they think of XP “tests”? What did you find out?