One of the reasons some organizations spurn Exploratory Testing is the notion that “All Testing Should Be Repeatable.”
To this, I say: “Bah.”
Repeatability is an overrated attribute for testing. Repeating a given test for a specific purpose, such as regression testing, is one thing. But insisting that all testing be repeatable is an unnecessary constraint that results in more expensive, less powerful testing. Consider:
- There are an infinite number of possible tests. Why repeat the same small, finite set over and over again, getting just the information those repeated tests can give you, when there are numerous other possible tests that could give you new, different, and more interesting information? The pesticide paradox* says that repeated tests will find fewer and fewer issues. James Bach explains this in terms of the Mine Field Analogy.
- The level of documentation required to make manual testing completely predictable and repeatable is extremely expensive to produce and maintain. So not only does rigid repeatability result in more limited testing, you’re paying more for those limitations.
- Repeatability of the testing is not the same as reproducibility of the bugs that the testing revealed. Often when people say they want repeatable testing, what they really mean is that they don’t want bug reports that read, “I did a bunch of stuff, I forget what, and it crashed.” You can get reproducible steps for bug reports without turning your testers into human robots blindly following a script.
It isn’t just Exploratory Testing that suffers from the goal of Repeatability. Some folks trot out the “Not Repeatable!” argument when discussing model-based test automation.
In model-based testing, the automation generates and executes tests on the fly, using a probabilistic algorithm to explore the software under test according to some kind of model. If you can model your software as a finite state machine, for example, you can create an automated test that will happily run through all the states and transitions for hours, days, or potentially weeks. It’s a handy way to do longevity testing. I’ve also used it to do interaction testing. But because the automated test doesn’t do the same thing over and over again, some people dismiss its value with a “Not Repeatable!” declaration and a wave of their hand.
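To make the idea concrete, here is a minimal sketch of a model-based random walk. The toy login-screen model, the `MODEL` dictionary, and the `random_walk` function are all illustrative assumptions, not anything from a real harness; a real test would also drive the software under test and check an invariant at each step.

```python
import random

# Hypothetical model of a tiny login screen as a finite state machine:
# each state maps the actions available there to the state each leads to.
MODEL = {
    "logged_out": {"enter_credentials": "credentials_entered"},
    "credentials_entered": {"submit": "logged_in", "clear": "logged_out"},
    "logged_in": {"log_out": "logged_out", "view_profile": "logged_in"},
}

def random_walk(model, start, steps, seed=None):
    """Probabilistically explore the model, returning the (state, action)
    pairs taken. A real harness would execute each action against the
    software under test and verify the result before moving on."""
    rng = random.Random(seed)
    state, trail = start, []
    for _ in range(steps):
        action = rng.choice(sorted(model[state]))  # pick any legal action
        trail.append((state, action))
        state = model[state][action]
    return trail

trail = random_walk(MODEL, "logged_out", steps=10, seed=42)
for state, action in trail:
    print(f"{state}: {action}")
```

Left running with a large step count, a walk like this is exactly the kind of longevity test described above: it keeps exercising transitions without ever repeating a fixed script.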
Of course, you can write model-based automated tests so they support re-execution. One way to do this: code the automated test to write each action it takes, along with the data it used, to a log file in a standard format. Then create an automated test or fixture that can read the log and execute the steps. As an added bonus, you now have a test automation engine that understands a Domain Specific Language. Spiffy.
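A sketch of that log-and-replay idea, under the same toy login-screen model as above (the model, file name, and tab-separated log format are all assumptions for illustration):

```python
import random

# Hypothetical finite-state model of a login screen (illustrative only).
MODEL = {
    "logged_out": {"enter_credentials": "credentials_entered"},
    "credentials_entered": {"submit": "logged_in", "clear": "logged_out"},
    "logged_in": {"log_out": "logged_out", "view_profile": "logged_in"},
}

def generate_and_log(model, start, steps, log_path, seed=None):
    """Random walk that writes each action it takes, one per line,
    in a standard (tab-separated) format as it executes."""
    rng = random.Random(seed)
    state = start
    with open(log_path, "w") as log:
        for _ in range(steps):
            action = rng.choice(sorted(model[state]))
            log.write(f"{state}\t{action}\n")
            state = model[state][action]

def replay(model, log_path):
    """Re-execute a logged run step by step. The log file is, in effect,
    a tiny Domain Specific Language for driving the model."""
    with open(log_path) as log:
        for line in log:
            state, action = line.rstrip("\n").split("\t")
            next_state = model[state][action]  # KeyError: log no longer fits model
            print(f"replaying: {action} ({state} -> {next_state})")

generate_and_log(MODEL, "logged_out", 8, "run.log", seed=7)
replay(MODEL, "run.log")
```

The replayer never calls the random generator, so re-running a logged session is fully deterministic, which is precisely the repeatability the critics ask for.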
Allow me to tie this back into yesterday’s post. Some people dismiss any testing technique that does not result in 100% repeatable tests. They’re afraid of the consequences of not being able to execute the same exact steps in the same exact way. But there is great value in the information that can be gleaned through Exploratory Testing and Model-Based Testing. And the reproducibility of a bug is an entirely different thing than the repeatability of a test. Once again, fear is a lousy compass.
* Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.” Beizer, Software Testing Techniques, p. 9.