Exploratory Testing is a style of testing in which you explore the software while simultaneously designing and executing tests, using feedback from the last test to inform the next. Exploratory Testing helps us find surprises, implications of interactions that no one ever considered, and misunderstandings about what the software is supposed to do. Cem Kaner first coined the term “Exploratory Testing” a couple decades ago, though exploratory or “ad hoc” testing has been around longer than that.
Recently, I was talking with a group of XP developers about using Exploratory Testing on XP projects to augment the TDD unit tests and automated acceptance tests.
“Oh, Exploratory Testing,” said one of the developers, “that’s where the tester does a bunch of wacky, random stuff, right?”
“Not exactly,” I replied, a little dismayed that myths about Exploratory Testing still abound after so many years. What looked wild to that developer was actually the result of careful analysis. He held a common misconception about Exploratory Testing: he noted the lack of formality and apparently arbitrary sequences and actions, and he concluded that Exploratory Testing was an exercise in keyboard pounding rather than a rigorous approach.
Two key practices distinguish good Exploratory Testing as a disciplined approach:
- Using a wide variety of analysis/testing techniques to target vulnerabilities from multiple perspectives.
- Using charters to focus effort on those vulnerabilities that are of most interest to stakeholders.
Variety and Perspectives
The old saying goes, “If all you have is a hammer, everything looks like a nail.” If the only testing technique a tester knows is how to stuff long strings into fields in search of buffer overflow errors, that’s the only kind of vulnerability that tester is likely to find.
Good test analysis requires looking at the software from multiple perspectives. Field attacks, such as entering long strings, badly formatted dates, or data of the wrong type altogether (a string where a number belongs), are one approach. Other approaches include:
- Varying sequences of actions
- Varying timing
- Using a deployment diagram to find opportunities to test error handling by making required resources unavailable or locked, or to break connections
- Deriving transition and interrupt tests from state models
- Using use cases or analyzing the user perspective to identify real-world scenarios
- Inventing personae or soap operas to generate extreme scenarios
- Using cause-effect diagrams to test business rules or logic
- Using entity-relationship diagrams to test around data dependencies
- Varying how data gets into and leaves the software under test using a data flow diagram as a guide
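To make the field-attack style concrete, here is a minimal sketch in Python. The `validate_age()` function is a hypothetical stand-in for the software under test, and the attack inputs are illustrative, not from the article; the point is simply that one small table can probe a field from several perspectives at once:

```python
def validate_age(value):
    """Toy system under test: accepts whole-number ages 0-150, given as text."""
    try:
        age = int(value)
    except (TypeError, ValueError):
        return False
    return 0 <= age <= 150

# Attack data drawn from several perspectives: long strings,
# wrong types, missing data, boundaries, and formatting variations.
attacks = {
    "very long string": "9" * 100_000,  # probes length handling
    "wrong type": "not-a-number",       # type confusion
    "empty field": "",                  # missing data
    "below boundary": "-1",             # just outside the valid range
    "above boundary": "151",
    "odd formatting": " 42 ",           # leading/trailing whitespace
}

results = {name: validate_age(value) for name, value in attacks.items()}
for name, accepted in results.items():
    print(f"{name}: accepted={accepted}")
```

In a real exploratory session the table wouldn't be fixed: each surprising result would feed the design of the next attack.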
Each of these types of testing reveals different kinds of vulnerabilities. Some check for problems related to error handling while others look at potential problems under normal use. Some find timing problems or race conditions, while others identify logic problems. Using a combination of analysis techniques increases the probability that, if there’s a problem, the testing will find it.
Charters and Focus
Because good test analysis will inevitably reveal more tests than we could possibly execute in a lifetime, much less by the ship date, we have to be choosy about how we spend our time. It’s too easy to fall down a rat hole of potentially interesting permutations of sequences and data.
There are a variety of test selection strategies we can employ, such as equivalence analysis and all-pairs. But even before we begin combining or eliminating test cases, we need a charter: we need to know who we’re testing for and what information they need. Exploratory Testing charters define the area we’re testing and the kind of vulnerabilities we’re looking for. I’ve used charters like these in past Exploratory Testing sessions:
- “Use the CRUD (Create, Read, Update, Delete) heuristic, Zero-One-Many heuristic, Some-None-All heuristic, and data dependencies to find potential problems with creating, viewing, updating, and deleting the different types of entities the system tracks.”
- “Exercise the Publish feature in various ways to find any instances where a valid publish request does not complete successfully or where the user does not receive any feedback about the actions the Publish feature took on their behalf.”
- “Use a combination of valid and invalid transactions to explore the responses from the SOAP/XML interface.”
Notice that each charter is general enough to cover numerous different types of tests, yet specific in that it constrains my exploration to a particular interface, feature, or type of action.
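The all-pairs selection strategy mentioned above can be sketched concretely: instead of running every combination of parameter values, pick just enough cases that every pair of values across any two parameters appears at least once. This greedy implementation and the example parameters are my own illustration, not from the article:

```python
from itertools import combinations, product

def pairwise_cases(parameters):
    """Greedy all-pairs selection: repeatedly choose the candidate case
    that covers the most still-uncovered value pairs."""
    names = list(parameters)
    idx_pairs = list(combinations(range(len(names)), 2))
    # Every (parameter, value, parameter, value) pair we must cover.
    uncovered = {(i, va, j, vb)
                 for i, j in idx_pairs
                 for va, vb in product(parameters[names[i]], parameters[names[j]])}
    candidates = list(product(*(parameters[n] for n in names)))
    cases = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: sum((i, c[i], j, c[j]) in uncovered
                                     for i, j in idx_pairs))
        uncovered -= {(i, best[i], j, best[j]) for i, j in idx_pairs}
        cases.append(dict(zip(names, best)))
    return cases

# Hypothetical test parameters for illustration.
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Mac", "Linux"],
    "locale": ["en", "fr"],
}
cases = pairwise_cases(params)
print(len(cases), "cases instead of", len(list(product(*params.values()))))
```

For these parameters the exhaustive product is 12 cases, while all-pairs coverage needs only 6; the savings grow dramatically as parameters multiply, which is why selection strategies like this matter before exploration even begins.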
Variety and Focus Yield Consistently Useful Information
Exploratory Testing is particularly good at revealing vulnerabilities that no one thought to look for before. Because you use the feedback from each experiment to inform the next, you have the opportunity to pick up on subtle cues and allow your intuition to guide you in your search for bugs.
But because Exploratory Testing involves designing tests on the fly, there’s a risk of falling into a rut of executing just one or two types of tests (the hammer/nail problem) or of discovering information that’s far afield from what your stakeholders need to know. Focusing with charters, then using a variety of analysis techniques to approach the targeted area from multiple perspectives, helps ensure that your Exploratory Testing efforts consistently yield information that your stakeholders will value.
References

- Bach, James. “What Is Exploratory Testing?”
- Bach, Jonathan. “Session-Based Test Management”
- Kohl, Jonathan. “Exploratory Testing on Agile Teams”
- Kohl, Jonathan. “User Profiles and Exploratory Testing”
- Marick, Brian. “A Survey of Exploratory Testing”
- Tinkham, Andy and Kaner, Cem. “Exploring Exploratory Testing”