Exploratory Testing – chapter from a forthcoming book

Jim Shore and Shane Warden are writing The Art of Agile Development to be published by O’Reilly in 2007.

I was honored when they asked me to be a guest author and write a chapter on Exploratory Testing on XP projects. It’s ready for review. In true Agile style, promoting visibility and seeking feedback, Jim and Shane have made much of the book available for public review prior to publication. And I’d like to know what folks who read my blog think of the chapter.

You can find it here: http://jamesshore.com/Agile-Book/exploratory_testing.html

6 thoughts on “Exploratory Testing – chapter from a forthcoming book”

  1. I appreciate your four tools. If you have to give an overview of exploratory testing without getting into details, then that seems like a good approach to an overview.

    Here are some of the nits I had while reading:

    “Exploratory testing is a manual process.”

    Well, we can be exploratory with test automation, too. ET is an integrated approach to creating and performing tests. ET is not limited to any specific test, but rather about pursuing lines of inquiry. Hence, I can walk up to anyone’s automated test suite and practice my exploratory testing with it by modifying it, watching the results, modifying it again, etc.

    “Your exploratory testing is not a means of evaluating the software through exhaustive testing. Instead, you’re doing just enough to see if ‘done done’ stories are, in fact, bug free.”

    I don’t understand this passage. The second sentence seems to contradict the first. As a tester, I am unable to tell whether a product is bug free.

    “Some test design techniques are well understood, such as boundary testing.”

    I’m concerned that you are reinforcing a pretty simplistic idea of boundary testing. I’m never going to stop my boundary analysis with a test that hovers near the alleged boundary. To do so would be to invite confirmation bias. An exploratory approach to boundary testing is one that is capable of discovering boundaries, not merely confirming them (maybe there is an unknown boundary at 1000 in addition to the known boundary at 100). Furthermore, an exploratory approach is not focused only on trying values near the boundaries to see if the expected thing happens, but also on exploring for other things that may happen in association with the expected. Example: I pasted 2.4 million characters into a field, and got the expected result– until I thought to tab to the next field, at which point the program crashed. This expanded view makes boundary testing into a challenging intellectual problem– so much more than the simple descriptions of it that we keep hearing.

    “gleaned from experience and intuition about what causes software to break”

    Heuristics can be gleaned from harder sources than experience or intuition. For instance, I know that software systems are composed of many parts, and that any one programmer cannot know everything about all the parts. A programmer may put a limit of 4000 characters on a text field. This establishes a boundary. He may not know, however, that the GUI library itself has a buffer overflow problem at 200,000 characters. That means there is a higher priority boundary at 200,000 characters. This leads to the heuristic: don’t limit boundary testing to testing alleged boundaries by using values near those boundaries.

    Hence a heuristic may (and for me usually does) come from identifiable and defensible dynamics of technology. To assign them, instead, to experience or intuition makes it sound to me like you think there’s no analysis to be done and nothing particular to learn as we develop our heuristics.
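    A minimal sketch of this discovery-oriented approach, in Python. The system under test here is made up for illustration: `handle_input` has a documented boundary at 100 and a hidden, undocumented one at 1000 (both values are assumptions for the example, echoing the 100/1000 scenario above). The probe scans a range and reports every point where observed behavior changes, rather than only checking values near the boundary we were told about.

    ```python
    # Boundary *discovery* rather than boundary confirmation.
    # `handle_input` stands in for the system under test; its hidden
    # behavior change at 1000 is an assumption made up for this sketch.

    def handle_input(value: int) -> str:
        if value > 1000:      # undocumented boundary the spec never mentions
            return "error"
        if value > 100:       # the documented, "alleged" boundary
            return "truncated"
        return "ok"

    def discover_boundaries(probe, lo: int, hi: int) -> list[int]:
        """Scan [lo, hi] and report every point where the observed
        behavior changes -- known boundaries and unknown ones alike."""
        boundaries = []
        previous = probe(lo)
        for value in range(lo + 1, hi + 1):
            current = probe(value)
            if current != previous:
                boundaries.append(value)  # a behavior change we did not assume
                previous = current
        return boundaries

    print(discover_boundaries(handle_input, 0, 2000))  # -> [101, 1001]
    ```

    A confirmatory tester checking only values near 100 would never see the second boundary; the exhaustive scan is of course only feasible for cheap probes, but the same idea works with coarser steps (e.g. doubling the input size) when each probe is expensive.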

    “When the bugs are rampant, you may be tempted to hire a QA department to catch the bugs. This may provide a temporary fix, but it’s the first step down the slippery slope to a long, difficult battle for quality.”

    Hello, I’m a tester. I’m a skilled tester. There are quite a few of us. From our point of view, having untrained testers, such as programmers, take full responsibility for testing software is the slippery slope toward generally low standards of quality.

    I realize that you are contributing a chapter to a capital “A” Agile book, so I guess you have to toe the line. But exploratory testing is not an approach that was pioneered or developed by programmers testing their own stuff. It was developed, and is being developed, by a handful of dedicated testers who contribute to projects as testers. We don’t think it’s such a terrible thing to have an independent test team.

    — James

  2. There is something wrong with the introduction/positioning (of both the chapter and exploratory testing). You start by describing testing that usually happens “at the end” and why that is so wrong. Does exploratory testing (ET) address this issue? The examples (“interaction between” and “impacts performance”) in my mind map to what are classically called integration and system tests. You also point out that exploratory testing doesn’t need to be part of “this iteration”.
    It seems you are trying to say that ET fulfills something that is missing from TDD with regard to quality. I feel there are a few such somethings, e.g. what the book The Mythical Man-Month calls “conceptual integrity”, and professional tester skills/experience (heuristics).
    Moreover, ET validates the process while TDD validates the product. In classic methodologies, “testing” does both, which is a drawback poorly addressed by maintaining process documentation and reviews. It is not clear, however, what should be done if this validation turns up too many problems…

  3. Pretty neat. I have a couple of editorial comments that you should feel completely free to ignore:

    I think you can throw out your first 4 grafs and start with “XP teams have no separate QA department.” (I might replace “XP” with “Agile”) The ideas in those first grafs never reappear, and including them seems unnecessarily both combative and defensive, if you see what I mean. If you feel like you have to make the points about twisted reward systems and division among teams, weave it into the rest of the narrative.

    Of your four tools, consider presenting them in a different order. “Charters” is not the sexiest one. The most rock’n’roll of the four seems to be “Heuristics”, so something like “1–ETs are guided by heuristics, but they’re not loose cannons: 2– they need a Charter to tell them where to go.”

    I think Jill and Michael should find a big bug first (by analyzing risk or something), and then confirm that the abstractions in the design protected the rest of the code from exhibiting the bad behavior. *Some* exploratory tester in your vignettes should find a dang bug! 🙂

    Move the “Contraindications” section up the stack. I think it belongs immediately after the Wilma/Betty/Jeff scenario and before the Questions because it directly addresses the one and sets up the other.

    I particularly like “The flaw in this approach is in using exploratory testing as a means of regression testing” and “Only do exploratory testing when it is likely to uncover new information *and* (my emphasis) you are in a position to act on that information”. I think that those two points need to be shouted out. If you can sneak them in one or two more places, I wouldn’t be sad.

    Great job!

  4. I have seen exploratory testing work wonders for a testing team with some skill gaps testing a scientific application. I am in a similar situation, where I am instituting exploratory testing for a team testing some networking applications. I would like to hear from experts about how to start with exploratory testing in an organization that has done only scripted testing for decades.


Comments are closed.