Once upon a time, I worked on a project where the developer protested “SCOPE CREEP!” to every bug report I filed.
Sadly, the two of us built up a lot of animosity arguing over whether the bugs I found were bugs or enhancements. I reasoned that I was testing conditions that were likely to occur in the real world, and “not crashing” did not count as an enhancement. The programmer argued that he’d done what he’d been asked to do and that it was too late to add more work to his plate. “No one said anything about the software being able to handle corrupt data!” he snapped.
Much to the programmer’s chagrin, management tended to agree with my assessments. The programmer ended up doing a whole lot of rework, grousing the whole time.
I later realized that the programmer thought I was making up new requirements as I went along. Seriously. He thought my tests were unfair.
Of course, that’s not what I intended.
The way I saw it, my testing was revealing answers to questions no one had thought to ask before: What if this file is locked? What if that connection is broken? What if the data is corrupted? If I’d been involved in the project earlier, I would have asked the questions earlier. But this was a waterfallish project, and testing happened at the very end of the process.
I know I’m not alone in having had a project like that. Some situations are even more dysfunctional. “They yell at me when I find bugs,” lamented one tester, “and they yell at me for not finding bugs if a user finds a problem in the field. I can’t win.”
Can’t win. Can’t break even. Can’t quit. At least not without dire consequences, like unemployment. Sometimes testing feels like Ginsberg’s restated laws of thermodynamics. The only way out of the trap is to change the game.
Fortunately, testers have the opportunity to change the game when we’re part of the team from the beginning, as on Agile projects. We can take the same testing skills that enable us to find good, deep bugs at the end of a project and use them, at the start, to elicit specific acceptance criteria with good examples.
In fact, this is one of the most important ways people with testing expertise can help Agile teams.
But one stumbling block for some testers new to Agile is how, exactly, to do this.
First, you have to be able to imagine tests based on only the sketchiest idea of what the software under development is supposed to do.
Since this is the situation most testers work in, we’ve already honed that skill. We know how to take a hand-wavy set of statements like “users belong to groups; groups have permissions; permissions allow for creating, reading, updating, and/or deleting floozibitzes” and turn them into 193 test cases. So let’s assume that as a tester, you know how to identify potentially interesting conditions, actions, sequences, configurations, and such.
Now let’s explore how to leverage that skill at the beginning of a project, transforming tests into questions that will prompt stakeholders to discuss acceptance criteria in concrete terms.
First, remember the anatomy of a test. Tests generally involve some setup (“Log in as a user with read-only permissions”), one or more actions (“Double-click the floozibit record”), and one or more expected results (“Verify the Edit button does not appear”).
Let’s rephrase that as a sentence: “Given this setup, take some action, and verify the expected results.”
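That setup/action/expected-result anatomy can be sketched in code. This is a minimal, hypothetical illustration: `FloozibitApp` and `User` are toy stand-ins invented here, not part of any real application.

```python
# Toy stand-ins (hypothetical) to illustrate the anatomy of a test:
# setup, action, expected result.

class User:
    def __init__(self, permissions):
        self.permissions = set(permissions)

class FloozibitApp:
    """Hypothetical application under test."""
    def __init__(self, user):
        self.user = user

    def open_record(self, record_id):
        # In this toy model, the Edit button appears only for users
        # who hold the "update" permission.
        buttons = ["View"]
        if "update" in self.user.permissions:
            buttons.append("Edit")
        return buttons

def test_read_only_user_sees_no_edit_button():
    # Setup: log in as a user with read-only permissions.
    app = FloozibitApp(User(permissions=["read"]))
    # Action: open (double-click) the floozibit record.
    buttons = app.open_record(record_id=42)
    # Expected result: verify the Edit button does not appear.
    assert "Edit" not in buttons

test_read_only_user_sees_no_edit_button()
```

Each of the three comments maps directly onto the “given this setup, take some action, verify the expected results” sentence.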
With traditional tests, we know the expected results in advance. But when we’re still exploring requirements, we’re trying to determine the expected behavior. We only have our imagined setup and actions to work with. But we can get some pretty detailed information if we ask the question in the form: “If we have this setup and take these actions, what do you expect to happen?”
Let’s take a couple of examples. Where you might be tempted to design a test around a condition like corrupted data or NULL values, instead frame it as a question:
“How should the software respond if it encounters corrupted data?”
“What if the data in this field were NULL in the database?”
Of course, sometimes when we ask that question, we get an answer like “Gosh, I dunno.” That doesn’t help much. Fortunately, testers are also good at figuring out what a reasonable expected result might be based on all kinds of things like past versions of the software, comparable products, and past experience. We can suggest some possible expected results in our questions: “If we have this setup and take these actions, should the software do this or that?”
“If the software encounters corrupt data, is it better for it to attempt a repair or discard the data or something else?”
“If the data in this field is NULL in the database, should the software turn that into a zero-length string?”
Of course, just because we can imagine a test doesn’t automatically make it interesting. Just as you might offer potential real world use cases to make a bug report more compelling, you can offer use cases to make a condition more interesting when eliciting requirements:
“That could happen if the database were improperly restored, or with data migrated from a legacy database.”
“What if a user bookmarks a page for an item that is later deleted?”
Where you might define negative tests, instead ask about allowed/disallowed actions:
“Should a user with Delete Account permissions be able to delete the last Administrative Account?”
Traditional software development processes define the relationship between requirements and tests by insisting that tests be written from requirements. As a result, bad requirements lead to bad tests, or to a whole lot of arguing about what is and is not in scope.
Forget the arguing. Let’s focus on gaining alignment, and gaining it early. And as testers, we’re in a unique position to help the team do just that.
Anyone who is good at designing tests can use that skill to frame a wide variety of specific, thoughtful, and thought-provoking questions around expected behavior. And that means everything you know about testing can contribute to establishing a well-defined, specific, shared understanding of what the team is building.