Diluting the Tester Role

In a comment on my last post, Shrini asked:

“While hitting hard at ‘Jargon’ based software testing experts, you also appear to give the impression that ‘testing’ is ‘everyone’s job’ (as quality) and seem to dilute the importance and role of testing in software world – you might want to clarify. Don’t you think that you can hit at these jargon creators without diluting the role of testing?”

Yes, thank you Shrini. I appreciate your question, and I would like to clarify.

I do believe that testing is so crucial that it is everyone’s job.

And I also believe that software development teams need people who have taken the time to become really good at testing.

I believe that these two ideas are compatible. Let me ‘splain.

On more than one occasion I have worked with software development teams that were under such time pressure to write code that they felt they had no time available to test. They left the testing to a designated team of independent testers. The result in each case was code so bug-riddled that stabilizing it took months. During these long months of playing whack-a-bug, the testers and developers were constantly battling, management was perpetually screaming, and no one was happy. Predictably, even when the software did ship, it was prone to failure. In weekly meetings that followed the release, Tech Support let us all know that the quality was unacceptable and that the customers were cranky. Subsequent new development efforts were hampered by the need to patch critical bugs in the field. We were in technical debt up to our eyeballs.

(At this point, I’m guessing that at least one person I have never met in person is reading this and thinking “Wow! She worked here!” I probably didn’t; it’s an all-too-universal experience. Kinda like Dilbert.)

These experiences led me to write “Better Testing, Worse Quality,” a paper that explores the system effects by which improvements in testing can yield even more fragile software.

And then I started working with Extreme Programming teams and discovered what I’d been missing. I experienced how Test Driven Development and Continuous Integration and Collective Code Ownership and Pair Programming and Continuous Refactoring led to a solid code base. Project after project, I observed that the XP teams I was working with were achieving significantly better results than the code-and-fix teams of my past. Oh, the XP projects weren’t perfect. We still had bugs. But we didn’t play whack-a-bug for months on software that was supposed to be done-except-for-testing. We didn’t need stabilization phases. The software was always stable. It might not do everything yet, but what it did, it did well.

I also witnessed the power of Customer Acceptance Testing. I realized that the people responsible for defining the requirements are in the best position to determine whether or not the implementation matches their vision.

But at the same time, I recognized that I had something to offer the team. Compared to the professional programmers, my programming skills were meager. Compared to the product managers, my understanding of the product vision was superficial. But compared to any of the other team members, my understanding of where the risks and vulnerabilities were likely to hide was superb.

I used my skills to provide feedback to the team, to point out implementation bugs and requirements ambiguities and risks. I explored the implementation and reported what I found back to the team. I thought up tests no one else had thought of.

But I did all these things in collaboration with the team. We pooled our knowledge. And we shared responsibility for testing activities. Testing became an integral part of the software development process. We could no more separate the development and testing activities than we could have separated our hands from our bodies, leaving our hands to type on the keyboard while the rest of our bodies took a break.

Perhaps paradoxically, spreading responsibility for testing to the whole team increased the overall test effectiveness. The testing mindset became pervasive. The test effort wasn’t diluted; instead it grew and flourished.

(I could write bad prose with corny analogies involving a comparison between diluting a drop of colored dye in water and spreading seeds in the wind. I could point out that inanimate objects become weak or disintegrate when spread, while living things – like ideas and knowledge – grow. But I won’t. Oh drat. I just did. Um, never mind.)

So anyway…bad analogies aside…

I do believe that everyone is responsible for testing, even on teams that don’t claim to be doing Agile or XP. The developers test that the code does what they expect and intend, and that new changes don’t break existing expectations. The business stakeholders test that the implementation is what they had in mind. Testers – those who have studied testing long enough and hard enough to get good at it – bring specialized skills to the table to support the rest of the team.

Testers apply critical thinking skills and analytical abilities, coming up with new questions to ask the software. “What if a user does this after that?” we ask. “What if the system is in this state when that happens? What if the data looks like this? What if we configure it like that?”
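
To make those questions concrete, here’s a minimal sketch of what they might look like as executable checks. Everything in it is hypothetical and invented for illustration – the `Account` class, the `transfer` function, the chosen amounts – but it shows how a tester’s “what ifs” become tests:

```python
# A minimal sketch: turning tester "what if" questions into executable checks.
# The Account class and transfer function are hypothetical, invented purely
# for illustration; a real system would have its own API.

import pytest


class Account:
    def __init__(self, balance):
        self.balance = balance


def transfer(src, dst, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if src.balance < amount:
        raise ValueError("insufficient funds")
    src.balance -= amount
    dst.balance += amount


# "What if a user does this after that?" -- a second transfer after the
# first one drains the account.
def test_transfer_after_draining_account_fails():
    src, dst = Account(100), Account(0)
    transfer(src, dst, 100)
    with pytest.raises(ValueError):
        transfer(src, dst, 1)


# "What if the data looks like this?" -- zero, negative, and absurdly
# large amounts.
@pytest.mark.parametrize("amount", [0, -1, 10**12])
def test_suspicious_amounts_are_rejected(amount):
    src, dst = Account(100), Account(0)
    with pytest.raises(ValueError):
        transfer(src, dst, amount)
```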

But testers are most effective when they do this with, not for, the team.

Beware Obfuscating Jargon

For a variety of reasons (one of which might possibly have been procrastination) I found myself surfing an assortment of sites related to software testing certifications and exams yesterday.

I will skip over my own feelings on certification for the moment. What struck me was the sheer amount of jargon and memorization involved in the various certifications.

I’m imagining entire legions of people being trained to utter stuff like:

We baselined the behavior of the SUT using path sensitized conditions to ensure optimal statement and branch coverage along with extensive pseudo-random sequences and a sampling of pre-selected usage scenarios.

When what they really mean is:

We varied inputs and actions to exercise the software thoroughly and used it like real users for good measure.
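
In code, that plain-English version might look something like the sketch below – my own toy illustration, not anyone’s real test suite, with `parse_query` standing in for whatever the software under test actually does:

```python
# A minimal sketch of "we varied inputs and actions to exercise the software
# thoroughly and used it like real users": some seeded random input variation
# plus one scripted, realistic usage scenario. parse_query is a hypothetical
# stand-in for the real system under test.

import random
import string


def parse_query(text):
    # Toy implementation so the sketch runs; imagine the real thing here.
    return [word for word in text.split() if word]


def test_random_inputs_never_crash():
    rng = random.Random(42)  # fixed seed so any failure is reproducible
    for _ in range(1000):
        text = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 80)))
        parse_query(text)  # the only assertion here is "doesn't blow up"


def test_realistic_user_scenario():
    # A user searches, refines the search, then clears it -- like real usage.
    assert parse_query("cheap flights") == ["cheap", "flights"]
    assert parse_query("cheap flights to Lisbon") == [
        "cheap", "flights", "to", "Lisbon"]
    assert parse_query("") == []
```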

In my more paranoid moments – those moments when tinfoil hats seem like a splendid idea (and an ultra-cool fashion statement) – I wonder if such jargon-spewing isn’t the result of some industry-wide conspiracy intended to dissuade software managers from thinking they could DARE to test software without an army of highly trained certified software testing specialists.

Testing, quite frankly, should not be that hard. And if it is that hard, something is wrong. Something that more testing experts, and more independent testing, will not fix.

At its core, testing is about finding out if the software is OK. That it does everything it is supposed to and nothing that it’s not. That it will do no harm. That it will keep the information it has been entrusted with safe and secure. That it will support the business, providing greater benefit than cost. That it will provide accurate results to those who depend on it for information from which they will make decisions. That it will not fall over dead when it most needs to work.

In order to know all that about software, we have to test it. We test our assumptions about it. We check our ideas and ideals against the reality of implementation to see if anything comes up short. Testing tells us if we’ve built what we intended, and if our intentions were adequate.

We don’t need a complex vocabulary to do that. We need some techniques, yes. And tools. And we need the software to have been created in such a way that we can tell when something has gone wrong. (The software we’re testing should make testing easier rather than harder. This is what it means to be testable. Software that swallows errors, merrily claims all is well, and blithely continues with no indication anywhere in the UI or a log or a return code that something has gone terribly wrong is nigh on impossible to test. Make the software more testable, and the testing becomes easier. Funny how that works.)
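
To illustrate that last point, here is a contrived before-and-after sketch of my own – neither function comes from any particular system:

```python
# A contrived before/after sketch of testability. Both functions are invented
# for illustration. The first swallows errors and merrily claims all is well;
# the second surfaces them so that a test (or an operator) can notice.

import logging

logger = logging.getLogger(__name__)


def save_config_untestable(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError:
        pass  # Something went terribly wrong, and nobody will ever know.
    return "OK"  # Always claims success -- nigh on impossible to test.


def save_config_testable(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError:
        logger.error("failed to save config to %s", path)
        raise  # Let callers -- and tests -- see the failure.
    return "OK"
```

A test can point `save_config_testable` at an unwritable path and assert that it raises; there is no equivalent check anyone can write against the first version.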

But jargon doesn’t make for better testing. Understanding test design principles makes for better testing. And jargon alone does not convey that understanding. There may even be an inverse relationship between the amount of jargon in a given set of test documents and the actual usefulness of the test results. I’m not sure; it’s just a hypothesis.

In any case, it should not take a legion of experts who have studied software testing to the exclusion of all else to deliver an independent assessment of whether or not the software will stand up to the harsh usage of the real world. Rather, it takes the efforts of the whole team to assess the benefits and risks, and an executive team that is functional enough to make sound business decisions based on that information.

And while the level of jargon in the field might convince you otherwise, I believe that good software testing is actually quite simple. At every level of the software and at every stage of the project, we need to know what we expect, and we need to look for it. Then we need to imagine situations that might prove challenging to those expectations and try those too.

There are various techniques and heuristics that can help us imagine challenging conditions, sequences, inputs, configurations, etc., and people who have studied testing know those techniques and heuristics. People who have studied testing are very useful and every team ought to have at least one. Also, some people are just natural born testers, innately skilled at finding risks and weaknesses and ways to exercise the software thoroughly. Such people should be cherished. Furthermore, an outside perspective on software can be like a second opinion from a medical specialist or like an independent audit where external assurance is required.
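
As a taste of one such technique, here is a back-of-the-napkin sketch of pairwise (“all-pairs”) configuration coverage. The greedy algorithm and the parameters are mine, invented for illustration; the idea is that instead of testing every combination of options, we cover every pair of values at least once:

```python
# A minimal greedy sketch of pairwise ("all-pairs") test selection. The
# parameters below are invented for illustration. The idea: most configuration
# bugs involve the interaction of just two options, so covering every pair of
# values is a cheap approximation of covering the full cross product.

from itertools import combinations, product


def all_pairs(params):
    names = list(params)
    # Every (parameter, value, parameter, value) pair we still need to see.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    cases = []
    for case in product(*params.values()):
        pairs = {(i, case[i], j, case[j])
                 for i, j in combinations(range(len(names)), 2)}
        if pairs & uncovered:  # keep only cases that cover something new
            cases.append(dict(zip(names, case)))
            uncovered -= pairs
        if not uncovered:
            break
    return cases


if __name__ == "__main__":
    cases = all_pairs({
        "os": ["windows", "macos", "linux"],
        "browser": ["firefox", "chrome"],
        "locale": ["en", "de", "ja"],
    })
    print(f"{len(cases)} cases instead of the full 18:")
    for case in cases:
        print(case)
```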

But for Quality’s sake, don’t abdicate responsibility for all testing activities to an isolated, independent team of testing experts. Testing should be everyone’s responsibility.

So to be clear: I’m not calling for an end to the professional tester. I am saying that overly complex testing jargon makes it seem as though Real Testing is beyond the grasp of non-testers and therefore ought to be left entirely in the hands of the testing experts. And that notion does a disservice to everyone.

While I am on the topic, the notion that programmers can’t test is absurd. Any programmer who can understand CS concepts like closures or De Morgan’s laws is perfectly capable of understanding testing techniques like model-based test generation, all-pairs analysis, and code insertion attacks. Such programmers need to be shown any given testing technique at most once. Then they’re off and running, doing really interesting things with automating tests or generating data. I know because I have had the great pleasure of working with such developers in an environment where management actively encouraged developer testing. But I digress.
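
For instance, here’s roughly the kind of thing such a developer might knock together after seeing model-based test generation once. Everything here – the toy light switch and its model – is invented for illustration: a tiny state model drives randomly generated action sequences, and the implementation is checked against the model after every step:

```python
# A minimal model-based test generation sketch. The system under test (a toy
# light switch) and its model are both invented for illustration. The
# technique: random walks over a state model, comparing the implementation
# to the model after every generated action.

import random


class Light:
    """The 'implementation' under test."""

    def __init__(self):
        self.on = False

    def toggle(self):
        self.on = not self.on

    def off(self):
        self.on = False


# The model: the expected next state for each action.
MODEL = {
    "toggle": lambda on: not on,
    "off": lambda on: False,
}


def test_random_walks_match_model(runs=100, steps=50, seed=7):
    rng = random.Random(seed)
    for _ in range(runs):
        light, expected = Light(), False
        for _ in range(steps):
            action = rng.choice(list(MODEL))
            getattr(light, action)()
            expected = MODEL[action](expected)
            assert light.on == expected, f"diverged after {action}"


if __name__ == "__main__":
    test_random_walks_match_model()
    print("implementation agrees with the model on all generated walks")
```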

In short: everyone on the team has a vested interest in knowing whether or not the software is OK, and that means everyone on the team should be test obsessed, not just the designated testers. And testing activities are too crucial to the success of a software development effort to be left solely in the hands of “certified test experts.”

And my advice to anyone who looks at testing literature and feels intimidated by all the 25- and 50-cent words is to forget the jargon. Focus on the techniques and ideas behind the jargon. I’m sure you’ll find them well within your grasp.

Agile2008: Walking the Talk

Starting with XP/Agile Universe in 2003, I have been a participant, presenter, and/or committee member for various incarnations of what is now the Agile20XX conference series sponsored by the Agile Alliance. In fact, I was up to my eyeballs in Agile2007 for some months as the co-chair of the Tutorials track. (Those of you also involved in Agile2007 know that wasn’t my most stellar moment. I made more than my share of mistakes. But that’s not relevant just at the moment. My real point is that I know something about how the conferences were planned in the past.)

So here’s what absolutely amazes me about Agile2008. I’m not involved in planning the conference at all. I’m not on any committees; I have attended no conference calls; I’m not on any mailing lists. I’m just another person who is planning to attend. And yet I have a much better idea right now about how Agile2008 is shaping up than I did in any previous year when I was on the conference committee and theoretically had an inside view.

That’s because the Agile2008 conference committee has made some very important, and extremely cool, changes to the submission process this year. Changes that took some serious vision and guts. Specifically:

  • Anyone can browse – and express their opinions on – the submissions to date. You just have to make yourself a login.
  • Submitters can see feedback as it comes in and revise their proposals accordingly.
  • Potential submitters can see what has already been submitted – and what reactions those submissions are getting. So they have the opportunity to learn what representatives in the community are looking for in submissions.
  • The browsing and sorting features make it easy to find submissions with particular characteristics. That’s handy for reviewers, submitters, and those who are just plain curious.

This is a huge change from the previous system, in which track chairs assigned reviewers in command-and-control style and submitters received anonymous feedback only after all the decisions were made – too late to do anything about it.

The result is tremendous visibility, fast feedback, self-organization, and whole team (community) involvement. What a great example of an Agile group walking the talk! Moreover, I believe these changes will make Agile2008 an exceptional conference – one that is truly by the community, for the community.

Rock on.

I encourage everyone with an interest in the conference to check out the Agile2008 submission system, browse the submissions to date, and participate in the emerging conference in whatever way you feel is appropriate, whether that means submitting a proposal, reviewing proposals, or perhaps just basking in the increased transparency and community involvement the new system affords.

Oh, and for those of you who are thinking about submitting a proposal, the deadline for submissions is Feb 25, 2008.