Testing: Not a Phase, but a Way of Life

“Our testing leaves a lot to be desired.” The Project Manager shot an accusatory glance at the QA Manager. The QA Manager glared back. I was seated between them at the conference table and felt trapped in the middle.

“What makes you say that?” I asked, shifting my chair so I could see both managers at once.

“The software stays in test for too long. Our ship dates just slip and slip.” The PM shook his head. “Testing takes too long!”

“I see,” I replied. “And why is it that the software stays in test so long?”

The QA Manager cocked his head expectantly, apparently curious about the PM’s answer too.

“The testers find too many bugs!” complained the PM.

It was hard to keep a straight face in that meeting. The testers were doing a great job of finding problems with the product before it shipped, yet the PM was complaining bitterly. I privately wondered why the PM wasn’t complaining to the Development Manager about the existence of the bugs. I also wondered what political history in the company had led up to this meeting.

The organization used a traditional phased software development process. First the developers developed, then the testers tested. They’d called me in as a consultant to see if I could speed up testing. All I could tell them was that the more bugs there are in the product when it comes into test, the longer it will take to test, and the more bugs you will find during test. If the testers are finding the bugs, and if the bugs the testers are finding are real, then they’re probably doing a good job even if you don’t like how long it takes to test. If you want to speed up the Testing Phase, give the testers more stable software so there are fewer problems to find.

I then asked about developer testing. Did the developers do unit testing? What did that unit testing look like? The PM mumbled something I didn’t understand while the QA Manager rolled his eyes. Apparently I hit a nerve. The real problem in the organization was not that testing took too long, but that it only happened at the end.

Over the course of several meetings with QA and Development, I probed more to understand the state of developer testing. In one such meeting I discovered a fundamental misconception: team members believed that you couldn’t really start testing until you put all the parts of the system together. In essence, both the testers and the developers believed that unit testing was a waste of time.

“Give me an example,” I prompted.

The developer happily complied. “Take this bug, for example,” he explained. “It only happens when you create a child record, then delete it very quickly. That results in the parent object pointing to a NULL child, and it raises an unhandled exception. There’s no way we could have found that in unit testing,” he concluded smugly.

“What if you’d tested that method by passing in a NULL value for the child record? Would that have exposed the unhandled exception?” I countered.

“Oh. Yeah. I guess it would.”
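
To make the point concrete, here is a minimal sketch of the kind of unit test we were talking about, written in Java with JUnit 4. The ParentRecord and ChildRecord classes are hypothetical stand-ins for the team’s real code; the only thing they share with it is a parent that dereferences its child.

    import org.junit.Test;
    import static org.junit.Assert.assertNotNull;

    // Hypothetical stand-ins for the parent and child records in the story.
    class ChildRecord {
        String summary() { return "child"; }
    }

    class ParentRecord {
        private ChildRecord child;

        void attach(ChildRecord child) { this.child = child; }

        // Dereferences the child without a null check; this is the gap that
        // only surfaced late, when a child was created and quickly deleted.
        String describe() { return "parent of " + child.summary(); }
    }

    public class ParentRecordTest {

        // Feed the "deleted child" case directly into the method under test.
        @Test
        public void describeCopesWithMissingChild() {
            ParentRecord parent = new ParentRecord();
            parent.attach(null);

            // With the code above, this call throws NullPointerException,
            // exposing the unhandled case without ever assembling the system.
            assertNotNull(parent.describe());
        }
    }

A test like this fails immediately, and the conversation about what the method should do with a missing child happens while the code is still on the developer’s screen.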

In addition to having several misconceptions about testing, the developers also lacked general testing knowledge. The result was what I’d originally diagnosed: the code was under-tested during development, so there were lots of bugs left to find during the Testing phase. The thing that I hadn’t realized before these conversations was the degree to which the under-testing during development led to untestable code. It made sense in hindsight though. Early testing increases testability, making later testing easier. It’s a virtuous cycle.

Alas, this company was not an isolated case. I’ve seen similar situations in numerous organizations. It is one of the unfortunate side effects of the misguided edict “Separate QA and Dev.”

Agile teams tend to address this problem head on with integrated teams, testing from the beginning, and an emphasis on automated unit testing.

Even non-Agile teams can benefit from early testing. “Separate QA and Dev” doesn’t have to mean “Only QA Tests.” The former is misguided; the latter is just stupid. “Only QA Tests” ensures that feedback is maximally delayed. And it perpetuates the misconception that developers aren’t very good at testing, a ludicrous notion. Just because a developer’s innate testing skills may have atrophied as a result of years of the “Developers Develop; Testers Test” mindset doesn’t mean that developers can’t test. Developers are generally quite adept at identifying technical risks. They’re just out of practice at testing for them.

But practice makes perfect. Developers who get in the habit of testing usually find that they’re pretty good at code-level testing. They also often find that thinking about testing improves their code.

Treating testing as a phase thus does far more damage than just elongating feedback loops and making release schedules more unpredictable: it reduces overall code quality and undermines the team’s skills.

By contrast, testing throughout a project results in all sorts of goodness: shorter feedback loops, improved code quality, and more empowered teams. Testing activities can, and should, start on day 1 of a project, with or without designated QA personnel. These activities can include far more than just executing tests on finished code. We can test assumptions, test for ambiguity, test for understanding, and test completion criteria. If we have any kind of expectation at all, we can test for it in the project agreements, artifacts, AND code.

This is why I maintain that testing isn’t a phase, it’s a way of life. (It’s also why I’m test obsessed.)

8 Responses to Testing: Not a Phase, but a Way of Life

  1. Matthew Heusser November 30, 2006 at 10:16 am

    Great conclusion. Not exactly where I expected you to go, but good and surprising. Paul Graham would be pleased. 🙂 (http://www.paulgraham.com/essay.html)

    I was expecting you to say that if there are a lot of bugs, then it’s not a testing phase, it’s a _fixing_ phase. The problem isn’t that the testers are too “slow” – clearly, they can find the bugs fast enough. The problem is that the devs can’t fix them fast enough! 🙂

    Of course, that’s a confrontational approach that misses the root cause entirely – your solution jumped right to the root cause. Good for you!

  2. Pavan December 3, 2006 at 8:20 am

    1) The developers take 3 days to finish a product in very good condition (according to them); the testers take # hours to prove it completely faulty.
    2) The developers start again, develop for another 3 days, and finish.
    3) But the PM has to deliver the product by the weekend.
    4) Pressure falls on the testing people to complete the testing and finish it off.
    5) The testers let the product go out in an unfinished state.
    6) Issues come back from the client, asking how QA could miss such small issues.
    7) The testers say they didn’t have time to test.
    8) The developers say they are not responsible, since they delivered the product on time.

    So who is responsible for this, and when will this problem be solved?

  3. esh December 4, 2006 at 12:44 pm

    Hi Pavan,

    The problem you described is a frequent occurrence on traditional software projects, and is one of the reasons I prefer working with Agile (especially XP) teams. But to answer your question:

    Who is ultimately responsible? The management team that set up the project structure so that testing can only be executed after development is done. Everyone is doing their best given the situation. It’s not the PM’s fault that there are deadlines. It’s not development’s fault that there are bugs. And it’s not testing’s fault that they can’t find all the bugs at the very end of the development cycle. It isn’t anyone’s fault. But it is management’s responsibility.

    How to solve this problem? Start testing from the very beginning, not at the end. Make testing an integral part of the development effort. Work in short iterations with lots of feedback. And work as an integrated, cohesive team so it never devolves into an us v. them, “we did our part; they didn’t do theirs” discussion. An integrated team says: “The team as a whole is responsible and accountable for the outcome, and we’ll all work together to make it right.”

  4. Shrini Kulkarni December 5, 2006 at 7:59 am

    Nice Article Elisabeth —

    The root cause of all this is a poor understanding of what testing is, at every level: developer, PM, management, sales, and marketing. The day we understand what testing can do, all the people who now mock testing will start supporting it.

    On the part where you seem to suggest the “Agile” model of development and testing: I am of the opinion that in situations like offshored testing, or where an “Agile” setup is not available, it is difficult to drive these improvements over the traditional waterfall model.

    On the part where you talk about formalized unit testing: I think the major resistance or reluctance from developers comes from the fact that they are already overloaded in time-crunch situations, with pressure from all corners (PM, test, business, quality). The poor developer needs support from the “friendly tester” in the next cubicle. An effective way to get developers to do unit testing is to support them with time and with testing resources.

    Consultants like you, James Bach, Cem Kaner, and others are doing a great job of spreading awareness about testing among the “people who matter.” Keep writing.

    Shrini

  5. Chris Hansen March 9, 2007 at 4:48 am

    Hi Elisabeth:

    Catching up on your blog, so forgive the lateness of this comment. One of the real frustrations of testing is that the management assumes that the testers add quality at the very end of the process. No one else in the organisation takes any responsibility for the quality of the software; they just say, “It’ll get found in testing.” Then they take away time at the end of the process, giving the testers even less time to find bugs.

    I often use the analogy of testers having large boxes of fairy dust to spread over the software, adding the quality that wasn’t there when the testers got the product to test. Doesn’t work that way in real life, unfortunately.

    When I’m asked by a project manager why the testers are finding so many bugs, I answer: “Actually, there’s a very effective method of reducing the number of bugs found by the testers.” New PMs always fall into the trap and eagerly say “Hey, that’s great! Tell me how!” I answer: “We just stop testing and release the product as is.” Those with a sense of humour laugh weakly. The majority are outraged.

  6. Sandeep Anand October 24, 2007 at 8:24 am

    Do testers do unit testing by any chance? In Agile, do testers do unit testing?

  7. Elisabeth Hendrickson October 24, 2007 at 8:34 am

    Hi Sandeep,

    Unit testing should be done by the developers as they are developing the code, not by independent testers as a separate activity.

    This is particularly true in Extreme Programming where the unit tests are a side effect of doing Test Driven Development (TDD), and thus are intrinsic to the coding effort. It wouldn’t make any more sense to separate the unit testing effort in XP than it would to have someone in charge of adding the semicolons at the end of lines of Java code.

    Out of curiosity, why do you ask?

  8. Radhakrishnan January 14, 2009 at 4:36 am

    Hi Elisabeth,

    Great post. I too have experienced these kinds of situations.

    Just a doubt: if a tester pairs with the developer while he is writing the unit tests, will they cover more in the unit tests than two developers pairing on the tests would?

    Is this a good way to go?

    Elisabeth responds:

    Maybe. Maybe not. When I joined my first Extreme Programming team I thought that one of the ways I could contribute would be by increasing the unit test coverage. After all, I reasoned, I knew a lot more about testing than the developers on the team.

    I quickly discovered that while I might know more about testing in general, I could not contribute to unit testing in a meaningful way, even when pairing with a developer. Oh, I could pair on writing code. And my test-centric perspective might have changed how we, as a pair, wrote the code. But I don’t think we did any more, or any better, unit testing than two test-infected developers would have.

    The things that get in the way of better unit testing aren’t usually a lack of testing skill on the part of the developer. Rather, it’s impediments like untestable legacy code that really prevent good test coverage. And frankly, awesome dev skills (understanding how to extract interfaces, write good mocks, and tease out inappropriate interdependencies between classes) are more important than testing skills if you’re trying to retrofit unit tests onto an untested, untestable legacy code base.
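
    To illustrate the kind of dev skill I mean, here is a minimal Java sketch of the extract-an-interface-and-inject move, using made-up OrderService and OrderStore names: once the dependency hides behind an interface, a hand-rolled fake makes the logic unit-testable without a database.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        // Step 1: extract an interface for the dependency the legacy class
        // used to reach directly (say, a concrete database class).
        interface OrderStore {
            double amountFor(String orderId);
        }

        // Step 2: depend on the interface, injected through the constructor,
        // instead of constructing the concrete class inside the method.
        class OrderService {
            private final OrderStore store;

            OrderService(OrderStore store) { this.store = store; }

            double totalWithTax(String orderId) {
                return store.amountFor(orderId) * 1.10; // flat 10% tax, purely illustrative
            }
        }

        public class OrderServiceTest {

            // Step 3: a hand-rolled fake stands in for the real store, so the
            // calculation can be exercised without touching a database.
            @Test
            public void addsTaxToStoredAmount() {
                OrderStore fakeStore = orderId -> 100.0;
                OrderService service = new OrderService(fakeStore);

                assertEquals(110.0, service.totalWithTax("any-id"), 0.001);
            }
        }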