When I took my first job in QA many years ago, I thought that my role was to ensure quality.
I was dead wrong.
I figured out how wrong I was pretty fast. A developer stonewalled on fixing a bug until it was simply too late to get the fix into the release. I fought for that bug fix, and fought hard. In the end, I learned that I could lobby all I wanted, but getting bugs fixed was outside my scope of control. Quality was barely within my sphere of influence. My title may have been “QA,” but my real role was testing. I’ve since discovered that’s normal: many (most?) organizations use the terms “QA” and “Test” interchangeably.
My new insight was the classic one: testing can tell us about the absence of quality, but cannot ensure its presence. I still think that’s true, but that insight doesn’t guide what or how I should test. It fails to inspire me. It’s accurate, but not helpful.
My next realization was that testing is an information service. Testers provide information that decision makers can use to mitigate risk and make better decisions about software projects. This insight explains why testers should not be gatekeepers. We provide information, not judgment. We identify and explain risks. We act as advisors. We don’t make the news; we report it. And we shouldn’t ever accept the role of quality police.
Again, that insight served me well for a time. And I still think it’s true. It influences my actions. I recognize that good testing maximizes the amount of information produced given the time available. As a tester, I seek feedback on how much stakeholders value the information I am providing. I focus on providing accurate and relevant information in a timely manner. I do my best to make sure the information I provide is actionable.
But that “information service” view of testing doesn’t say enough about the people I serve. They’re there implicitly, but I want to make the relationship between the service I provide and the people I serve explicit.
That realization led me to fine-tune my view. I now see testing as an information service that answers questions for project stakeholders. Or, bumper-sticker style: Project stakeholders have questions; testers have answers.
Sometimes testers suggest questions the stakeholders should be asking, like “Do you want to know what will happen to response time when the system is under load?” or “What if we can cause this system to corrupt or lose data?”
And we can help our stakeholders ask better questions. If our stakeholders say they want to know “Does it work?” we can suggest reframes: “By ‘works,’ do you mean only under ideal circumstances, or are you concerned about how well the system handles error conditions?”
Any test we design, execute, or automate should answer a question that is interesting to our stakeholders.
So what questions do your tests answer? And what questions should your stakeholders be asking that they aren’t thinking about?