“Our testing leaves a lot to be desired.” The Project Manager shot an accusatory glance at the QA Manager. The QA Manager glared back. I was seated between them at the conference table and felt trapped in the middle.
“What makes you say that?” I asked, shifting my chair so I could see both managers at once.
“The software stays in test for too long. Our ship dates just slip and slip.” The PM shook his head. “Testing takes too long!”
“I see,” I replied. “And why is it that the software stays in test so long?”
The QA Manager cocked his head expectantly, apparently curious about the PM’s answer too.
“The testers find too many bugs!” complained the PM.
It was hard to keep a straight face in that meeting. The testers were doing a great job of finding problems with the product before it shipped, yet the PM was complaining bitterly. I privately wondered why the PM wasn’t complaining to the Development Manager about the existence of the bugs. I also wondered what political history in the company had led up to this meeting.
The organization used a traditional phased software development process. First the developers developed, then the testers tested. They’d called me in as a consultant to see if I could speed up testing. All I could tell them was that the more bugs there are in the product when it comes into test, the longer it’s going to take to test it, and the more bugs you’re going to find during test. If the testers are finding the bugs, and if the bugs the testers are finding are real, then they’re probably doing a good job even if you don’t like how long it takes to test. If you want to speed up the Testing Phase, give the testers more stable software so there are fewer problems to find.
I then asked about developer testing. Did the developers do unit testing? What did that unit testing look like? The PM mumbled something I didn’t understand while the QA Manager rolled his eyes. Apparently I hit a nerve. The real problem in the organization was not that testing took too long, but that it only happened at the end.
Over the course of several meetings with QA and Development, I probed more to understand the state of developer testing. In one such meeting I discovered a fundamental misconception: team members believed that you couldn’t really start testing until you put all the parts of the system together. In essence, both the testers and the developers believed that unit testing was a waste of time.
“Give me an example,” I prompted.
The developer happily complied. “Take this bug, for example,” he explained. “It only happens when you create a child record, then delete it very quickly. That results in the parent object pointing to a NULL child, and it raises an unhandled exception. There’s no way we could have found that in unit testing,” he concluded smugly.
“What if you’d tested that method by passing in a NULL value for the child record? Would that have exposed the unhandled exception?” I countered.
“Oh. Yeah. I guess it would.”
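The story doesn’t show the actual code, but the kind of unit test the conversation suggests is worth making concrete. Here’s a hypothetical Python sketch; the names `Parent`, `attach_child`, and `child_name` are invented for illustration. The point is that you don’t need to reproduce the create-then-delete race to expose the defect; passing NULL (here, `None`) directly to the method does it:

```python
import unittest


class Parent:
    """Minimal stand-in for the parent record in the story."""

    def __init__(self):
        self.child = None

    def attach_child(self, child):
        self.child = child

    def child_name(self):
        # Bug: assumes a child is always attached. If the child has been
        # deleted (self.child is None), this raises AttributeError --
        # the "unhandled exception" the developer described.
        return self.child.name


class ParentTest(unittest.TestCase):
    def test_child_name_with_null_child(self):
        # No integration environment, no timing games: just pass in the
        # NULL value directly and pin down the expected failure.
        parent = Parent()
        parent.attach_child(None)
        with self.assertRaises(AttributeError):
            parent.child_name()


# Run the test programmatically rather than via unittest.main().
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParentTest)
)
```

Once a test like this documents the failure, the fix (handle the missing child explicitly) can be verified the same way, in milliseconds, without ever standing up the full system.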
In addition to having several misconceptions about testing, the developers also lacked general testing knowledge. The result was what I’d originally diagnosed: the code was under-tested during development, so there were lots of bugs left to find during the Testing Phase. The thing that I hadn’t realized before these conversations was the degree to which the under-testing during development led to untestable code. It made sense in hindsight though. Early testing increases testability, making later testing easier. It’s a virtuous cycle.
Alas, this company was not an isolated case. I’ve seen similar situations in numerous organizations. It is one of the unfortunate side effects of the misguided edict “Separate QA and Dev.”
Agile teams tend to address this problem head on with integrated teams, testing from the beginning, and an emphasis on automated unit testing.
Even non-Agile teams can benefit from early testing. “Separate QA and Dev” doesn’t have to mean “Only QA Tests.” The former is misguided; the latter is just stupid. “Only QA Tests” ensures that feedback is maximally delayed. And it perpetuates the misconception that developers aren’t very good at testing, a ludicrous notion. Just because a developer’s innate testing skills may have atrophied as a result of years of the “Developers Develop; Testers Test” mindset doesn’t mean that developers can’t test. Developers are generally quite adept at identifying technical risks. They’re just out of practice at testing for them.
But practice makes perfect. Developers who get in the habit of testing usually find that they’re pretty good at code-level testing. They also often find that thinking about testing improves their code.
Treating testing as a phase thus does far more damage than just elongating feedback loops and making release schedules more unpredictable: it reduces overall code quality and undermines the team’s skills.
By contrast, testing throughout a project results in all sorts of goodness: shorter feedback loops, improved code quality, and more empowered teams. Testing activities can, and should, start on day 1 of a project, with or without designated QA personnel. These activities can include far more than just executing tests on finished code. We can test assumptions, test for ambiguity, test for understanding, and test completion criteria. If we have any kind of expectation at all, we can test for it in the project agreements, artifacts, AND code.
This is why I maintain that testing isn’t a phase, it’s a way of life. (It’s also why I’m test obsessed.)