If you work in an Agile organization and are using a heavyweight, specialized tool for test management, I have an important message for you:
Stop. Seriously. Just stop. It’s getting in the way.
If you are accustomed to heavyweight test management solutions, you might not realize the extent to which a test management tool is more of an impediment than an aid to agility. But for Agile teams, it is. Always. Without exception.
I don’t make such claims lightly and I don’t expect you to accept my claims at face value. So let me explain.
The Agile Alternative to Test Management
The things you need to manage the test effort in an Agile context are whatever you are already using for the Backlog, the Source Control Management (SCM) system, the Continuous Integration (CI) system, and your automated regression tests.
That’s it. You don’t need any other tools or tracking mechanisms.
Any test-specific repository will increase duplication and add unnecessary overhead to keep the duplicate data in sync across multiple repositories. It will also probably necessitate creating and managing cumbersome metadata, like traceability matrices, to tie all the repositories together.
All that overhead comes at a high cost and adds absolutely no value beyond what SCM, CI, and the Backlog already provide.
But, But, But…
I’ve heard any number of objections to the notion that Agile teams don’t need specialized test management systems. I’ll tackle the objections I hear most often here:
But Where Do the Tests Live?
Persistent test-related artifacts go in one of two places:
- High-level acceptance criteria, test ideas, and Exploratory Testing charters belong in the Backlog with the associated Story.
- Technical artifacts including test automation and manual regression test scripts (if any) belong in the Source Control System versioned with the associated code.
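As a concrete illustration, a repository might colocate the test artifacts with the code they exercise. The layout below is a hypothetical example, not a prescription:

```
my-product/
├── src/
│   └── checkout/
│       └── pricing.py
├── tests/
│   ├── acceptance/          # automated acceptance tests, versioned with the code
│   │   └── test_pricing.py
│   └── regression/
│       └── manual/          # manual regression scripts (if any)
│           └── checkout.md
└── ci/
    └── pipeline.yml         # CI runs the automated suites on every commit
```

Because the tests live in the same repository as the code, a commit that changes behavior and the tests that verify it travel together; there is nothing to keep in sync.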
And Where Do We Capture the Testing Estimates?
In Agile, we ultimately care about Done Stories. Coded but not Tested means Not Done. Thus the test effort has to be estimated as part of the overall Story implementation effort if we are to have anything even remotely approaching accurate estimates. So we don’t estimate the test effort separately, and that means we don’t need a separate place to put test estimates.
How Do I Prioritize Tests?
Agile teams work from a prioritized backlog. Instead of prioritizing tests, they prioritize Stories. And Stories are either Done or not. Given that context, it does not make sense to talk about prioritizing the tests in isolation.
Hello, I Live in the Real World. There is Never Enough Time to Test. How Do I Prioritize Tests Given Time Pressure?
If the Story is important enough to code, it’s important enough to test. Period. If you’re working in an Agile context it is absolutely critical that everyone on the team understands this.
But Testing is Never Done. Seriously, How Do I Prioritize What To Test?
This isn’t really a test management problem. This is a requirements, quality, and testing problem that test management solutions offer the illusion of addressing.
The answer isn’t to waste time mucking about in a test management tool attempting to manage the effort, control the process, or prioritize tests. Every minute we spend mucking about in a test management tool is a minute we’re not spending on understanding the real state of the emerging system in development.
The answer instead is to invest the time in activities that contribute directly to moving the project forward: understanding the Product Owner’s expectations; capturing those expectations in automated acceptance tests; and using time-boxed Exploratory Testing sessions to reveal risks and vulnerabilities.
What about the Test Reports?
Traditional test management systems provide all kinds of reports: pass/fail statistics, actual vs. estimated execution times, planned vs. executed tests, etc. Much of this information is irrelevant in an Agile context.
The CI system provides the information that remains relevant: the automated test execution results. And those results should be 100% Green (passed) most of the time.
What about Historical Test Results Data?
Most teams find that the current CI reports are more interesting than the historical results. If the CI build goes Red for any reason, Agile teams stop and fix it. Thus Agile teams don’t see the same kind of progression of pass/fail ratios that traditional teams see during a sync-and-stabilize phase. And that means historical trends usually are not all that interesting.
However, if the team really wants to keep historic test execution results (or are compelled to do so as a matter of regulatory compliance), the test results can be stored in the source control system with the code.
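If you do go that route, something as small as a script that copies the CI-generated report into a dated, commit-ready directory is enough. A minimal sketch in Python; the report path and directory names are illustrative, not from any particular CI tool:

```python
"""Archive a CI-generated test report so it can be committed to SCM
alongside the code. Paths and names here are hypothetical; adapt them
to your own build."""
import shutil
from datetime import date
from pathlib import Path


def archive_results(report: str, results_dir: str = "test-results") -> Path:
    """Copy the report into a dated folder, e.g. test-results/2024-06-01/."""
    dest = Path(results_dir) / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / Path(report).name
    shutil.copy2(report, target)
    return target


# After a CI run: archive_results("build/reports/junit.xml"),
# then `git add test-results` and commit with the build.
```

Because the results land in source control next to the code they describe, an auditor can tie any result to the exact revision that produced it.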
Speaking of Regulatory Compliance, How Can We Be in Compliance without a Test Management System?
If your context involves FDA, SOX, ISO, or just internal audit compliance, then you probably live in a world where:
- If it wasn’t documented, it didn’t happen
- We say what we do and do what we say
- Test repeatability is essential
In that context, specialized test management solutions may be the de facto standard, but they’re not the best answer. If I’m working on a system where we have to be clear, concrete, and explicit about requirements, tests, and execution results, then I would much rather do Acceptance Test Driven Development (ATDD). ATDD provides the added value of executable requirements: instead of the tests and requirements just saying what the system should do, they can be executed to demonstrate that it does.
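For instance, an acceptance criterion like “orders of $50 or more ship free” can live as an executable test rather than a prose requirement. A minimal sketch in pytest style; the Story, the threshold, and the `shipping_cost` function are all hypothetical:

```python
"""Executable requirement sketch: the acceptance criteria for a
hypothetical free-shipping Story, written as tests that both state
the requirement and verify the implementation."""


def shipping_cost(order_total: float) -> float:
    """Hypothetical implementation under test."""
    return 0.0 if order_total >= 50.0 else 5.0


# Acceptance criteria from the (hypothetical) Story:
# "Orders of $50 or more ship free; smaller orders pay a flat $5."
def test_orders_at_or_over_threshold_ship_free():
    assert shipping_cost(50.00) == 0.0


def test_orders_under_threshold_pay_flat_rate():
    assert shipping_cost(49.99) == 5.0
```

Run under CI on every commit, tests like these are the requirement, the test, and the execution record in one artifact.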
Certainly, doing ATDD requires effort. But so does maintaining a separate test management system and all the corresponding traceability matrices and overhead documentation.
Our Management Requires Us to Use a Specialized Test Management System. Now What?
Send them the URL to this post. Ask them to read it. Then ask them what additional value they’re getting out of a test management system that they wouldn’t get from leveraging SCM, CI, the Backlog, and the automated regression tests.
So, have I convinced you? If not, please tell me why in the comments…