Specialized Test Management Systems are an Agile Impediment

If you work in an Agile organization and are using a heavyweight specialized tool for test management, I have an important message for you:

Stop. Seriously. Just stop. It’s getting in the way.

If you are accustomed to heavyweight test management solutions, you might not realize the extent to which a test management tool is more of an impediment than an aid to agility. But for Agile teams, it is. Always. Without exception.

I don’t make such claims lightly and I don’t expect you to accept my claims at face value. So let me explain.

The Agile Alternative to Test Management

The things you need to manage the test effort in an Agile context are whatever you are already using for the: Backlog; Source Control Management (SCM) System; Continuous Integration (CI) System; and Automated Regression Tests.

That’s it. You don’t need any other tools or tracking mechanisms.

Any test-specific repository will increase duplication and add unnecessary overhead to keep the duplicate data in sync across multiple repositories. It will also probably necessitate creating and managing cumbersome metadata, like traceability matrices, to tie all the repositories together.

All that overhead comes at a high cost and adds absolutely no value beyond what SCM, CI, & the Backlog already provide.

But, But, But…

I’ve heard any number of objections to the notion that Agile teams don’t need specialized test management systems. I’ll tackle the objections I hear most often here:

But Where Do the Tests Live?
Persistent test-related artifacts go in one of two places:

  • High-level acceptance criteria, test ideas, and Exploratory Testing charters belong in the Backlog with the associated Story.
  • Technical artifacts including test automation and manual regression test scripts (if any) belong in the Source Control System versioned with the associated code.
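
As a sketch of that second point, an automated test simply lives in the repository next to the code it exercises. The file name, story, and pricing rule below are invented for illustration:

```python
# tests/test_checkout_discount.py -- versioned beside the code it exercises.
# Hypothetical story: "Gold members get 10% off orders over $100."

def apply_discount(total, member_level):
    """Production code would normally live in e.g. src/pricing.py;
    it is inlined here so the sketch is self-contained."""
    if member_level == "gold" and total > 100:
        return round(total * 0.90, 2)
    return total

def test_gold_member_gets_discount_over_threshold():
    assert apply_discount(150.00, "gold") == 135.00

def test_no_discount_below_threshold():
    assert apply_discount(90.00, "gold") == 90.00
```

Because the test and the code share one repository, a single commit (and a single branch or tag) captures both, and there is nothing to keep in sync.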

And Where Do We Capture the Testing Estimates?
In Agile, we ultimately care about Done Stories. Coded but not Tested means Not Done. Thus the test effort has to be estimated as part of the overall Story implementation effort if we are to have anything even remotely approaching accurate estimates. So we don’t estimate the test effort separately, and that means we don’t need a separate place to put test estimates.

How Do I Prioritize Tests?
Agile teams work from a prioritized backlog. Instead of prioritizing tests, they prioritize Stories. And Stories are either Done or not. Given that context, it does not make sense to talk about prioritizing the tests in isolation.

Hello, I Live in the Real World. There is Never Enough Time to Test. How Do I Prioritize Tests Given Time Pressure?
If the Story is important enough to code, it’s important enough to test. Period. If you’re working in an Agile context it is absolutely critical that everyone on the team understands this.

But Testing is Never Done. Seriously, How Do I Prioritize What To Test?
This isn’t really a test management problem. This is a requirements, quality, and testing problem that test management solutions offer the illusion of addressing.

The answer isn’t to waste time mucking about in a test management tool attempting to manage the effort, control the process, or prioritize tests. Every minute we spend mucking about in a test management tool is a minute we’re not spending on understanding the real state of the emerging system in development.

The answer instead is to invest the time in activities that contribute directly to moving the project forward: understanding the Product Owner’s expectations; capturing those expectations in automated acceptance tests; and using time-boxed Exploratory Testing sessions to reveal risks and vulnerabilities.
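
A minimal sketch of what "capturing those expectations in automated acceptance tests" can look like, using an invented shopping-cart story (all names hypothetical): the Given/When/Then structure mirrors the Product Owner's stated expectation, so the test doubles as the requirement.

```python
# Hypothetical acceptance test: readable structure first, automation second.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total_reflects_added_items():
    # Given an empty cart
    cart = Cart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 2.00)
    # Then the total is the sum of their prices
    assert cart.total() == 14.50
```

When such a test goes red, the team learns immediately that the emerging system has drifted from the Product Owner's expectation, which is exactly the information a status report in a separate tool can only approximate.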

What about the Test Reports?
Traditional test management systems provide all kinds of reports: pass/fail statistics, execution time actuals v. estimated, planned v. executed tests, etc. Much of this information is irrelevant in an Agile context.

The CI system provides the information that remains relevant: the automated test execution results. And those results should be 100% Green (passed) most of the time.

What about Historical Test Results Data?
Most teams find that the current CI reports are more interesting than the historic results. If the CI build goes Red for any reason, Agile teams stop and fix it. Thus Agile teams don’t have the same kind of progression of pass/fail ratios that traditional teams see during a synch and stabilize phase. And that means historic trends usually are not all that interesting.

However, if the team really wants to keep historic test execution results (or are compelled to do so as a matter of regulatory compliance), the test results can be stored in the source control system with the code.
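
One hedged sketch of how that storage might work (the file layout and field names are invented; the actual commit step is left to the CI job): archive each run's summary as a dated file inside the repository, so the history is versioned right alongside the code.

```python
# Hypothetical sketch: persist one timestamped JSON record per test run
# under test-results/ in the working copy. A CI job would then commit the
# directory, giving a versioned audit trail without a separate repository.
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_results(results, results_dir="test-results"):
    """Write one timestamped record per run; return the new file's path."""
    out = Path(results_dir)
    out.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = out / f"run-{stamp}.json"
    path.write_text(json.dumps(results, indent=2))
    return path

# Example: record one run's summary (numbers invented).
archived = archive_results({"passed": 412, "failed": 0, "commit": "abc1234"})
```

Because the record is committed with the code, checking out any historical revision recovers both the tests and their results at that point in time.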

Speaking of Regulatory Compliance, How Can We Be in Compliance without a Test Management System?
If your context involves FDA, SOX, ISO, or just internal audit compliance, then you probably live in a world where:

  • If it wasn’t documented, it didn’t happen
  • We say what we do and do what we say
  • Test repeatability is essential

In that context, specialized test management solutions may be the de facto standard, but they’re not the best answer. If I’m working on a system where we have to be clear, concrete, and explicit about requirements, tests, and execution results, then I would much rather do Acceptance Test Driven Development. ATDD provides the added value of executable requirements. Instead of the tests and requirements just saying what the system should do, they can be executed to demonstrate that it does.

Certainly, doing ATDD requires effort. But so does maintaining a separate test management system and all the corresponding traceability matrices and overhead documentation.

Our Management Requires Us to Use a Specialized Test Management System. Now What?
Send them the URL to this post. Ask them to read it. Then ask them what additional value they’re getting out of a test management system that they wouldn’t get from leveraging SCM, CI, the Backlog, and the automated regression tests.

So, have I convinced you? If not, please tell me why in the comments…


33 Responses to Specialized Test Management Systems are an Agile Impediment

  1. Lisa Crispin October 6, 2009 at 3:25 pm #

    Great post, this needed to be said, and it’s such a simple message. I’ll be quoting you next time someone asks me about test management.

    One thing my teams have done differently than what you suggest is to keep the high level tests and exploratory test charters on a wiki, not in the product backlog, so they are easier to keep up to date. As much as possible, we put these in with the automated FitNesse tests.

    For example, my current team writes the high level tests in a BDD style on the FitNesse wiki. When we automate those tests, we put the automated tests collapsed under the given/when/then statements. Of course, we might automate extra test cases that weren’t in the high level tests; we just add the BDD language for those. We make notes about what we are going to test with ET.

    Some people grumble because we have to turn FitNesse’s own version control off in order to manage the tests with SVN, but I much prefer SVN’s version control, personally!

  2. Fred October 6, 2009 at 3:40 pm #

    Regulatory compliance is definitely one reason we’ve invested in a test management system (TestLink). We’re working with a financial application requiring traceability and audit readiness.

    Some other reasons organizations might consider:
    - Assuming your tests are difficult to automate (for X or Y reason) and you would like to outsource or insource at some stage (to add capacity or put your software in support mode), it might be good to have some amount of documentation describing your tests for knowledge transfer/sharing.
    - If you’re working across distributed geographies, having a test management system might help communication between teams. User stories might be defined by a product owner located in a different place than the dev and test team (it’s not unusual within large agile teams). The Product Owner can give feedback on your testing through your test management system.

  3. Dave Nicolette October 6, 2009 at 4:21 pm #

    Elizabeth,

    Great summary! I’m bookmarking this. Something tells me I’ll need it.

    Fred,

    Seems like you don’t quite get the message, here.

    1. Outsourcing testing reduces your capacity. It’s a fallacy to assume the opposite.

    2. If your tests are difficult to automate you should try re-thinking the tests and/or remediating the code to make it more testable.

    3. With agile methods and tools, the tests /are/ the documentation of the tests. Read Lisa’s comment. Think about trying BDD style tests for high-level (aka acceptance or feature-level or some such verbiage) using tools like FitNesse, Cucumber, or Concordion. The tests themselves are readable by humans, including non-technical ones, and are also executable.

    4. The more complicated tools you have, the harder it is for geographically distributed teams to communicate. It’s a fallacy to assume the opposite.

    5. Traceability and audit readiness can be supported by the automated reporting features of any decent continuous integration server.

    I would have thought these things would be obvious by now, after all these years of agile work in the industry. Silly me.

    Cheers,
    Dave

  4. Fred October 6, 2009 at 4:34 pm #

    Dave,

    #1: I won’t go into the debate over outsourcing, but there are still some reasonable reasons to outsource testing (the sunset of a legacy application is one). It does apply completely to insourcing though.

    #4: TestLink is a very simple application and is helping us communicate across our distributed team (200+ devs, 70 testers, all part of the company, no outsourcing). I don’t see how it is considered a complicated tool.

    #5: Try an FDA audit and let me know if the auditor will settle for the reporting features of continuous integration … I don’t think so.

    Considering test management systems useless is a bit extreme imho, and they do bring some amount of value, especially for large distributed teams. There is not one good way to run testing within an agile organization.

  5. Sam Chen October 6, 2009 at 9:32 pm #

    Good post. However, when you say test estimation should be part of the story implementation effort, I think that is very hard to achieve. Especially when you have multiple vendors and third-party tools involved in the project, or multiple teams located in different places. So even if a developer finishes the code, you may have to wait a few days for a small component to be integrated before you have a working environment to test. And sometimes it is very hard to get an exact date for completion of integration; the only thing you can do is to keep chasing it up. Therefore it is probably a good idea not to estimate test effort as part of story effort in such a case at all.

  6. Matthew Farwell October 6, 2009 at 11:27 pm #

    Good article. However, could you please give examples of what a heavyweight Test Management System is? Just so we know what we’re talking about.

    Elisabeth responds: in my book, heavyweight Test Management Systems have the following characteristics: they store test cases in a separate specialized repository outside the source control system, manage test suites, and track and report test execution results independent of CI. Quality Center is the most iconic of these, but by my definition, any specialized Test Management System (and there are many) is a heavyweight impediment. Lisa’s solution of using a Wiki is about the only example of using a separate system for capturing tests that I’ve seen work well in an Agile context, and note that her group still versions the Wiki in the source control system with the code. So it’s not really separate.

    Overall, I agree with the thrust of your post, but I’m not sure that your proposed scheme will work in all cases. For instance, where for security reasons the testers do not have access to the same SCM system. I’m working at a company like that right now.

    How would you deal with coverage? How do I know what has been tested up until now, and what remains to be tested (or not if there is no time)?

    Elisabeth responds: if tracking functional test coverage is really an issue, the Stories are probably too large or the tests are not sufficiently automated.

    One thing we’re doing at the minute is generating written test cases from the automated tests (which are expressed in a DSL). We have a perl script which reads the tests and produces a test case document. We use this to avoid having to write separate test case documents. We can write a new automated test case, and then, when it’s finished, create the documentation.

    Thanks.

    Matthew.

    Elisabeth responds: I’m wondering how Agile your organization is. Keeping the testers out of the SCM system and rewriting automated tests expressed in a DSL into another form just for documentation’s sake both make me suspect. The ultimate test of Agility is this: Are they releasing deployable fully tested software at least once a month, consistently, as a matter of course and without heroics or shortcuts, while adapting to changing business needs? If not, I wonder what definition of Agile they’re using. And I hope they won’t say “Agile doesn’t work” when they don’t realize all the promised productivity improvements.

  7. Stephan October 7, 2009 at 1:19 am #

    It was about time someone said this. I personally think that there’s one good place for test cases even in organisations which don’t follow agile ideas: SCM.

    After all, test cases can easily be stored as plain text (I think of them in terms of plain text anyway). Given simple DSL (domain specific language) elements, it’s also rather easy to keep track of metadata (such as assigned tester, estimated duration etc., should you need to keep track of this information for whatever reason). Even though this is especially simple for automated tests, it can certainly also help when manually executing test cases (which should be simple enough in an environment that supports automatic execution).

    Additionally, I think that a lightweight test environment can easily track & store test results (should you need them) and also keep the link between a certain version of a test case (automated or not) and the corresponding result. Sometimes this is just what you need to locate a problem (whether it’s in the software itself or the test environment).

  8. John Lockhart October 7, 2009 at 1:59 am #

    Great post thanks Elisabeth – worth waiting for!

    What we tend to see in New Zealand is that many organisations claim to be doing Agile, but most are at best doing some sort of hybrid, so everything then gets a bit blurred.

    However, I think it’s really important to have an ideal to aim for, and I think you have articulated that ideal, and the reasons to strive for it. The artificial separation of testing from the rest of the development process – particularly development and analysis/requirements – has been one of the great inefficiencies in software development and the ability of Agile practices to remove that division to me (OK – I’m biased) is perhaps its greatest benefit.

    cheers,
    John (john@webtest.co.nz / http://www.webtest.co.nz)

  9. Ralph October 7, 2009 at 5:08 am #

    I am looking for an agile test management tool for our department. I agree completely with your article.

    We have some kind of test plan for every PBL Item we realise. This Test Approach Document (TAD) contains information about what you will test, what you won’t, whether there are any risks, and what kinds of testing will be done: unit testing, GUI testing, loadrunner, ET sessions, and formal test techniques. We use Word for TADs currently. However, I am looking for a tool that supports custom fields and is able to store data hierarchically. I would like to have a tree like Release|Sprint|Team|PBL Item and then a number of fields on the PBL Item. Do you or anyone else know of such a tool or have a similar solution?

    The wiki sounds interesting. However, I don’t know if it is possible to have a structure as I described above.

    Elisabeth responds: I’m a little confused about why you need a whole test plan for every Product Backlog Item. In what way is it not sufficient to capture the acceptance criteria or conditions of satisfaction in the Backlog with the Story, then capture the specific tests in the source control system? Any other system that creates any other hierarchy will just be an impediment to progress: one more thing to update.

  10. Chris October 7, 2009 at 11:09 am #

    Hi,

    I agree with the post and luckily haven’t been using such a tool for years. However, I do still struggle with questions that come up about *exactly* what was tested in a previous release.

    Manager: For the 6.1 release, did we test over a VPN using Windows XP SP2 German, with roaming profiles turned on along with folder redirection, a non-local pst file, on a Tuesday?

    Me: Hmm, not sure. We can try it again now and see the results…

    How does this type of very specific test execution information get persisted?

  11. Dawn Cannan October 7, 2009 at 11:32 am #

    Elisabeth-

    I was able to make this point recently, though didn’t realize I was making the point until you put it so concisely. I more described it in a roundabout way. I was having a conversation with an old-school IBM-er (I seem to do that a lot here in Raleigh), and he asked me maybe 3 times about what Test Management Tool I use. I sidestepped the question, not quite wanting to say “I don’t use one”.

    Drawing on my recent ‘getting to the heart of the issue’ studies, I asked him several times what information he would be hoping to glean out of a Test Management Tool. When I finally understood what he hoped to gain, I was able to explain that he could gain that information (it centered around reports of what tests had been run and what the results were) without a Test Management Tool.

    I described the automated tests themselves being placed right with source control, and in that way organization of test cases was handled. I described being able to see results from test runs in the CI system, and being able to customize reports from the CI system itself. I also then described auto tests being self-documenting.

    I think his final aha! moment came when he realized that if using something like Fitnesse, he *also* had the ability to use the wiki structure to view and drill down into tests for both viewing and running them.

    I was relieved to finally hear his “Ohhhhhhhhhhhhhhhhhhhhh!”

  12. Jeffrey Fredrick October 7, 2009 at 6:06 pm #

    Great post Elisabeth!

    As someone who has spent a bit of time dealing with Continuous Integration & Testing I think there’s another place that historical test results — at least for automated tests — live, and that’s the CI server.

    You mention that the CI server tests should be 100% green most of the time… which makes the question of how often they’re not an interesting one, and a question the CI server can help you answer easily.

  13. Darren October 8, 2009 at 1:59 pm #

    *How Do I Prioritize Tests?
    Agile teams work from a prioritized backlog. Instead of prioritizing tests, they prioritize Stories. And Stories are either Done or not. Given that context, it does not make sense to talk about prioritizing the tests in isolation.

    Darren – Not all tests are created equal. Some are more valuable than others. I can definitely see a class of tests that are important to cover in the initial test pass but are not as valuable to run during every regression test. Certainly if the time existed and they were automated (I know – key to agile, but you have to start somewhere) I would not fret, but as soon as we get into a time crunch, or these tests are manual and thus incur a much larger cost, I start to doubt the author’s assumption that tests do not need prioritization. Maybe we are thinking of different scales – the author is not very clear. I agree that the suite of tests for a story should have the same prioritization, but the test cases contained in that suite are most probably not all of the same value.

  14. Darren October 8, 2009 at 2:01 pm #

    *What about the Test Reports?
    Traditional test management systems provide all kinds of reports: pass/fail statistics, execution time actuals v. estimated, planned v. executed tests, etc. Much of this information is irrelevant in an Agile context.

    Darren – Assuming we have not created the world’s first bug-free software, and putting aside the argument that if you don’t have enough time then it’s a process problem – the real world delivers on deadlines that people pay for. So even if you say you want 100% pass rates – how do you know that? How do you know you are done? Test case management systems can answer that for you. Even if you have an automated solution, test case management systems can bring together the results of multiple suites of automated tests to paint the big picture of new tests and regression tests, so you can answer the definition-of-done questions. Yeah, you could build this into the automation, but why build, validate, and maintain it when canned solutions already exist? Execution time actual vs. estimated – these were not useful even before Agile. In a manual sense they are of limited value, if only for a training purpose. Planned vs. executed – answers part of the definition of done? I guess we could do it manually and incur the larger human cost, or build the automation for it – but why? “Much of this information is irrelevant in an Agile context.” Darn, this author always secures a way out – I wish he/she would commit and back up their statements with confidence. The implications of their statements are not undone by adding an escape clause.

  15. Darren October 8, 2009 at 2:02 pm #

    *What about the Test Reports?
    The CI system provides the information that remains relevant: the automated test execution results. And those results should be 100% Green (passed) most of the time.
    However, if the team really wants to keep historic test execution results (or are compelled to do so as a matter of regulatory compliance), the test results can be stored in the source control system with the code.

    Darren – This makes me wonder if the author is just trying to find a way to not use another system. Putting the cost benefits of reducing the number of systems aside (not what the article is implying), why put this in source control? Source control is best at managing source. Test case management systems are designed to handle tests. Why fit a square peg into a round hole? I also have to take a swipe at the author’s statement that the tests should be 100% passed most of the time. I find it odd for me, the validation manager, to be the one saying that no company releases 100% bug-free software. Sure, the author escapes this wrath by saying “most of the time,” but come on, really. The % green needs to be within the tolerances of what the company is intending to build. There is no such thing as bug-free code. As soon as you let one test not pass during one regression run, having historical data is important.

  16. Darren October 8, 2009 at 2:04 pm #

    *But Testing is Never Done. Seriously, How Do I Prioritize What To Test?

    Darren – This section is a doozy to respond to. I would question the author’s statement that test management solutions only offer the illusion of addressing the problem. Any tool used incorrectly can qualify for this. Source control used to track test case results can create the illusion that we have accurate and historical test case results. Test case management systems can certainly be abused. I would even agree that there is some level of duplicate effort in that you are stating (at least for the current iteration) what you are going to test twice (assuming you will execute 100% of all regression tests every pass). But I think it creates an illusion to treat this as significant. This is just a step in outlining the container for the actual test cases. It’s like creating a folder in Windows Explorer. It takes one second, and what’s important is what goes into that folder. Electronic test case management handles concurrency, versioning, and tracking of test case results (in real time or loaded after – tester preference). As soon as you admit you don’t have the world’s first bug-free code, knowing what failed when has value. It’s not the holy grail, but it is also not irrelevant. As soon as you realize you may not be able to run 100% of all your tests every regression cycle, knowing what was executed when, and when something was last run, becomes valuable. It’s not the holy grail, but it is also not irrelevant. As soon as you realize that not everyone on your team has an S on their chest, knowing things becomes important. Saying you don’t have the right people because they don’t have an S on their chest is a cop-out. Only in Star Trek do you have the utopia of perfect people only getting into Starfleet. We always strive to get better, but reality shows that’s an elusive goal that is often a moving target. Oh, and the best part of all – time-boxed exploratory testing – translation: ad hoc testing with a time limit. As soon as you lose that S on your chest, you have to start wondering about that.

  17. Darren October 8, 2009 at 2:05 pm #

    *What about Historical Test Results Data?
    Thus Agile teams don’t have the same kind of progression of pass/fail ratios that traditional teams see during a synch and stabilize phase. And that means historic trends usually are not all that interesting.

    Darren – I agree here, although as my team has discussed, these were of limited value even before agile. They were only a tool, not a decision-making data point. I agree this tool’s usefulness may be non-existent in its original sense. But looking back on how iterations went, learning from our mistakes, admitting that we may not have the world’s first bug-free code – you start to see some possible uses of this data that may help answer a question here or there. The good news is it’s automated and there is almost no cost to having it stored – you only have to use it when you have a question that makes it the right tool.

  18. Declan Whelan October 8, 2009 at 2:32 pm #

    Elisabeth,

    I totally agree.

    One area that I wonder about is how to “manage” sessions for exploratory testing. Do you have any advice on handling these? I worked with one client that used JIRA to coordinate and record the session information (goal, debrief outcomes etc.).

    Any insights would be greatly appreciated!

    Cheers,

    Declan

  19. Mohinder Khosla October 8, 2009 at 2:35 pm #

    This is something of an education for me, considering I have not been involved with Agile development so far. I enjoyed reading the above post and the comments with a great deal of interest, and I cannot agree more. The way we were taught about testing systems suggests that test management tools hold the key to success or failure. If the QA manager cannot work out what has been tested and what has not, and if he can’t run reports to show to the stakeholders, then he is stuffed and would have trouble getting sign-off from the project/programme manager. After reading books on Agile development and testing, I am a convert to the ideas discussed above, but I do understand the arguments from the other side where SOX and FDA are imposed on organisations, especially in the financial sector.

  20. Itay Maman October 11, 2009 at 1:23 am #

    Great post.

    Reading this list of arguments made manifest one fundamental truth that software development organizations/individuals should accept, the sooner the better: testing is developing.

    Once you start thinking in such terms, most of the aforementioned objections disappear. It becomes obvious that tests should be stored in the same source control system as the source code. There is no question regarding “how do I find time to write tests when the deadline is pressing”. Test prioritization becomes a non-issue because it is story prioritization. Etc.

    One just needs to accept this truth. Sadly, accepting it is hard because shifting paradigms is hard.

  21. Jonas Söderström October 12, 2009 at 4:40 am #

    What about defects found – are they put in the backlog, or do you have a separate system for this?

  22. Gil Bloom October 13, 2009 at 11:51 am #

    So the conclusion is to use a light weight test management tool, not to abandon it completely.

  23. Matthew Farwell October 13, 2009 at 1:01 pm #

    Elisabeth responds: in my book, heavyweight Test Management Systems have the following characteristics: they store test cases in a separate specialized repository outside the source control system, manage test suites, and track and report test execution results independent of CI. Quality Center is the most iconic of these, but by my definition, any specialized Test Management System (and there are many) is a heavyweight impediment. Lisa’s solution of using a Wiki is about the only example of using a separate system for capturing tests that I’ve seen work well in an Agile context, and note that her group still versions the Wiki in the source control system with the code. So it’s not really separate.

    Matthew: OK. I understand (and agree with) the point about a different repository. That only creates problems. But that isn’t the only function of Test Management Systems.

    If by CI you mean automated tests, I totally agree with the all-green philosophy. However, if the tests aren’t automated, you need to be able to 1) prioritise and 2) trace execution of the test cases. The execution of test A could be more important than that of test B, because test A is more likely to uncover more bugs. Or perhaps test A tests the usability of an interface – a notorious area for differences of opinion between the user and the developer.

    Elisabeth responds: if tracking functional test coverage is really an issue, the Stories are probably too large or the tests are not sufficiently automated.

    Matthew: Not all tests can be automated. And if we’re talking (non-automated) regression tests, then these (usually) have to be prioritised along with the rest.

    Matthew: [Generating test docs automatically from the test cases].

    Elisabeth responds: I’m wondering how Agile your organization is. Keeping the testers out of the SCM system and rewriting automated tests expressed in a DSL into another form just for documentation’s sake both make me suspect.

    Matthew: How agile are we? Not at all. Sorry for any confusion, I don’t work at an agile company; I’m just interested to know how these issues would be handled.

    The testers are kept out of the developers’ SCM system because the company doesn’t want to give them access to the encryption algorithms written by the developers.

    Actually, the tests are re-expressed as a document to aid understanding of the tests we do; so they can be read by a non-technical person (i.e. the functional leads); and so they can be incorporated into the test coverage.

    Elizabeth responds: The ultimate test of Agility is this: Are they releasing deployable fully tested software at least once a month, consistently, as a matter of course and without heroics or shortcuts, while adapting to changing business needs? If not, I wonder what definition of Agile they’re using. And I hope they won’t say “Agile doesn’t work” when they don’t realize all the promised productivity improvements.

    Matthew: See above. And yes, they probably would say ‘Agile doesn’t work’, but that’s another story :-)

    By the way, I totally agree with what Itay said: Testing is developing. At least when we’re talking about automated tests.

  24. Ryan QIAN October 16, 2009 at 4:04 am #

    Great post. Currently, I’m still working in the traditional way; after reading this, I finally got the direction I’d like to go in. Thank you.

  25. Avi October 21, 2009 at 5:55 am #

    Hi

    How about functional integration testing?
    Say there are two Agile teams working on two separate granular features. I understand each team will develop and validate their own portion of the functionality. How do they make sure integration testing is performed?

    In regards to automated testing: automation takes maintenance, time, and investment of effort, and if we are talking about a front-end UI application, it will require a portion of the functionality to be available before one can automate against it.
    The great benefit of an automation tool is re-usability, so someone will have to build a master test suite to run across all functional areas. Who is handling this, and how? I do understand that one can prototype automation even without a UI, but that also takes lots of time and maintenance.

    And finally, Agile vs. standard methodologies: my observation is that a company often moves to Agile as a result of its own immaturity. That is, there is no mature management, or there is no well-established SDLC process and change management. Face it: a specific company or IT department may have any methodology (or testing tools) implemented, but if they are not mature enough, it won’t help. I personally know companies where it is not about the SDLC methodology or process, but about the MIND SET of management and the team…

    Thank you!
    Avi

  26. Wilson Mar October 25, 2009 at 5:07 am #

    I referenced this article on my description of Quality Center at
    http://wilsonmar.com/quality_center.htm

    The toughest part about switching to Agile from a heavyweight tool is perhaps explaining to senior management how the hundreds of thousands spent on buying the software, putting data into it, and programming reports from it have now all become useless.

  27. eastwood09 October 26, 2009 at 8:49 am #

    I have never really seen a good argument for not having test cases, even in Agile. If you have test cases, you should have a system to manage them. (Of course you can use Excel, but you should graduate to a more stable solution.)
    So in that regard, Agile or not, test case management for test cases is a priority.

  28. Adam Geras October 28, 2009 at 3:32 pm #

    By extension, I assume you would have similar feedback regarding the agile project management tools that store releases, iterations, user stories, team members, (and with more recent versions, tests), etc.

    My recent projects have been COTS implementations with no CI, no SCM, and no automated tests other than capacity/performance tests, and only business specialists as testers. In trying to maximize ‘being agile’, the test management solution turned into the logical place to store the backlog. Used in this manner, it encourages test-first thinking since it is blatantly obvious that there is a relationship between backlog items and tests. We found we could also encourage exploratory/investigative thinking by assuming testers knew how to use the system and we could avoid writing ‘test steps’. We only created checklists for items on the backlog, and attached the session testing worksheets as evidence of pass/fail.

    Like I said, different context. But being in that different context doesn’t mean we can’t try to be agile :-)

  29. Jim Knowlton October 30, 2009 at 1:14 pm #

    Love this article, Elisabeth. Agree wholeheartedly. Will forward to my boss. I’m not too optimistic, though… I’m at a major software vendor, and they seem somewhat reluctant to let go of the illusions that heavyweight tools give them. Especially when said test management tools are implemented enterprise-wide as a standard… ugh.

    Jim

  30. Sajeev November 16, 2009 at 9:03 pm #

    This is currently the trend. Everyone feels Agile methods are the medicine for every disease around. But I personally don’t feel Agile testing can totally avoid the need for conventional test management and testing practices, especially in areas where end-user products are developed, like mobile devices and PC applications, where the products are used in varied ways by millions of customers. Agile testing happens with very limited scope and effort, so it cannot cover such scenarios effectively. Agile testing methods can provide a better solution for developer testing, where the S/W or the product is positively qualified. It does not focus on negative testing or non-functional testing areas like performance, compliance and conformance, etc. Historical data, metrics, and future product contributions are another blind spot in the Agile testing approach.

    A product delivered just by following this new approach and a product which has gone through Agile testing plus conventional testing are not comparable. I would opt for the second approach, where the developer and integration teams do testing using Agile methods, followed by a set of conventional tests which guarantee that the quality aspects and non-functional requirements in the product are intact.

    Regards,
    Sajeev

  31. Bradley Landis November 30, 2009 at 8:36 pm #

    Elisabeth or Lisa,

    Do you have a link to some information about connecting FitNesse to your SCM? This sounds like a great idea, but I haven’t been able to find any information on how it actually works.

    Thanks,

    Bradley

  32. Bradley Landis November 30, 2009 at 8:44 pm #

    This brings up another question I have with regard to hiring Agile testers. All of the resumes that I receive are from people deeply involved with these “Specialized Test Management Systems” (e.g. Quality Center). When interviewing them, I find that they have spent most of their careers learning how to work with these systems instead of actually learning how to test. So what do I look for on resumes to identify a quality Agile tester who has skills beyond wrangling these beasts?

  33. Pavan Sudarshan August 17, 2010 at 12:05 am #

    YAGNI

Leave a Reply