Agile-Friendly Test Automation Tools/Frameworks

Several people have asked me recently why I’m not a fan of the traditional test automation tools for Agile projects. “Why should I use something like Fit or FitNesse?” they ask. “We already have <insert Big Vendor Tool name here>. I don’t want to have to learn some other tool.”

Usually the people asking the question, at least in this particular way, are test automation specialists. They have spent much of their career becoming experts in a particular commercial tool. They know how to make their commercial tool of choice jump through hoops, sing, and make toast on command.

Then they find themselves in a newly Agile context struggling to use the same old tool to support a whole new way of working. They’re puzzled when people like me tell them that there are better alternatives for Agile teams.

So if you are trying to make a traditional, heavyweight, record-and-playback test automation solution work in an Agile context, or if you are trying to help those other people understand why their efforts are almost certainly doomed to fail, this post is for you.

Why Traditional, Record-and-Playback, Heavyweight, Commercial Test Automation Solutions Are Not Agile

Three key reasons:

  1. The test-last workflow encouraged by such tools is all wrong for Agile teams.
  2. The unmaintainable scripts created with such tools become an impediment to change.
  3. Such specialized tools create a need for Test Automation Specialists and thus foster silos.

Let’s look at each of these concerns in turn, then look at how Agile-friendly tools address them.

Test-Last Automation

Traditional, heavyweight, record-and-playback tools force teams to wait until after the software is done – or at least the interface is done – before automation can begin. After all, it’s hard to record scripts against an interface that doesn’t exist yet. So the usual workflow for automating tests with a traditional test automation tool looks something like this:

  1. Test analysts design and document the tests
  2. Test executors execute the tests and report the bugs
  3. Developers fix the bugs
  4. Test executors re-execute the tests and verify the fixes (repeating as needed)
  5. …time passes…
  6. Test automation specialists automate the regression tests using the test documents as specifications

Looking at the workflow this way, it’s surprising to me that this particular test automation strategy ever works, even in traditional environments with long release cycles and strict change management practices. By the time we get around to automating the tests, the software is done and ready to ship. So those tests are not going to uncover much information that we don’t already know.

Sure, automated regression tests are theoretically handy for the next release. But usually the changes made for the next release break those automated tests (see concern #2, maintainability, coming up next). The result for most contexts: high cost, limited benefit. In short, such a workflow is a recipe for failure on any project, not just for Agile teams. The teams that have made this workflow work well in their context have had to work very, very hard at it.

However, this workflow is particularly bad in an Agile context where it results in an intolerably high level of waste and too much feedback latency.

  • Waste: the same information is duplicated in both the manual and automated regression tests. Actually, it’s duplicated elsewhere too. But for now, let’s just focus on the duplication in the manual and automated tests.
  • Feedback Latency: the bulk of the testing in this workflow is manual, and that means it takes days or weeks to discover the effect of a given change. If we’re working in 4-week sprints, waiting 3–4 weeks for regression test results just does not work.

Agile teams need the fast feedback that automated system/acceptance tests can provide. Further, test-last tools cannot support Acceptance Test Driven Development (ATDD). Agile teams need tools that support starting the test automation effort immediately, using a test-first approach.

Unmaintainable Piles of Spaghetti Scripts

Automated scripts created with record-and-playback tools usually contain a messy combination of at least three different kinds of information:

  • Expectations about the behavior of the software under test given a set of conditions.
  • Implementation-specific details about the interface.
  • Code to drive the application to the desired state for testing.

So a typical script will have statements to click buttons identified by hard-coded button ids followed by statements that verify the resulting window title followed by statements to verify the calculated value in a field identified by another hard-coded id, like so:


The essence of the test was to verify that ordering 6 items at $7 each results in a shopping cart total of $42. But because the script has a mixture of expectations and UI-specific details, we end up with a whole bunch of extraneous implementation details obfuscating the real test.

(If you’re nodding along, thinking to yourself, “Yup, looks like our test scripts,” then you have my sympathies. My deep, deep sympathies. Good, maintainable, automated test scripts do not look like that.)

All that extraneous stuff doesn’t just obscure the essence of the test. It also makes such scripts hard to maintain. Every time a button id changes, or the workflow changes, say with a “Shipping Options” screen inserted before the Checkout screen, the script has to be updated. But that value $42.00? That only changes if the underlying business rules change, say during the “Buy 5, get a 6th free!” sale week.

Of course, there are teams that have poured resources, time, and effort into creating maintainable tests using traditional test automation tools. They use data-driven test strategies to pull the test data into files or databases. They create reusable libraries of functions for common action sequences like logging in. They create an abstract layer (a GUI map) between the GUI elements and the tests. They use good programming practices, have coding standards in place, and know about refactoring techniques to keep code DRY. I know about these approaches. I’ve done them all.

But I had to fight the tools the whole way. The traditional heavyweight test automation tools are optimized for record-and-playback, not for writing maintainable test code. One of the early commercial tools I used even made it impossible to create a separate reusable library of functions: you had to put any general-use functions into a library file that shipped with the tool (making tool upgrades a nightmare). That’s just EVIL.

Agile teams need tools that separate the essence of the test from the implementation details. Such a separation is a hallmark of good design and increases maintainability. Agile teams also need tools that support and encourage good programming practices for the code portion of the test automation. And that means they need to write the test automation code in real, general-purpose languages, with real IDEs, not vendor script languages in hamstrung IDEs.

Silos of Test Automation Specialists

Traditional QA departments working in a traditional waterfall/phased context, and automating tests, usually have a dedicated team of test automation specialists. This traditional structure addresses several forces:

  1. Many “black-box” testers don’t code, don’t want to code, and don’t have the necessary technical skills to do effective test automation. Yes, they can click the “Record” button in the tool. But most teams I talk to these days have figured out that having non-technical testers record their actions is not a viable test automation strategy.
  2. The license fees for traditional record-and-playback test automation tools are insanely expensive. Most organizations simply do not have the budget to buy licenses for everyone. Thus only the anointed few are allowed to use the tools.
  3. Many developers view the specialized QA tools with disdain. They want to write code in real programming languages, not in some wacky vendorscript language using a hamstrung IDE.

Thus, the role of the Test Automation Specialist was born. These specialists usually work in relative isolation. They don’t do day-to-day testing, and they don’t have their hands in the production code. They have limited interactions with the testers and developers. Their job is to turn manual tests into automated tests.

That isolation means that if the production code isn’t testable, these specialists have to find a workaround because testability enhancements are usually low on the priority list for the developers. I’ve been one of these specialists, and I’ve fought untestable code to get automated tests in place. It’s frustrating, but oddly addictive. When I managed to automate tests against an untestable interface, I felt like I’d slain Grendel, Grendel’s mother, all the Grendel cousins, and the horse they rode in on. I felt like a superhero.

But Agile teams increase their effectiveness and efficiency by breaking down silos, not by creating test automation superheroes. That means the test automation effort becomes a collaboration. Business stakeholders, analysts, and black box testers contribute tests expressed in an automatable form (e.g. a Fit table) while the programmers write the code to hook the tests up to the implementation.

Since the programmers write the code to hook the tests to the implementation while implementing the user stories, they naturally end up writing more testable code. They’re not going to spend 3 days trying to find a workaround to address a field that doesn’t have a unique ID when they could spend 5 minutes adding the unique ID. Collaborating means that automating tests becomes a routine part of implementing code instead of an exercise in slaying Grendels. Less fun for test automation superheroes, but much more sensible for teams that actually want to get stuff done.

So that means Agile teams need tools that foster collaboration rather than tools that encourage a whole separate silo of specialists.

Characteristics of Effective Agile Test Automation Tools

Reviewing the problems with traditional test automation tools, we find that Agile teams need test automation tools/frameworks that:

  • Support starting the test automation effort immediately, using a test-first approach.
  • Separate the essence of the test from the implementation details.
  • Support and encourage good programming practices for the code portion of the test automation.
  • Support writing test automation code using real languages, with real IDEs.
  • Foster collaboration.

Fit, FitNesse, and related tools (see the list at the end of the post for more) do just that.

Testers or business stakeholders express expectations about the business-facing, externally visible behavior in a table using keywords or a Domain Specific Language (DSL). Programmers encapsulate all the implementation details, the button-pushing or API-calling bits, in a library or fixture.

So our Shopping Cart example from above might be expressed like this:

Choose item by sku 12345
Item price should be $7.00
Set quantity to 6
Shopping cart total should be $42.00

See, no button IDs. No field IDs. Nothing except the essence of the test.

Writing our test in that kind of stripped-down-to-the-essence way makes it no longer just a test. As Brian Marick would point out, it’s an example of how the software should behave in a particular situation. It’s something we can articulate, discuss, and explore while we’re still figuring out the requirements. The team as a whole can collaborate on creating many such examples as part of the effort to gain a shared understanding of the real requirements for a given user story.

Expressing tests this way makes them automatable, not automated. Automating the test happens later, when the user story is implemented. That’s when the programmers write the code to hook the test up to the implementation, and that’s when the test becomes an executable specification.

Before it is automated, that same artifact can serve as a manual test script. However, unlike the traditional test automation workflow where manual tests are translated into automated tests, here there is no wasteful translation of one artifact into another. Instead, the one artifact is leveraged for multiple purposes.

For that matter, because we’re omitting implementation-specific details from the test, the test could be reused even if the system were ported to a completely different technology. There is nothing specific to a Windows or Web-based interface in the test. The test would be equally valid for a green screen, a Web services interface, a command line interface, or even a punch-card interface. Leverage. It’s all about the leverage.

Traditional Tools Solve Traditional Problems in Traditional Contexts. Agile Is Not Traditional.

Traditional, heavyweight, record-and-playback tools address the challenges faced by teams operating in a traditional context with specialists and silos. They address the challenge of having non-programmers automate tests by having record-and-playback features, a simplified editing environment, and a simplified programming language.

But Agile teams don’t need tools optimized for non-programmers. Agile teams need tools to solve an entirely different set of challenges related to collaborating, communicating, reducing waste, and increasing the speed of feedback. And that’s the bottom line: Traditional test automation tools don’t work for an Agile context because they solve traditional problems, and those are different from the challenges facing Agile teams.

Related Links

A bunch of us are discussing the next generation of functional testing tools for Agile teams on the AA-FTT Yahoo! group. It’s a moderated list and membership is required. However, I’m one of the moderators, so I can say with some authority that we’re an open community. We welcome anyone with a personal interest in the next generation of functional tools for Agile teams. We’re also building lists of resources. In the Links section of the AA-FTT Yahoo! group, you’ll find a list of Agile-related test automation tools and frameworks. And the discussion archives are interesting.

Brian Marick wrote a lovely essay on An Alternative to Business-Facing TDD.

I discussed some of the ideas in this article in previous blog posts.

A small sampling of Agile-friendly tools and frameworks:

  • Ward Cunningham’s original Fit has inspired a whole bunch of related tools/frameworks/libraries including FitNesse, ZiBreve, Green Pepper, and StoryTestIQ.
  • Concordion takes a slightly different approach to creating executable specifications where the test hooks are embedded in attributes in HTML, so the specification is in natural language rather than a table.
  • Selenium RC and Watir tests are expressed in Ruby; Ruby makes good DSLs.

Are you the author or vendor of a tool that you think should be listed here? Drop a note in the comments with a link. Please note however that comment moderation is turned on, and I will only approve the comment if I am convinced that the tool addresses the concerns of Agile teams doing functional/system/acceptance test automation.

29 thoughts on “Agile-Friendly Test Automation Tools/Frameworks”

  1. Wow, monster of a post! All great information but so much of it that I’m worried that Shu people won’t be able to follow the logic…

    The whole chain of logic you give is important, but there are Key Points that I think are hard to appreciate without having been there:

    1) Traditional test automation tools take development skill to use effectively

    2) Developers don’t want to use a “QA tool”

    3) That “I felt like a superhero” feeling means that even the people you’d hope would be on your side aren’t, because after all, who wants to turn in their cape?

    All in all a great piece.

  2. What is your suggestion for IT shops and IT services providers in the non-Agile world? I have seen people investing in GUI/record-and-playback tools that fall into your “Test Last” group.

    Why do IT folks approach automation that way …. (note: some are myths)

    1. IT is outsourcing crazy … stuff has to be done at “low” cost .. rest all can wait or is of secondary priority

    2. Vendors of GUI automation tools give a very promising, quantified and ROI-centric business case …. Such a case can make CIOs approve the spend

    3. Again, GUI tools do not require either testing knowledge or programming knowledge .. tool knowledge is enough (and that is easy to learn – even business users can learn it in a matter of hours) … hence this fits in their scheme of things for outsourced automation.

    4. One does not have to worry about human testing practice … if there is a problem in testing – an understaffed testing team, lack of testing skills, time pressure to release – go for automation. So GUI tools are the best bet

    5. Outsourced service providers do not have to hire people with good testing knowledge and a programming background … pick any available resource (someone on the bench or a fresher) and train her on the tool, or ask her to download an evaluation copy and play/learn. So we get automation capability. So GUI automation is a good bet for service providers

    6. Since the automation resources are ONLY tool experts – they cannot (will not) think beyond the tool’s capabilities … working with developers, developing modular code components, version control and other good development practices are alien to them…

    7. There is this big talk about ROI — GUI tools say they can provide ROI in 10 cycles of execution or six months and thereafter continuous benefits …
    Automation will also cut testing cycle time ..

    What do you say?

    Shrini Kulkarni

  3. Great post! I have similar experiences:

    * It really helps developers write testable code if they have to write automated tests for it. You get a much deeper understanding of testability that way.

    * It is really hard to apply good coding practices to the scripting capabilities of the big tools – I had one pre-sales engineer work on automating our login for a day and he gave up because he was not able to build it in a maintainable way (data-driven, using modules instead of hardcoded names, etc)

    * Trying to automate testing a web application, which wasn’t testable, I learnt a lot about writing complex XPath expressions (and felt like a superhero myself) but it did not really add value to the project as it was way too complex to maintain

    * During our transition to agile processes we tried to include our off-shore testing team but we did not find a reasonable way to profit from after-the-fact regression testing

    Thanks a lot for articulating what is so hard to explain sometimes…

  4. At the risk of nit-picking a great post, Selenium-RC tests can be expressed in any number of languages, not just Ruby. Python, Java, .NET, Perl, PHP, JS and Selenese can also be used to make RC-based tests.


  5. Traditional test automation tools don’t work for an Agile context because they solve traditional problems, and those are different from the challenges facing Agile teams.

    I’m not convinced that traditional test tools even solve traditional problems. Cause, maybe. Exacerbate, for sure. Solve? Hmm.


    —Michael B.

  6. Thanks for a great post! It’s nice to have one more reference to give when people ask what’s the problem with commercial tools and/or record-and-replay. I totally agree with everything you wrote about these tools, but want to add few more reasons why they should normally be avoided.

    1) Record-and-replay scripts don’t actually _test_ anything unless you explicitly add checkpoints. For example, the verification lines in the “spaghetti script” example above won’t appear automatically. A test script without any assertions isn’t really worth much.


    2) Commercial tools in general aren’t easy to integrate with other systems. It is for example a good idea to store the test data in the same version control system as the production code, but most of the time commercial tools don’t allow it or at least they make it really hard. Another common integration problem is trying to run tests as part of the build process and getting results back to the continuous integration (CI) system.

    3) Platform support is poor. Many (most?) commercial tools, at least from big vendors, run only on Windows or require some special Linux version. CI systems and version control, on the other hand, are often run on Linux, and requiring a Windows box just for a test automation tool and then integrating it with other systems is extra work. Similarly, commercial tools have been very slow to support browsers other than IE, even though running regression tests against different browsers is definitely something that _should_ be automated. Hopefully everyone reading this already knows about Selenium…

    4) A single tool can’t support all the different interfaces. If you only need to test a website or a Swing GUI, you may not need more than one tool. But what if you need to automate an end-to-end test involving a browser, mobile phones and VoIP clients on different platforms? In any more complex case you can’t use a single commercial tool. If you are lucky you can find different tools for different interfaces, but if they are from different vendors you need quite a lot more luck to be able to integrate them together. Note that this is not trivial with open source or in-house-built tools either, but at least it is possible. For example, that browser/mobile/VoIP case is a real example with successful end results.

    5) Elisabeth already mentioned licenses, but I think they are an even bigger problem than she writes. Since most commercial tools are _really_ expensive, projects try to save and not get licenses for everyone. But what if you need to use the tool (i.e. have a license) even to see results now and then? Sometimes “floating licenses” are used, so that not everyone needs a license, but then only a limited number of people can use the tool at the same time. But what if your CI server can’t run tests because there are no licenses available? And how much customer value do we actually get from setting up license servers and other needed infrastructure?

    These points are of course generalizations. It is possible that there are, or there will be, commercial tools that are e.g. easy to integrate with other systems and sane licensing models. My personal experience so far isn’t too positive, though.

  7. While the criticisms of test-last are all valid, I hesitate to say agile is a “new” methodology; it’s rather a game of semantics. For example, “Support starting the test automation effort immediately, using a test-first approach” really seems to be a Unit Test.

    “Separate the essence of the test from the implementation details”: the testing of implementation or migration is a necessary evil. The application test is separate from the migration/smoke test.

    Foster collaboration is a goal not only of agile, but of all teams. Big post not much substance.

  8. I will not be as positive in my evaluation of this post as the others. I really don’t like the comparison of “a typical script” and “a table using keywords or a Domain Specific Language (DSL)”. The first one is an executable automated test written in a programming language. The second one is a test scenario, nothing more. In “traditional” testing you also have these scenarios. And in agile automated testing, at some point you have to hard-code the scenario in a programming language (otherwise it will not be an executable automated test). I am not an advocate of traditional methods; on the contrary, I am trying to introduce some agile approaches in my company. But I find this post rather tendentious, unfair.
    My current opinion is that testing should be done continuously during development. (We currently participate in a waterfall project but internally use iterative development.) It’s dangerous and ineffective to do testing separately at the end of the project. And you can use agile tools as well as traditional tools. It is not the most important aspect.

  9. Thanks for this really great post. I enjoyed reading it because it expresses many of my thoughts about those tools. I’d like to add a statement which is a bit special to the company where I work: we do open-source software.

    Commercial licenses are no bad thing in general; we all want to make some money doing our jobs, and so do the tool vendors. But in my opinion they are a real no-go for a company whose strategy is based on partnerships and community integration. You simply cannot force a partner, or somebody who wants to participate in the project, to spend some thousands on tool licenses just to test the software you made. Those tools may have some good points which are worth the license fees for internal deployment, but there is no way anyone will take a look at your tests or even execute them if he has to spend something first. We publish all of our test cases to show our partners and the community what we are doing for quality assurance and how we do it, and anybody is free to check those tests for correctness if he/she likes. It seems this practice is not usual in the market, but our customers and partners were very excited to adopt those tests. This is possible by using tools like JUnit and Selenium-RC/Java; it wouldn’t be possible using commercial tools.

    Testing is not about writing and executing automated tests only, of course. By using those clunky record/playback high-level-abstract tools, testers start to see the testing tool as their “interface” to the application. They do what’s possible with the tool; the tool becomes the most crucial part of testing – but what for? I mean, it’s all about happy customers and high-quality software to deliver; no one really cares how you tested/fixed something, results are what matter. Exploratory testing is at least as important as test automation. We actually do the whole automation stuff to get more free time for freestyle testing. If you have a team of QA guys who are very familiar with the product and technically skilled, it’s easier to find potential problems, because they know what happens to a string that’s entered at an input field, how it’ll be transported to the server and how it finds its way down to the database. This is something no tool can ever replace.

  10. Great – you totally describe WindowLicker ( ).

    This Swing testing toolkit (in alpha, but totally usable) was based on lessons learned doing TDD for a Swing GUI application in a large organisation.

    To save you the bother of finding an example, here is one – probably the formatting is terrible, sorry.

    public void canAddTwoNumbers() {


    Please take a look, and give feedback positive or negative.

    Disclaimer: I’m a project owner

  11. Elisabeth,

    many thanks for that post, which found words for a lot of things we have in mind. Thinking about the consequences, some ideas came up:
    Using Selenium / Watir can’t rescue the world, as they need constructs such as GUI maps or data-driven approaches too. OK, we may implement them in a well-known language supported by a mighty IDE, but more artifacts which need to be maintained will appear. I know projects where the test project grew as complex as the application under test. With Selenium it’s so easy to get new scripts that I recommend my customers throw away outdated scripts instead of developing an advanced “Selenium-GUI-map-DataReader-Framework”. Obviously, failed tests should be checked for whether they were caused by an error before being thrown away. This recommendation is only valid with the caveat that business logic has to be clearly separated from the GUI layer.
    What do you think about that?

  12. Keyword-driven test automation is good for agile. I have been enjoying it for some time. I like it.

    and, nice to see Pekka here 🙂

  13. Elisabeth,
    I agree with the ideas expressed in your post that automation testers who concentrate their skills on some specific tool are just advanced users (not even experts) of some specific application. And I agree that each tool has to be selected to fit current requirements rather than to satisfy someone’s preferences.

    But I disagree with your approach to heavyweight record-and-playback tool usage in an Agile context. It is a comparison between the most ineffective, unreasonable Big Vendor Tool usage performed by non-qualified users and some smaller, more specific solution in a well-set-up process context. At the least it’s unfair. And the most unfair thing is that you compare tools working on different levels (e.g. GUI testing tools work on the GUI level, so they can be involved as soon as at least the GUI is ready, even without functionality bound).

    My comments on this may take even more text than your post, so I could describe them in a separate article and put the link here, because almost every line of this post is open to doubt. Only comparable things should be compared. Of course, if you wish.

    But the last thing to be mentioned is that I may recommend this post to people who really think that the ability to record some actions in a specific tool makes them experts in test automation. It’s exactly for them.

  14. I really enjoyed reading this post.

    Another tool to add to the list would be GUIdancer (
    It’s a commercial tool which uses keywords to specify automated GUI tests. It supports an early test automation start, and the separation of the test flow from implementation details.

    My disclaimer: I work for the tool vendors, but I’ve tried to keep this as neutral as possible.

  15. Bravo!

    We have been expressing many of these ideas for years to deaf ears. The monster-sized tool vendors have so brainwashed the test community that most people simply don’t understand the concepts expressed above.

    You encouraged us to talk about tools that address these issues. Smartescript ( addresses the spaghetti and silo issues very effectively. To a lesser extent it also addresses test-first, but there is some work to do there still.

    When people trained on traditional tools first use more effective tools, they try to do what they have always done, and it takes a while before they get that there is a different way of thinking about test automation. It is similar to when many procedural coders first started using OO languages – they wrote procedural code in Java, C++, etc., and gained nothing from the new languages until they also got the new paradigm.

    Part of the issue is also fear of change. After years honing skills in scripting test cases, suddenly the test automation expert can do their usual day’s work in a couple of hours, and they have to figure out how to use the extra time productively. Eventually it is a pleasant transition – who wants to be thinking script details when you can be thinking test architecture ? – but it takes a while for some to overcome the fear of change.

    Something not explicitly mentioned above is documentation – a much underestimated and underimplemented part of test automation. Test automation tools need to help document the test suites as much as possible, in ways useful to the interaction between dev and QA specialists. (SmartScript helps test engineers stay out of the weeds and provide more value to the dev team by automatically documenting test cases to make it easy for developers to reproduce bugs)

    There is one current sub-trend you hint at which I would argue against, and that is the idea that everyone needs to have developer-level skills to be a useful team member. I would argue that we need to go in quite the opposite direction – make it possible for as many people as possible to be productive team members, from most junior/inexperienced to most senior. If you make development skills a necessary part of QA, then you lose the benefit of a lot of very talented, dedicated and valuable people who don’t happen to be interested in developing the skills typical software developers have.

    I would argue that a better philosophy is to destroy silos by making test automation something that anyone can implement and maintain very easily. This leads to morale improvement, enhanced net speed of implementation, great staffing flexibility, and more consistent results.

    I hope this doesn’t sound too much like an advert – if so feel free to prune as necessary – but I was very excited to at last read an article by someone who truly gets it.

    Gordon MacGregor

  16. Great post!!! This could be a long-term debate in most of the organizations that need automation for an Agile process. Where requirements change often, teams have to fight with their automation; that is a fact.

  17. Yes! You are on top of it. I read your whole article since we ran into the intricate problem of finding a good test tool that supports agile ways of working as well as multiple protocols. My first instinct, even before looking at the big brands delivering tools that claim to do everything, was that this would be scary. Along the journey I got even more worried, since those tools create super-experts, and that is not something we can afford in an agile team of 8 people.
    I could not find any good alternatives that covered everything. But we did find good open source software that did exactly what we wanted in each area, so we put those tools together with JUnit (I know it is not intended for functional tests, but it can be used for them anyway).
    Now, reading this, I was really happy to see everything we have been discussing laid out.


  18. Elisabeth

    Before I start, it’s probably best to get my Alcoholics Anonymous-style admission out in the open. My name is John, and I’m a test automation addict. Actually, I just said that for effect, but I do use commercial test automation tools regularly in a traditional, non-agile environment.

    I was inspired to write this after I read your excellent article. Your description of the drawbacks associated with using “Traditional, Record-and-Playback, Heavyweight, Commercial Test Automation Solutions” is pretty spot on. I actually wrote this as a standalone paper after reading your article, so apologies if it seems a bit formal.

    I want to share my experiences with these test automation tools and the approach that I have developed which helps me to avoid the pitfalls whilst maintaining my sanity.

    Recently, I was working for a large energy utility client when they decided to start an Agile development initiative for smaller tactical applications. As part of this initiative, my next-door colleague was asked to evaluate test-driven development (TDD) tools like Fit and Fitnesse. He researched these shiny new tools, and we often found ourselves discussing his findings. What struck me during these discussions were the parallels between my use of traditional test automation tools and the methods encouraged by the TDD tools.

    I have to clarify the last statement. I do not currently work in an Agile or test-driven development environment, but I have worked in an Agile development team in the recent past. The thing that strikes me about TDD is not the test-first philosophy – it is the way that tools like Fit encourage the definition of tests in a standardised form (usually tabular) which is relatively unambiguous and easy to create.

    The standardised format is critically important. It provides these benefits:
    1. It gives the creator of tests a familiar structure to work within. Experience with the format improves both productivity and the quality of the tests created.
    2. It allows the test tool to read, interpret and execute the tests automatically. Admittedly, this doesn’t happen by magic. The test tool provides the execution framework and then, cleverly, supports an interface that allows it to be extended with “rules” specific to the software under test. This is what makes a test tool like Fit generic.
    3. Because the test will be executed automatically, it has to define explicitly the data to be used and the expected result. There is no room for ambiguity.
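    To make point 2 concrete, here is a minimal sketch of how a column-fixture-style “rule” maps a test table onto code. It deliberately avoids depending on the real Fit library (a real fixture would extend fit.ColumnFixture); the DiscountFixture name and the discount rule are hypothetical.

```java
// Sketch of a Fit-style column fixture, without the Fit library itself.
// In a real test table, each input column is bound to a public field and
// each expected-value column to a method; the framework sets the fields
// from a table row, calls the method, and compares against the table.
public class DiscountFixture {
    // Input column: the order total taken from the test table.
    public double orderTotal;

    // Expected-value column: the framework compares this method's return
    // value with the number written in the table.
    public double discount() {
        // Hypothetical business rule under test: 5% off orders over 100.00.
        return orderTotal > 100.0 ? orderTotal * 0.05 : 0.0;
    }
}
```

    With the real library, a table whose header row named the columns orderTotal and discount() would drive this class row by row. That is what keeps the tool generic: the framework never knows anything about discounts.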

    What is clear from the above is that the use of a test tool that provides a standardised format and an execution framework for tests is a great idea. Ideally, this approach would be used in a TDD environment starting with something like Fit during the early stages of development and then using, say, Selenium or Windmill in later stages. It is possible to get the benefits of a standardised test format and automation framework in a more traditional software development environment and this is what I have done.

    First things first. Test automation tools, whether free or commercial, are a necessary evil if the software under test involves a graphical user interface with buttons, tables, grids, listboxes and so on. This applies equally to web applications and to the assorted varieties of Windows applications. It follows that when testing an application which does not involve a GUI, it is possible to use an automated test framework without burdening it with a heavyweight third-party record and/or replay tool. This is important to keep in mind, because it means the “framework” part of an automated test tool becomes really useful and generic if it can support the following:
    1. Independence from third-party tools that provide the functionality to interact with the GUI.
    2. A standardised format, like a table, for defining the test, data and expected results.
    3. A defined interface to allow itself to be extended with “rules” specific to the software under test.
    4. Manual and automated testing from the same set of defined tests.
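    As a sketch of how requirements 1 and 4 might fit together, the GUI interaction can sit behind a small interface so the framework never touches QTP or TestPartner directly. All names here are illustrative assumptions, not taken from any real tool’s API.

```java
// Requirement 1: the framework talks to an interface, not to a specific
// third-party GUI tool. One implementation could delegate to QTP, another
// to TestPartner, and a "manual" one could simply present instructions to
// a human tester -- which is one way to get requirement 4 (the same test
// definitions driving manual or automated execution).
public interface GuiDriver {
    void click(String controlName);
    void type(String controlName, String text);
}

// A stand-in implementation that just records the actions it is asked to
// perform, useful for checking the framework without any GUI at all.
class RecordingDriver implements GuiDriver {
    private final StringBuilder log = new StringBuilder();

    public void click(String controlName) {
        log.append("click:").append(controlName).append('\n');
    }

    public void type(String controlName, String text) {
        log.append("type:").append(controlName).append('=').append(text).append('\n');
    }

    // The recorded transcript of every action the framework requested.
    public String transcript() {
        return log.toString();
    }
}
```

    Swapping drivers then becomes a configuration choice rather than a rewrite, which is exactly the independence from third-party tools that requirement 1 asks for.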

    I have developed an automated test tool that provides all of the above. It currently supports HP/Mercury’s Quick Test Pro (QTP) and Compuware’s TestPartner for the GUI interaction. It uses Excel spreadsheets to define the tests, data, expected and actual results in a standardised format. It has a defined interface for extending its functionality to allow for the specifics of the software under test.

    Strangely, its most beneficial feature is its ability to use exactly the same test definitions in manual or automated test execution. This is not because the feature itself is clever or useful (which it is), but because it can be used in a way that benefits the software development process much as working in a TDD environment does.

    The definition of tests can begin quite early in the development process even in a traditional, test-last, development environment. Typically, test scenarios are defined by referencing requirements specifications and design documentation. It is possible, but not straightforward, to define comprehensive tests directly from the documentation by abstracting the expected behaviour of the system. The use of a standardised format for the test, its data and expected results helps to keep things structured and methodical. The job of defining the tests before the software solution has been developed, or even completely worked out, brings with it an informal test-driven approach to the development process. Furthermore, the anticipation that the tests will be used for both manual and automated testing requires care and attention to detail to satisfy the requirement for flexibility in executing manual tests versus the precision and conciseness inherent in automated test execution. My experience has shown that by considering the tests early, it is possible to identify problem areas and therefore raise awareness within the project of potential issues in the business requirements or in the proposed implementation.

    My conclusion, with respect to the use of an automated test tool in a traditional, non-agile environment, is that the benefits are substantial and can help avoid many of the pitfalls described by your article.

  19. Hi, I didn’t get through the whole article yet, but I surely will. I just started learning QTP, and I already guessed that it won’t work for Agile (Scrum),
    meaning that the tester (automator) has to wait until he gets a piece of working software, and then builds his script using record and playback (regardless of the requirements).
    I think there is a solution for it: mixing manual testing and automation.
    First test manually, then create an automated script for it, so the team can use it
    (such a waste of time = money).

  20. 1. Next time I will read the whole thing before posting – sorry for the duplication 🙂
    2. What do you think about HttpUnit? (Even though it is meant just for Java/J2EE applications, I guess it can fit in Agile.)
    I think all the unit testing tools (JUnit, etc.) can be used for automation.
    Am I right?

    Elisabeth responds:

    HttpUnit is a library for submitting HTTP requests. In a similar vein, there’s HtmlUnit and JWebUnit. All three can work with any Web-based application, regardless of the language it’s implemented in (since they’re addressing the Web interface), though it’s most common to use them with a Java-based application. All three libraries can work with the JUnit framework – or any other Java framework, including TestNG or even Fit and Fitnesse.

    Whether to use JUnit or other unit test frameworks for system-level test automation is another question altogether. And I think it’s time for me to tackle that question by writing about frameworks and drivers a little more. I started writing about this topic in A Place to Put Things, but there is much more I could say.

  21. Hey Elisabeth, I enjoyed the article so much and feel very much the same way. We are in an Agile environment, and in order to coordinate the team on test-first, we decided to create all the GUI automation as soon as we got the JSP or HTML. After hours and days of hard work in QTP, we thought we were in good shape. But a minor change to the GUI would take hours or days of script fixing before we could run again. Then I started wondering how much value this practice, or toolsmith practice, really brings to a project.

    I learned about Fit recently and believe the way it empowers testers should be the future of automation. As far as I can see, Fit can be used as a layer in front of the presentation layer, acting as a data injector for the backend. If all the fixtures pass, the programmers should have at least some confidence.
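    A rough sketch of that idea – a fixture injecting data directly into the backend logic that the JSP would otherwise call, so the GUI is bypassed entirely. Both class names and the pricing rule here are made up for illustration.

```java
// Stand-in for backend logic the presentation layer would normally call.
class PriceCalculator {
    double priceAfterDiscount(double unitPrice, int quantity) {
        // Hypothetical volume rule: 10% off for orders of 10 or more units.
        double total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.90 : total;
    }
}

// A column-fixture-style wrapper: Fit would set the public fields from a
// table row and compare total() against the expected column, exercising
// the backend with no GUI involved.
public class PriceFixture {
    public double unitPrice;
    public int quantity;

    private final PriceCalculator backend = new PriceCalculator();

    public double total() {
        return backend.priceAfterDiscount(unitPrice, quantity);
    }
}
```

    Fixtures like this pin down the business rules; the remaining GUI automation then only has to check that the screens are wired to the backend, which is a much smaller and more stable set of tests.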

    However, we still need to automate the GUI part. My question is: since we have to do GUI automation eventually, how can we strike the right balance between Fitnesse and GUI automation?


  22. I would only add that, in my opinion, the “Traditional, Record-and-Playback” automation solutions you describe are wrong not only in the context of Agile testing, but in general.

  23. Great subject, and great posts. I’ve watched a video of you on testing, very nice.

    About tools… I’ve been a QA tester for more than 15 years, using automation products. BTW, I *never* use record-playback. I code from scratch.

    It seems to me that the overwhelming majority of test tools mentioned in connection with Agile (in this post, in books, and on other websites) are open source tools with a single focus. Many are for one platform only: the platform they are written in and for. That is a severe limitation.

    The products I test, the ones the company I work for produces, span many platforms. One product has components that are: (a) web services, (b) SharePoint web parts, (c) ASP.NET web parts, (d) COM-based applications, (e) .NET WPF applications, (f) Windows services, (g) web services-based web products, and (h) integrations with multiple vendors such as MS MQ, fax servers, email servers, IBM Records Manager, application servers like WebLogic, etc., and on and on.

    If a company’s product is a single platform, say a web product only, then there is a lot more flexibility in tool choices: just choose one of the tools that matches the platform.

    But in a multi-platform environment, where we don’t have the single-focus luxury, we need a tool, or a very small set of tools that can work together well, to accomplish what needs to be done.

    As much as open source tools are appropriate in many situations, I doubt they can cut it in a multi-platform environment such as ours. I suppose we could cobble together a long string of individual tools to test individual pieces of the product, but for end-to-end throughput testing and system testing, we need something more cohesive.

    How do we implement Agile testing in an environment such as ours? We’re transitioning to Agile, so I guess we’ll find out. Until now, we’ve used one of the “traditional” automation tools for years, with good results. Yes, it is very expensive, but it gets the job done.

    And on top of that, we’re transitioning from single-location development and testing to distributed teams.

    Good luck to us! 😉


  24. Joe,

    Can you please share what sort of automation tools you used for your multi-platform environment?

Comments are closed.