Agile-Friendly Test Automation Tools/Frameworks

Several people have asked me recently why I’m not a fan of the traditional test automation tools for Agile projects. “Why should I use something like Fit or FitNesse?” they ask. “We already have <insert Big Vendor Tool name here>. I don’t want to have to learn some other tool.”

Usually the people asking the question, at least in this particular way, are test automation specialists. They have spent much of their career becoming experts in a particular commercial tool. They know how to make their commercial tool of choice jump through hoops, sing, and make toast on command.

Then they find themselves in a newly Agile context struggling to use the same old tool to support a whole new way of working. They’re puzzled when people like me tell them that there are better alternatives for Agile teams.

So if you are trying to make a traditional, heavyweight, record-and-playback test automation solution work in an Agile context, or if you are trying to help others understand why such efforts are almost certainly doomed to fail, this post is for you.

Why Traditional, Record-and-Playback, Heavyweight, Commercial Test Automation Solutions Are Not Agile

Three key reasons:

  1. The test-last workflow encouraged by such tools is all wrong for Agile teams.
  2. The unmaintainable scripts created with such tools become an impediment to change.
  3. Such specialized tools create a need for Test Automation Specialists and thus foster silos.

Let’s look at each of these concerns in turn, then look at how Agile-friendly tools address them.

Test-Last Automation

Traditional, heavyweight, record-and-playback tools force teams to wait until after the software is done – or at least the interface is done – before automation can begin. After all, it’s hard to record scripts against an interface that doesn’t exist yet. So the usual workflow for automating tests with a traditional test automation tool looks something like this:

  1. Test analysts design and document the tests
  2. Test executors execute the tests and report the bugs
  3. Developers fix the bugs
  4. Test executors re-execute the tests and verify the fixes (repeating as needed)
  5. …time passes…
  6. Test automation specialists automate the regression tests using the test documents as specifications

Looking at the workflow this way, it’s surprising to me that this particular test automation strategy ever works, even in traditional environments with long release cycles and strict change management practices. By the time we get around to automating the tests, the software is done and ready to ship. So those tests are not going to uncover much information that we don’t already know.

Sure, automated regression tests are theoretically handy for the next release. But usually the changes made for the next release break those automated tests (see concern #2, maintainability, coming up next). The result for most contexts: high cost, limited benefit. In short, such a workflow is a recipe for failure on any project, not just for Agile teams. The teams that have made this workflow work well in their context have had to work very, very hard at it.

However, this workflow is particularly bad in an Agile context where it results in an intolerably high level of waste and too much feedback latency.

  • Waste: the same information is duplicated in both the manual and automated regression tests. Actually, it’s duplicated elsewhere too. But for now, let’s just focus on the duplication in the manual and automated tests.
  • Feedback Latency: the bulk of the testing in this workflow is manual, which means it takes days or weeks to discover the effect of a given change. If we’re working in 4-week sprints, waiting 3 – 4 weeks for regression test results just does not work.

Agile teams need the fast feedback that automated system/acceptance tests can provide. Further, test-last tools cannot support Acceptance Test Driven Development (ATDD). Agile teams need tools that support starting the test automation effort immediately, using a test-first approach.

Unmaintainable Piles of Spaghetti Scripts

Automated scripts created with record-and-playback tools usually contain a messy combination of at least three different kinds of information:

  • Expectations about the behavior of the software under test given a set of conditions.
  • Implementation-specific details about the interface.
  • Code to drive the application to the desired state for testing.

So a typical script will have statements to click buttons identified by hard-coded button ids followed by statements that verify the resulting window title followed by statements to verify the calculated value in a field identified by another hard-coded id, like so:

field("item_1").enter_value("12345")
button("lookup_item_1").click
field("price_1").verify_value("$7.00")
field("qty_1").enter_value("6")
button("total_next").click
active_window.verify_title("Checkout")
field("purchase_total").verify_value("$42.00")

The essence of the test was to verify that ordering 6 items at $7 each results in a shopping cart total of $42. But because the script has a mixture of expectations and UI-specific details, we end up with a whole bunch of extraneous implementation details obfuscating the real test.

(If you’re nodding along, thinking to yourself, “Yup, looks like our test scripts,” then you have my sympathies. My deep, deep sympathies. Good, maintainable, automated test scripts do not look like that.)

All that extraneous stuff doesn’t just obscure the essence of the test. It also makes such scripts hard to maintain. Every time a button id changes, or the workflow changes, say with a “Shipping Options” screen inserted before the Checkout screen, the script has to be updated. But that value $42.00? That only changes if the underlying business rules change, say during the “Buy 5, get a 6th free!” sale week.

Of course, there are teams that have poured resources, time, and effort into creating maintainable tests using traditional test automation tools. They use data-driven test strategies to pull the test data into files or databases. They create reusable libraries of functions for common action sequences like logging in. They create an abstraction layer (a GUI map) between the GUI elements and the tests. They use good programming practices, have coding standards in place, and know about refactoring techniques to keep code DRY. I know about these approaches. I’ve done them all.
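
If it helps to make that concrete, here is a minimal sketch of the GUI map idea in Ruby. It reuses the invented field/button API from the script above; the element names and the add_item_to_cart helper are hypothetical:

# GUI map: every implementation-specific locator lives in one place,
# so a changed button id means a one-line fix instead of a script hunt.
GUI_MAP = {
  :item_sku   => "item_1",
  :lookup     => "lookup_item_1",
  :quantity   => "qty_1",
  :next       => "total_next",
  :cart_total => "purchase_total"
}

# Reusable action: scripts call this; only this method knows the ids.
def add_item_to_cart(driver, sku, quantity)
  driver.field(GUI_MAP[:item_sku]).enter_value(sku)
  driver.button(GUI_MAP[:lookup]).click
  driver.field(GUI_MAP[:quantity]).enter_value(quantity.to_s)
  driver.button(GUI_MAP[:next]).click
end

Pull the test data ($7.00, 6, $42.00) out into a data file, and the scripts themselves end up nearly free of implementation detail.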

But I had to fight the tools the whole way. The traditional heavyweight test automation tools are optimized for record-and-playback, not for writing maintainable test code. One of the early commercial tools I used even made it impossible to create a separate reusable library of functions: you had to put any general-use functions into a library file that shipped with the tool (making tool upgrades a nightmare). That’s just EVIL.

Agile teams need tools that separate the essence of the test from the implementation details. Such a separation is a hallmark of good design and increases maintainability. Agile teams also need tools that support and encourage good programming practices for the code portion of the test automation. And that means they need to write the test automation code using real, general-purpose languages, with real IDEs, not vendor script languages in hamstrung IDEs.

Silos of Test Automation Specialists

Traditional QA departments that automate tests in a waterfall/phased context usually have a dedicated team of test automation specialists. This structure addresses several forces:

  1. Many “black-box” testers don’t code, don’t want to code, and don’t have the necessary technical skills to do effective test automation. Yes, they can click the “Record” button in the tool. But most teams I talk to these days have figured out that having non-technical testers record their actions is not a viable test automation strategy.
  2. The license fees for traditional record-and-playback test automation tools are insanely expensive. Most organizations simply do not have the budget to buy licenses for everyone. Thus only the anointed few are allowed to use the tools.
  3. Many developers view the specialized QA tools with disdain. They want to write code in real programming languages, not in some wacky vendorscript language using a hamstrung IDE.

Thus, the role of the Test Automation Specialist was born. These specialists usually work in relative isolation. They don’t do day-to-day testing, and they don’t have their hands in the production code. They have limited interactions with the testers and developers. Their job is to turn manual tests into automated tests.

That isolation means that if the production code isn’t testable, these specialists have to find a workaround because testability enhancements are usually low on the priority list for the developers. I’ve been one of these specialists, and I’ve fought untestable code to get automated tests in place. It’s frustrating, but oddly addictive. When I managed to automate tests against an untestable interface, I felt like I’d slain Grendel, Grendel’s mother, all the Grendel cousins, and the horse they rode in on. I felt like a superhero.

But Agile teams increase their effectiveness and efficiency by breaking down silos, not by creating test automation superheroes. That means the test automation effort becomes a collaboration. Business stakeholders, analysts, and black-box testers contribute tests expressed in an automatable form (e.g. a Fit table) while the programmers write the code to hook the tests up to the implementation.

Since the programmers write the code to hook the tests to the implementation while implementing the user stories, they naturally end up writing more testable code. They’re not going to spend 3 days trying to find a workaround to address a field that doesn’t have a unique ID when they could spend 5 minutes adding the unique ID. Collaborating means that automating tests becomes a routine part of implementing code instead of an exercise in slaying Grendels. Less fun for test automation superheroes, but much more sensible for teams that actually want to get stuff done.

That means Agile teams need tools that foster collaboration rather than tools that encourage a whole separate silo of specialists.

Characteristics of Effective Agile Test Automation Tools

Reviewing the problems with traditional test automation tools, we find that Agile teams need test automation tools/frameworks that:

  • Support starting the test automation effort immediately, using a test-first approach.
  • Separate the essence of the test from the implementation details.
  • Support and encourage good programming practices for the code portion of the test automation.
  • Support writing test automation code using real languages, with real IDEs.
  • Foster collaboration.

Fit, FitNesse, and related tools (see the list at the end of the post for more) do just that.

Testers or business stakeholders express expectations about the business-facing, externally visible behavior in a table using keywords or a Domain Specific Language (DSL). Programmers encapsulate all the implementation details, the button-pushing or API-calling bits, in a library or fixture.

So our Shopping Cart example from above might be expressed like this:

Choose item by sku 12345
Item price should be $7.00
Set quantity to 6
Shopping cart total should be $42.00

See, no button IDs. No field IDs. Nothing except the essence of the test.

And writing our test in that kind of stripped-down-to-the-essence way makes it no longer just a test. As Brian Marick would point out, it’s an example of how the software should behave in a particular situation. It’s something we can articulate, discuss, and explore while we’re still figuring out the requirements. The team as a whole can collaborate on creating many such examples as part of the effort to gain a shared understanding of the real requirements for a given user story.

Expressing tests this way makes them automatable, not automated. Automating the test happens later, when the user story is implemented. That’s when the programmers write the code to hook the test up to the implementation, and that’s when the test becomes an executable specification.
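
To give a flavor of that glue code, here is a minimal sketch in Ruby. The method names mirror the steps of the shopping cart example above; the driver calls reuse the invented field/button API from the earlier script and are purely illustrative:

# Fixture: each step of the example maps to implementation details
# that the test itself never mentions.
class ShoppingCartFixture
  def initialize(driver)
    @driver = driver  # whatever object drives the UI or API under test
  end

  # "Choose item by sku 12345"
  def choose_item_by_sku(sku)
    @driver.field("item_1").enter_value(sku)
    @driver.button("lookup_item_1").click
  end

  # "Item price should be $7.00"
  def item_price_should_be(price)
    @driver.field("price_1").verify_value(price)
  end

  # "Set quantity to 6"
  def set_quantity_to(quantity)
    @driver.field("qty_1").enter_value(quantity.to_s)
    @driver.button("total_next").click
  end

  # "Shopping cart total should be $42.00"
  def shopping_cart_total_should_be(total)
    @driver.field("purchase_total").verify_value(total)
  end
end

All the button ids from the spaghetti version are still here, but they live in exactly one place. When that “Shipping Options” screen gets inserted before Checkout, the fixture changes; the example, and the business rule it captures, stays exactly as written.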

Before it is automated, that same artifact can serve as a manual test script. However, unlike the traditional test automation workflow where manual tests are translated into automated tests, here there is no wasteful translation of one artifact into another. Instead, the one artifact is leveraged for multiple purposes.

For that matter, because we’re omitting implementation-specific details from the test, the test could even be reused if the system were ported to a completely different technology. There is nothing specific to a Windows or Web-based interface in the test. The test would be equally valid for a green screen, a Web services interface, a command line interface, or even a punch-card interface. Leverage. It’s all about the leverage.

Traditional Tools Solve Traditional Problems in Traditional Contexts. Agile Is Not Traditional.

Traditional, heavyweight, record-and-playback tools address the challenges faced by teams operating in a traditional context with specialists and silos. They address the challenge of having non-programmers automate tests by providing record-and-playback features, a simplified editing environment, and a simplified programming language.

But Agile teams don’t need tools optimized for non-programmers. Agile teams need tools to solve an entirely different set of challenges related to collaborating, communicating, reducing waste, and increasing the speed of feedback. And that’s the bottom line: Traditional test automation tools don’t work for an Agile context because they solve traditional problems, and those are different from the challenges facing Agile teams.

Related Links

A bunch of us are discussing the next generation of functional testing tools for Agile teams on the AA-FTT Yahoo! group. It’s a moderated list and membership is required. However, I’m one of the moderators, so I can say with some authority that we’re an open community. We welcome anyone with a personal interest in the next generation of functional tools for Agile teams. We’re also building lists of resources. In the Links section of the AA-FTT Yahoo! group, you’ll find a list of Agile-related test automation tools and frameworks. And the discussion archives are interesting.

Brian Marick wrote a lovely essay on An Alternative to Business-Facing TDD.

I discussed some of the ideas in this article in previous blog posts.

A small sampling of Agile-friendly tools and frameworks:

  • Ward Cunningham’s original Fit has inspired a whole bunch of related tools/frameworks/libraries including FitNesse, ZiBreve, Green Pepper, and StoryTestIQ.
  • Concordion takes a slightly different approach to creating executable specifications: the test hooks are embedded as attributes in the HTML, so the specification reads as natural language rather than a table.
  • Selenium RC and Watir tests are expressed in Ruby; Ruby makes good DSLs.

Are you the author or vendor of a tool that you think should be listed here? Drop a note in the comments with a link. Please note however that comment moderation is turned on, and I will only approve the comment if I am convinced that the tool addresses the concerns of Agile teams doing functional/system/acceptance test automation.

"Normal" in Context

It was my first week in Bangalore, and I was still adjusting to the time difference. I was actually a little proud that I was functional and awake given that it was something like 1AM my time.

“Want some coffee?” my host asked.

“No thanks, I’m fully caffeinated for now,” I replied.

“Even if you don’t want a coffee, you should come see how it’s prepared,” my host grinned at me expectantly.

“Um, OK,” I relented. I dutifully followed him through twisting and turning corridors until we arrived at the coffee counter.

There were three men at the counter. I watched as they made coffee for all the people in line in front of us. It was quite a production.

The first man reached up to a shelf for a ceramic cup and placed it on the counter. The cups were bigger than a demitasse, but much smaller than my typical ginormous supersized vat-o-coffee mug.

The second man then flipped the valve on the coffee maker allowing a dark, rich liquid — thicker than espresso — to flow into a small metal pitcher. He then upended the metal pitcher into the cup.

The third man had the best job of all. He was the real showman. This was what my host wanted me to see. He began by dipping a saucepan into a huge steaming pot of milk sunk into the counter. He then lifted it high and poured it back in a long stream. Dip. Pour. Dip. Pour. As he poured the milk back into the pot, it frothed.

When the third man judged the milk sufficiently foamy, he poured it into the prepared cup, careful to let just the milk out. No foam. Not yet. Once the level in the cup reached an invisible boundary, he poured the rest of the liquid back into the steaming pot, leaving just the foam in the saucepan. Then he gently tilted the saucepan over the cup, allowing just the right amount of foam to cover the center of the near-caramel-colored coffee mixture. The result was a foamy white top surrounded by a ring of darker froth around the edges. As he placed the dipper back across the pot of milk, the second man ceremoniously handed the patron their coffee mug, handle first.

Several people were in line, so I got to see the performance several times. Each time the team of three executed with precision. The resulting cups of coffee were identical in appearance: same volume in the cup, same amount of foam on top, same colors.

The milk pourer also seemed to have a quality control role. If he decided the color wasn’t quite dark enough, he would signal – almost imperceptibly – to the metal pitcher guy, who would then add a little more of the thick, dark coffee.

Of course, after such a performance, I had to have one of my own. Receiving my mug reverently, I took a sip. The drink was nothing like the coffee I usually get at home. The froth tickled a little. The drink tasted sweet and rich and just a little exotic. It was a bit like a latte, but richer and sweeter. I was hooked.

Visiting the coffee counter became a ritual for me. I drank many, many of those coffees while in India.

One day toward the end of my visit, the person in front of me requested unsweetened milk in his coffee. When it was my turn and I stepped up to the counter, the first man confirmed what I wanted: “Normal coffee, madam?” he asked.

“Yes,” I replied, smiling. “Normal coffee please.” Even if the beverage I was enjoying was not normal coffee to me, it was normal here. Sweet. Rich. Foamy. Normal. Once again, normal is in the eye of the beholder.

Same Blog, New Host

So I woke up this morning to an email from alert reader Michael Ludgate notifying me that when he tried to access any page on my site, he got the following message:

WordPress database error: [Can't create/write to file '/tmp/mysqltmp/#sql_11b6_0.MYI' (Errcode: 2)]

“Oh, joy,” I thought to myself as I began investigating. I knew that the problem could not have been caused by anything I did. First, my most recent update to the site, back on April 2, was just a content posting, and I was 100% certain my site had been alive and well much more recently than that. Second, I don’t even have shell access, so I couldn’t make a file in /tmp/… disappear if I tried. That meant the problem had to be with my ISP, and was probably outside my control.

After poking around for a little while in the vain hope that if I messed with stuff the tmp file would spontaneously regenerate, I sent a missive off to tech support. The auto-responder helpfully told me that I could expect to wait 24 – 48 hours for a response. Even if this is just a blog, that’s too much downtime. So I decided a phone call was in order.

A baffled tech support rep suggested I uninstall and reinstall WordPress. When I pushed him on how this problem happened to begin with, the tech support rep backpedalled a little and said he’d escalate the issue. I now had an incident number for tracking purposes, and a promise that “someone would get back to me.”

At this point I evaluated my options.

I’ve been growing annoyed with godaddy.com anyway. Every time I log into my admin account, they try to upsell me. They don’t let me have shell access. They don’t support Ruby on Rails well – or not as well as I would like. I find their admin UI clunky. These are little annoyances; they’re not enough to push me to change hosts. But now that I had a down site and no ETA for a fix, I decided that changing hosts was actually the path of least resistance.

So my adventures began. I bought 3 months of hosting on A2hosting.com, a host I’d been contemplating for RoR hosting anyway. I managed to get a backup of my content from my godaddy.com site. (Note to self: I need to revisit my blog backup strategy. I happened to get lucky today – the catastrophic error that made my site unusable mercifully did not prevent me from exporting the data. But that was sheer luck. My previous backup was a couple months old. But I digress.)

I then put up a “pardon the mess” notification on both the old and new sites and changed my DNS entries to point to the DNS servers at the new location on A2. And I installed WordPress on the new site.

I had to wait for the DNS changes to propagate before I could do more because every attempt to log into the WordPress admin interface in the new location redirected me to the old site. Fortunately, I had to go to an appointment anyway. By the time I got back I could see the new IP address when I pinged testobsessed.com.

I then began migrating the content. That meant restoring the content from the SQL backup through a MySQL admin interface, upgrading the tables, panicking when it looked like the schema wasn’t going to upgrade cleanly, and finally breathing a sigh of relief when I saw the old content in the new site.

But wait! There was still more to do. I uploaded the theme files and other resources (including images and pdfs and such). I fixed the permalink settings. I tested. I fixed glitches. I tested again. I swore.

And finally I got the old site up on a new host.

This is, quite frankly, NOT how I expected to spend my day. I was supposed to be doing paperwork – invoices, expense reports, contracts, that sort of thing. This was a geekier, and possibly more exciting way to spend my day, but it was not at all what I had planned. And in my haste it is certainly possible that I missed some detail.

I think everything is working now. Please let me know if you find anything strange – missing content, broken links, whatever. (No, I don’t have a full set of automated regression tests to cover every link I publish. I’ve considered it, but the cost of maintaining such a suite of tests seems a little high for a non-revenue-generating blog.)

And that escalated trouble ticket filed with godaddy? Still not handled. I’m not particularly surprised. My suspicion is that the problem I encountered is somehow related to the fact that I had not migrated my WordPress to their whiz-bang new Hosting Connection application management console thingy. That first tech support rep was probably right: I could have solved my problem by uninstalling and re-installing WordPress.

But I don’t think that would have been any easier than moving hosts – the only step saved would have been the DNS change. And I’m happier having migrated. And – as an extra bonus – the site even seems a little zippier.

This little digression will probably boost another back-burner project to the foreground: I’ve been meaning to change the way I manage my main company site. Other alert readers have pointed out to me that my calendar at qualitytree.com is more than a year out of date. It’s embarrassing. But I haven’t fixed it because I publish that site using an obscure little Windows-based CMS. Since I’ve gone all-Mac, updating qualitytree.com means I have to boot up my old Windows laptop, and I hate doing that.

I’m going to see how things go with the new host for a while before migrating everything. But it looks like I’ll be migrating my other website sooner rather than later.

And now I’d better start on that paperwork I meant to take care of today.

Effective Test Automation Isn't Created in a Vacuum

“They never give us enough time to automate our tests, and then they complain at us that we don’t test fast enough!” J. shook her head. “And when I want to hire more people to help automate, they tell me I have too many people already! Management blames me because testing takes too long, but they won’t support me in fixing the problem. What’s wrong with them!?!”

J. is a QA manager in an organization that’s adopting Scrum. She’s frustrated, and understandably so. From her point of view, she’s being squeezed in all directions. The developers are producing releasable code every month. But for her team to run a regression test cycle – mostly manual – takes 6 weeks. That’s too long. Just one test cycle exceeds the sprint length by 2 weeks. J. feels tremendous pressure to reduce the time it takes to test the software. Yet at the same time, she feels like she’s not getting any support to do the one thing she can see that will help reduce the test cycle time: automate the regression tests.

I’ve had some visibility into J.’s situation for some time now. J.’s team has been trying – and failing – to automate the regression suite for the last two years. They aren’t making any headway because as soon as they get one script working, another one breaks. The automation is brittle, error-prone, and incredibly expensive to create and maintain. That’s in part because they’ve been using a cumbersome commercial tool that doesn’t support creating maintainable tests. It’s also because the user interface was not designed with test automation in mind. Many UI elements don’t have IDs, and the ones that do use automatically generated IDs that change with each build. In short, the combination of the tool and the software under test equals a test automation nightmare. It’s no wonder J.’s team is not making headway.

Yet J. persists. Doing more of the same kind of test automation that’s already failing doesn’t make much sense to me, but she disagrees. “We just need more time!” she says.

The problem is that J. is still thinking in terms of silos. She thinks all testing tasks must be done by QA people using specialized QA tools. It simply would not occur to her to suggest that development help automate tests. Nor does she suggest that developers and testers collaborate on making the UI more testable. Instead, she says, “QA can’t go that fast. Slow down.”

J. doesn’t want to acknowledge that test automation created by a siloed QA team – working in isolation to reverse-engineer existing software, automating tests against an untestable UI, using proprietary tools accessible only to a few select team members – is guaranteed to be incredibly expensive to create and maintain, and ridiculously fragile besides. In short, her approach just isn’t going to work.

Unfortunately, J.’s story is likely to have an unhappy ending – at least for J. and her team. Her strategy of trying to get development to slow down, and telling management that they can’t release monthly, is backfiring. The development team is already bypassing QA for small changes and getting good results. But J. is undeterred. My past observations tell me that no matter what the reaction of the people around her, she will keep doing the same thing and expect different results.

But maybe, just maybe, by telling J.’s story here, I can help someone, somewhere.

So allow me to repeat the moral of this story:

When QA works in isolation, creating automated tests after the software is theoretically “done,” using proprietary tools that are available only to a select few team members, the result will be a fragile, unmaintainable mess.

For test automation to work well, it must be created in collaboration with the whole team and the resulting test automation code must be treated as code. That means it should be versioned with the source code, executed with each and every build, and created and maintained as part of the overall development cycle rather than as an afterthought.
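
What does “treated as code” look like in practice? As one small, hypothetical sketch, assuming a Ruby project built with Rake (the compile and unit task names are made up):

# Rakefile: the acceptance tests live in the same repository as the
# production code and run as part of every build.
require 'rake/testtask'

Rake::TestTask.new(:acceptance) do |t|
  t.pattern = "test/acceptance/**/*_test.rb"  # versioned with the source
end

# The build fails when the acceptance tests fail. There is no separate,
# optional, after-the-fact automation phase.
task :build => [:compile, :unit, :acceptance]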

And when a Test/QA group insists on staying within its silo while the rest of the organization adopts Agile practices, it will end up bypassed and irrelevant as the rest of the organization finds ways to move forward without its help.