Random Thoughts on Record-and-Playback

Some years ago I had lunch with a QA specialist who invited me to visit him at work. He wanted to show off how he had used a macro recorder to automate his testing. Over lunch I offered the opinion that test automation is a programming activity. The QA specialist vehemently disagreed with me.

“I don’t need to program to automate my tests!” he said, waving his fork. “And I don’t want to program!” His fork punctuated his words. “All those FOR loops. Never could get the hang of FOR loops. And what’s the point anyway!” The fork was becoming dangerous. I considered ducking.

I couldn’t help but notice that he seemed angry that FOR loops were too complicated, and yet he was trying to automate tests. I haven’t visited that company again and I have no idea how they’re doing. But that guy? He scared me. No, it wasn’t the fork that scared me. It was the attitude about programming and automated tests.

Not Everyone Has to Code

I am often asked whether QA/Test people need to know how to program.

I always answer the same: QA/Test people do not need to know how to program, even on Agile projects. Everyone on a team brings something to the table. QA/Test people may not bring programming skills, but that’s OK; they don’t need to. The programmers already have that covered. Rather, QA/Test people bring testing skills and analysis skills and critical thinking skills and communication skills. Some bring other non-programming technical skills like database admin or network admin or system admin skills. And some do bring programming skills, and that’s great too. Whatever their skills, everyone brings something to the table.

And non-programming testers can collaborate very effectively with programmers to create awesome test automation (just ask Lisa Crispin).

But someone who is scared of FOR loops doing the coding part of test automation in isolation by recording and playing back stuff? Seems to me like that’s a good way for everyone on the project to waste a huge amount of time and become very frustrated.

Is Record-and-Playback a Solution to the Wrong Problem?

I’ve been thinking about that guy today as I’ve been thinking about record-and-playback test automation. I have had several conversations about record-and-playback test automation over the last few months with a wide variety of people.

Some people have said, “Yes, we discovered that what you are saying is true. So while we may start by recording a script, we modify it so much that it is not recognizable as a recorded script when we’re done.”
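The kind of rewrite those folks describe can be sketched in miniature. This is an illustrative example of my own, not output from any particular tool: `FakeForm` is a hypothetical stand-in for whatever UI-driver API a real recorder exposes. The point is the shape of the change, with hard-coded repeated steps giving way to test data pulled out into a table and, yes, a FOR loop.

```python
class FakeForm:
    """Hypothetical stand-in for a recording tool's UI driver."""
    def __init__(self):
        self.fields = {}

    def type_into(self, field, value):
        self.fields[field] = value

    def click_submit(self):
        # Return a snapshot of what was submitted.
        return dict(self.fields)


# What a recorder typically emits: one hard-coded line per keystroke/click,
# duplicated for every variation, with no check that anything worked.
def recorded_script(form):
    form.type_into("name", "Alice")
    form.type_into("email", "alice@example.com")
    form.click_submit()
    form.type_into("name", "Bob")
    form.type_into("email", "bob@example.com")
    form.click_submit()


# The hand-modified version: test data separated from steps, the steps
# driven by a loop, and an actual assertion so the script can fail usefully.
def data_driven_script(form, users):
    for name, email in users:  # the dreaded FOR loop
        form.type_into("name", name)
        form.type_into("email", email)
        result = form.click_submit()
        assert result["email"] == email, f"submission lost {email}"


data_driven_script(FakeForm(), [("Alice", "alice@example.com"),
                                ("Bob", "bob@example.com")])
```

By the time a script looks like the second version, the recorder’s output is just scaffolding that got thrown away.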

Others have said, “I only use the record-and-playback for quick and dirty test-assistance scripts. I know it can’t create real, robust test automation.”

Still others – usually vendors – have said, “Yes, I know you don’t like record-and-playback. But others do, and they want that capability. They want to automate their tests without programming.”

So individual contributors generally recognize that record-and-playback can be a helpful stepping stone, but it is not a strategy. Yet at least some vendors remain very attached to record-and-playback, and customers apparently still demand it.

I wonder if record-and-playback is an attempt to solve the problem that QA/Test groups think they have rather than the real, underlying problem? It seems to me that the reasoning usually goes something like this:

  1. We need automated tests to provide fast feedback.
  2. Someone needs to write those tests.
  3. It has the word “test” in it, so it must belong to the QA/Test group.
  4. The QA/Test group doesn’t have much in the way of programming skills.

Therefore, the reasoning concludes, we must need something that will allow QA/Test groups to automate tests without knowing how to program. Or we must make all QA/Test people learn to code. That second thing isn’t gonna happen, so we need a tool to take care of the first solution.

The problem is that item #3 in the list is a total fallacy. Just because something has the word “test” in it does not automatically mean that it should be assigned to a designated, independent tester. If we get rid of the original constraint #3, we simplify the problem and open a world of alternative solutions. So let’s state the situation instead as:

  1. We need automated tests to provide fast feedback.
  2. Someone needs to write those tests.
  3. The QA/Test group may not have much in the way of programming skills.

So the people who are really good at coding can do the programming part of automating tests, and the non-programming testers can collaborate with them to make sure the automated tests are useful and effective. And I do mean collaborate. Not “hand off a specification to…”, not “make a request of…”, and certainly not “supervise.” I mean collaborate as in work together on, at the same time.

Worse, is Record-and-Playback a Substitute for the Real Solution?

So if the reason customers demand record-and-playback capability in test automation tools is that it enables people who don’t know how to code to automate tests, it makes me wonder why they’re making non-programmers do programming work.

The most common reason I hear from QA/Test people is that the programmers won’t automate tests, so the testers have to do it. The most common reason I hear from the programmers is that they don’t have time, and besides, there is a whole QA/Test group assigned to that kind of work.

But it seems to me like the real issue here is that we’re trying to use a tool as a substitute for something else. We’re using it as an alternative to real collaboration.

So now when I hear someone tell me that they’re using record-and-playback a lot in their test automation strategy, it suggests that perhaps the test effort and development effort aren’t integrated, that the organization is still operating in silos, and that we need to work on breaking down barriers to collaboration before we start talking seriously about tools.

18 thoughts on “Random Thoughts on Record-and-Playback”

  1. Well put, Elisabeth.

    I really don’t blame people much for wanting these tools. Who doesn’t want an easy solution to a hard problem? But I do think it’s shameful when vendors sell snake oil, and equally disappointing when companies treat hopes as plans, even after experience suggests otherwise.

    You’re absolutely right that collaboration is key. I love pairing with non-programmers to work on tests. I hope more teams discover how effective that is!

  2. In the perfect world though, wouldn’t you have a QA team who were developers?

  3. I liked this one so much that I sent it around in my company immediately. What makes it so good in my opinion is that you put the finger exactly on all the sore points, for example collaboration vs tool use. Even most managers will understand this one 😉



  4. I think that in an ideal world the QA folks–both those who code and those who don’t–would be collaborating so closely with the developers that they’re all part of the same team.

  5. w00t ^ 10.

    I love it when something gives voice to nagging suspicions. This piece certainly does that. Thank you for writing it.

    When I was in charge of test automation – loaded term, I know – I posed this question to my manager: “Why are we paying $xx,xxx for a tool that gives us the dubious honor of clicking a button WE created?”

    The subtext of that question was reflected rather nicely in your post. Do we HAVE to click the button? Aren’t we really trying to test the concept behind the button? If so, how might we accomplish this organically/internally rather than inviting an outside entity to act as a liaison between Dev and Test?

    Again I say: w00t ^ 10 on you.

  6. Elisabeth,

    Nice post, and I agree with a lot of what you said. And we have had this type of discussion in the past. But in my experience, R&P is sold to the people with the money first (Management), and this is done via the snake-oil dog-and-pony shows the vendors put on. This sets a false expectation in management of what this type of work is, how it can be done, who it can be done by (or should be done by), how much really should be done, what it can really provide, and the time it takes to get something useful/beneficial in place.

    Been there and done that one too many times. You know my favorite saying “It’s Automation, NOT Automagic”. And I do agree with the collaboration aspect, but that is only now starting to happen if you’re lucky.

    This is where what I’ve called a Hybrid Tester comes into play. A Tester who knows how to program and knows how both the application and the tool work and interact. They know how to build the code base/functions/framework to make for a robust and maintainable automation implementation. They also know how to “sell” it correctly; meaning they know how and what to say to management to get them to understand what it is all about. Hopefully you have a management that “listens”.

    Don’t get me wrong, R&P does have its place as a way to build quick throw away scripts and it does allow you to quickly prototype a test. But I am a big proponent of proper programming practices for automation. As a coworker of mine says “Why do you need silver bullets when you’re not hunting Werewolves?”

    See you at STPCon2010 if you go.


  7. Afraid I wasn’t nodding whilst reading all of this…

    “(just ask Lisa Crispin).”

    Frankly, I’d rather not ask Lisa’s opinion on much to do with testing. Here is a woman who has concentrated most of her testing career on tools and how to use them. She is clearly quoted as saying “No Manual Tests” in her testing extreme programming book. Her latest book on Agile Testing doesn’t really move anything forward.

  8. Lisa Crispin has always seemed quite sharp to me; I’m not sure why her having an opinion about what sorts of tests are most useful should mean I shouldn’t listen to her?

  9. It’s a pity that you’ve dismissed what Lisa has to say so cavalierly. She has a wealth of real-world hands-on-the-keyboard expertise as a tester working with Agile teams, shipping production software. I admire and respect her tremendously. And I think Agile Testing is an important book in our field.

  10. A valuable post, thank you. My head was nodding pretty metronomically while reading.

    I’m using Lisa’s Agile Testing book in a reading circle for our Testers. I concur it’s an important book.

  11. Hm. I saw Lisa and Janet’s book as codifying excellent practice. Even if that doesn’t move things forward, it certainly establishes an important foundation, in which I find considerable value.

  12. I don’t think that I’m dismissing what she has to say completely. The fact that I read her first diabolical book, AND still went on to read the second, doesn’t necessarily show a cavalier attitude. Let me explain my comment slightly differently.

    Lisa appears to focus heavily on tools and toolsets to do testing rather than really studying the mechanics behind what makes a great test. She does, however, outright dismiss manual testing as a valuable project testing tool.

    Regarding the comment on “codifying excellent practice”, this almost smacks of having a written, standardised agile testing practice, which I think is a mistake on many levels.

  13. Devs just looooooovvvvvvvvvveeeeeeeeee to automate things. Very ‘depressing’ sometimes. Look how easy it is to automate this simple test with this tool … *eyeroll* .

    No one ever seems to ask the question as to why we are automating, what is the ultimate goal.

    And of course, exploratory testing has no value at all…

    I guess I’m having a bad week.

  14. To my mind, all the endless hours spent on automating tests have no value unless someone actually does something about, or looks into, the actual reasons why a test is failing.

    Snowball / chance / hell.

  15. A manager friend asked me the other day about free tools for record playback that he could give to his non-programming testers. While I was happy to point him at a few tools on the understanding he’s old enough to make up his own mind, the nagging question for me was, “If they’re non-programmers, what will they do when the tool inevitably fails to work”?

    I suddenly couldn’t remember seeing any recording tools of late that ‘just worked’ (especially in the web space, working with AJAX apps and similar).

    Reluctant to chime into the Agile Testing book segment as well, but I will say I was disappointed, mainly in that it seemed like a wiki dump that pointed to starting points, rather than a pointer to skill, wisdom and heuristics. At more than 500 pages, it felt surprisingly light on practice. Having said that, kudos to them for getting a book published.

    Oliver, feel free to contact me via my blog. We may have stuff to talk about 🙂

  16. Elisabeth
    Thanks for this. I now realise that the term record/playback has been annexed by product vendors 🙁

    I’d always used it for capture/manipulation/playback at a protocol interface with a generalised comparison mechanism for confirming state changes are as expected.

    Another source of confusion identified.

  17. I worked on a project where a well-structured automated test framework, a tool such as HP BPT, and a set of practices such as having an automation architect were enough to keep testers who don’t want to code away from it. A development team would take care of building components which testers used, leaving them free to explore as many testing scenarios as they could.

    Right. Not all companies have an automation team. I get it. And yet, a couple of devs/QA can take care of the heavy coding while the rest devote themselves to automating something more robust than record-and-playback. And quite a bit more legible, too.
