Exploratory Testing in an Agile Context Materials

I’m giving a session at Agile2011 in Salt Lake City at 9AM Wednesday on Exploratory Testing in an Agile Context. The session itself will be entirely hands-on: we will explore a hand-held electronic game that I brought, while discussing how ET and Agile fit together hand-in-glove. However, I did produce materials for the session: a PDF that’s almost a booklet. Thought you all might like to see it.


Agile Up 3 Here

We held Agile Up 3 Here at Agilistry Studio last week. Nine people gathered from all around the world for our second week-long intensive. Our team consisted of Alan Cooper, Jim Dibble, Pat Maddox, Alex Bepple, Brendon Murphy, Dale Emery, Matt Barcomb, Dave Liebreich, and me. Once again, we were working on mrhomophone.com.
My insights from the week:

  1. Distilling down to the absolute core of the intent for a given period of time is harder than it sounds. It’s tempting to include little nice-to-haves in the stories. Even when implementing, it’s tempting to do a bit of polishing in unrelated areas.
  2. Perhaps the temptation to expand the scope of the deliverable beyond the bare bones isn’t all bad. It can enable us to kick things up a notch and deliver something that goes beyond the merely functional to something that feels indulgent.
  3. Or perhaps the temptation is dangerous. To the extent that we allow the extraneous little bits in, we risk losing sight of the bigger and more important goals.
  4. Laser-focused pair partners help with the struggle to distinguish between kicking things up a notch, yak shaving, and losing focus.
  5. Explicit working agreements help create safety (as well as creating a tight-knit team with a strong shared culture).
  6. Shared in-jokes and language also create shared culture. (“System Testing!” Ha ha!) Note that unless you were here, you have absolutely no idea why “system testing” might be funny. Even if I explained the joke, you still probably wouldn’t think it was funny. It’s a “you had to be there” kind of thing. And that’s why shared in-jokes are powerful for creating tight-knit teams.
  7. Creating a sense of safety is critical for learning.
  8. Deciding whether or not to upgrade your infrastructure cannot be a unilateral decision. (On the other hand, I don’t think I made the wrong call; I just did it the wrong way and at the wrong time. If I were to do it all over again, I would open up the discussion with the group in advance. And assuming we decided to upgrade our technology stack, I would start earlier and with more help in the beginning.)
  9. Integrating a test effort is hard, even when all the programmers are test infected and the tester is highly competent.

On a more personal note…

  1. When I think I have an answer, I cling to it doggedly. And when I finally let go of something, I really let go of it.
  2. I am extremely fond of my yak and cannot bear the thought of losing it, even if it will be replaced soon. (Sorry Alex.)
  3. It’s possible for people who don’t live here to introduce me to new things in my own back yard. Go figure. (Thanks for introducing me to the crazy dive bar, Pat.)

So that’s AU3H in a nutshell. AU4H will (probably) take place in May 2012. Details in another 6 months or so.


Minimally Viable

Yves Hanoulle has been a marvelous supporter and evangelist of Entaggle. And on occasion he’s been really pushy. But it’s all been good. Entaggle would not be where it is today if not for his pushing.

At the end of February he was nagging me: “You should announce Entaggle to the general public,” he said. This was after he had already started tweeting and writing about it on LinkedIn (with my blessing).

I resisted: “It’s not ready. I still have a to-do list a mile long before I’ll feel like it’s ready.”

“What are you waiting for?” he asked.

I looked at my backlog. I had weeks’ worth of stories to implement before I hit my “Announce” milestone. But as I looked at the list, I thought about what he said. “What exactly am I waiting for?” I challenged myself.

So I went through every single backlog item standing between me and a public announcement.

Some of them were about moderation features: the ability for users to flag something as inappropriate or spammy. Others were about what I saw as core capabilities of the system. All of them represented my first-pass approximation of a minimally viable feature set.

But perhaps I hadn’t stripped it down to the truly minimal set.

So for each and every item I asked myself: “What is the absolute worst thing that could happen if the system doesn’t have this thing when we announce it to the general public?”

In each case, the answer was, “not much.”

Sure, I wanted moderation features to protect people from spam. But in the absolute worst case, if someone decided to try to use Entaggle to spam people, I could delete the offending content through the Rails console. I didn’t actually need a special interface to manage content.
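For illustration only, that fallback amounts to a couple of lines at the console. The model name and record id below are hypothetical, not Entaggle’s actual schema:

$ rails console production
>> Tagging.find(1234).destroy   # remove the hypothetical offending record; no admin UI needed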

Besides, the probability that a spammer would find Entaggle to be an enticing spam vector was low: the system only supported text; no links. And users were protected from having spam show up on their profile by the simple mechanism of requiring taggings to be “accepted” before they showed up anywhere.

So I decided to go for it. I announced Entaggle. People signed up in droves: 172 signups that week. I was amazed.

Predictably, I spent a good chunk of time that week on support issues. But here’s the kicker: none of those support issues had anything at all to do with any of the items in the backlog that had been holding up my announcement.

Not one.

Most of the feature requests were for things I hadn’t thought about or hadn’t thought were important enough to be at the top of my list. All the serious bug reports were for things I hadn’t thought (enough) about. No one said, “Gee, I wish I could flag something as inappropriate or block a user from tagging me.”

In fact, the worst thing that happened in production during that first week was not related to spam at all. Rather, it was test content from well-meaning testers wanting to exercise the system. They created tags with names like “<script>alert(“HI!”)</script>” that cluttered up the list and made it look like no one cared about the data in the system.

(Little did those folks know that Rails 3 handles JavaScript injection attacks out of the box.)
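For the curious: Rails 3 HTML-escapes ERB output by default, so a hypothetical view line like the first one below renders a malicious tag name as inert text. You would have to opt out of escaping explicitly to be vulnerable (the @tag variable here is invented for illustration):

<%= @tag.name %>        <%# escaped automatically; "<script>..." shows up as literal text %>
<%= raw @tag.name %>    <%# opting out of escaping would reintroduce the risk %>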

So I solved that problem not through moderation, but by making my staging server available to the public and telling everyone they could create all the test content they wanted over there.

The bottom line was that the risks I imagined turned out to be completely different from the risks that manifested.

To put it another way, empirical evidence trumps speculation. Every. Single. Time.

When Entaggle users write in to ask, “Is there a way to…”, I often find myself replying, “No, I’m sorry, there’s not a way to do that yet.” I feel a tiny twinge of regret every time I have to say that, so I sometimes add a wry, “Ah, the joys of a minimally viable product.”

That feeling of regret lasts no more than a nanosecond before it is replaced with gratitude. By having released a minimally viable product, I am getting a huge amount of feedback about what people using the system actually care about. Every one of those “is there a way to…” questions is a story that goes into the backlog (if it’s not there already). The more people ask for it, the higher I prioritize it.

As a result, I’ve learned the difference between what people actually want and what I imagined they might want or need.

They wanted a list of new tags and users since their last login more than they wanted Twitter integration. They want a better interaction model for tagging, with the ability to do bulk tagging. They want email notifications.

At least for now, there is no spam in the system. So the moderation features I envisioned are an incredibly low priority. That will change if spammers discover Entaggle, of course. But getting the interaction model right is currently more important than mitigating a non-existent risk.

This means that if I had taken the time to “do it right,” and finished all the things on my list before announcing Entaggle officially, I would not have been any better off. No one would have cared about all the bells and whistles I added. I would have wasted a huge amount of time.

By releasing early, and continuing to release often, I make much better use of my time and limited resources.

Behold the power of releasing a minimally viable product.


Have you tagged anyone yet?

Late last year I started working on what was then known as my “seekrit (not really) project.”

The idea was simple: provide a mechanism for people to give and get public recognition. The result was entaggle.com. I announced the project officially on March 1 and the enthusiastic response has been amazing!

Of course, giving and getting recognition online is not a new idea. The now-defunct WeVouchFor was built for exactly this purpose. And sites like LinkedIn let you publicly endorse people. But Entaggle uses a slightly different model: tags. To recognize someone, you tag them. (And yes, you can tag yourself.)

This project has given me a great opportunity to put all the techniques I teach into practice. In some cases the experience has reinforced things I already knew; in other cases it’s helped me see the depth of my ignorance.

It’s likely that for the next several months (at least) this will become the All-Entaggle-All-the-Time blog. Entaggle is a great case study because it’s all mine, so I have no privacy, intellectual property, or NDA concerns. That means I can tell real stories without disguising the details. Besides, whenever I’m not actively working with clients, Entaggle occupies all my brain cells.

You’ve been warned.

Oh, and while you’re thinking about it, go sign up and tag someone.


Files shuffled around

When I moved my blog, I didn’t do a good enough job of verifying that all the assets moved over. Several folks have contacted me asking for their favorite content to be restored. Whoopsie!

Many many thanks to everyone who contacted me. Please accept my apologies both for breaking links and also for taking so long to fix the issue.

I’ve finally started putting things back to rights. However, the media uploader automatically put all the content I restored into the uploads folder for April 2011. And in the interest of getting the content back as quickly as possible, I’m leaving it there. That means the old PDF links don’t work, and unless I hear a great outcry I’m probably not going to spend the time to put everything back exactly where it was.

Instead, you can find the most requested items under “Quick Links” on the right side of the page.

If you notice something still missing that you want access to, please let me know. I’ll be happy to restore it and put a link to it under Quick Links.

Thanks!


Checking Invisible Elements

This week, I’m investing a bunch of hours on my side project. Today, I’m working on a feature where a field is supposed to remain invisible until a user enters a combination of values.

There are a variety of ways to test this code, including testing the JavaScript with something like Jasmine. However, in this case I particularly want an end-to-end test around this feature, and that means using Cucumber with Capybara for my end-to-end tests.

I wanted to be able to say something in my Cucumber .feature file like:

And I should not see the "My Notes" field

However, my first attempt at implementing that step didn’t work the way I expected. The “My Notes” field existed on the page but was hidden. When I called Capybara’s “has_css?” method, it found the field and reported it present. So my test failed even though the system behaved exactly the way I wanted. Whoopsie!

So now what?

After two hours of wrestling with Capybara and CSS selectors, I finally found a solution that I can live with. And since I know other people have had this problem, I thought I would share it here.

But first, a note: this particular technique won’t work on elements whose display property is set to none directly via an inline style. It requires you to set display to none through a CSS class. (But setting styles through CSS classes is a better design anyway, so I think this is a reasonable limitation.)

In my particular case, because I’m using jQuery UI, I’m using its .ui-helper-hidden class. You’ll need to figure out the class name that sets the display property to none for your application. The sample code below uses “.ui-helper-hidden” as the class name.

Here’s the helper method that I came up with:

(If you have javascript disabled, you might not see the beautifully formatted gist from github above. In that case, you can see the helper method if you click here.)
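Since the gist itself isn’t reproduced here, below is a minimal sketch of the kind of helper and step definition described above. It assumes the application hides the field by adding jQuery UI’s .ui-helper-hidden class to the field’s container, and that a label like “My Notes” maps to a container id like “#my_notes_field”; both conventions are illustrative assumptions, not necessarily what the original gist did.

# features/support/visibility_helpers.rb -- a sketch, not the original gist.
# Assumes elements are hidden by adding a CSS class whose only job is to set
# display: none (here, jQuery UI's .ui-helper-hidden).
module VisibilityHelpers
  HIDING_CLASS = "ui-helper-hidden"

  # True when the element matching +selector+ is in the DOM but carries
  # the hiding class (present, yet not visible).
  def hidden?(selector)
    page.has_css?("#{selector}.#{HIDING_CLASS}")
  end
end
World(VisibilityHelpers)

# features/step_definitions/visibility_steps.rb
# Assumes a hypothetical naming convention: "My Notes" -> "#my_notes_field".
Then /^I should not see the "([^"]*)" field$/ do |label|
  selector = "##{label.downcase.gsub(/\s+/, '_')}_field"
  page.should have_css(selector)     # the field is still rendered on the page...
  hidden?(selector).should be_true   # ...but only with the hiding class applied
end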

I hope that little helper method saves someone some time. If so, it was totally worth the 2 hours I spent today figuring out how to write it.


The ATDD Arch


It seems like everyone is suddenly talking about Acceptance Test Driven Development (ATDD) these days.

I have worked with several organizations as they’ve adopted the practice. And I’ve watched each struggle with some dimension or another of it. The concept behind the practice is so simple: begin with the end in mind. But in order to gain traction and provide value, ATDD requires massive, fundamental changes from the traditional organization mindset where testers test, developers develop, product managers or business analysts write requirements documents, and each role works in its own little silo.

As one person said to me, “ATDD is moving some people’s cheese really hard.”

Sometimes when organizations contact me about helping them with ATDD, they start by talking about tools. They tell me they’ve selected a tool to do ATDD, or that they want me to help them with tool selection. They’re suffering from delayed feedback and slow manual regression cycles and they want to do ATDD because they see it as a path to automated acceptance tests. They think ATDD stands for “Automated Test During Development.”

What they don’t see is that ATDD is a holistic practice that requires the collaboration of the whole team. We collaborate on the front end by working together to define examples with expectations for stories, then articulate those examples in the form of tests. On the back end, when the team implements the story, testers and developers collaborate on connecting the tests to the emerging software so they become automated.

Handoffs don’t work with ATDD. The product owners don’t establish examples with expectations unilaterally; they work with developers and testers. The testers don’t create the tests unilaterally; they work with the product owner and developers. And when the team is ready to hook those tests up to the emerging software, there is no automation specialist just waiting to churn out reams of scripts. Instead, testers and developers collaborate to create the test automation code that makes the acceptance tests executable.
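To make that concrete, here is a purely hypothetical example in Cucumber’s Given/When/Then format; the scenario wording and the domain class in the step definition are invented for illustration. The whole team agrees on the example up front, and later a tester and developer pair write the step definition (the glue code) that hooks it to the emerging application.

Scenario: Free shipping for orders over $50
  Given a customer has $60 worth of items in her cart
  When she checks out
  Then she is offered free shipping

# A hypothetical step definition -- the automation code a tester/developer
# pair writes to connect the agreed-upon example to the application.
Given /^a customer has \$(\d+) worth of items in her cart$/ do |amount|
  @cart = Cart.new                     # Cart is an invented domain class
  @cart.add_item(price: amount.to_i)
end

The scenario stays readable by the product owner, while the code underneath it is ordinary programming work, which is exactly why it can’t simply be handed off to a lone automation specialist.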

Starting an adoption of ATDD with the tools is like building an arch from the top. It doesn’t work.

The tools that support ATDD—FitNesse, Cucumber, Robot Framework, and the like—tie everything together. But before the organization is ready for the tools, they need the foundation. They need to be practicing collaborative requirements elicitation and test definition. And they need at a bare minimum to be doing automated unit testing and have a continuous automated build system that executes those tests.

It’s best if the engineering practices include full-on Continuous Integration, Collective Code Ownership, Pairing, and TDD. These practices support the kind of technical work involved with automating the acceptance tests. Further, they show that the team is already heavily test-infected and is likely to value the feedback that automated acceptance tests can provide.


The Agile Acid Test

A while ago I blogged about how I define Agile:

Agile teams produce a continuous stream of value, at a sustainable pace, while adapting to the changing needs of the business.

I’ve gotten a little flak for it. A handful of people informed me that there is only one definition of Agile and it’s in the values and principles expressed in the Agile Manifesto. The implication was that if my definition is different from the Manifesto, it must be wrong.

At Gary Brown’s urging, I reread the principles in the Manifesto. And I discovered that my “definition” is indeed in there. It’s in the principles: “…continuous delivery of valuable software…changing requirements…sustainable development…maintain a constant pace indefinitely.”

OK, so I’ll relent. Agile is defined by the Manifesto. And my “definition” is my Agile Acid Test.

Lots of organizations claim to be adopting Agile. Few have the courage and discipline to do more than pay lip service to it. Then they claim “Agile doesn’t work.” (My favorite take on this is Ron Jeffries’ “We Tried Baseball and it Doesn’t Work.”)

So, if a team tells me that they’re Agile, I apply my acid test to see if they’re really Agile. I ask:

How Frequently Do You Deliver?

When I say that Agile teams produce a continuous stream of value, I mean that they deliver business value in the form of shippable or deployable code at least monthly, and preferably more frequently than that. Shippable/deployable means ready for production. It’s done. There is nothing left to do. It is implemented, tested, and accepted by the “Product Owner.”

Some organizations are taking this to an extreme with continuous deployment. In those contexts, the time from when a developer checks in a line of code to when she can see her work in production is measured in minutes. Obviously continuous deployment isn’t necessarily appropriate in all situations. But even if you work in a context where continuous deployment to production doesn’t make sense, consider what continuous deployment to a testing or staging environment could do to shorten your feedback cycles.

In short, Agile teams deliver shippable product increments frequently. Delivering “almost done” or “done except tested” every month doesn’t cut it.

Could You Continue at This Pace Indefinitely?

“Sustainable pace” means that the team can continue to add capabilities to the emerging system at more or less the same velocity given no increases in team size.

There are two critical aspects to achieving a sustainable pace:

  1. people
  2. technical assets

Prior to working on Agile projects, I was accustomed to spending the last few weeks or months of any project in “Crunch Mode.” Everyone on the team would put in long hours (80 – 100 hour weeks were typical). We’d be hyped up on caffeine, stressed out, and cranky. But we’d do whatever it took to ship.

Having shipped, we’d celebrate our heroics. And then we’d go crash.

A few days later, we’d all drag ourselves back into the office. “This time we’ll do it right!” we would declare. We would spend buckets of time up front on planning, requirements, and design. And, let’s be honest, we were still exhausted, so we’d work at a slower pace. Inevitably, as the deadline loomed, we’d run short on time in the release and once again we’d be in Crunch Mode.

This is not a sustainable cycle. A few rounds of this and people are just too fried. Some leave for greener pastures, lured by the promise of higher pay and/or more sane schedules. Others “retire on the job.” The few remaining people who stay out of a sense of loyalty and who retain their work ethic find it impossible to get anything done because they’re surrounded by newbies and dead weight. Progress grinds to a screeching halt.

So caring for the people is the number one way to ensure work can continue at a sustainable pace.

But it’s not enough. The other side of sustainable pace is caring for the technical assets. Every time we take a shortcut, like copying and pasting huge swaths of code and not refactoring to remove duplication, shoving code somewhere expedient instead of putting it where it really belongs, or failing to write an automated test we know we really ought to write, we’re creating technical debt. As the technical debt mounts, the “interest” we pay on that debt also mounts.

Simple changes require touching multiple files. The code base becomes fragile. Eventually the team gets to the point that any change causes massive regression errors. For each new tiny bit of capability added, the team has to spend days playing “whack-a-bug” to get the features that used to work fine back to working. Once again, progress grinds to a screeching halt.

(Also note the connection between the human and technological aspects of sustainable pace: burnt out people tend to take more shortcuts.)

If the organization is not caring for the people, and the people are not caring for the technical assets, they will run into trouble. Maybe not today. Maybe not tomorrow. But soon, and for the rest of the life of that code base.

How Does the Team Handle Change?

I visited one team in the middle of a transition to Agile. The team was very pleased with their progress to date. They were delivering in 2 week sprints, and they were doing quite well with establishing and maintaining a sustainable pace.

But the kicker came when they showed me the project plan. They had every sprint laid out for the next 6 months. They were only a couple of sprints into the plan, but I could see trouble ahead. “What will happen if the requirements or priorities change?” I asked. The project manager squirmed a little. Promises had been made based on the master project plan. They weren’t allowed to deviate.

But change is inevitable. I don’t know the ending to that particular story, but my bet is that the project manager ended up redoing that Gantt chart a gazillion times before they shipped.

If the team is planning too far out, they won’t be able to adapt when, inevitably, priorities and needs shift. They’ll be able to continue delivering at a sustainable pace, but what they’re delivering will have substantially less value to the organization than it otherwise would.

Few Are Truly Agile

Often when I speak to an audience I ask how many people are on Agile projects. These days, no matter what audience I’m addressing, lots of hands go up. Agile is the new hot thing. All the cool kids are doing it. But when I ask audiences to self-assess on these three criteria, and then ask again how many are on an Agile project, hands stay down. Very few organizations are achieving this level of agility.

Not surprisingly, that means few organizations are really getting the benefits of Agile. In the worst cases, “Agile” is resulting in worsening quality, increased pressure, and more burnout. People on those projects are reporting that Agile is ruining their lives.

In such environments, Agile is often implemented as:

  1. Compress the schedule (because “Agile” means “faster,” right?)
  2. Don’t document anything (because “Agile” means no documentation, right?)
  3. Code up to the last minute (because “Agile” means we can change anything at any time, right?)

This is a recipe for pain: increasing levels of technical debt, burnout, chaos, and eventually inability to deliver followed by numerous rounds of Point the Finger of Blame. So yes, in these organizations, “Agile” (or the corrupted version in the form of a frAgile process) is indeed ruining lives.

My hope is that if you are in an environment like that, this Agile Acid Test can help you communicate with The Powers That Be to change minds about what Agile really means and what it looks like when done well.

Remember, just because someone says they’re doing “Agile” doesn’t mean they are. As Abraham Lincoln said, “If you call a tail a leg, how many legs does a dog have? Four. Because calling it a leg doesn’t make it a leg.”


Agile Transitions and Employee Retention

A question from my mailbox this morning (paraphrased):

Our organization is transitioning to agile. I often hear that not everybody is suited to an agile team. I’m concerned that some of the non-agile-minded will drop out. How do we keep everyone on board?

My correspondent had heard statistics and advice like “20% of the people in your organization will not make the transition. Be prepared for some turnover.” And he’s right to be concerned. Agile transitions are not easy. No significant change is ever easy.

Since this is a question I hear often, and since my response to my correspondent applies to any organization in transition, I decided to post my response here.

I offer four observations:

1. People sometimes surprise us.

The person who seemed complacent, satisfied to stay in their little comfort zone, resistant to taking ownership, may turn out to be a great collaborative team member when given half a chance. I’ve seen it happen. By contrast, the “top performer” who seems so pro-active and who everyone is desperate to retain may turn out to be toxic in the new organization because she prefers the mantle of hero to true collaboration.

2. Leaving isn’t the worst thing in the world.

One of my absolute worst screwups as a manager was to work too hard to “help” an employee who was not performing well.

He was on a performance improvement plan for months. Both of us were miserable about the situation. He’d been with the company for a while, and after many organizational changes ended up in my group. The organization had changed, and he wasn’t fitting in well in the new world order. No amount of training or coaching was helping.

When we finally mutually agreed that things weren’t working, he found another job at another company almost right away. The next time I ran into him at a conference he was brimming with happiness at his new success. His new organization loved him and he was thriving. His skills and temperament were a perfect fit there.

So while I thought I was being kind when I tried to give him every chance to succeed in my group, I was actually being cruel by prolonging his feeling of failure unnecessarily.

Similarly, at one of my clients, a QA Manager who had been resisting the transition to Agile ultimately left. Upper management was very, very nervous about what his departure would do to the QA group. But it turns out that everyone was better off.

Leaving isn’t the worst thing in the world, and sometimes it can be the best thing for all concerned.

3. Creating safety is more important than retaining individuals.

Transitioning to Agile inevitably results in increased visibility. That visibility can be incredibly scary, particularly in a political organization where people have historically practiced information hiding, and information hoarding, as a survival strategy.

Instead of trying to retain specific individuals, it’s more important that managers focus on making people feel safe. Much of creating safety is about not doing things: don’t use velocity as an assessment mechanism; don’t add pressure by blaming the team if they miss sprint targets; don’t foster a culture of competition within a team.

Even more important is what managers can actively do to promote safety: talk to individuals about their concerns; get whatever resources people say they need in order to be successful; reward collaboration over individual achievement.

4. Treat people well.

The people in the organization are humans, not fungible “resources.” They deserve support and compassion. As long as managers treat people as people consistently throughout the transition, it will all be OK, even if some people decide that the new organization isn’t a good fit for them.


Do Testers Have to Write Code?

For years, whenever someone asked me if I thought testers had to know how to write code, I’ve responded: “Of course not.”

The way I see it, test automation is inherently a programming activity. Anyone tasked with automating tests should know how to program.

But not all testers are doing test automation.

Testers who specialize in exploratory testing bring a different and extremely valuable set of skills to the party. Good testers have critical thinking, analytical, and investigative skills. They understand risk and have a deep understanding of where bugs tend to hide. They have excellent communication skills. Most good testers have some measure of technical skill such as system administration, databases, networks, etc. that lends itself to gray box testing. But some of the very best testers I’ve worked with could not have coded their way out of a For Loop.

So unless they’re automating tests, I don’t think that testers should be required to have programming skills.

Increasingly I’ve been hearing that Agile teams expect all the testers to know how to write code. That made me curious. Has the job market really shifted so much for testers with the rise of Agile? Do testers really have to know how to code in order to get ahead?

My assistant Melinda and I set out to find the answer to those questions.

Because we are committed to releasing only accurate data, we ended up doing this study three times. The first time we did it, I lost confidence in how we were counting job ads, so we threw the data out entirely. The second time we did it, I published some early results showing that more than 75% of the ads requested programming skills. But then we found problems with our data, so I didn’t publish the rest of the results and we started over. Third time’s a charm, right?

So here, finally, are the results of our third attempt at quantifying the demand for programming skills in testers. This time I have confidence in our data.

We surveyed 187 job ads seeking Software Testers or QA staff, posted between August 25 and October 16, 2010, from across 29 states in the US.

The vast majority of our data came from Craigslist (102 job ads) and LinkedIn (69 job ads); the rest came from a small handful of miscellaneous sites.

The jobs represent positions open at 166 distinct, identifiable companies. The greatest number of positions posted by any single company was 2.

Although we tried to avoid a geographic bias, there is a bias in our data toward the West Coast. (We ended up with 84 job listings in California alone.) This might reflect where the jobs are, or it could be because we did this research in California so it affected our search results. I’m not sure.

In order to make sure that our data reflected real jobs with real employers we screened out any jobs advertised by agencies. That might bias our sample toward companies that care enough to source their own candidates, but it prevents our data from being polluted by duplicate listings and fake job ads used to garner a pool of candidates.

Based on our sample, here’s what we found:

Out of the 187 jobs we sampled, 112 jobs indicate that programming of some kind is required; an additional 39 jobs indicate that programming is a nice-to-have skill. That’s just over 80% of test jobs requesting programming skill.

Just in case that sample was skewed by including test automation jobs, I removed the 23 jobs with titles like “Test Automation Engineer” or “Developer in Test.” Of the remaining 164 jobs, 93 required programming and 37 said it’s a nice-to-have. That’s still 79% of QA/Test jobs requesting programming.

It’s important to understand how we counted the job ads.

We counted any job ad as requiring programming skills if the ad required experience or knowledge of a specific programming language or stated that the job duties required using a programming language. Similarly, we counted a job ad as requesting programming skills if it indicated that knowledge of a specific language was a nice-to-have.

The job ads mentioned all sorts of things that different people might, or might not, count as a programming language. For our purposes, we counted SQL and shell/batch scripting as programming languages. A tiny number of job ads (6) indicated that they required programming without naming a specific language, instead listing broad experience requirements like “Application development in multiple coding languages.” Those counted too.

The bottom line is that our numbers indicate approximately 80% of the job ads you’d find when searching for jobs in Software QA or Test ask for programming skills.

No matter my personal beliefs, that data suggests that anyone who is serious about a career in testing would do well to pick up at least one programming language.

So which programming languages should you pick up? Here were the top 10 mentioned programming languages (including both required and nice-to-haves):

  • SQL or relational database skills (84)
  • Java, including J2EE and EJBs (52)
  • Perl (44)
  • Python (39)
  • C/C++ (30)
  • Shell Scripting (27) note: an additional 4 mentioned batch files.
  • JavaScript (24)
  • C# (23)
  • .NET including VB.NET and ASP.NET but not C# (19)
  • Ruby (9)

This data makes it pretty clear to me that at a minimum, professional testers need to know SQL.

I will admit that I was a little sad to see that only 9 of the job ads mentioned Ruby. Oh well.

In addition, there were three categories of technical skills that aren’t really programming languages but that came up so often that they’re worth calling out:

  • 31 ads mentioned XML
  • 28 ads mentioned general Web Development skills including HTTP/HTTPS, HTML, CSS, and XPATH
  • 17 ads mentioned Web Services or referenced SOAP and XSL/XSLT

We considered test automation technologies separately from programming languages. Out of our sample, 27 job ads said that they require knowledge of test automation tools and an additional 50 ads said that test automation tool knowledge is a nice to have. (As a side note, I find it fascinating that 80% of the ads requested programming skills, but only about half that number mentioned test automation. I’m not sure if there’s anything significant there, but I find it fascinating nonetheless.)

The top test automation technologies were:

  • Selenium, including SeleniumRC (31)
  • QTP (19)
  • XUnit frameworks such as JUnit, NUnit, TestNG, etc. (14)
  • LoadRunner (11)
  • JMeter (7)
  • WinRunner (7)
  • SilkTest (6)
  • SilkPerformer (4)
  • Visual Studio/TFS (4)
  • Watir or Watin (4)
  • Eggplant (2)
  • FitNesse (2)

Two things stood out to me about that tools list.

First, the number one requested tool is open source. Overall, more than half of the test automation tool mentions are for free or open source tools. I’ve been saying for a while that the commercial test automation tool vendors ought to be nervous. I believe that this data backs me up. The revolution I predicted in 2006 is well under way and Selenium has emerged as a winner.

Second, I was surprised at the number of ads mentioning WinRunner: it’s an end-of-lifed product.

My personal opinion (not supported by research) is that this is probably because companies that had made a heavy investment in WinRunner just were not in a position to tear out all their automated tests simply because HP/Mercury decided not to support their tool of choice. Editorializing for a moment: I think that shows yet another problem with closed source commercial products. Selenium can’t ever be end-of-lifed: as long as there is a single user out there, that user will have access to the source and be able to make whatever changes they need.

But I digress.

As long as we were looking at job ads, Melinda and I decided to look into the pay rates that these jobs offered.

Only 15 of the ads mentioned pay, and the pay levels were all over the map.

4 of the jobs had pay in the $10–$14/hr range. All 4 of those positions were part-time or temporary contracts. None of the ads required any particular technical skills. They’re entry-level button-pushing positions.

The remaining 11 positions ranged from $40K/year at the low end to $130K/year at the high end. There just are not enough data points to draw any real conclusions related to salary other than what you might expect: jobs in major technology centers (e.g. Massachusetts and California) tend to pay more. If you want more information about salaries and positions, I highly recommend spelunking through the salary data available from the Bureau of Labor Statistics.

And finally I was wondering how many of the positions referred to Agile. The answer was 55 of the job ads.

Even more interesting, of those 55 ads, 49 requested programming skills. So while 80% of all ads requested programming skills, almost 90% of the ads that explicitly referenced Agile did. I don’t think there’s enough data available to draw any firm conclusions about whether the rise of Agile means that more and more testers are expected to know how to write code. But I certainly think it’s interesting.

So, that concludes our fun little romp through 187 job listings. I realize that you might have more questions than I can answer. If you want to analyze the data for yourself, you can find the raw data here.
