2nd Annual QA/Test Job Posting Study

This is a guest blog post by Daniel Frank, my assistant. Daniel took on the challenge of updating the QA/Test job study for 2011, just in time for making New Year’s resolutions. Enjoy! Elisabeth

It’s been a little over a year since Elisabeth published “Do Testers Have to Write Code,” the results of an in-depth survey of job ads that she and Melinda conducted to see if employers expect testers to program. The resounding conclusion, with 80% of tester job ads requesting some kind of programming skill, was “Yes.”

This year we wanted to see if things have changed, so I conducted the same study again. I also wanted to add a bit more granularity to the study, to see if there were any trends that were missed last time.

I screened the lists with the same basic guidelines as our previous study. That means I restricted my search to the US only. I only counted a job if it was described as a testing/QA position in the job title. I did not include recruiter listings in order to avoid the risk of including duplicate jobs or even fake jobs used to gather pools of applicants.

Our final sample size this year is 164 jobs. That’s a little less than last year. Why?

The lists were sparse. There just aren’t that many job ads out there. Many of the job ads I found were from recruiters or were repeats, with the same company listing the same position several weeks in a row.

The simple fact that I had a hard time finding the same number of ads as last year is interesting information all on its own. From an overall economic standpoint, the country is in no more of a slump than it was in 2010. So why are there fewer listings for testers? Could it be that Alberto Savoia, who recently declared testing dead, is correct? We’ll come back to that question later.

Back to the study…

Like last year, the vast majority of the jobs came from Craigslist (90) and LinkedIn (64). The rest came from a smattering of other sites.

The data includes an even higher proportion of jobs in California than last year: 102 of the listings were in CA, with the remainder divided in small chunks among 28 other states. Unsurprisingly, Texas, Massachusetts, and Washington are the three runners-up.

Last year there was some question of whether or not the sample was biased simply because we’re located in California. However, I took extra steps to try to get equal representation. The simple fact is that a search that might find 70 jobs when I filter the location for CA will return 30 jobs or fewer if I filter for another area. If anything, I’d estimate that California is actually underrepresented.

I kept track of the job titles. By far the most popular title was “QA Engineer” (99 of the listings). 136 of the titles contained “QA,” compared with only 32 containing the word “Test.”

An interesting side note: when I searched for the word “test” in the body of job ads, I found far more developer positions than similar searches for “qa” did. It would seem that at the same time QA/Test positions are requiring more coding skills, developer positions are requiring more testing skills. That might be another interesting job ad survey project.

So how much coding are testers expected to do?

Of the 164 listings, 102 jobs say they require knowledge of at least one programming language, and another 38 indicate coding is a nice-to-have. That’s 140 out of 164, or 85.37% of the sample, an even higher percentage than last year. It’s difficult to say whether the five-point uptick represents a real increase in demand, but at the very least it’s fair to say that demand for testers who code remains high.

I used the same criteria that Elisabeth and Melinda used last year. That means I counted a job as requiring programming if it required experience in or knowledge of a specific language, or if the job duties mentioned a language. There were 7 jobs that listed broad requirements like “must be able to script in multiple languages”; those also counted as requiring programming.

There were some judgment calls to be made about what may or may not count as a programming language. For the purpose of the results here, I counted SQL or other relational database knowledge as a programming language in order to be consistent with last year. However, unlike last year, I tracked proficiency in relational databases separately. This will let me track specific trends more easily in future studies.

One of the questions Elisabeth wanted to answer last year was whether jobs with self-identified Agile organizations required testers to code more than other jobs. This year 46 of the 58 Agile job ads list programming skills as required or nice to have. That’s 79.31%, which is quite a bit less than last year’s 90%. However, this is one of those places where the small sample size has to be taken into consideration. In 2010, 49 out of 55 Agile jobs mentioned programming. This year, 46 out of 58 jobs mention it. A difference of just a few jobs produces a 10-point swing.

An enduring question about any kind of job is how much it pays. I saw even fewer mentions of pay this time around. Only 7 jobs listed it at all, and 5 of those were button-pushing game testing positions in the $10-$20/hour range. The other two ran around $85,000-$105,000. Most positions simply don’t provide up-front salary information, so we cannot draw any real conclusions from these data points.

Just for fun, I also noted whenever a job requested a certification. In 164 jobs I found exactly 4 mentions of certification, and not a single one was required. 3 of them were vendor or technology certifications that had nothing to do with testing. And even in the single instance where a testing certification was nice to have, it was the CSTE offered by QAI, rather than the much more hyped ISTQB. So it would seem that testing certifications are not much in demand. The bottom line is that someone looking to improve their marketability would be much better served by upskilling to a new proficiency rather than picking up an irrelevant certification.

And that’s about it for our study. If you’d like to dig through the raw data to look for any trends I may have missed, I’ll be happy to send it to you. Drop me a line.

Now back to the question about the number of QA/Test jobs out there. Could it be that there are fewer QA/Test positions? Was this just a matter of luck and timing, or is there a trend?

Alberto Savoia gave a talk at GTAC titled “Test is Dead,” delivered while dressed as the Grim Reaper. He may have used intentionally inflammatory hyperbole to make his point, but that doesn’t change the fact that he had interesting points to make.

Alberto points out that especially in web development, speed is paramount. Further, the biggest challenge isn’t in building “it” right, but in building the right “it.” So the goal is to get a minimum viable product out as quickly as possible, and get fast feedback from real users and customers. Traditional black box testing ends up taking a back seat in this type of development, and these projects often rely heavily on user feedback instead.

At STARWest 2011, James Whittaker of Google gave a talk titled “All That Testing is Getting in the Way of Quality” in which he described the closest thing Google has to a traditional testing role: the “Test Engineer.” Test Engineers spend anywhere from 20% to 80% of their time writing code. He also explained how Google uses its user base to do almost all of its exploratory testing. As he puts it, “Users are better at being users than testers are, by definition.”

With James’s and Alberto’s talks firmly in mind, I can’t help but wonder if the difficulty I experienced in finding job ads that met my criteria is indicative of a sea change in the industry rather than an anomaly. Could it be that we’re seeing a reduction in the number of QA/Test positions?

What do you think? Are you seeing fewer QA/Test positions in your organization or (if you’re looking) in your job search?

From the mailbox: selecting test automation tools

A long time ago, all the way back in 1999, I wrote an article on selecting GUI test automation tools. Someone recently found it and wrote me an email to ask about getting help with evaluating tools. I decided my response might be useful for other people trying to choose tools, so I turned it into a blog post.

By the way, so much has changed since my article on GUI testing tools was published back in 1999 that my approach is a little different these days. There are so many options available now that weren’t around 12 years ago, and it seems new options appear nearly every day.

Back in 1999 I advocated a heavy-weight evaluation process. I helped companies evaluate commercial tools, and at the time it made sense to spend lots of time and money on the evaluation process. The cost of making a mistake in tool selection was too high.

After all, once we chose a tool we would have to pay for it, and that licensing fee became a sunk cost. Further, the cost of switching between tools was exorbitant. Tests were tool-specific and could not move from one tool to another. Thus we’d have to throw away anything we created in Tool A if we later decided to adopt Tool B. Further, any new tool would cost even more money in licensing fees. So spending a month evaluating tools before making a 6-figure investment made sense.

But now the market has changed. Open source tools are surpassing commercial tools, so the license fee is less of an issue. There are still commercial tools, but I always recommend looking at the open source tools first to see if there’s anything that fits before diving into commercial tool evaluations.

So here’s my quick and dirty guide to test tool selection.

If you want a tool to do functional test automation (as opposed to unit testing), you will probably need both a framework and a driver.

  • The framework is responsible for defining the format of the tests, making the connection between the tests and test automation code, executing the tests, and reporting results.
  • The driver is responsible for manipulating the interface.

So, for example, on my side project entaggle.com, I use Cucumber (framework) with Capybara (driver).
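
To make the division of labor concrete, here’s a minimal sketch in that stack. The feature text and step definition are invented for illustration (they’re not actual entaggle tests): Cucumber supplies the test format and hooks the steps to code, while every line that touches the browser goes through Capybara.

```gherkin
# tagging.feature: the framework (Cucumber) defines this test format
Feature: Tagging
  Scenario: Tag a colleague
    When I tag "Pat" with "Exploratory Testing"
    Then I should see "Pat" tagged with "Exploratory Testing"
```

```ruby
# tagging_steps.rb: step definitions connect the tests to automation code
When(/^I tag "([^"]*)" with "([^"]*)"$/) do |person, tag|
  visit "/tags/new"                 # Capybara (the driver) drives the browser
  fill_in "Person", with: person
  fill_in "Tag", with: tag
  click_button "Tag"
end

Then(/^I should see "([^"]*)" tagged with "([^"]*)"$/) do |person, tag|
  page.should have_content(person)  # Capybara queries the resulting page
  page.should have_content(tag)
end
```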

To decide what combination of framework(s) and driver(s) is right for your context…

Step 1. Identify possible frameworks…

Consideration #1: Test Format

The first thing to consider is whether you need a framework that supports expressing tests in a natural language (e.g. English), or in code.

This is a question for the whole team, not just the testers or programmers. Everyone on the project must be able to at least read the functional tests. Done well, the tests can become executable requirements. So the functional testing framework needs to support test formats that work for collaboration across the whole team.

Instead of assuming what the various stakeholders want to see, ask them.

In particular, if you are contemplating expressing tests in code, make very sure to ask the business stakeholders how they feel about that. And I don’t mean ask them like, “Hey, you don’t mind the occasional semi-colon, right? It’s no big deal, right? I mean, you’re SMART ENOUGH to read CODE, right?” That kind of questioning backs the business stakeholders into a corner. They might say, “OK,” but it’s only because they’ve been bullied.

I mean mock up some samples and ask like this: “Hey, here’s an example of some tests for our system written in a framework we’re considering using. Can you read this? What do you think it’s testing?” If they are comfortable with the tests, the format is probably going to work. If not, consider other frameworks.
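
For example, a mocked-up sample (invented here, in Cucumber’s format) might look like this:

```gherkin
Feature: Account lockout
  Scenario: Lock an account after repeated failed sign-ins
    Given a registered user "pat@example.com"
    When that user fails to sign in 3 times in a row
    Then the account is locked
    And a reset link is emailed to the user
```

A stakeholder who can read that and tell you what it’s testing is telling you the format will support collaboration.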

Note that the reason that it’s useful to express expectations in English isn’t to dumb down the tests. This isn’t about making it possible for non-technical people to do all the automation.

Even with frameworks that express tests in natural language, there is still programming involved. Test automation is still inherently about programming.

But by separating the essence of the tests from the test support code, we separate concerns in a way that makes it easier to collaborate on the tests and makes the tests more maintainable and reusable.

When I explain all that, people sometimes ask me, “OK, that’s fine, but what’s the EASIEST test automation tool to learn?” Usually they’re thinking that “easy” is synonymous with “record and playback.”

That kind of easy path may look inviting, but it’s a trap that leads into a deep, dark swamp from which there may be no escape. None of the tools I’ve talked about do record and playback. Yes, there is a Selenium recorder. I do not recommend using it except as a way to learn.

So natural language tests facilitate collaboration. But I’ve seen organizations write acceptance tests in Java with JUnit using Selenium as the driver and still get a high degree of collaboration. The important thing is the collaboration, not the test format.

In fact, there are advantages to expressing tests in code.

Using the same unit testing framework for the functional tests and the code-facing tests removes one layer of abstraction. That can reduce the complexity of the tests and make it easier for the technical folks to create and update the tests.

But the times I have seen this work well are when the business people were all technology-savvy, so they could read the tests just fine even when the tests were expressed in Java rather than English.
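
For contrast, here’s roughly what an acceptance test expressed directly in code can look like. This sketch assumes Ruby with RSpec and Capybara rather than the Java/JUnit/Selenium combination mentioned above, and the labels and page text are invented:

```ruby
require "capybara/rspec"  # Capybara's RSpec integration

# The same style of acceptance test, with no natural-language layer on top.
describe "Account lockout", type: :feature do
  it "locks the account after three failed sign-ins" do
    3.times do
      visit "/sign_in"
      fill_in "Email", with: "pat@example.com"
      fill_in "Password", with: "not-the-password"
      click_button "Sign in"
    end
    expect(page).to have_content("Your account has been locked")
  end
end
```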

Consideration #2: Programming Language

The next consideration is the production code language.

| If your production code is written in… | And you want to express expectations in natural language, consider… | Or you want to express expectations in code, consider… |
| --- | --- | --- |
| Java | Robot Framework, JBehave, FitNesse, Concordion | JUnit, TestNG |
| Ruby | Cucumber | Test::Unit, RSpec |
| .NET | SpecFlow | NUnit |

By the way, the tools I’ve mentioned so far are not even remotely close to a comprehensive list. There are lots more tools listed on the AA-FTT spreadsheet. (The AA-FTT is the Agile Alliance Functional Testing Tools group. It’s a program of the Agile Alliance. The spreadsheet came out of work that the AA-FTT community did. If you need help interpreting the spreadsheet, you can ask questions about it on the AA-FTT mail list.)

So, why consider the language that the production code is written in? I advocate choosing a tool that will allow you to write the test automation code in the same language (or at least one of the same languages if there are several) as the production code for a number of reasons:

  1. The programmers will already know the language. This is a huge boon for getting the programmers to collaborate on functional test automation.
  2. It’s probably a real programming language with a real IDE that supports automated refactoring and other kinds of good programming groovy-ness. It’s critical to treat test automation code with the same level of care as production code. Test automation code should be well factored to increase maintainability, remove duplication, and exhibit SOLID principles.
  3. It increases the probability that you’ll be able to bypass the GUI for setting up conditions and data. You may even be able to leverage test helper code from the unit tests. For example, on entaggle.com, I have some data generation code that is shared between the unit tests and the acceptance tests. Such reuse drastically cuts down on the cost of creating and maintaining automated tests. (There’s a sketch of that kind of sharing just after this list.)
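
Here’s a minimal sketch of what that kind of sharing can look like. The helper and the User model are hypothetical stand-ins, not entaggle’s actual code:

```ruby
# test_data.rb: data-generation helpers shared by both test suites
module TestData
  def create_user(name = "Pat")
    # Hypothetical ActiveRecord model; sets up data without touching the GUI
    User.create!(name: name, email: "#{name.downcase}@example.com")
  end
end

# The unit tests mix the helpers in through RSpec...
RSpec.configure { |config| config.include TestData }

# ...and the acceptance tests get them via Cucumber's World
World(TestData)
```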

Consideration #3: The Ecosystem

Finally, as you are considering frameworks, consider also the ecosystem in which that framework will live. I personally dismiss any test framework that does not play nicely with both the source control system and the automated build process or continuous integration server. That means at a bare minimum:

  • All assets must be flat files, no binaries. So no assets stored in databases, and no XLS spreadsheets (though comma-separated values, or .CSV, files can be OK). In short, if you can’t read all the assets in a plain old text editor like Notepad, you’re going to run into problems with versioning.
  • It can execute from a command line and return an exit code of 0 if everything passes or some other number if there’s a failure. (You may need more than this to kick off the tests from the automated build and report results, but the exit code criterion is absolutely critical; see the sketch just below.)
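
As a minimal illustration of that exit-code contract, here’s the kind of thing a build script relies on (this assumes a Cucumber suite under features/; any framework that honors the convention works the same way):

```ruby
# run_tests.rb: the CI server only needs the process exit status
passed = system("cucumber features")  # run the suite from the command line
status = $?.exitstatus                # 0 means everything passed
puts(passed ? "suite passed" : "suite FAILED (exit code #{status})")
exit status                           # propagate it so the build breaks too
```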

Step 2. Choose your driver(s)…

A driver is just a library that knows how to manipulate the interface you’re testing against. You may actually need more than one driver depending on the interfaces in the system you’re testing. You might need one driver to handle web stuff while another driver can manipulate Windows apps.

Note that the awesome thing about the way test tools work these days is that you can use multiple drivers with any given functional testing framework. In fact, you can use multiple drivers all in a single test. Or you can have a test that executes against multiple interfaces. Not a copy of the test, but actually the same test. By separating concerns, separating the framework from the driver, we make it possible for tests to be completely driver agnostic.
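
Capybara makes a nice example of that separation. Here’s a sketch of the configuration (the driver names assume you have the corresponding gems installed):

```ruby
require "capybara"

# The tests themselves don't change; only the driver configuration does.
Capybara.default_driver    = :rack_test  # fast, headless, in-process
Capybara.javascript_driver = :selenium   # real browser for JavaScript-heavy pages

# In Cucumber, tagging a scenario with @javascript switches just that one
# scenario to the Selenium driver; every other scenario stays on :rack_test.
```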

Choosing drivers is often a matter of just finding the most popular driver for your particular technical context. It’s hard for me to offer advice on which drivers are good because there are so many more drivers available than I know about. Most of the work I do these days is web-based. So I use Selenium / WebDriver.

To find a specific driver for a specific kind of interface, look at the tools spreadsheet or ask on the AA-FTT mail list.

Step 3. Experiment

Don’t worry about choosing The One Right tool. Choose something that fits your basic criteria and see how it works in practice. These days it’s much less costly to try a tool on real work and see how it goes than to conduct an extensive up-front evaluation.

How can this possibly be? First, lots of organizations are figuring out that the licensing costs are no longer an issue. Open source tools rule. Better yet, if you go with a tool that lets you express tests in natural language it’s really not that hard to convert tests from one framework to another. I converted a small set of Robot Framework tests to Cucumber and it took me almost no time to convert the tests themselves. The formats were remarkably similar. The test automation code took a little longer, but there was less of it.
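
To give a feel for why the conversion was mostly mechanical, here’s the same invented test sketched in both formats (illustrative only, not the actual tests I converted):

```
# Robot Framework:
*** Test Cases ***
Tag A Colleague
    Sign In As    pat@example.com
    Tag Person    Pat    Exploratory Testing

# Cucumber:
Scenario: Tag a colleague
  Given I am signed in as "pat@example.com"
  When I tag "Pat" with "Exploratory Testing"
```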

Given that the cost of making a mistake on tool choice is so low, I recommend experimenting freely. Try a tool for a couple of weeks on real tests for your real project. If it works well for the team, awesome. If not, try a different one.

But whatever you do, don’t spend a month (or more) in meetings speculating about what tools will work. Just pick something to start with so you can try and see right away. (As you all know, empirical evidence trumps speculation. :-))

Eventually, if you are in a larger organization, you might find that a proliferation of testing frameworks becomes a problem. It may be necessary to reduce the number of technologies that have to be supported and make reporting consistent across teams.

But beware premature standardization. Back in 1999, choosing a single tool gave large organizations an economy of scale. They could negotiate better deals on licenses and run everyone through the same training classes. Such economies of scale are evaporating in the open source world where license deals are irrelevant and training is much more likely to be informal and community-based.

So even in a large organization I advocate experimenting extensively before standardizing.

Also, it’s worth noting that while I can see a need to standardize on a testing framework, I see much less need to standardize on drivers. So be careful about what aspects of the test automation ecosystem you standardize on.

Good luck and happy automating…