Checking Invisible Elements

This week, I’m investing a bunch of hours in my side project. Today, I’m working on a feature where a field is supposed to remain invisible until a user enters a combination of values.

There are a variety of ways to test this code, including testing the JavaScript with something like Jasmine. In this case, though, I particularly want an end-to-end test around this feature, and for me that means using Cucumber with Capybara.

I wanted to be able to say something in my Cucumber .feature file like:

And I should not see the "My Notes" field

However, my first attempt at implementing it didn’t work the way I expected. The “My Notes” field existed on the page but was hidden. When I called Capybara’s “has_css?” method, it found the field and reported it present. So my test was failing even though the system behaved exactly the way I wanted it to. Whoopsie!

So now what?

After two hours of wrestling with Capybara and CSS selectors, I finally found a solution that I can live with. And since I know other people have had this problem, I thought I would share it here.

But first, a note: this particular technique won’t work on elements that have display: none set directly through inline styles. It requires you to set display: none through a CSS class. (But setting styles through a CSS class is a better design anyway, so I think this is a reasonable limitation.)

In my particular case, because I’m using jQuery UI, I’m using the .ui-helper-hidden class. You’ll need to figure out the class name that sets display to none in your application. The sample code below uses “ui-helper-hidden” as the class name.

Here’s the helper method that I came up with:

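Since the technique boils down to checking for that class, here is a sketch of the idea. This is a reconstruction, not the original helper: it assumes the app hides fields with the ui-helper-hidden class, and the Capybara wiring shown in the comments is hypothetical.

```ruby
# Core of the check, in isolation: an element is effectively invisible
# if it, or any of its ancestors, carries the class that sets
# display: none. (Assumption: that class is "ui-helper-hidden".)
HIDDEN_CLASS = "ui-helper-hidden"

def hidden?(own_classes, ancestor_class_lists = [])
  ([own_classes] + ancestor_class_lists).any? do |classes|
    classes.to_s.split(/\s+/).include?(HIDDEN_CLASS)
  end
end

# In a Cucumber step definition, the same check might be wired up with
# Capybara roughly like this (hypothetical step name and lookups):
#
#   Then(/^I should not see the "(.*)" field$/) do |label|
#     field = find_field(label, visible: :all)  # match hidden elements too
#     ancestors = field.all(:xpath, "ancestor::*", visible: :all)
#                      .map { |el| el[:class] }
#     raise "expected #{label} to be hidden" unless
#       hidden?(field[:class], ancestors)
#   end
```

Note that the step still finds the element first, so a typo in the field name fails loudly instead of passing silently; only then does it assert that the hiding class is present on the element or one of its ancestors.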

I hope that little helper method saves someone some time. If so, it was totally worth the two hours I spent today figuring out how to write it.

The ATDD Arch

It seems like everyone is suddenly talking about Acceptance Test Driven Development (ATDD) these days.

I have worked with several organizations as they’ve adopted the practice. And I’ve watched each struggle with some dimension or another of it. The concept behind the practice is so simple: begin with the end in mind. But in order to gain traction and provide value, ATDD requires massive, fundamental changes from the traditional organization mindset where testers test, developers develop, product managers or business analysts write requirements documents, and each role works in its own little silo.

As one person said to me, “ATDD is moving some people’s cheese really hard.”

Sometimes when organizations contact me about helping them with ATDD, they start by talking about tools. They tell me they’ve selected a tool to do ATDD, or that they want me to help them with tool selection. They’re suffering from delayed feedback and slow manual regression cycles and they want to do ATDD because they see it as a path to automated acceptance tests. They think ATDD stands for “Automated Test During Development.”

What they don’t see is that ATDD is a holistic practice that requires the collaboration of the whole team. We collaborate on the front end by working together to define examples with expectations for stories, then articulate those examples in the form of tests. On the back end, when the team implements the story, testers and developers collaborate on connecting the tests to the emerging software so they become automated.

Handoffs don’t work with ATDD. The product owners don’t establish examples with expectations unilaterally; they work with developers and testers. The testers don’t create the tests unilaterally; they work with the product owner and developers. And when the team is ready to hook those tests up to the emerging software, there is no automation specialist just waiting to churn out reams of scripts. Instead, testers and developers collaborate to create the test automation code that makes the acceptance tests executable.

Starting an adoption of ATDD with the tools is like building an arch from the top. It doesn’t work.

The tools that support ATDD—FitNesse, Cucumber, Robot Framework, and the like—tie everything together. But before the organization is ready for the tools, they need the foundation. They need to be practicing collaborative requirements elicitation and test definition. And they need, at a bare minimum, to be doing automated unit testing and to have a continuous automated build system that executes those tests.

It’s best if the engineering practices include full-on Continuous Integration, Collective Code Ownership, Pairing, and TDD. These practices support the kind of technical work involved in automating the acceptance tests. Further, they show that the team is already heavily test-infected and likely to value the feedback that automated acceptance tests can provide.

The Agile Acid Test

A while ago I blogged about how I define Agile:

Agile teams produce a continuous stream of value, at a sustainable pace, while adapting to the changing needs of the business.

I’ve gotten a little flak for it. A handful of people informed me that there is only one definition of Agile and it’s in the values and principles expressed in the Agile Manifesto. The implication was that if my definition is different from the Manifesto, it must be wrong.

At Gary Brown’s urging, I reread the principles in the Manifesto. And I discovered that my “definition” is indeed in there. It’s in the principles: “…continuous delivery of valuable software…changing requirements…sustainable development…maintain a constant pace indefinitely.”

OK, so I’ll relent. Agile is defined by the Manifesto. And my “definition” is my Agile Acid Test.

Lots of organizations claim to be adopting Agile. Few have the courage and discipline to do more than pay lip service to it. Then they claim “Agile doesn’t work.” (My favorite take on this is Ron Jeffries’ “We Tried Baseball and it Doesn’t Work.”)

So, if a team tells me that they’re Agile, I apply my acid test to see if they’re really Agile. I ask:

How Frequently Do You Deliver?

When I say that Agile teams produce a continuous stream of value, I mean that they deliver business value in the form of shippable or deployable code at least monthly, and preferably more frequently than that. Shippable/deployable means ready for production. It’s done. There is nothing left to do. It is implemented, tested, and accepted by the “Product Owner.”

Some organizations are taking this to an extreme with continuous deployment. In those contexts, the time from when a developer checks in a line of code to when she can see her work in production is measured in minutes. Obviously continuous deployment isn’t appropriate in every situation. But even if you work in a context where continuous deployment to production doesn’t make sense, consider what continuous deployment to a testing or staging environment could do to shorten your feedback cycles.

In short, Agile teams deliver shippable product increments frequently. Delivering “almost done” or “done except tested” every month doesn’t cut it.

Could You Continue at This Pace Indefinitely?

“Sustainable pace” means that the team can continue to add capabilities to the emerging system at more or less the same velocity given no increases in team size.

There are two critical aspects to achieving a sustainable pace:

  1. people
  2. technical assets

Prior to working on Agile projects, I was accustomed to spending the last few weeks or months of any project in “Crunch Mode.” Everyone on the team would put in long hours (80–100 hour weeks were typical). We’d be hyped up on caffeine, stressed out, and cranky. But we’d do whatever it took to ship.

Having shipped, we’d celebrate our heroics. And then we’d go crash.

A few days later, we’d all drag ourselves back into the office. “This time we’ll do it right!” we would declare. We would spend buckets of time up front on planning, requirements, and design. And, let’s be honest, we were still exhausted, so we’d work at a slower pace. Inevitably, as the deadline loomed, we’d run short on time in the release and once again we’d be in Crunch Mode.

This is not a sustainable cycle. A few rounds of this and people are just too fried. Some leave for greener pastures, lured by the promise of higher pay and/or more sane schedules. Others “retire on the job.” The few remaining people who stay out of a sense of loyalty and who retain their work ethic find it impossible to get anything done because they’re surrounded by newbies and dead weight. Progress grinds to a screeching halt.

So caring for the people is the number one way to ensure work can continue at a sustainable pace.

But it’s not enough. The other side of sustainable pace is caring for the technical assets. Every time we take a shortcut, like copying and pasting huge swaths of code and not refactoring to remove duplication, shoving code somewhere expedient instead of putting it where it really belongs, or failing to write an automated test we know we really ought to write, we’re creating technical debt. As the technical debt mounts, the “interest” we pay on that debt also mounts.

Simple changes require touching multiple files. The code base becomes fragile. Eventually the team gets to the point that any change causes massive regression errors. For each new tiny bit of capability added, the team has to spend days playing “whack-a-bug” to get the features that used to work fine back to working. Once again, progress grinds to a screeching halt.

(Also note the connection between the human and technological aspects of sustainable pace: burnt out people tend to take more shortcuts.)

If the organization is not caring for the people, and the people are not caring for the technical assets, they will run into trouble. Maybe not today. Maybe not tomorrow. But soon, and for the rest of the life of that code base.

How Does the Team Handle Change?

I visited one team in the middle of a transition to Agile. The team was very pleased with their progress to date. They were delivering in two-week sprints, and they were doing quite well at establishing and maintaining a sustainable pace.

But the kicker came when they showed me the project plan. They had every sprint laid out for the next 6 months. They were only a couple of sprints into the plan, but I could see trouble ahead. “What will happen if the requirements or priorities change?” I asked. The project manager squirmed a little. Promises had been made based on the master project plan. They weren’t allowed to deviate.

But change is inevitable. I don’t know the ending to that particular story, but my bet is that the project manager ended up redoing that Gantt chart a gazillion times before they shipped.

If the team is planning too far out, they won’t be able to adapt when, inevitably, priorities and needs shift. They’ll be able to continue delivering at a sustainable pace, but what they’re delivering will have substantially less value to the organization than it otherwise would.

Few Are Truly Agile

Often when I speak to an audience I ask how many people are on Agile projects. These days, no matter what audience I’m addressing, lots of hands go up. Agile is the new hot thing. All the cool kids are doing it. But when I ask audiences to self-assess on these three criteria, and then ask again how many are on an Agile project, hands stay down. Very few organizations are achieving this level of agility.

Not surprisingly, that means few organizations are really getting the benefits of Agile. In the worst cases, “Agile” is resulting in worsening quality, increased pressure, and more burnout. People on those projects are reporting that Agile is ruining their lives.

In such environments, Agile is often implemented as:

  1. Compress the schedule (because “Agile” means “faster,” right?)
  2. Don’t document anything (because “Agile” means no documentation, right?)
  3. Code up to the last minute (because “Agile” means we can change anything at any time, right?)

This is a recipe for pain: increasing levels of technical debt, burnout, chaos, and eventually inability to deliver followed by numerous rounds of Point the Finger of Blame. So yes, in these organizations, “Agile” (or the corrupted version in the form of a frAgile process) is indeed ruining lives.

My hope is that if you are in an environment like that, this Agile Acid Test can help you communicate with The Powers That Be to change minds about what Agile really means and what it looks like when done well.

Remember, just because someone says they’re doing “Agile” doesn’t mean they are. As Abraham Lincoln said, “If you call a tail a leg, how many legs does a dog have? Four. Because calling it a leg doesn’t make it a leg.”

Agile Transitions and Employee Retention

A question from my mailbox this morning (paraphrased):

Our organization is transitioning to agile. I often hear that not everybody will suit an agile team. I’m concerned that some of the non-agile-minded will drop out. How do we keep everyone on board?

My correspondent had heard statistics and advice like “20% of the people in your organization will not make the transition. Be prepared for some turnover.” And he’s right to be concerned. Agile transitions are not easy. No significant change is ever easy.

Since this is a question I hear often, and since my response to my correspondent applies to any organization in transition, I decided to post my response here.

I offer four observations:

1. People sometimes surprise us.

The person who seemed complacent, satisfied to stay in their little comfort zone, resistant to taking ownership, may turn out to be a great collaborative team member when given half a chance. I’ve seen it happen. By contrast, the “top performer” who seems so pro-active and who everyone is desperate to retain may turn out to be toxic in the new organization because she prefers the mantle of hero to true collaboration.

2. Leaving isn’t the worst thing in the world.

One of my absolute worst screwups as a manager was working too hard to “help” an employee who was not performing well.

He was on a performance improvement plan for months. Both of us were miserable about the situation. He’d been with the company for a while and, after many organizational changes, had ended up in my group. The organization had changed, and he wasn’t fitting in well in the new world order. No amount of training or coaching was helping.

When we finally mutually agreed that things weren’t working, he found another job at another company almost right away. The next time I ran into him at a conference he was brimming with happiness at his new success. His new organization loved him and he was thriving. His skills and temperament were a perfect fit there.

So while I thought I was being kind when I tried to give him every chance to succeed in my group, I was actually being cruel by prolonging his feeling of failure unnecessarily.

Similarly, at one of my clients, a QA Manager who had been resisting the transition to Agile ultimately left. Upper management was very, very nervous about what his departure would do to the QA group. But it turns out that everyone was better off.

Leaving isn’t the worst thing in the world, and sometimes it can be the best thing for all concerned.

3. Creating safety is more important than retaining individuals.

Transitioning to Agile inevitably results in increased visibility. That visibility can be incredibly scary, particularly in a political organization where people have historically practiced information hiding, and information hoarding, as a survival strategy.

Instead of trying to retain specific individuals, it’s more important that managers focus on making people feel safe. Much of creating safety is about not doing things: don’t use velocity as an assessment mechanism; don’t add pressure by blaming the team if they miss sprint targets; don’t foster a culture of competition within a team.

Even more important is what managers can actively do to promote safety: talk to individuals about their concerns; get whatever resources people say they need in order to be successful; reward collaboration over individual achievement.

4. Treat people well.

The people in the organization are humans, not fungible “resources.” They deserve support and compassion. As long as managers treat people as people consistently throughout the transition, it will all be OK, even if some people decide that the new organization isn’t a good fit for them.

Do Testers Have to Write Code?

For years, whenever someone asked me if I thought testers had to know how to write code, I’ve responded: “Of course not.”

The way I see it, test automation is inherently a programming activity. Anyone tasked with automating tests should know how to program.

But not all testers are doing test automation.

Testers who specialize in exploratory testing bring a different and extremely valuable set of skills to the party. Good testers have critical thinking, analytical, and investigative skills. They understand risk and have a deep understanding of where bugs tend to hide. They have excellent communication skills. Most good testers have some measure of technical skill, such as system administration, databases, or networks, that lends itself to gray box testing. But some of the very best testers I’ve worked with could not have coded their way out of a For Loop.

So unless they’re automating tests, I don’t think that testers should be required to have programming skills.

Increasingly I’ve been hearing that Agile teams expect all the testers to know how to write code. That made me curious. Has the job market really shifted so much for testers with the rise of Agile? Do testers really have to know how to code in order to get ahead?

My assistant Melinda and I set out to find the answer to those questions.

Because we are committed to releasing only accurate data, we ended up doing this study three times. The first time we did it, I lost confidence in how we were counting job ads, so we threw the data out entirely. The second time we did it, I published some early results showing that more than 75% of the ads requested programming skills. But then we found problems with our data, so I didn’t publish the rest of the results and we started over. Third time’s a charm, right?

So here, finally, are the results of our third attempt at quantifying the demand for programming skills in testers. This time I have confidence in our data.

We surveyed 187 job ads seeking Software Testers or QA staff, posted between August 25 and October 16, 2010, from 29 states across the US.

The vast majority of our data came from Craigslist (102 job ads) and LinkedIn (69 job ads); the rest came from a small handful of miscellaneous sites.

The jobs represent positions open at 166 distinct, identifiable companies. The greatest number of positions posted by any single company was 2.

Although we tried to avoid a geographic bias, there is a bias in our data toward the West Coast. (We ended up with 84 job listings in California alone.) This might reflect where the jobs are, or it could be because we did this research in California so it affected our search results. I’m not sure.

In order to make sure that our data reflected real jobs with real employers we screened out any jobs advertised by agencies. That might bias our sample toward companies that care enough to source their own candidates, but it prevents our data from being polluted by duplicate listings and fake job ads used to garner a pool of candidates.

Based on our sample, here’s what we found:

Out of the 187 jobs we sampled, 112 indicate that programming of some kind is required; an additional 39 indicate that programming is a nice-to-have skill. That’s just over 80% of test jobs requesting programming skill.

Just in case that sample was skewed by including test automation jobs, I removed the 23 jobs with titles like “Test Automation Engineer” or “Developer in Test.” Of the remaining 164 jobs, 93 required programming and 37 said it’s a nice-to-have. That’s still 79% of QA/Test jobs requesting programming.
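As a quick sanity check, the percentages fall out of the counts above like this:

```ruby
# Percentages from the counts reported above.
total        = 187
required     = 112
nice_to_have = 39
pct_all = (required + nice_to_have) * 100.0 / total   # just over 80%

# Excluding the 23 explicitly automation-focused titles:
total_qa    = 164
required_qa = 93
nice_qa     = 37
pct_qa = (required_qa + nice_qa) * 100.0 / total_qa   # roughly 79%

puts pct_all.round(1)
puts pct_qa.round(1)
```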

It’s important to understand how we counted the job ads.

We counted any job ad as requiring programming skills if the ad required experience with or knowledge of a specific programming language, or stated that the job duties involved using a programming language. Similarly, we counted a job ad as requesting programming skills if it indicated that knowledge of a specific language was a nice-to-have.

The job ads mentioned all sorts of things that different people might, or might not, count as a programming language. For our purposes, we counted SQL and shell/batch scripting as programming languages. A tiny number of job ads (6) indicated that they required programming without naming a specific language, instead listing broad experience requirements like “Application development in multiple coding languages.” Those counted too.

The bottom line is that our numbers indicate approximately 80% of the job ads you’d find if searching for jobs in Software QA or Test are asking for programming skills.

No matter my personal beliefs, that data suggests that anyone who is serious about a career in testing would do well to pick up at least one programming language.

So which programming languages should you pick up? Here are the top 10 most frequently mentioned programming languages (including both required and nice-to-haves):

  • SQL or relational database skills (84)
  • Java, including J2EE and EJBs (52)
  • Perl (44)
  • Python (39)
  • C/C++ (30)
  • Shell Scripting (27) note: an additional 4 mentioned batch files.
  • JavaScript (24)
  • C# (23)
  • .NET including VB.NET and ASP.NET but not C# (19)
  • Ruby (9)

This data makes it pretty clear to me that at a minimum, professional testers need to know SQL.

I will admit that I was a little sad to see that only 9 of the job ads mentioned Ruby. Oh well.

In addition, there were three categories of technical skills that aren’t really programming languages but that came up so often that they’re worth calling out:

  • 31 ads mentioned XML
  • 28 ads mentioned general Web Development skills including HTTP/HTTPS, HTML, CSS, and XPATH
  • 17 ads mentioned Web Services or referenced SOAP and XSL/XSLT

We considered test automation technologies separately from programming languages. Out of our sample, 27 job ads said that they require knowledge of test automation tools, and an additional 50 said that test automation tool knowledge is a nice-to-have. (As a side note, I find it fascinating that 80% of the ads requested programming skills, but only about half that number mentioned test automation. I’m not sure if there’s anything significant there, but I find it fascinating nonetheless.)

The top test automation technologies were:

  • Selenium, including Selenium RC (31)
  • QTP (19)
  • XUnit frameworks such as JUnit, NUnit, TestNG, etc. (14)
  • LoadRunner (11)
  • JMeter (7)
  • WinRunner (7)
  • SilkTest (6)
  • SilkPerformer (4)
  • Visual Studio/TFS (4)
  • Watir or Watin (4)
  • Eggplant (2)
  • FitNesse (2)

Two things stood out to me about that tools list.

First, the number one requested tool is open source. Overall, more than half of the test automation tool mentions are for free or open source tools. I’ve been saying for a while that the commercial test automation tool vendors ought to be nervous. I believe this data backs me up. The revolution I predicted in 2006 is well under way, and Selenium has emerged as a winner.

Second, I was surprised at the number of ads mentioning WinRunner: it’s an end-of-lifed product.

My personal opinion (not supported by research) is that this is probably because companies that had made a heavy investment in WinRunner just were not in a position to tear out all their automated tests simply because HP/Mercury decided not to support their tool of choice. Editorializing for a moment: I think that shows yet another problem with closed source commercial products. Selenium can’t ever be end-of-lifed: as long as there is a single user out there, that user will have access to the source and be able to make whatever changes they need.

But I digress.

As long as we were looking at job ads, Melinda and I decided to look into the pay rates that these jobs offered.

Only 15 of the ads mentioned pay, and the pay levels were all over the map.

4 of the jobs paid in the $10–$14/hr range. All 4 of those positions were part-time or temporary contracts. None of them required any particular technical skills. They’re entry-level button-pushing positions.

The remaining 11 positions ranged from $40K/year at the low end to $130K/year at the high end. There just are not enough data points to draw any real conclusions related to salary other than what you might expect: jobs in major technology centers (e.g. Massachusetts and California) tend to pay more. If you want more information about salaries and positions, I highly recommend spelunking through the salary data available from the Bureau of Labor Statistics.

And finally I was wondering how many of the positions referred to Agile. The answer was 55 of the job ads.

Even more interesting, of those 55 ads, 49 requested programming skills. So while 80% of all ads requested programming skills, almost 90% of the ads that explicitly referenced Agile did. I don’t think there’s enough data available to draw any firm conclusions about whether the rise of Agile means that more and more testers are expected to know how to write code. But I certainly think it’s interesting.

So, that concludes our fun little romp through 187 job listings. I realize that you might have more questions than I can answer. If you want to analyze the data for yourself, you can find the raw data here.

WordCount: a Happy Surprise

Sometimes teams that run through my WordCount simulation succeed wildly beyond anyone’s expectations. They go beyond the limits of the simulation, achieving a level of effectiveness and efficiency that’s off the charts. I’m always delighted by such occurrences.

One such happy surprise occurred at a private, onsite offering.

Before running the simulation, one of the managers in the organization took me aside. “This group is resistant to Agile. They also have a reputation of being the worst group here. Good luck.”

He shook his head at me a little ruefully. The “good luck” wasn’t sarcastic. It was a genuine expression of hope. The WordCount simulation can transform the way people think. The manager was hoping for just such a transformative experience.

Little did I know that I was about to be the one transformed.

With the manager’s comments ringing in my ears, I introduced the exercise. I was prepared for a difficult session and was emotionally geared up to deal with whatever resistance or anger came my way.

The simulation started off with the predictable pattern. In the first round the group stuck to the silos that I had put them in. As with so many groups that had run through the simulation before them, they didn’t have enough test results or customer feedback to come anywhere close to producing anything useful. They failed to recognize revenue in the first round. That’s normal: no one ever manages to ship in Round 1.

After the first round, we debriefed the results. Then the group tackled the problems they had observed in the first round by adjusting their practices. I pulled back from the group, allowing them to decide for themselves how they wanted to work together for the second round.

That’s where things took a surprising turn.

Often groups that are resisting Agile will hold tight to the process that I impose in the first round. They’ll continue to have designated working areas for Testers and Developers. Even if they do away with the role of “interoffice mail courier,” they will continue to work primarily through artifact handoffs. Sometimes they will even add new roles like “Project Manager” to enforce the process and coordinate activities, creating a level of bureaucracy that surpasses even the cumbersome initial process that I saddled them with.

This group didn’t do any of that. Instead, they gutted the initial process, removing all barriers to communication and collaboration, leaving only just enough process in place to ensure their work didn’t devolve into chaos.

When we started the second round, the Developers, Testers, and Product Managers were all co-located around a single huge table. The Product Managers immediately engaged with me to make sure they understood my requirements, then fed all the information they gleaned from me back to the other participants. The Developers and Testers collaborated on designing and executing tests. The group brought me in for demonstrations and acted on all the feedback I gave them.

The result: they delivered a working system and recognized revenue in the second round. That doesn’t happen very often. I was impressed.

The group fine-tuned their process for the third round, tightening their feedback loops even further. By the end of the third round they’d delivered on all my standard feature requests. I had to start making up new stuff. They further refined their practices for the fourth round. At that point I was scrambling for feature ideas.

I had been prepared for a difficult session in which I would have to nudge and coach and guide participants into recognizing the power of Agile practices. Instead, I had a group that so thoroughly embraced the principles behind Agile that I could hardly keep up with them.

Cognitive dissonance set in. I was almost dizzy trying to reconcile the picture the manager painted for me with the reality I had just witnessed. This group was among the most effective I had ever had the honor to work with. So why did they have a reputation for being among the worst, most resistant?

When we debriefed the whole exercise, I asked some probing questions about their perspective on Agile in the real world.

I learned that they didn’t have anything against Agile per se. But they did have a problem delivering within their organization. Their success as a group was not wholly within their control.

It was a classic case of Conway’s Law. The organization resembled the architecture. This particular group happened to be responsible for a chunk of architecture that depended on other parts of the system. The dependency was one-way, so the other groups tended to ignore this group’s needs. Unfortunately that’s all too common with anything that’s not considered part of the core system: localization, installers, configuration tools, reporting, etc.

Moreover, the group had been living in that context for a very, very long time.

In order to survive, this group had learned how to work within their context to deliver consistently. They were slow by the organization’s standards. But given that this group was completely hamstrung by dependencies on other parts of the system, the fact that they had managed to deliver at all was nothing short of miraculous.

Now that I understood their context, I understood how they had managed to fly through WordCount. In comparison to their real world situation, WordCount, even with all its initial artificial constraints, was a walk in the park.

The whole experience illustrated so nicely how Agile doesn’t solve problems, it reveals them. But it doesn’t always give us a clear picture of the root cause. The manager who took me aside recognized a problem with this group. The problem was very real. But he thought the problem was with the group. It wasn’t. It was with the context in which the group had to operate.

There’s a more general principle at play here. It’s much easier to point the finger at something we can see: an underperforming group. It’s much harder to discover the underlying systemic maladies that led to the problem. We can see the symptoms, not the disease.

And I learned that the team with the worst reputation may be stronger than anyone imagines.

Agile Backlash? Or Career Wakeup Call?

I’ve been reading accounts of how Agile has ruined lives. It’s quite the hot topic at the moment. Initially I thought it was yet another Agile backlash.

But unlike some of the previous anti-Agile rhetoric I’ve encountered, this isn’t by Traditional Consultants accusing Agile Consultants of playing with post-its instead of doing “real software engineering.” Nor is it by Traditionalists looking at Agile from the outside and saying “that dog don’t hunt.” Rather, it’s by practitioners who have been on “Agile” projects and been burned in some way. These are working programmers with skin in the game. And they’re hurting.

So I started reading more carefully.

As I read through the vitriol in the comments, I noticed two general patterns.

The first pattern involves people who have been burned by a corrupted flavor of “Agile” foisted on them by clueless managers in dysfunctional organizations. By “clueless,” I mean the kind of manager who thinks the way to find a qualified coach is to find someone with a certification from a 2-day CSM class. Or the kind of manager who thinks that if we do a global search and replace on key phrases in our process docs that we will somehow be transformed into an “Agile” organization. By this I mean:

phase -> iteration
project manager -> scrum master
requirements -> stories
estimated hours -> points
status meeting -> standup
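Taken literally, that kind of “transformation” really is just a search and replace. A hypothetical sketch (the process-doc sentence below is invented for illustration):

```shell
# The literal "Agile transformation": buzzword search-and-replace on a process doc.
# The input sentence is a made-up example, not from any real document.
echo "In each phase, the project manager holds a status meeting to review requirements and estimated hours." |
  sed -e 's/phase/iteration/g' \
      -e 's/project manager/scrum master/g' \
      -e 's/requirements/stories/g' \
      -e 's/estimated hours/points/g' \
      -e 's/status meeting/standup/g'
# Same process, new vocabulary. Nothing else changes.
```

The command succeeds, of course, and the organization is exactly as Agile as it was before.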

Sadly, there’s nothing we can do to prevent this bastardization of the word “Agile.” It happens with every buzzword. ISO, CMM, CMMi, RUP, take your pick.

I was on a project once where management decided that UML was the right cure for all our ills. Of course, it wasn’t. Didn’t help a bit. Actually made things worse. After everyone got trained up on UML we had to have meetings to review useless artifacts created only so we could check off a task item: “Create Use Case.” But UML was not the problem. The real problem was an executive team in so far over their heads that they were willing to believe in magic, and a staff so burned out and sick from the verbal abuse that they were willing to pretend magic existed.

So, to the people who are victims of Fake Agile, I extend my deepest sympathies. “Agile” ruined your life only in that it provided an all-too-glib buzzword for your organization and management to latch onto. I dearly hope that you’re able to experience real Agile: frequent delivery of value at a sustainable pace while adapting to the changing needs of the business.

Actually, maybe you had already experienced Agile, but your clueless management screwed it up by initiating an “Agile Transition” with so-called Agile Consultants or Coaches who dictated and enforced cumbersome variations on Agile practices in a sad command-and-control parody of Agile, with the ironic result of decreased communication, collaboration, visibility, feedback, and alignment. For you, I hope you can escape the nightmare and find a healthier organization.

But there’s a second pattern of responses that I find both disturbing and fascinating. The responses in this category do not appear to be motivated by a misunderstanding of Agile, but rather are attacks on Agile practices: standups, pairing, collaborative teams, open team rooms, and the like.

At the mild end of the range in this second pattern are people who object to being “interrupted” by meetings or derailed by having to collaborate with others.

I suspect that some of these folks are introverts working with a whole passel of extroverts who took to the social nature of Agile like ducks to water. Introverts need time and space to process stuff internally. If they don’t get enough time to themselves during the workday, they burn out. If this sounds like you, I hope that instead of seeing Agile practices as evil you can work with your team to achieve a workable balance of collaboration time and alone time.

At the more extreme end of this second pattern, however, the comments get nasty. Some refer to the business stakeholders as idiots, use dismissive labels like “bean counters,” and decry the work of other “crappy programmers” that they have to clean up after. These aren’t just attacks on Agile, they’re attacks on people.

Perhaps some of these comments come from good people who have been in toxic environments too long. I want to give everyone the benefit of the doubt.

But I think that at least some of these folks have a vested interest in the status quo. They liked the old way of working because it allowed them to be the magicians behind the curtain. No one knew what they did or how they did it. They could pick and choose the work they liked.

These are the folks who think they are at their most productive behind a closed door, headphones on, music blaring, being “Programmer Man” (or woman, as the case may be). They dismiss requests for help with an airy wave of their hand and a distracted, “RTFM.” They think Dogbert is right about just about everything and they resent being stuck working with a whole pack of Wallys. They like to work from 3PM to 1AM because those are their best hours: things are quiet and they’re not interrupted. They write what they want, the way they want to, without concern for what the business stakeholders asked for.

Many of these people are undeniably brilliant. They produce more code—working code, even—in an afternoon than an average programmer can write in a month. And it may even be beautiful code, too. Well-factored. Clean. Really good stuff.

And yet these folks are not nearly as effective as they think they are.

Their code might be elegant, but because they dismiss Product Managers and BAs as “idiots,” their code doesn’t do what the business actually needed.

Even if their code does do what the business needed, the fact that it was created in isolation means that it doesn’t play well with the rest of the system. So it takes weeks to integrate and test it. These brilliant programmers, self-satisfied with their own competence, drastically discount the work required to turn raw code into something that has business value. They blame everyone else for long integration cycles, unable to see their own hand in the mess.

And even if their code works within the context of the rest of the system, the organization is frequently bottlenecked through them because they’ve decided to take on the mantle of the hero.

For these folks, Agile represents loss. A forcible expulsion from their comfort zone of darkened offices with closed doors and noise canceling headphones. The loss of autonomy afforded by opaque processes. The loss of sovereignty over “their” code.

And so I wonder if perhaps at least some of the current backlash is offered by those who are being forced out of their self-imposed isolation, into an open team culture. It feels like a crisis to them. It may even feel like Agile is ruining their lives. But as Jerry Weinberg says, “it’s not a crisis; it’s the end of an illusion.” In this case I think it’s the end of the illusion that they were the best because they cranked out more code than anyone else. Or the illusion that all that teamwork stuff is a bunch of fluffy nonsense and not nearly as important as the technical heavy lifting they do.

The truth is that creating even a modestly complex software system is of necessity a social activity, a team sport. It is not enough to be brilliant. It never was. Effective team members have social skills. They listen, collaborate, share, and contribute.

So, to the people who think Agile ruined their lives because it requires an uncomfortable level of teamwork, I say: what a marvelous learning opportunity for you. Yes, you are brilliant. But to advance as a professional, you need to be so much more than that. A deeper understanding of algorithms, OO design, or functional programming will not move you forward. It’s time to get on with the difficult and painful work of learning to be a good team member. Welcome to the next stage of your career.

On Winning the Gordon Pask Award at Agile2010

On Friday August 13, I accepted the Agile Alliance’s Gordon Pask award at the Agile 2010 conference in Orlando.

I wasn’t even aware that I had been nominated, so when David Hussman called me at home shortly after 7:30AM on Tuesday August 10 to tell me that I had won, I was beyond surprised. Gobsmacked? Flummoxed? Yes, those words fit. Also grateful, honored, and delighted. I immediately made arrangements to go to Orlando to accept the award in person.

Even now I find it difficult to articulate what the award means to me. I am amazed to have been nominated. To have won? I am enormously pleased to have my work in the Agile community validated by such an honor. I think back to the prior winners and am ecstatic to be in their company. And I feel incredibly flattered to have been chosen alongside Liz Keogh who I respect and admire tremendously.

My inability to articulate my feelings led to near-paralysis in the days leading up to the official (short) ceremony. I had most of Tuesday and all of Wednesday and Thursday to organize my thoughts, but I made very little progress.

I thought about what to say during the long flight to Orlando on Thursday. When we landed in Orlando I had no more idea what I would say than when we left San Francisco.

Confessing my uncertainty about what to say to Matthew Barcomb during the banquet on Thursday night, I joked that I could use my time on stage to help him find his missing VGA adapter. “Thanks for the award!” I said. “Now, where is Matthew Barcomb? Yes, there you are. Stand up. OK, now who borrowed Matthew’s VGA adapter? Could you go over there and return it to him please?”

Matthew laughed. Brian Marick chimed in. “You could thank me,” he joked.

“OK,” I said. “What shall I thank you for?”

“For resigning from the Pask committee so that I’m no longer there to blackball women or testers.”

We laughed.

Then we drank some more. As a group we found our way to “Mexico” in Epcot Center where we hung with the Version One folks. Then we meandered over to “England” where Shane Hastie put some delicious but deadly concoction of a cocktail into my hands. We made our way to the Dance Hall on the boardwalk where we danced to pounding music and I lost track of my head.

Keenly aware that I had to be onstage between 9AM and 9:30, I switched to water at midnight and made my way back to my hotel at 1AM. Before going to sleep, I set two alarms, a primary for 7AM and a backup for 7:30AM. Then I fell asleep, blissfully unencumbered by any thoughts whatsoever.

Friday morning dawned.

The “strum” sound from my iPhone woke me with a start. It was my backup alarm. That meant it was 7:30AM. I shook away the cobwebs of a surreal dream involving Craftsman architecture and escaping cookie dough. Bleary-eyed, I turned off the alarm. No time to hit Snooze.

I wondered why my primary alarm on my iPad, set for 7:00AM, hadn’t gone off. I looked over at the iPad. The display showed the alarm clock app prompting me to dismiss the alarm. Apparently it had gone off, but silently. Whoopsie. Good thing I always set two alarms. I turned off the alarm clock on the iPad and shuffled toward the coffee maker.

I took a mental inventory. I felt better than I had any right to given that I’d been in California less than 24 hours prior, had flown across the country, and had been out partying until 1AM. I congratulated myself for not overindulging the night before, at least not too much, and for going to bed early enough to get 6 hours of sleep.

My mind turned to preparing for the day. 7:30AM. That left me an hour in which to make myself presentable, have some coffee, and reflect on what I would say on stage before I had to walk over to the Dolphin. I still needed to iron my shirt, but I would have plenty of time for that. And I could organize my acceptance speech in my head while I ironed.

Then my glance fell on the hotel room clock.

As a general rule, I don’t use hotel clocks. I find the alarms on my iPhone and iPad tend to be more reliable, and their time is usually more accurate.

But this time I wished that I had consulted the hotel room clock sooner.

It read 9:17AM.

If it really was 9:17… then I was due on stage RIGHT NOW. I could picture JB or Jim Newkirk calling my name, looking for me.

My mind raced. Which clock was right? I looked outside. Bright. Sunny. Of course. It’s Florida. Bright and sunny are normal. That didn’t mean much. I checked my own internal sense of time. Because I’d slept in 3 different time zones in the space of a week, my internal sense of time was completely broken.

My iPhone now read 7:33. I recalled my stop-over in Denver. I have had my iPhone get “stuck” on the wrong time zone before. And given that my iPad was another Apple product, it was possible that it could have the same issue.

Nagging doubts remained. If it was really that late why hadn’t someone called me? Surely someone would have called me. But maybe they couldn’t find me. I was staying at the Yacht Club because the Dolphin was full. And maybe they didn’t have my cell number. Another doubt surfaced. If the problem was that my iPhone and iPad were “stuck” on Denver time, why was it not an exact 2-hour difference? Why did my iPhone now say 7:34 and the hotel clock said 9:18?

But I had to admit to myself that it was entirely plausible that the hotel room clock was closer to correct than my own devices.

My stomach plummeted.

I quickly searched my address book for cell phone numbers of everyone I could think of. Phil Brock. Jessica Ambrose. JB Rainsberger. David Hussman. No luck. All my contact information was on my computer back in California. In an attempt to travel light I only had my iPad with me and it didn’t have all my contact info. So I had no way to contact anyone at the conference.

Full panic mode set in. I was seized with the notion that it was after 9AM. Despair and disbelief washed over me. I had flown all the way across the country just to accept the award. I felt so honored to have been selected. Then I stayed out too late and slept through my chance to say thank you. In doing so, I threw the honor back in the face of the committee. I’ve made mistakes before. I have disappointed people. But this would be among the worst of my screwups.

I threw on some clothes. Maybe, just maybe, if I got out the door fast enough and ran full tilt to the Dolphin, I could make it before 9:30. I would look like a disaster, but I would be there.

Grabbing my phone and room key I glanced at the mirror on my way out. It showed me an unforgiving image. I was a total mess. “What will people think?” I wondered. “How will they interpret my appearance?” Drunk, I decided. People would look at my hair and mismatched clothes and think I’m drunk from the night before. That’s the only reason someone would show up looking like this. My stomach sank lower.

I decided that showing up looking like a complete mess was preferable to not showing up at all. I raced out the door.

The nagging doubts resurfaced. The first thing to do, I decided, was to find out exactly what time it really is.

A hapless tourist wandered into my field of view. I raced up to him.

“WHAT TIME IS IT??!?” I demanded.

He gave me a look that plainly said he thought I was nuts. I privately agreed with his assessment. He checked his watch.

“About 7:48,” he reported.

The adrenaline that had fueled my scrambled exit from my hotel room abated. I shook with relief. My mind stopped racing. It would all be OK after all. I felt a sense of joyous reprieve. The sinking feeling reversed, roller-coasted into elation.

“THANK GOD! THANK YOU! I FEEL LIKE SCROOGE!” I shouted at the tourist with the watch. The bit about Scrooge made perfect sense in my head, but I suspect it served to confirm the tourist’s diagnosis of CRAZY.

I ran back into my room and spent the next 5 minutes just remembering how to breathe.

The remainder of my morning went as originally planned. In the next 40 minutes, I made myself presentable, had some coffee, and reflected on what to say on stage. And I laughed a little at my foolishness over the time confusion.

At 8:30AM (10:17AM HCT – Hotel Clock Time) I made my way from the Yacht Club to the Dolphin. I contemplated possible acceptance speeches as I walked, my heels thunking on the wood of the boardwalk.

I considered telling the story of my morning and the clock mixup, but decided it was too off-topic.

I considered doing an Oscars-style “Thank you to…” in which I would thank everyone I’d learned from in the Agile community, but decided that it was a massively long list, it would take too long, and no matter how careful I tried to be I would forget someone important.

I considered saying something about the past controversy around the award and the fact that two women had won this year when none had won before, but decided that was too divisive a message.

I considered gushing appreciations about the award and what it meant to me, but decided I’d probably end up babbling “Thank you I’m so honored!” over and over.

My mind was still churning even as the morning session began.

Mercifully, Liz was called up to the stage first. She spoke eloquently of community. I appreciated and agreed with her sentiments. But I knew that I could not get away with uttering, “Yeah. What Liz said!” I had to say something of my own.

Then it was my turn. As I walked up the steps I still did not really know what I would say.

But I had the germ of an idea. Brian had said I should thank him.

He had been joking. But he was right. I did need to thank him. Just not for the reason he suggested.

Brian is responsible for my starting down the path to learn about Agile, and he ushered me into the Agile community.

Brian started telling me about Extreme Programming sometime around 1999 or 2000. I ignored him for a couple years. Then at his urging, I went to see Kent Beck speak in 2001 and finally understood what Brian had been talking about. Set on the path to learning about Agile, I sought more sources of learning. I participated in one of Josh Kerievsky’s XP immersion classes. I finagled my way onto a Pivotal Labs project. I attended both XP Universe and the Agile Developer Conference (the forerunners to the Agile20XX conferences). Then Brian suggested that we do a session together at ADC2003 on Exploratory Testing in Agile. And so I started presenting at Agile conferences.

In short, if Brian had not introduced me to Agile concepts, principles, values, and the surrounding community, I wouldn’t be doing what I do today.

So, in my acceptance speech I said thank you to Brian. My words met with a few laughs, and I am still half expecting Brian to shoot me an email saying “WTF??!?”

But while I singled out Brian, I also want to thank members of the broader community. Fellow consultants and coaches. People I’ve worked with at my clients. Members of the AA-FTT community. Members of the BayXP and BayAPLN communities. Members of the local user groups that I’ve spoken to. The fine folks who work at Pivotal Labs, Atomic Object, CodeCentric, and Reaktor Innovations. Everyone who has come to Agilistry for events. Agile Alliance members.

All of you have been part of this journey. And I am immensely grateful to each and every one of you.

Communities take on a life of their own and deserve to be recognized and celebrated in their own right.

And I also wish to celebrate the individuals that make up the community.

So many, many thanks to all of you for being part of my Agile journey. I am honored. And grateful. And flattered. And extremely appreciative to be surrounded by a circle of such incredible individuals, part of an amazing community.

Random Thoughts on Record-and-Playback

Some years ago I had lunch with a QA specialist who invited me to visit him at work. He wanted to show off how he had used a macro recorder to automate his testing. Over lunch I offered the opinion that test automation is a programming activity. The QA specialist vehemently disagreed with me.

“I don’t need to program to automate my tests!” he said, waving his fork. “And I don’t want to program!” His fork punctuated his words. “All those FOR loops. Never could get the hang of FOR loops. And what’s the point anyway!” The fork was becoming dangerous. I considered ducking.

I couldn’t help but notice that he seemed angry that FOR loops were too complicated, and yet he was trying to automate tests. I haven’t visited that company again and I have no idea how they’re doing. But that guy? He scared me. No, it wasn’t the fork that scared me. It was the attitude about programming and automated tests.

Not Everyone Has to Code

I am often asked whether QA/Test people need to know how to program.

I always answer the same: QA/Test people do not need to know how to program, even on Agile projects. Everyone on a team brings something to the table. QA/Test people may not bring programming skills, but that’s OK, they don’t need to. The programmers already have that covered. Rather, QA/Test people bring testing skills and analysis skills and critical thinking skills and communication skills. Some bring other non-programming technical skills like database admin or network admin or system admin skills. And some do bring programming skills, and that’s great too. Whatever their skills, everyone brings something to the table.

And non-programming testers can collaborate very effectively with programmers to create awesome test automation (just ask Lisa Crispin).

But someone who is scared of FOR loops doing the coding part of test automation in isolation by recording and playing back stuff? Seems to me like that’s a good way for everyone on the project to waste a huge amount of time and become very frustrated.

Is Record-and-Playback a Solution to the Wrong Problem?

I’ve been thinking about that guy today as I’ve been thinking about record-and-playback test automation. I have had several conversations about record-and-playback test automation over the last few months with a wide variety of people.

Some people have said, “Yes, we discovered that what you are saying is true. So while we may start by recording a script, we modify it so much that it is not recognizable as a recorded script when we’re done.”

Others have said, “I only use the record-and-playback for quick and dirty test-assistance scripts. I know it can’t create real, robust test automation.”

Still others – usually vendors – have said, “Yes, I know you don’t like record-and-playback. But others do, and they want that capability. They want to automate their tests without programming.”

So individual contributors generally recognize that record-and-playback can be a helpful stepping stone but it is not a strategy. Yet at least some vendors still are very attached to record-and-playback, and customers apparently still demand it.

I wonder if record-and-playback is an attempt to solve the problem that QA/Test groups think they have rather than the real, underlying problem? It seems to me that the reasoning usually goes something like this:

  1. We need automated tests to provide fast feedback.
  2. Someone needs to write those tests.
  3. It has the word “test” in it, so it must belong to the QA/Test group.
  4. The QA/Test group doesn’t have much in the way of programming skills.

Therefore, the reasoning concludes, we must need something that will allow QA/Test groups to automate tests without knowing how to program. Or we must make all QA/Test people learn to code. That second thing isn’t gonna happen, so we need a tool that enables the first.

The problem is that item #3 in the list is a total fallacy. Just because something has the word “test” in it does not automatically mean that it should be assigned to a designated, independent tester. If we get rid of the original constraint #3, we simplify the problem and open a world of alternative solutions. So let’s state the situation instead as:

  1. We need automated tests to provide fast feedback.
  2. Someone needs to write those tests.
  3. The QA/Test group may not have much in the way of programming skills.

So the people who are really good at coding can do the programming part of automating tests, and the non-programming testers can collaborate with them to make sure the automated tests are useful and effective. And I do mean collaborate. Not “hand off a specification to…”, not “make a request of…”, and certainly not “supervise.” I mean collaborate as in work together on, at the same time.

Worse, is Record-and-Playback a Substitute for the Real Solution?

So if the reason customers demand record-and-playback capability in test automation tools is that it enables people who don’t know how to code to automate tests, it makes me wonder why they’re making non-programmers do programming work.

The most common reason I hear from QA/Test people is that the programmers won’t automate tests, so the testers have to do it. The most common reason I hear from the programmers is that they don’t have time, and besides, there is a whole QA/Test group assigned to that kind of work.

But it seems to me like the real issue here is that we’re trying to use a tool as a substitute for something else. We’re using it as an alternative to real collaboration.

So now when I hear someone tell me that they’re using record-and-playback a lot in their test automation strategy, it suggests that perhaps the test effort and development effort aren’t integrated, that the organization is still operating in silos, and that we need to work on breaking down barriers to collaboration before we start talking seriously about tools.

Acceptance Tests as a Customer Deliverable

There’s a discussion going on over on the software-testing discussion group about a customer’s delivery requirement that the software be handed over with an acceptance test script.

I want to illustrate my perspective on this topic with a short story. If stories aren’t your thing, skip to the end of the post. I make my real point there.

A Fictional Digression with Little Green Men, a Magic Transporter Stick, and a Jar of Mustard

It’s just after dusk. You’re hanging out on your back porch enjoying the evening.

Suddenly with a whir and a whine, an alien space ship lands in your back yard. Two little green guys pop out. At first, you hear a bizarre series of clicks and whistles, but soon you hear a mechanical voice. You guess it must be a translator device. The voice says:

“Greetings! Do you have a jar of mustard? We will trade you our advanced technology for mustard.”

You aren’t sure you heard them right. This is too surreal. Mustard?

You stare at the green guys. They’re kind of roly-poly. Each one has five eyes on stalks on the top of its head, and five squat arms with a mass of tentacles coming out of the middle of its body. And they are green. Decidedly, definitely green.

“Little green men,” you mutter under your breath.

They repeat their message: “Greetings! Do you have a jar of mustard? We will trade you our advanced technology for mustard.”

You shake your head to clear it.

One half of your brain is still boggling at the fact that there are aliens hanging out by your rhododendrons.

The other half of your brain is remembering the huge economy-sized jar of mustard you got the last time you were at MegaStuff. You weren’t ever going to finish it anyway.

“Yeah, hang on,” you say. You go to fetch the jar of mustard. After all, you are the neighborly, helpful sort.

You are also the careful sort. As you are rummaging in the pantry it occurs to you that you don’t know what this “advanced technology” is. What does it do? How does it work? Is it safe? What if this is a plot to get a human to press the Go button on a Planet Death Device?

You can’t be too careful, after all. Time to ask some questions.

You come back outside, holding up the jar of mustard. The aliens nearly dislocate their eye stalks staring at it. All ten eyes are gleaming at you. It’s disconcerting.

Undaunted, you start your questions. “Before I give this to you,” you say, “what is this advanced technology? What does it do?”

The aliens each swivel one eye stalk up toward your face. The other four eyestalks remain firmly rooted on the mustard jar.

“It’s a portable personal surface transporter,” says the mechanical voice. “It will take you wherever on Earth you want to go.”

You consider this for a moment. A portable personal surface transporter. That could be handy. You consider your 45-minute commute each way every day, the long drives to see family, and the last time you paid an arm and a leg for a cramped economy seat in the back of the plane because you promised yourself you’d make it off your home continent while you were still young.

You look at the jar of mustard in your hand. You consider the possibilities.

Seems like a more than fair trade. A jar of mustard for instant transportation anywhere? That’s even worth some risks.

“OK,” you say. You walk over to the aliens and thrust the jar of mustard toward them. You have to bend down to get close enough. One of the aliens seizes the jar from your grasp with three of its hands. You’re surprised at how easily it hefts the weight of the jar given its size. All five of its eyestalks are on the jar, waving around the lid. Its hands are a sea of motion as its tentacles move over the large plastic container.

You look at the other alien. It’s riveted on the jar too. “What is it about these guys and mustard?” you wonder.

Then you notice that the other alien is holding a stick with a ball on the end in one of his hands. On him, it’s the right size for a walking stick. For you, it’s a little larger than a magic wand. A magic wand with a bulbous end.

“Magic wand indeed,” you mutter under your breath.

The alien hands it to you. It’s heavier than it appeared. Smooth, polished. “How does it work?” you ask.

A short series of clicks and whistles emits from one of them, you can’t tell which. Then the mechanical voice says: “Tap the bottom to turn it on.” Both aliens still absorbed by the mustard. One is holding the bottom of the jar while the other unscrews the lid.

You tap the bottom of the stick part of the device. The ball on the top starts glowing.

“Now what?” you ask.

One alien eyestalk swivels in your direction. “Touch the top wherever you want to go.”

You look at the ball again. It’s now a globe. A blue-green globe. A model of the Earth.

“So I touch New Zealand and I’m there? Really?” you turn it trying to find New Zealand. Once found, your hand hovers over the country.

“Wait!” one of the aliens swivels all five eyestalks over to you. He has a hand poised over the mustard jar. “WAIT!”

He moves toward you, eyestalks waving wildly. You decide that’s probably an indication of panic. You stop moving your hand toward the globe.

“What?” you ask.

The alien shuffles over and peers at the globe then your hand. He reaches up with three of his arms and grasps your hand in his tentacles.

“Oh dear,” the mechanical voice now seems to be coming from the vicinity of your feet. “Your hands are too big. This won’t work. No. If you touch the globe you could end up anywhere. Anywhere. You might not end up in New Zealand at all. Your hand is so big you could even end up in Tasmania. More likely you’ll drown in the middle of the ocean. Can’t have that. I’d feel terrible. You’ll have to enter the geocode instead of touching the globe.”

The mechanical voice is flat, emotionless. But you sense concern and perhaps fear in the clicks and whistles that the alien is making.

“Enter the geocode?”

“Yes,” the alien nods his eyestalks. “Let me show you.”

The alien takes the stick and manipulates it with his tentacles. He disappears.

Then, from over the fence, you hear: “Over here!” You look. The alien is waving at you from across the neighbor’s yard. Then a moment later he’s back by your side.

“See,” says the alien, pointing to the stick. “You need to press the sequence of buttons here with the geocode of where you want to go.” He hands you the stick. You look at it. The stick appears smooth and featureless.

The alien is acting like this is the most natural thing in the world. You’re baffled.

“Enter the geocode?” you ask again, hoping he’ll explain in more detail.

In the meantime you look over at the other alien. He appears to be enjoying the mustard. A lot. Noticing your attention, the second alien nods an eyestalk in your direction. “This is good stuff,” he says. Two eyestalks swivel to his compatriot. “Dude, you have to try this.” He holds out the jar.

The alien who has been helpfully explaining the portable personal surface transporter to you shuffles over to his friend. He is now lost in the mustard. An arm reaches into the jar. His eyestalks relax, drooping slightly.

“Oh, that is good. It’s been too long. Glad we stopped by Earth!” Both the aliens turn back toward their ship.

“WAIT!” you shout. “We’re not done. I don’t know how to use this thing! I don’t know how to tell if it’s working right! How do I enter the coordinates? How can I tell if I entered them correctly? HOW DO I KEEP MYSELF FROM LANDING IN THE MIDDLE OF THE OCEAN?”

The aliens seem impatient, but the helpful one turns back. He shuffles back to you and grasps your hand again. Manipulating your fingers with his tentacles he touches your fingers to the stick. “Here,” he says. “We’ll go over to there together.”

You can feel slight, ever so slight, impressions on the stick. Under his guidance, you tap on them. As you tap each one, it pulses back, apparently indicating that it received the tap. Partway through the sequence, you see a light appear on the side of the stick. A few more taps and another light appears. One last tap, and the globe flashes briefly. The alien tugs at your finger, pulling it to the other side of the stick and presses it down firmly. The next thing you know you’re on the other side of the rhododendrons.

“And back,” he says. He guides you to repeat the actions, but this time with a different sequence of taps. Different lights appear on the stick. The globe flashes again. Suddenly you’re back in front of the alien ship.

You have the hang of manipulating the buttons on the device now. But how will you know what the sequences are for different areas? What do the lights mean? How will you know if something has gone wrong?

You realize the aliens have turned back to their ship again. “WAIT!” you shout again.


You watch ten eyes all roll at the same time on only two bodies. The effect is disconcerting. But you need answers. You hold your ground, fearful and angry. You do not want to try randomly entering sequences. Who knows where you could end up?

“Don’t you have some documentation or something?” you plead.

One of the aliens shuffles up the ramp into the ship, then returns faster than you thought possible with a stack of paper.

“Here,” he says. “It’s a translation. It won’t be perfect. But it tells you what you should expect.”

You scan the document. It contains instructions for tapping, but even more importantly, it contains expected results. You realize that the first light you saw on the stick indicated that the stick accepted the latitude. The second indicated it accepted the longitude. The flash on the globe, had you looked at it carefully, would have given you some idea of where the coordinates would be taking you. The final tap on the other side of the stick confirmed the destination and activated the device. You notice that there is a cancel button next to the confirm button.

The next time you look up, you realize that the aliens are already in their ship. A soft whine from their engines and they’ve taken off. A scrap of paper floats down to you. “So long and thanks for the mustard!”

You go inside to look up the geocode for New Zealand.

Back to the Point

Now, let’s imagine for a moment that you are the hero in our story.

Unless you have a death wish, you’re probably not going to just start testing that thing to see what it does. Not really. You’re not going to push any buttons unless you have a very clear idea of what the expected result is.

Instead, you’re going to follow the instructions in the documentation exactly. And you want to know what you should expect to happen each step of the way. If something is going wrong at any point, you want to know right away so you can minimize the damage.

That’s not really testing: you only take actions where you know exactly what to expect. Testing involves experimenting with unknowns.

To frame your expectations, you need information. But the information you need is different from what you find in a typical user guide.

It is, however, very much like what you would find in a scripted acceptance test:

Touch buttons 3 6 . 8 6 1 9 7
See the 3rd blue light.
Touch buttons 1 7 4 . 7 7 4 1 6 6
See the 4th green light.
Touch the Destination button.
See a flash on the Globe at Auckland.
Touch the Confirm button.
Arrive in Auckland Domain, Auckland, New Zealand.
See greenery.
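
Since this blog is about Cucumber-style testing, it may help to see the same script in that form. This is a hypothetical sketch: the step wording and the feature structure are my own illustration, not steps from any real step library.

```gherkin
Feature: Portable personal surface transporter
  As a human with oversized hands
  I want to travel by entering a geocode
  So that I do not land in the middle of the ocean

  Scenario: Travel to Auckland by geocode
    Given the transporter is switched on
    When I enter the latitude "36.86197"
    Then I should see the 3rd blue light
    When I enter the longitude "174.774166"
    Then I should see the 4th green light
    When I touch the Destination button
    Then the globe should flash at Auckland
    When I touch the Confirm button
    Then I should arrive in Auckland Domain, Auckland, New Zealand
    And I should see greenery
```

Notice that every action is paired with an expected, observable result — that pairing is what makes it a checklist you can follow without understanding the device.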

Now let’s imagine that you work for a huge company that has bought a software package. It’s not exactly alien technology. It may not even be particularly advanced technology. But it’s important to the business or the business would not have purchased it.

Let’s say that you’re not the end user for this software. You are in IT. This is a system that the sales team has purchased. Since you’re part of central IT, and you drew the short straw this month, you are responsible for the successful rollout of this new software.

You don’t actually understand what it does. In fact, you don’t really understand your sales people or what they do. You think they might be aliens. But you know that it’s gonna be on your head if you screw this up. So you want to make sure that everything is operating as expected.

You also know that there are finicky details about how the software is deployed and configured that make our imaginary portable personal surface transporter stick look like an Etch A Sketch.

How are you supposed to know if you set up the software right? If it’s configured correctly? If it’s doing what it’s expected to do?

Note that you don’t want to test the software. Even if you had time to test the software (you don’t), it’s not your job to do so. Someone else in your company made the decision to buy the software, so you’re not evaluating it. And the vendor presumably already tested that the software does what they intended it to do, at least some of the time. You’re not trying to find problems. You just want to get it to work and present evidence to the sales team that their precious new sales automation solution is ready to go.

In short, you need to conduct a final acceptance test after you have everything set up and configured.

But you are not qualified to design an acceptance test for this package. You do not know the software. You do not understand what it does. Your job is to install it and make sure that it’s wired up correctly.

The sales team is not qualified to conduct that test. They’re not even trained on the new system yet. The committee that decided to buy the software isn’t qualified to conduct the test. Some of them can barely retrieve their email unassisted.

So you need the people who made the software to tell you what to do and what you should expect.

Back to the software-testing group discussion: this is why customers ask for an acceptance test script to be delivered with software.

It is a perfectly reasonable request. The customer is not stupid or lazy or incompetent. The customer is asking for a deliverable they need in order to verify that the product they purchased operates as expected in their environment: with their permissions and authentication and security schemes, connecting to their customized databases, on their network with packet filters and firewalls. They just need to know it’s working. And reading the user guide won’t help them figure that out. Not efficiently. So they want an acceptance test. Think of it as a diagnostic checklist.

Oh, and if your organization is doing ATDD (Acceptance Test-Driven Development), you’d already have an acceptance test script: you could just give the customer a prettied-up version of the natural language expectations, without the automation.