Acceptance Test Driven Development (ATDD): an Overview

“Begin with the end in mind.” — Stephen R. Covey

Acceptance Test Driven Development (ATDD) is a practice in which the whole team collaboratively discusses acceptance criteria, with examples, and then distills them into a set of concrete acceptance tests before development begins. It’s the best way I know to ensure that we all have a shared understanding of what we’re actually building. It’s also the best way I know to ensure we have a shared definition of Done.
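To make that concrete, here’s a minimal sketch of what “distilling examples into acceptance tests” might look like in practice. The feature, function name, and rules here are hypothetical, invented purely for illustration; the point is that the examples the team discusses with the Customer become executable tests before any production code exists.

```python
import string

# A hypothetical feature under discussion: count word occurrences,
# ignoring case and punctuation. (This function stands in for the
# production code the team would write after agreeing on the tests.)
def word_count(text: str) -> dict:
    cleaned = text.lower().translate(
        str.maketrans("", "", string.punctuation)
    )
    counts: dict = {}
    for word in cleaned.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Acceptance tests distilled from concrete examples the team
# discussed with the Customer:
def test_counts_repeated_words():
    assert word_count("the cat and the hat")["the"] == 2

def test_ignores_case():
    assert word_count("Dog dog DOG") == {"dog": 3}

def test_strips_punctuation():
    assert word_count("stop, stop!") == {"stop": 2}
```

When all of these pass, the team has an objective, shared definition of Done for that story, rather than each role holding a private interpretation of the requirement.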

Obviously I think this is an important Agile development practice. In fact, it’s one of the core pieces of my Agile Testing class. Yet somehow I have neglected to write about it much on this blog. Time to rectify that.

So, for your reading pleasure, I now present a PDF document with a detailed example of the whole ATDD flow.

Enjoy! (Comments/questions welcome.)

Beware the Hero

The team in the WordCount simulation was floundering. We were midway through the third round and it looked to me like the team wasn’t even close to shipping.

Most teams are able to produce a basic system that I’ll accept in my role as the Customer in the third round, or early in the fourth. That’s important to me; teams need to enjoy at least a modest success. Total failure is depressing, and I have found that teams that fail are more likely to point fingers than glean insights about Agile Testing.

But sometimes, like in this case, a team will struggle to ship anything at all. So I have a variety of techniques for increasing the probability that the team will be able to produce an acceptable system by the end of the fourth round. Sometimes I give the team a little extra time in a round. Sometimes I ease up on my more persnickety requirements. Sometimes I push for specific practices that I think that they’re missing during the mini-retrospectives between rounds. And sometimes, particularly if I sense that a little individual coaching would help, I gently and quietly intervene during the simulation rounds.

In this case, I had noticed that every time the Developers installed new code on the Computer, it caused some kind of problem. The most recent update caused the Computer to throw purple catastrophic error cards with every input. Worse, the code updates were coming very slowly, so I didn’t think the Developers would be able to fix all the problems before time ran out.

I read the code deployed on the Computer, and immediately understood why the Computer was throwing catastrophic errors: there was a simple error in logic. I could also see why the Developers had not detected the problem: the error in logic was hidden in instructions that were so overly complex they were nearly incomprehensible.

So I decided that a Developer intervention was in order.

I walked over to where the Developers were working. One Developer stood a little apart from the group, writing code on green cards taped to the wall. The other five Developers sat around the table, talking and sketching ideas on notepads. I walked up to the one at the wall. He was, as far as I could tell, the only one actually writing code.

“How’s it going, Fred*?” I asked.

“Rough,” he grumbled. “I have all these bugs I’m fixing!” He waved his hands at the collection of red bug cards taped to the wall next to the code. “And I’m also having to talk to the Product Managers to understand the real requirements, and also tell the Testers what to test.”

“OK, so you’re overloaded,” I said, reflecting Fred’s frustration back to him. “Are you the only one writing code? What are they doing?” I asked, gesturing to the other Developers.

The other Developers looked up at us. Fred replied, “I’m doing the bug fixes on this release. They’re working on the next release.”

“Yeah,” said a Developer, holding up his notepad. “We’re figuring out how to do the Sorting requirement.”

“So your fictional company hasn’t shipped anything yet, and you have one guy working on making the current release shippable and five people theorizing about how the next set of features could maybe, perhaps, be implemented?” I summarized the current state of affairs.

They all nodded.

“How about if someone helps Fred with the current release?” I asked. “Maybe it would help if someone paired with him on the code?” I looked to the other five Developers for a volunteer.

Fred looked offended. “No,” he said. “That’s not necessary. Let them work on the next release. I’m fixing the bugs. I’m almost done.”

At first, I thought the problem was that the other five Developers were having too much fun theorizing instead of coding, leaving Fred to do the dirty work. But I suddenly realized that there was another dynamic at play.

Fred was enjoying being the go-to guy for this release. Yes, he complained that he had a rough job. But that complaining was his way of saying, “See how hard I work? See how much I can take on? I am carrying this whole release on my shoulders, by myself. I am a Super Star!”

I was not going to be able to talk Fred into accepting help. So I decided to leave Fred to his work, and check in with the Testers. I’d noticed that some of the “bugs” that Fred was supposed to fix weren’t really bugs from the Customer’s point of view, and I wanted to find out how the Testers were designing their test cases.

After checking in with the Testers, I checked in with the Product Managers. Now I was sure that all the requirements and tests were in sync and represented the real Customer requirements. I returned to Fred. He was plugging away at code fixes, batching them up for one big-bang install.

“How’s it going?” I asked.

“Good, good. Almost there!” Fred seemed frantic but cheerful.

I worried that Fred was about to install a lot of changes all at once. Installing one change at a time would have given Fred much better feedback.

Then I noticed a new bug on the wall, one with a purple catastrophic error card attached. In his harried state, Fred was ignoring incoming bug reports and test results. “What’s this?” I asked, pointing to the bug.

Fred frowned and shook his head as he read the bug report. He tore it off the wall and rushed over to the Computer table. I followed. Fred slapped the purple card in the middle of the table. “What’s this?” he demanded.

“An error,” replied one of the Computers. “Because of this instruction,” she pointed to a green card. “It doesn’t make sense.”

Fred leaned over, reading. “Of course that makes sense,” he declared, pointing emphatically at the offending green code card. “That instruction means that you ignore single quotes unless it’s an apostrophe!” Fred walked through the instruction set with the Computer again, step by step, explaining what the code meant. Finally, satisfied that the Computer understood his intent, he went back to his coding wall.
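Fred’s rule — ignore single quotes unless it’s an apostrophe — is exactly the kind of instruction that’s easy to say and easy to get wrong. Here’s a minimal sketch of one way to state it unambiguously; the disambiguation heuristic (a quote counts as an apostrophe only when it sits between two letters) is my assumption for illustration, not the simulation’s actual rule:

```python
# Assumed heuristic: a single quote is an apostrophe only when it
# appears between two alphabetic characters (as in "don't");
# otherwise it is a quotation mark and is dropped.
def strip_quote_marks(text: str) -> str:
    out = []
    for i, ch in enumerate(text):
        if ch == "'":
            prev_alpha = i > 0 and text[i - 1].isalpha()
            next_alpha = i + 1 < len(text) and text[i + 1].isalpha()
            if prev_alpha and next_alpha:
                out.append(ch)  # apostrophe inside a word: keep it
            # else: quotation mark: drop it
        else:
            out.append(ch)
    return "".join(out)
```

Even this small rule has edge cases (elisions like “’tis”, possessives like “James’”) — which is precisely why burying it inside convoluted instructions made the logic error so hard for anyone but Fred to spot.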

After Fred left the Computer, we ran a Customer acceptance test. I fed the Customer acceptance test phrases as input to the Computer. They were a little closer to having an acceptable system: two of my four acceptance test cases were passing. Closer, but not there yet.

Time was up for the round. So I called “Stop!” and we went into our short debrief and mini-retrospective. The team was grim. The tang of failure hung in the air and made everyone a little edgy.

For half an hour, we talked about bugs and acceptance tests and requirements and alignment and feedback. The team decided to make some changes to improve their practices, and we began again. I felt confident that the team had the information they needed, and much more effective team practices in place, and that they would have an acceptable system early in the fourth round.

However, as we got into the fourth round, I had the sinking feeling that they might actually still fail.

Most of the organization was doing well, as I thought they would. They were working together effectively, collaborating to ensure that tests were automated, that the test results were visible, that requirements and tests were aligned, and that the bugs and new requirements were appropriately prioritized. The team as a whole was on track.

But Fred was all over the place. He was still working on those bugs. He was coding, he was talking to Testers, he was talking to Product Managers, he was arguing with the Computer about how to interpret his instructions. He raced around the room, frantically busy. And Fred’s fixes were the only thing standing between the team and success. No matter how well the rest of the team did, Fred’s inability to get those few remaining bugs fixed was dooming them.

It was not because Fred was incompetent. Far from it. Fred was very capable. Yes, the code he wrote was a little convoluted. But that wasn’t the real problem. The real problem was that because Fred insisted on doing all the fixes for the first release himself, the entire capacity of the team was throttled down to what a single person could do. And that one person was so overloaded he couldn’t process the information he was getting—the test results and bug reports—that he desperately needed in order to make the code work correctly.

By taking so much on his own shoulders, Fred was doing something Ken Schwaber cautioned me about when I took the Certified Scrum Master class: he was being responsible to the point of irresponsibility.

Eventually the team did succeed. Sixteen minutes into the 15-minute round, the team shipped acceptable software and recognized revenue, and there was much rejoicing.

During the end-of-simulation debrief, I offered my observations about how everything landed on Fred’s shoulders, and how that caused some of the pain they experienced. But I chose not to offer my opinion that Fred relished his role as The Guy Who Did Everything. I don’t believe in public humiliation as a teaching mechanism.

I did, however, want to take Fred aside for a private discussion about his role in the team’s near-failure. Unfortunately I didn’t have a chance before the end of class. It’s entirely possible that Fred still does not realize that he almost single-handedly brought down the whole company. In his mind, he’s the guy who made the software work.

Fred’s experience taught me a crucial lesson: Beware the Hero.

The Hero mindset is deeply ingrained in software industry culture. As an industry, we’ve romanticized the image of the lone programmer, hyped up on caffeine and pizza, pulling all-nighters and performing technological miracles.

Fred wrote the vast majority of the final shipping code. He fixed the vast majority of the bugs. He coaxed the Product Managers into clarifying the requirements. He helped the Testers know what to test. By all possible measures, Fred was a super star, a Hero.

And yet the team only just barely managed to ship as the fourth round went into overtime. It was an unnecessary near-miss: I’ve seen numerous teams run this simulation, and given the practices this team had in place by the end of the fourth round, they were well-positioned to crank out feature after feature and rack up a tidy sum in fictional revenue. The only thing that held the team up was the bug fixes that Fred had been working on, by himself, for half the simulation.

The team very nearly failed.

All because Fred insisted on being the Hero.

* Not his real name.