Back-of-a-Napkin Agile Assessment

I am often asked: “How do I know if my team is really Agile? They claim they’re Agile, but I think they’re cheating.”

In response I usually ask a barrage of questions aimed at discovering how well the team is doing at delivering valuable and potentially shippable increments frequently (at least once a month), consistently (month after month after month), all while adapting to changing priorities and business needs.

To me, that’s the essence of Agile. It’s not about whether or not a team is doing TDD or CI or pairing or automated regression testing, although I do strongly believe that those are all good practices, and I evangelize them wherever I go.

Ultimately, being Agile means delivering business value frequently and consistently while adapting to changing business needs. No matter what practices we’re following, if we aren’t doing that, we’re not Agile.

So as I work on a next-generation revision of the materials for the upcoming Agile Testing class that Dale Emery and I are co-leading, I decided it would be nice to include an Agility self-assessment, so people can answer for themselves whether their team is really Agile. And as long as I'm writing it down, I wanted to share it here and get feedback on it.

And so with no further ado, here's my new back-of-a-napkin Agile assessment checklist:

  1. The team knows, for sure, that at any given time they are working on deliverables that have the greatest value for the business.
  2. When the implementation team claims to be Done with something, the business stakeholder usually agrees that it is, in fact, done and Accepts it.
  3. When something is Accepted, it is sufficiently well-built and well-tested that it would be safe to deploy or ship it immediately.
  4. The team delivers Accepted product increments at least monthly.
  5. When the product increments are shipped or deployed, the users and customers are generally satisfied.
  6. If the business stakeholder changes the priorities or the requirements, the implementation team can adapt easily, switching gears to deliver according to the updated business needs within the next iteration.
  7. The business stakeholders express confidence that they will get the capabilities they need in a timely manner.
  8. The business can recognize real value from the deliverables: each product increment ultimately has a positive impact on the bottom line.
  9. The team has been working at the same pace, delivering roughly the same amount every iteration, for a while.
  10. The people on the implementation team agree that they could keep working at the current pace indefinitely.

So how Agile is your team? How many of the statements above would you say characterize your team?

And while I’m asking questions, is there anything that you think should be added, removed, or modified?
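If you like to keep score, here's a minimal sketch of a checklist tally in Python. The statement summaries and the scoring function are purely illustrative (my own paraphrases, not part of any class materials), and a raw count is all it computes; interpreting the number is up to you.

```python
# A tiny, hypothetical scorer for the ten-statement checklist above.
# The summaries below are paraphrases for illustration only.

CHECKLIST = [
    "Working on the highest-value deliverables",
    "Stakeholders usually Accept what the team calls Done",
    "Accepted work is safe to ship or deploy immediately",
    "Accepted increments delivered at least monthly",
    "Users and customers are generally satisfied",
    "Priority changes absorbed within the next iteration",
    "Stakeholders are confident about timely delivery",
    "Each increment has a positive bottom-line impact",
    "Pace has been steady for a while",
    "The team could sustain this pace indefinitely",
]

def score(answers):
    """Count how many statements characterize the team.

    `answers` is a list of ten booleans, one per statement.
    """
    if len(answers) != len(CHECKLIST):
        raise ValueError("expected one answer per statement")
    return sum(answers)

# Example: a team that agrees with all but statements 3 and 9.
answers = [True, True, False, True, True,
           True, True, True, False, True]
print(score(answers), "out of", len(CHECKLIST))  # 8 out of 10
```

Nothing here is more than a counter, which is rather the point: the assessment fits on a napkin, and the hard part is answering the questions honestly, not adding them up.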

The WordCount Simulation

I’ve mentioned my WordCount simulation here before, and some folks have expressed curiosity about it. I started writing a blog post about it, and quickly realized that it would take a whole lot of blog posts to tell all the stories I want to tell. So I’ll start by explaining the simulation in more detail, and in subsequent blog posts, I’ll tell stories. Some will be amusing little anecdotes. Others will be tragic tales of woe. You’ll laugh, you’ll cry…oh, never mind. Let me just describe the simulation.

So this is the simulation that I use in my Agile Testing class, as well as in other contexts where I want to teach lessons about increasing Agility. The mechanics of the simulation itself are very general: the simulation models the organization of a software company. It just happens to work really well for making Agile concepts very visible, and visceral.

At this point WordCount is a mature simulation: I have honed and refined it over the last several years, and have run it countless times. The simulation requires a lot of moving parts. Just printing the supplies to run the simulation can be a chore. It involves 26 files whose contents are printed on 5 different colors of index cards, 5 different colors of paper and cardstock, and 2 different sizes of stickers. Fortunately for my sanity, I now outsource most of the printing work to a local printer. (Though I swear I can hear the manager groaning when he sees me walk in the door.)

All those moving parts support a relatively simple organization: WordCount, Inc. is a fictional company that makes word counting software. Each participant chooses a role within the company from the following descriptions:

The Product Managers interact with the Customer (played by me or a co-instructor/co-facilitator), define the product, and write requirements on blue index cards.

The Developers turn the requirements from the Product Managers into executable instructions (“code”) on green index cards. The code is then installed on the Computer.

The Testers design test cases on yellow index cards and execute them against the code installed on the Computer by submitting input on white index cards, and receiving output, also on white index cards. Of course, not all input can be processed. If the Computer cannot process the input, it throws a catastrophic error on a purple index card. This usually prompts the Testers to write a bug report on a red index card.

The Computer follows the instructions that the Developers wrote on the green index cards (the “code”) to take the input on the white index cards and process it to generate output, either word counts on a white index card or an error on a purple catastrophic error card. Multiple participants can play the role of the Computer, but the Computer team must coordinate their efforts internally. As you can imagine, sometimes different participants playing the role of Computer interpret the code differently, and give different output for the same input. This inevitably leads to tremendous confusion among the Developers and Testers.

The lone Interoffice Mail Courier (there can be only one) delivers messages and Project Artifacts between groups.

The Observers watch the interactions and share their observations with the group during debriefs. They’re not really part of the company, but they are active participants in the simulation. I usually hand one of the Observers a camera so we have photographic evidence that we can reflect on later as we distill lessons learned from the simulation to apply in the real world.

Throughout the simulation, we work in rounds: 15 minutes of work, then a pause for a mini-retrospective in which we reflect and adapt the process to increase Agility.

When we start, there is an existing process in place that resembles a very traditional organization with groups working in silos and very constrained communication. The process explicitly defines responsibilities and communication paths by specifying things like “Communication between Developers, Testers, and Product Managers occurs only through Interoffice Mail,” and “Only Developers may create or modify Code.”

Each group starts with an initial set of artifacts. So the Product Managers have notes from their fictional predecessor’s conversations with the customer, the Developers and Testers have an initial set of requirements, the Developers and the Computer have version 0 of the code, and the Testers have an existing set of tests.

As we begin the simulation, I explain that, as the Customer, I desperately need their word counting software to run my business; that I have been promised delivery of the basic word counting system Real Soon Now; and that so far nothing has been delivered to me. I also make it clear that the more features they ship, the more money they make, and that their goal should be to maximize revenue.

We then work for 15 minutes.

In a typical first round, Product Managers work furiously to produce requirements on blue cards while Developers frantically write code on green cards and Testers crank out test cases on yellow cards. Pens go flying as participants scribble madly on colored 3×5 index cards.

The Interoffice Mail Courier stands by, ready to deliver messages, but typically only has to deliver a handful. One Interoffice Mail Courier was so concerned about the potential volume of mail that he recruited an assistant. They both stood idle for most of the first round, bored. Similarly, the Computer usually sits idle for the first round, and often starts throwing errors just to amuse itself. The Product Managers, Developers, and Testers are so busy producing artifacts that they don't have time to communicate or even execute the software that already exists.

When we debrief the first round, the Product Managers, Developers, and Testers often report feeling stressed, pressured, and frustrated while the Interoffice Mail Courier and Computers report being under-utilized, idle, and bored.

But all that stress and self-imposed pressure doesn’t yield success. As of today, no group has ever made money in the first round. No group has even come close. In fact, out of all the times I have run this simulation, only one group has ever managed to demonstrate the software to the customer during the first round.

And yet some managers look at the efforts of the Product Managers and Developers and Testers in the first round and see focused productive work happening. In a couple of cases, managers have walked up to me during the first round and said, “Look at how focused each of the teams is! And notice how quiet it is in here! Everyone is working so hard! This is great!” One of the managers was kidding: she knew that the lack of communication would be a problem. But the other manager? She was not kidding. She was dead serious. That’s what she thought productive teams looked like.

As you can probably imagine, I have lots of stories about this simulation. There’s the story of the Hero who single-handedly almost caused the failure of the whole company. There’s the story of the group that actually did fail entirely, and then blamed me for their failure, as though I had played some trick on them. There’s the story of the Micromanaging Manager. There’s a story about a Project Manager who quit in disgust after 15 minutes. And there’s a story about a Product Manager Who Always Said No.

There are stories about Developers arguing with the Computer about the correct interpretation of the code. And there are numerous stories of Product Managers who got so wrapped up writing requirements they forgot to talk to the customer, and Developers who wrote Code that was so complex even they had no idea how to interpret it, and Testers who spent so much time documenting test cases that they forgot to execute them, and myriad other forms of organizational dysfunction that just happen to look a whole lot like real life. That’s why simulations are so much fun: they give us the opportunity to experience all the complexities of the real world distilled into a microcosm.

Along the way I’ve learned numerous lessons like “Never Use a Communication Solution to Solve a Visibility Problem,” and “When Someone Says They Want Control, They Might Just Need Visibility,” and “If the Customer Offers You an Example, Take It.” So I’ll tell stories about those lessons too.

But all those stories will have to wait for another day. This entry is long enough as it is.