Lost in Translation

A colleague recently described the requirements process in his (non-Agile) organization to me. In their process, the business people talk to the business analysts who talk to the systems analysts who give requirements to the programmers.

As he was explaining all this, I couldn’t help but reflect on all the possible points of failure.

I’ve seen conversations around requirements go horribly astray with just two people: a business person specifying what they want, and a developer who is supposed to implement it. How much more likely must misunderstandings be when requirements are coming from people who are multiple levels of indirection removed from the originators of the requirements?

The first thing that came to my mind was the Telephone Game. In case you’ve never played it: one person thinks up a story and whispers it to the next person, who whispers it to the person after that, and so on down the line. When the story reaches the end of the line, the last person tells the story as they heard it to the whole group, usually to much laughter as the group hears how the story evolved through many successive tellings.

But then it occurred to me that this is less like the Telephone Game and more like a succession of translations. The business person tells a story in BizSpeak to the Business Analyst, who interprets it and retells it in BA-Speak (perhaps in terms of “Must,” “Should,” and “May” functional requirements) to the Systems Analyst, who retells it in SysSpeak (perhaps UML Use Cases and/or Sequence Diagrams) to the Programmer, who must translate it into TechSpeak (maybe State Models, Class Diagrams, and ERDs) and then into code. The languages may have some commonalities, but they have a lot of differences too.

And that made me wonder what would happen if I played the game with the Google translator.

So I picked an innocuous sentence out of one of my blog posts, and translated it from English to German, then to French, then to Italian, then to Spanish, and then back to English. Here’s the result.

English: Agile means delivering business value frequently and consistently while adapting to changing business needs.

to German: Agile Business Wert bedeutet, liefern häufig und konsequent, während die Anpassung an sich wandelnde Bedürfnisse.

to French: Agile Business valeur signifie, et fournissent souvent de cohérence, tandis que l’adaptation à l’évolution des besoins.

to Italian: Agile Business valore significa, e spesso prevedono la coerenza, mentre l’adeguamento alle mutevoli esigenze.

to Spanish: Agilidad de negocios de valor medio y, a menudo, la coherencia, mientras que la adaptación a las necesidades cambiantes.

back to English: Agility and business value, often, the coherence, while adapting to changing needs.

Hmmm. Something got lost in translation.

Now, I wondered, what would happen if I chose more radically different languages?

English: Agile means delivering business value frequently and consistently while adapting to changing business needs.

to Indonesian: Tangkas berarti memberikan nilai bisnis yang sering dan konsisten sementara beradaptasi dengan perubahan kebutuhan bisnis.

to Hungarian: Fleet átlagos üzleti és következetes, alkalmazkodva a változó üzleti igényekhez.

to Dutch: Gemiddelde vloot en samenhangende activiteiten, aanpassing aan de veranderende zakelijke behoeften.

to Finnish: Keskimääräiset laivaston ja siihen liittyvää toimintaa, mukauttaa muuttuviin liiketoiminnan tarpeisiin.

back to English: Average fleet and related activities, to adapt to changing business needs.

(Hmmm. This makes me wonder whether “average fleet” is one example of a set of related activities, or whether one needs to average “fleet and related activities.”)
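(For the curious, the round trip is easy to script. Below is a minimal sketch in Python. It assumes the unofficial googletrans package, which is my choice here, not something the original experiment used; any machine translation API would do, and results will drift as the underlying translation models change.)

    # Round-trip a sentence through a chain of languages and print each hop.
    # Assumes the unofficial googletrans package: pip install googletrans
    from googletrans import Translator

    translator = Translator()
    sentence = ("Agile means delivering business value frequently "
                "and consistently while adapting to changing business needs.")

    # First experiment; swap in ["id", "hu", "nl", "fi", "en"] for the second.
    chain = ["de", "fr", "it", "es", "en"]
    src = "en"
    for dest in chain:
        sentence = translator.translate(sentence, src=src, dest=dest).text
        print(f"to {dest}: {sentence}")
        src = dest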

In any case, perhaps the lesson here is that the farther apart the languages are, the more gets lost in translation. And perhaps that points firmly toward the need for a ubiquitous language, as Eric Evans and the Domain-Driven Design community describe it, shared by the business stakeholders and the implementation team.

Or perhaps it’s just further evidence that it’s amazing that any software ever does anything useful. (Truly, given how many ways things can go wrong when developing software, I’m often astounded that anything ever works.)

But going back to the original story that prompted this post, where the business people talk to the business analysts who talk to the systems analysts who give requirements to the programmers: I think this simple example in natural language illustrates the risk of having multiple levels of indirection between the business stakeholders and the technical team.

This is why close collaboration between the Product Owner and the implementation team is so important in Agile. Otherwise, the real meaning and intent of the requirements will be lost in translation, and perhaps all the business value as well.

Visiting AboutUs

I recently had the opportunity to visit AboutUs in Portland, OR. In case you’re not familiar with AboutUs, they’ve created a community-driven wikified guide to websites.

(So how appropriate that I’m blogging this on the 14th birthday of the Wiki, as Ward Cunningham reminded us over Twitter, itself another catalytic technology.)

Anyway, AboutUs is doing very cool stuff. For example, here’s the Quality Tree Software, Inc. profile. AboutUs automatically generated that profile the first time I searched on qualitytree.com. Then I was able to modify the automatically generated profile, and add relevant tags. And as a result of tagging, here’s what happens when I search AboutUs for “Agile Testing.”

I really enjoyed the open and inviting atmosphere in the AboutUs offices. Huge windows run along the outer walls, letting in a ton of natural light. And the office has an open floor plan featuring 100% mobile workstations. The result is a highly functional, adaptable space that feels light and airy. (This picture that the AboutUs folks posted on Flickr captures the feel of the space well.) Add in the openness of the people there, and you have a groovy, fun, and productive work environment. I was sorry I could only stay for a few short hours.

While I was there, Ward interviewed me for his series of lightning interviews.

Ward’s gentle interview style made the whole process feel very natural, like a normal conversation. And I really enjoy conversations with Ward. So if it looks like I’m having a blast in the interview, it’s because I am!

I hope they let me come back and visit again soon!

Agile Certifications

Certification of software professionals has been a hot topic for quite a while. At least 15 years. Maybe longer.

I keep hoping that the whole thing will blow over.

But it hasn’t. And it’s not going to. Too many people have too much of a financial stake in the success of certifications. Certification customers, including individuals and their employers, want certifications to have value. Certification providers want to continue making money.

But while I’ve been able to ignore most development and test certification initiatives up until now, I don’t think I am going to be able to ignore Agile certifications for much longer.

So I guess it’s time for me to talk about this publicly. I’ll start with tester certifications.

General certification programs, like the ISTQB tester certification, focus on knowledge of “best practices” and definitions. I have nothing against learning the material in the ISTQB Syllabus. There’s good stuff in there (even if the most recent books in the Foundations bibliography were published in 2004). However, I do have a problem with charging a whopping huge amount of money for test preparation classes, testing people on their ability to memorize the contents of a Body of Knowledge, and then slapping a Certified Tester stamp on their forehead.

The classes surely have some value. The trainers I know who teach certification prep classes certainly have much to offer. I see no harm in learning what these people have to teach.

But the cost of these classes is high. Certification preparation classes can cost hundreds of dollars more than comparable non-certification classes, and the ISTQB exam fee is another couple of hundred dollars.

And for what?

It is not clear to me that there is any evidence demonstrating a positive correlation between competence at software testing and possession of an ISTQB certification. (Some wags have argued that there is a negative correlation. I’m not going there.)

Rather, I suspect there is no correlation. I do not believe that certified software testers are any better at testing, on average, than uncertified testers.

And because I do not think there is a correlation between tester certification and competence, I see no value in software testing certifications. I think they’re a marketing scheme concocted to increase training revenues.

But people buy into this stuff, and classes leading to certification outsell classes that don’t lead to certification.

It’s important for me to note that I don’t have any problem with certifications in a specific technology. When Microsoft certifies someone as an MCSE, it means that Microsoft, the creator of a technology, is certifying that the candidate has met minimum competency requirements in that technology. Microsoft is not pretending to certify someone as a developer; they’re certifying that the candidate knows some specific fiddly details about a specific technology related to development.

It may seem like I’m splitting hairs. But it’s an important difference. There is a right and a wrong answer for specific technical questions like how to change a Windows Registry setting (hint: it’s not the form you submit to Microsoft to register your copy of Windows). General topics, like Software Testing, are not so clear cut. What’s right in one context could be dead wrong in another.

So, enough on tester certifications. I’ve successfully ignored them up until now, and plan to continue ignoring them in the future.

What I’m really concerned about are general Agile certifications.

I started hearing the rumblings around Agile certification some years back. In response, the Agile Alliance published a statement about certifications. It’s a good statement. I’m delighted the AA published it. I was in the room when Brian Marick and a small group decided to write the statement and I think they did a fabulous job on it. That can’t have been an easy thing to write.

And I was delighted when Laurent Bossavit and Brian Marick started WeVouchFor, a different kind of certification involving endorsements of competence rather than tests of knowledge.

Sure, there’s always a risk that endorsements become a kind of you-scratch-my-back-I’ll-scratch-yours mutual love-fest. But I think that even such reciprocal endorsement arrangements say far more than commercial exam-based certifications with pass rates in the high 90%s. I’m not alone in that perspective, but endorsements alone are not enough for a lot of people. They want the certification.

I do understand the desire to have Agile certifications.

Agile is relatively new. There are a lot of people, and companies, sporting big ol’ “Now with Agile!” stickers slapped on top of their old RUP/CMM/CMMi/current-hot-thing stickers. So it can be difficult to tell those with deep Agile understanding from those who think they’ll make more money by adopting the hot buzzword-of-the-month.

And so employers look for objective evidence, like certifications, that someone who claims to know Agile actually knows what they’re talking about. And individuals want those certifications as a form of evidence.

The only real Agile certifications that I am aware of right now are the various Scrum certifications. Since Scrum is now the most popular Agile process, it’s no surprise that the Certified Scrum Master (CSM) is the most commonly held Agile-related certification available today.

As an aside, it’s my understanding that the CSM designation started as a kind of an in-joke. I got my CSM by taking the CSM class. In it, Ken Schwaber said that the certification meant that we probably knew a little more at the end of class than we did at the beginning. But he wasn’t guaranteeing it. And then he taught us all the “secret handshake” (woof) so we could prove to other CSMs that we were in the club. (For the record, I took the CSM class so I could meet Ken Schwaber and learn about Scrum from one of the originators. The certification was a side effect of taking the class. The AHA! moments and resulting deep learnings are far more valuable to me than the certification.)

Then the Scrum Alliance decided to take the CSM, and other Scrum certifications, seriously. They put teeth in the certification.

And that’s fine. It seems to me that the Scrum certifications are like technology certifications. The Scrum Alliance is certifying knowledge of Scrum, a specific process. They’re not trying to certify general knowledge across all things Agile. They’re not saying that being a CSM means you’re generally competent. They’re just saying that being a CSM means you know what a Scrum Master does within the Scrum process. You know the mechanics.

Further, the CSM class is fabulous with or without the certification. It’s experience-based, participative, and interactive, and it leads to deep learning. I recommend it.

But this week, I am again hearing the rumblings for general Agile certifications, not just Scrum certifications.

People are asking me how to become Certified Agile Testers. The very thought makes me queasy. Agile Testing isn’t a process or a technology. It’s testing in an Agile context. And that’s not something I know how to certify someone in.

And just today I ran across a site run by the self-proclaimed World Agile Qualifications Board. And that made me angry. Really angry. Angry enough to write this post and not to link to their site. You can search them out if you want to, but I won’t drive traffic their way.

Alan Page suggested on Twitter that perhaps it’s an April Fools’ joke, like Waterfall2006.com. I’m hoping he’s right. I’d rather feel stupid for falling for the joke than outraged at the reality.

However, even if it turns out to be a practical joke, the strength of my anger surprised me.

On reflection, and being brutally honest, I realized that it’s an anger born of fear.

I fear that the quest for certification, and the availability of general Agile certifications, no matter how dodgy, will lead people away from the non-certification classes and services that I, and others like me, offer.

In this economy, that could be disastrous for my business.

I would like to believe that people in this industry are sufficiently discerning that they will come to my Agile Testing classes because my classes are valuable. The kinds of things I teach are not the kinds of things that are certifiable.

How do you certify someone on the realization that they’ve been playing the hero all too often? Or on a bone-deep, visceral understanding of the effects of changes in feedback latency? These are the kinds of lessons that I believe participants in my Agile Transformation simulation learn. And they are not things that translate to a certification exam.

And so I am afraid, even though I know that fear is a lousy compass.

Being afraid makes me feel pressure to offer something certification related. But doing so goes against my principles for all the reasons I’ve already explained.

Maybe the best thing I could do would be to offer a self-certification. Want a certification? Declare yourself certifiably Test Obsessed. Here’s the certificate:

Certifiably Test Obsessed

Handling Bugs in an Agile Context

I was honored to be included on the lunch and learn panel at the Software Quality Association of Denver (SQuAD) conference this week. One of the questions that came up had to do with triaging bugs in an Agile context. Here’s my answer, in a bit more detail than I could give at the panel.

The short answer is that there should be so few bugs that triaging them doesn’t make sense. After all, if you only have 2 bugs, how much time do you need to spend discussing whether or not to fix them?

When I say that, people usually shake their heads. “Yeah, right,” they say. “You obviously don’t live in the real world.” I do live in the real world. Truly, I do. The problem, I suspect, is one of definition: when does a bug count as a bug?

In an Agile context, I define a bug as behavior in a “Done” story that violates valid expectations of the Product Owner.

There’s plenty of ambiguity in that statement, of course. So let me elaborate a little further.

Let’s start with the Product Owner. Not all Agile teams use this term. So where my definition says “Product Owner,” substitute in the title or name of the person who, in your organization, is responsible for defining what the software should do. This person might be a Business Analyst, a Product Manager, or some other Business Stakeholder.

This person is not anyone on the implementation team. Yes, the testers or programmers may have opinions about what’s a bug and what’s not. The implementation team can advise the Product Owner. But the Product Owner decides.

This person is also not the end user or customer. When end users or customers encounter problems in the field, we listen to them. The Product Owner takes their opinions and preferences and needs into account. But the Product Owner is the person who ultimately decides if the customer has found something that violates valid expectations of the behavior of the system.

Yes, that does put a lot of responsibility on the shoulders of the Product Owner, but that’s where the responsibility belongs. Defining what the software should and should not do is a business decision, not a technical decision.

Speaking of expectations, let’s talk about that a little more.

When the Product Owner defines stories, they have expectations about what the story will look like when it’s done. The implementation team collaborates with the Product Owner on articulating those expectations in the form of Acceptance Criteria or Acceptance Tests.

It’s easy to tell if the software violates those explicit expectations. However, implicit expectations are a little more difficult. And the Product Owner will have implicit expectations that are perfectly valid. There is no way to capture every nuance of every expectation in an Acceptance Test.

Further, there are some expectations that cannot be captured completely. “It should never corrupt data or lose the user’s work,” the Product Owner may say, or “It should never jeopardize the safety of the user.” We cannot possibly create a comprehensive enough set of Acceptance Tests to cover every possibility. So we attend to both the letter of the Acceptance Tests and the spirit, and we use Exploratory Testing to look for unforeseen conditions in which the system misbehaves.
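(To make that distinction concrete, here’s a hypothetical sketch of explicit expectations written as executable Acceptance Tests, in Python with pytest. The story, the save_document/load_document names, and the toy implementation are all invented for illustration. Notice that the second test pins down only one sliver of “never lose the user’s work”; no finite set of tests can pin down all of it, which is exactly why we also explore.)

    # Hypothetical acceptance tests for an invented story:
    # "As a writer, I can save my draft and get it back intact."
    import pytest

    # Toy implementation so the example runs; in practice this would be
    # the production code under test.
    def save_document(path, text):
        with open(path, "w") as f:
            f.write(text)

    def load_document(path):
        with open(path) as f:
            return f.read()

    def test_saved_draft_round_trips_intact(tmp_path):
        # Explicit expectation agreed with the Product Owner:
        # whatever the writer saves comes back exactly as saved.
        draft = "Agile means delivering business value frequently."
        path = tmp_path / "draft.txt"
        save_document(path, draft)
        assert load_document(path) == draft

    def test_failed_save_fails_loudly(tmp_path):
        # One sliver of "never lose the user's work" made explicit:
        # a save to an unwritable location must raise, not silently
        # drop the draft.
        with pytest.raises(OSError):
            save_document(tmp_path / "no_such_dir" / "draft.txt", "some work")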

Finally, let’s talk about “Done.” Done means implemented, tested, integrated, explored, and ready to ship or deploy. Done doesn’t just mean coded; Done means finished, complete, ready, polished.

Before we declare a story “Done,” if we find something that would violate the Product Owner’s expectations, we fix it. We don’t argue about it, we don’t debate or triage, we just fix it. This is what it means to have zero tolerance for bugs. This is how we keep the code base clean and malleable and maintainable. That’s how we avoid accumulating technical debt. We do not tolerate broken windows in our code. And we make sure that there are one or more automated tests that cover that same case so the problem won’t creep back in. Ever.

And since we just fix them as we find them, we don’t need a name for these things. We don’t need to prioritize them. We don’t need to track them in a bug tracking system. We just take care of them right away.

At this point someone inevitably asks, “But don’t we need to track the history of the things we fix? Don’t we want to collect metrics about them?” To that I answer “Whatever for? We’ve caught it, fixed it, and added a test for it. What possible business value would it have to keep a record of it? Our process obviously worked, so analyzing the data would yield no actionable improvements.”

If we are ever unsure whether something violates the Product Owner’s expectations, we ask. We don’t guess. We show the Product Owner. The Product Owner will say one of three things: “Wow, that’s a problem,” or “That’s outside the scope of this story; I’ll add it to the backlog,” or “Cool! It’s working exactly as I want it to!” If the Product Owner says it’s a problem, we fix it.

If the Product Owner says, “Technically, that’s a bug, but I would rather have more features than have you fix that bug, so make a note of it but leave it alone for now,” then we tell the Product Owner that it belongs on the backlog. And we explain that, by this definition, it is not a bug, because it does not violate their current expectations of the behavior of the software.

Someone else usually says at this point, “But even if the Product Owner says it’s not a problem, shouldn’t we keep a record of it?” Usually the motivation for wanting to keep a record of things we won’t fix is to cover our backsides so that when the Product Owner comes back and says “Hey! Why didn’t you catch this?” we can point to the bug database and say “We did too catch it and you said not to fix it. Neener neener neener.” If an Agile team needs to keep CYA records, they have problems that bug tracking won’t fix.

Further, there is a high cost to such record keeping.

Many of the traditional teams I worked with (back before I started working with Agile teams) had bug databases that were overflowing with bugs that would never be fixed. Usually these were things that had been reported by people on the team, generally testers, and prioritized as “cosmetic” or “low priority.”

Such collections of low-priority issues never added value: we never did anything with all that information. And yet we lugged that data forward from release to release in the mistaken belief that there was value in tracking every single time someone reported some nitpicky thing that the business just didn’t care about.

The database became more like a security blanket than a project asset. We spent hours and hours in meetings discussing the issues, making lists of issues to fix, and tweaking the severity and priority settings, only to have all that decision making undone when the next critical feature request or bug came in. If that sounds familiar, it’s time to admit it: that information is not helping move the project forward. So stop carrying it around. It’s costing you more than it’s gaining you.

So when do we report bugs in an Agile context?

After the story is Done and Accepted, we may learn about circumstances in which the completed stories don’t live up to the Product Owner’s expectations. That’s when we have a bug.

If we’re doing things right, there should not be very many of those things. Triaging and tracking bugs in a fancy bug database does not make sense if there are something like 5 open issues at any given time. The Product Owner will prioritize fixing those bugs against other items in the product backlog and the team will move on.

And if we’re not doing things right, we may find that an overwhelming number of the little critters are escaping. That’s when we know we have a real problem with our process. Rather than wasting all that time trying to manage the escaping bugs, we need to step back and figure out what’s causing the infestation, and stop the bugs at the source instead of trying to corral and manage them.