That’s a Nice Theory

Dale Emery has taught me an enormous amount about using resistance as a resource.

I’m grateful. I use his ideas every time I set foot in a classroom or start consulting with a new client.

In particular, I channel my inner Dale whenever discussing any of the various controversial things I advocate, such as:

The whole team is responsible for testing and quality, not just QA or the designated testers.

If the regression tests aren’t automated and the team is having a hard time finishing all the testing within a sprint, have programmers help execute them.

Do away with traditional (and often punitive) defect metrics like the percentage of defects missed by phase. Focus instead on metrics related to accomplishments: story points completed, cycle time, and test coverage.

In many organizations, these suggestions fly in the face of accepted “best practices.” Such ideas also tread on political toes. So one response I hear a lot is: “That’s a nice theory, but it won’t work here.”

Before learning techniques for reframing that resistance into a resource, I would end up in a position-based argument that amounted to the professional equivalent of “Will too!” “Will not!” “Will too!” “Will not!” Not useful.

Dale’s nicer and wiser than I am. But even when I don’t handle an interaction as well as I would like, leaning on Dale’s techniques means I handle the conversation better than I otherwise would have. (I’m far from perfect at this. Sometimes people succeed in pushing my buttons in such a way that I forget everything I know about how to communicate effectively.)

The first thing I need to know is whether the results of doing whatever it is would be useful. So I ask something like:

Do you think this practice could help improve things?

If I hear “No” as the reply, we have a fundamental and possibly insurmountable difference in perspective. Nothing I can say will make them try the practice if they do not believe there is any value in it.

I can explore their reasoning. I can say, “That’s interesting. Why not?” But if someone flat out does not believe that a practice I advocate will help, and we still disagree after I’ve listened to their reasoning, there is a good chance that I will not be able to help them. Further discussion will cause more harm than good. The best thing for me to do is to stop.

On the other hand if I hear “Yes…but…” then we have a different conversation. First I have to understand what follows the “but…” Often it’s:

It won’t work here.

At this point I am tempted to ask, “Why not?”

But I don’t.

“Why not?” won’t get us anywhere. We’ll end up running down a rathole of excuses starting with “our context is different.” (And of course, they’re right. Their context is different. Every context is different.)

So instead of asking “Why not?” I flip it around. I ask:

What would have to change for this practice to work here?

Now we get a list of objections, but each one is framed as a neat little impediment statement.

It would need to allow for this inherent complexity in our situation.

We’d need to allow time for it.

We’d need executive support.

We’d need money in the training budget.

We’d need to get the programmers to buy in.

We’d need the QA manager to agree.

And we can work on each of those impediments in much the same way, following the trail of reasons why this is a nice theory that can’t possibly work in their real-world context all the way down to the bottom.

In what way would it have to accommodate the complexity?

What would have to happen in order to make time for it?

What would have to happen in order to get executive support?

What would have to happen in order to get budget money?

What would have to happen in order for the programmers to buy in?

What would have to happen in order for the QA manager to agree?

The answers usually reveal perfectly practical steps. We can talk to the people in a position of authority or influence who can get us resources, training, budget money. We can try a small pilot. We can experiment with variations.

The simple reframe from “Why not?” to “What would have to change?” opens up possibilities. What could have become an argument becomes instead a brainstorming session. The result is a chain of steps we can take to go from where we are now to where we want to be.

It’s a Book!

Happy New Year!

A funny thing happened on my way to inbox 0 last week: I wrote a book in 4 days.

I didn’t mean to. And actually it’s not true to say that I wrote it in just 4 days. I assembled it in 4 days; I wrote it over 15 years. Allow me to present There’s Always a Duck, now available on Leanpub.

To fully explain, I need to back up a step.

Last Thursday I learned that Laurent Bossavit, whom I admire tremendously, had published a work-in-progress book, The Leprechauns of Software Engineering, on Leanpub. Leanpub is a relatively new service designed to make it easy to publish before your book is complete so you can get feedback while you write. Their motto is “publish early, publish often.”

So I immediately purchased Laurent’s book and found it a delightful read. In it he chronicles his attempts to track down the sources of some of our most cherished beliefs: the cost of change curve, the 10x productivity differential between star programmers and average programmers, etc.

Laurent’s current draft is 79 pages with many more sections outlined. And the nice thing about the way Leanpub works is that Laurent can keep writing, and I can re-download the book any time. Further, Laurent can notify everyone who bought the book when he’s made a substantial addition. I’m really looking forward to future drafts.

Since I hadn’t heard of Leanpub before, I was intrigued. I’ve investigated various other self-publishing channels, including CreateSpace and Smashwords, but Leanpub seemed different. So I watched their introductory video, an Xtranormal animated short. Within a minute I was laughing out loud. Two minutes into the 10-minute video, I made myself a Leanpub account.

Leanpub made it absurdly easy to turn my blog into a book. They imported my content from my RSS feed and converted it from HTML into Markdown (the markup language they use for publishing), then put the resulting manuscript into a Dropbox folder. I already use Dropbox, so getting set up was absolutely trivial.
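For the curious, the core of that import step could look something like the Python sketch below. This is my own illustration, not Leanpub’s actual code; the feed URL is hypothetical, and I’m assuming the feedparser and html2text libraries.

```python
# A rough sketch of the RSS-to-Markdown idea (not Leanpub's actual pipeline).
# Assumes `pip install feedparser html2text` and a hypothetical feed URL.
import feedparser
import html2text

feed = feedparser.parse("https://example.com/blog/feed")  # hypothetical URL

converter = html2text.HTML2Text()
converter.body_width = 0  # don't hard-wrap the Markdown output

for entry in feed.entries:
    # Prefer the full content if the feed provides it; fall back to the summary.
    html = entry.content[0].value if "content" in entry else entry.summary
    slug = "-".join(entry.title.lower().split())
    with open(f"{slug}.md", "w", encoding="utf-8") as f:
        f.write(f"# {entry.title}\n\n{converter.handle(html)}")
```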

The result: within a few minutes of signing up, I had a 300-page book of my blog posts organized chronologically.

I started sifting through the content, deciding what would go into a book and rearranging the posts into chapters by topic. By Thursday evening I had a draft.

On Friday I had every intention of attending to my backlog of To Dos. But the book called to me. “I’ll just make a few tweaks,” I told myself.

As I continued arranging the content, I realized that some of my older content hadn’t been imported. Some of it was still on my blog but just wasn’t in the RSS feed. I manually pulled in a handful of older posts that I wanted to include in the book.

But some of my oldest content was missing from the blog itself. Then I remembered that I’d purged all the really old content from my site, and I discovered that I didn’t have backups. Whoops!

Down the rabbit hole I went, digging up all my old stuff from the Internet Archive’s Wayback Machine.

By this time I was feeling guilty about how much time I was spending on an unscheduled project. Thanks to Leanpub’s book announcement page and a few tweets, by Friday afternoon 30 people had signed up to be notified when the book went live. I resolved to hold off on working on the book again until at least 50 people had indicated interest. So I set the book aside and worked on an overdue client proposal.

My resolution lasted all of 12 hours. Saturday morning found me hunkered over my keyboard, selecting and arranging content. By late Saturday night the book had come together into a cohesive draft. It just needed a good cover, a little more new prose, and another editing pass. I went to sleep at 1AM, tired but happy.

I awoke Sunday possessed with the idea of finishing. It was just SOOOO close. So I spent most of Sunday polishing the final bits.

The cover took a little longer than I had anticipated. I knew I had the perfect picture for it: a photo I had taken of a heated duck pond in front of the Finlandia concert hall in Helsinki in winter. But I couldn’t find it. My husband saved me: he found a copy on one of our old backup drives. Then I had to figure out how to reduce the image size so that a 500K download didn’t balloon to 4MB just for the pretty cover shot.
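For anyone facing the same image-shrinking puzzle: a couple of lines of Pillow in Python will do it. This is just an illustrative sketch; the filenames and quality settings are made up.

```python
# An illustrative sketch of shrinking a cover image with Pillow
# (`pip install Pillow`); the filenames and settings are made up.
from PIL import Image

img = Image.open("duck-pond-helsinki.jpg")  # hypothetical original photo
img.thumbnail((1600, 2400))                 # cap dimensions, preserving aspect ratio
img.save("cover.jpg", quality=80, optimize=True)  # lower JPEG quality, smaller file
```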

Despite the delays, it all came together within a few hours and I hit “Publish” on Sunday around 3PM.

So that’s how I published a book in 4 days.

Of course the marvelous thing about Leanpub is that while I’ve published, I can also update. I can fix mistakes (I’ve found a couple of small wording glitches already), and I can even add entirely new content. So hitting Publish wasn’t much more nerve-wracking than publishing a blog post.

And yet it was.

This is a BOOK. An actual, honest-to-goodness BOOK. The running joke between me and my friends for years has been “How’s that book coming?” I’ve been working on various books off and on for years, and I’ve abandoned most of those projects. So this is a momentous occasion. Even if it is a self-published eBook, it’s still an important step.

Now that I’ve gotten the first one done, there will be more. I suspect that 2012 will be my year of publishing. I have other things in the works that I’m not ready to talk about yet.

2012 is off to a great start!

Agile Adjustments: a WordCount Story

I originally wrote this for the AYE website in 2007. It’s no longer published there so I’m posting it here. Despite itching to tweak some words and add a better conclusion, I resisted the temptation to edit it other than formatting it for this blog. It’s as I wrote it in 2007. (Despite being 4 years old, I think this post is still relevant…perhaps even more so today with Agile having crossed the chasm.)

We were in the middle of my Agile Testing class, and the simulation had run for two rounds so far. Some of the participants created “software” on index cards. Others tested it. Still others deployed it. The participants were wholly engaged in their work for the fictitious “WordCount, Inc.” As the facilitator, I was running the simulation in 15-minute rounds followed by 15-minute reflect-and-adjust mini-retrospectives.

After the second round, during the mini-retrospective, I asked, “What do you see happening?”

“The deployment team looked like they were twiddling their thumbs for most of the round,” one participant observed.

“I think that’s because most of the cards are still on the QA table,” another participant added. “QA is a bottleneck.”

“No, the problem is that development didn’t deliver anything until the very last minute,” objected one of the QA team members.

“Well, that’s because it took us most of the last round to coordinate with the deployment team,” one of the developers countered.

“Your cards were all mixed up when you delivered them. We sent them back so you could sort them out. That’s hardly a ‘coordination’ problem,” said a deployment team member, scowling.

Mixed up source code, software stuck in QA, late deliverables. Sounded like a real world project to me.

I shifted the conversation: “What would you like to change to improve the outcome in the next iteration?”

The answers varied: “Hold more project meetings to coordinate efforts!” “Appoint a project manager to keep everything on track!” “More people in QA!” “Define a source code control process!” The suggestions may all have been different, but there was a general trend: the participants wanted to add control points, process steps, and personnel in an attempt to reduce the chaos.

For the next round, the team adopted new practices: adding a new role of project manager, adding more meetings, and adding a strict change control process. During the next round I observed the team spend half their available time standing in a big group discussing how to proceed. It seemed to me that in their attempt to control the chaos, they created a process in which it was almost impossible to get anything done. Once again, they weren’t able to deploy an updated version. And at the end of the round, the project manager quit the role in disgust and went back to “coding” on cards.

The team meant well when they added the project manager role and the extra meetings, but their strategy backfired.

Most groups that go through the WordCount, Inc. simulation run into problems similar to the ones this team encountered. Some react by attempting to introduce the same kinds of controls as this group, with similar results. But some respond differently.

One group responded to the mixed-up-source-code problem by creating a centralized code repository, visible to and shared by all. Instead of creating a change control process to manage the multiple copies of the source code floating around, they posted a single copy in a central location: the paper equivalent of source control.

Another group responded to coordination and bottleneck problems by co-locating teams. Instead of holding meetings, they coordinated efforts by working together.

Yet another group established an “automated” regression test suite that the deployment team always ran prior to each deployment. They then posted the test results on a Big Visible Chart so everyone knew the current state of the deployed system.

These steps all had the effect of making the team more Agile by increasing visibility, increasing feedback, improving collaboration, and increasing communication. And the end result for each group was success.

When reflecting-and-adjusting, it’s easy to reach for command-and-control solutions, to add quality gates and checkpoints and formal processes. But the irony is that such process changes often increase the level of chaos rather than reducing it. They introduce delays and bloat the process without solving the core problem.

It happens in the real world too.

One organization struggling with buggy code decided to create a role of Code Czar. Before any code could be checked into the source control system, it had to go through the Code Czar who would walk through the proposed changes with the programmer. The Code Czar role required someone very senior. Someone with tremendous experience with the large, complex code base under development. Someone who was also very, very busy. The result: code checkins were delayed whenever the Code Czar was unavailable. Worse, despite having more experience than anyone else on the team, the Code Czar couldn’t always tell what effect a given set of changes might have. The delays in checkins weren’t worth it; they did not result in an overall improvement in code quality.

By contrast, many teams find that automated unit tests work far better as a code quality feedback mechanism than a designated human code reviewer. Instead of waiting for a very busy person to become available, programmers can find out for themselves in minutes if their latest changes will have undesired side effects.
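To make that concrete, here is a minimal sketch of what such fast feedback might look like; the word_count function is hypothetical, in keeping with the WordCount theme.

```python
# A minimal sketch of unit tests as a fast feedback mechanism.
# The word_count function is hypothetical; run with `python -m unittest`.
import unittest


def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())


class WordCountTest(unittest.TestCase):
    def test_counts_a_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_collapses_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)


if __name__ == "__main__":
    unittest.main()
```

A programmer who breaks word_count finds out in seconds, with no Code Czar in the loop.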

Even Agile teams that regularly reflect-and-adapt in iteration retrospectives are not immune to the temptation to revert to command-and-control practices. For example, Agile teams struggling to test everything during an iteration sometimes create a formal testing phase outside the iteration. I even heard of one organization that, struggling to complete all the tasks in an iteration, attempted to solve the problem by having its Scrum Master do a Work Breakdown Structure (WBS) and delegate tasks to specific team members. Not surprisingly, both solutions caused more problems than they solved.

So how can you tell if a given process change will actually be an improvement and make a team more Agile? Before implementing a process change, consider how (or if) the proposed change supports Agile values like visibility, feedback, communication, collaboration, efficiency, and rapid and frequent deliveries. Also ask yourself these questions:

Does the process change rely on humans achieving perfection? To succeed in the role, the Code Czar would have had to have perfect knowledge of all the interdependencies in the code. Similarly, some processes rely on having perfect requirements up front. Successful practices don’t rely on perfect knowledge or perfect work products. Instead, they rely on fast feedback and visibility to enable the team to detect problems early, correct them while they’re small, and enable the team to improve iteratively.

Does it result in more time talking than working? Beware any process improvement that involves more meetings. More meetings rarely solve either communication or coordination problems. As the project manager in the simulation discovered, talking about work doesn’t increase the amount of work actually accomplished. As an alternative to meetings, consider collaborative working sessions where team members do the work rather than talking about it.

Does it introduce unnecessary delays or false dependencies? Whenever a process change increases the number of formal hand-offs, it slows things down but may not improve the overall outcome. The Code Czar learned this the hard way.

What Software Has in Common with Schrödinger’s Cat

In 1935, physicist Erwin Schrödinger proposed a thought experiment to explain how quantum mechanics deals only with probabilities rather than objective reality.

He outlined a scenario in which a cat is placed inside a sealed chamber. Inside the chamber is a flask containing a deadly substance, along with a small bit of radioactive material that has a 50% chance of decaying within a specified time period, say an hour.

If the radioactive material decays, a hammer breaks the flask and the cat dies. If it does not decay, the contents of the flask are flushed safely away and the cat lives.

(This would be a barbaric experiment if it were real, but remember that this is only a thought experiment. No actual cats were harmed.)

If we leave the apparatus alone for the full hour, there is an equal probability that the cat lived or died.

Schrödinger explained that in the moment before we look inside the box to discover the outcome, the cat is both alive and dead. There is no objectively measurable resolution to the experiment…yet. The system exists in both states. Once we peek (or by any other means determine the fate of the kitty), the probability wave collapses.
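If the probability framing feels slippery, a quick simulation may help. The sketch below is my own illustration, not real physics: each box is unresolved until we peek at its random draw.

```python
# An illustrative sketch, not real physics: until we "peek" at a box,
# all we can say is that each outcome has probability 0.5.
import random

trials = 10_000
alive = sum(1 for _ in range(trials) if random.random() >= 0.5)  # no decay: cat lives
print(f"alive: {alive}, dead: {trials - alive}")  # roughly 50/50 across many boxes
```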

When I first read of Schrödinger’s Cat in my physics class, I was befuddled. A cat is alive, or dead, not both. I did not understand the idea of a probability wave that contained both possible states.

So I can understand completely if you are thinking, “Look, the dang cat is dead. Or not. And besides, this is not related to software AT ALL.”

Ah, but it is.

You see, in the moment we release software, before users* see it, the system exhibits the same properties as Schrödinger’s feline.

There is some probability that we have done well and our users will be delighted. There is another possibility: we may have missed the mark and released something that they hate. (Actually there are an infinite number of possibilities involving various constituents with varying degrees of love and hate.)

Until the actual users start using the software, the probability wave does not collapse. We do not know, cannot tell, the outcome.

For teams that believe they are building awesome stuff, the moment before users get their hands on our work is a magical time full of excitement and wonderment.

For teams that believe they are building a pile of bits not suitable for human usage, it is a time of fear and panic.

But both fear and excitement stem not from observable reality but rather from speculation.

We are speculating that the bugs that we know about and have chosen not to fix are actually as unimportant to our users as they are to us.

We are speculating that the fact we have not found any serious defects is because they don’t exist and not because we simply stopped looking.

We are speculating that we knew what the users actually wanted in the first place.

We are speculating that the tests we decided not to run wouldn’t have found anything interesting.

We are speculating that the tests we did run told us something useful.

None of it is real until it is in the hands of actual users. I don’t mean someone who will poke at it a bit or evaluate it. And I don’t mean a proxy who will tell you if the users might like it. I mean someone who will use it for its intended purpose as part of their normal routine. The experience those users report is reality. Everything else is speculation.

This is what teams forget in that heady moment just before release. They experience all their excitement or terror, confidence or insecurity, as real, forgetting that reality is meta-surprising: it surprises us in surprising ways.

And this is why Agile teams ship so often.

It’s not because Agile is about going faster. It’s because structuring our work so that we can ship a smaller set of capabilities sooner means that we can collapse that probability wave more often. We can avoid living in the land of speculation, fooling ourselves into thinking that the release is alive (or dead) based on belief rather than fact.

In short, frequent delivery means we live in reality, not probability.

Facing reality every day is hard. Ignorance is bliss, they say. But living in the land of comforting illusions and declared success is only blissful as long as the illusion lasts. Once the illusion is shattered, the resulting pain escalates with the length of time spent believing in a fantasy and the degree of discrepancy between our beliefs and the actual results. Given sufficient delusion and lengthy schedules, the fall to Earth can be downright excruciating.

I’ll take small doses of harsh reality over comforting illusions and the inevitable ultimate agony any day.

* I use the term “users” here to represent both users (the people who use the software) and customers (the people who decide to buy the software).

If you are buying yourself a game to play, you are both the user and the customer. In sufficiently enterprisey systems, the customer might never even see the software. In that situation the customer and users have very different concerns, so it’s a more complicated probability wave. After all, if the customers love it but the users hate it, was it a success or failure? I’ll leave that discussion as an exercise for the reader.