
It’s a Book!

Happy New Year!

A funny thing happened on my way to inbox 0 last week: I wrote a book in 4 days.

I didn’t mean to. And actually it’s not true to say that I wrote it in just 4 days. I assembled it in 4 days; I wrote it over 15 years. Allow me to present There’s Always a Duck, now available on Leanpub.

To fully explain, I need to back up a step.

Last Thursday I learned that Laurent Bossavit, whom I admire tremendously, had published a work-in-progress book, The Leprechauns of Software Engineering, on Leanpub. Leanpub is a relatively new service designed to make it easy to publish before your book is complete so you can get feedback while you write. Their motto is “publish early, publish often.”

So I immediately purchased Laurent’s book. I found it to be a delightful read. In it he chronicles his attempts to track down the source of some of our most cherished beliefs: the cost of change curve, the 10x productivity differential between star programmers and average programmers, etc.

Laurent’s current draft is 79 pages with many more sections outlined. And the nice thing about the way Leanpub works is that Laurent can keep writing, and I can re-download the book any time. Further, Laurent can notify everyone who bought the book when he’s made a substantial addition. I’m really looking forward to future drafts.

Since I hadn’t heard of Leanpub before, I was intrigued. I’ve investigated various other self-publishing channels including CreateSpace and SmashWords. But Leanpub seemed different. So I watched their introductory video, an XtraNormal animated short. Within a minute I was laughing out loud. Two minutes into the 10-minute video, I made myself a Leanpub account.

Leanpub made it absurdly easy to turn my blog into a book. They imported my content from my RSS feed and converted it from HTML into Markdown (the markup language they use for publishing). They put the resulting manuscript into a DropBox folder. I already use DropBox, so getting set up was absolutely trivial.

The result: within a few minutes of signing up, I had a 300-page book of my blog posts organized chronologically.

I started sifting through the content, deciding what would go into a book and rearranging the posts into chapters by topic. By Thursday evening I had a draft.

On Friday I had every intention of attending to my backlog of To Dos. But the book called to me. “I’ll just make a few tweaks,” I told myself.

As I continued arranging the content, I realized that some of my older content hadn’t been imported. Some of it was still on my blog but just wasn’t in the RSS feed. I manually pulled in a handful of older posts that I wanted to include in the book.

But I realized some of my oldest content was missing from my blog. Then I remembered that I’d purged all the really old content from my site and I discovered that I didn’t have backups. Whoops!

Down the rabbit hole I went, digging up all my old stuff from the Internet Archive’s Wayback Machine.

By this time I was feeling guilty about how much time I was spending on an unscheduled project. Thanks to Leanpub’s book announcement page and a few tweets, by Friday afternoon 30 people had signed up to be notified when the book went live. I resolved to hold off on working on the book until at least 50 people had indicated interest. So I set the book aside and worked on an overdue client proposal.

My resolution lasted all of 12 hours. Saturday morning found me hunkered over my keyboard, selecting and arranging content. By late Saturday night the book had come together into a cohesive draft. It just needed a good cover, a little more new prose, and another editing pass. I went to sleep at 1AM, tired but happy.

I awoke Sunday possessed with the idea of finishing. It was just SOOOO close. So I spent most of Sunday polishing the final bits.

The cover took a little longer than I had anticipated. I knew I had the perfect picture for it, a picture I took of a heated duck pond in front of the Finlandia concert hall in Helsinki during winter. But I couldn’t find the picture. My husband saved me: he found a copy of it on one of our old backup drives. Then I had to figure out how to reduce the image size so that a 500K download didn’t balloon to 4MB just for the pretty cover shot.

Despite the delays, it all came together within a few hours and I hit “Publish” on Sunday around 3PM.

So that’s how I published a book in 4 days.

Of course the marvelous thing about Leanpub is that while I’ve published, I can also update. I can fix mistakes (I’ve found a couple of small wording glitches already). And I can even add entirely new content. So hitting Publish wasn’t much more nerve-wracking than publishing a blog post.

And yet it was.

This is a BOOK. An actual honest to goodness BOOK. The running joke between me and my friends for years has been “How’s that book coming?” I’ve been working on various books off and on for years. I’ve abandoned most of those projects. So this is a momentous occasion. Even if it is a self-published eBook, it’s still an important step.

Now that I’ve gotten the first one done, there will be more. I suspect that 2012 will be my year of publishing. I have other things in the works that I’m not ready to talk about yet.

2012 is off to a great start!


What Software Has in Common with Schrödinger’s Cat

In 1935, physicist Erwin Schrödinger proposed a thought experiment to explain how quantum mechanics deals only with probabilities rather than objective reality.

He outlined a scenario in which a cat is placed inside a sealed chamber. Inside the chamber is a flask containing a deadly substance. There is a small bit of radioactive material that has a 50% chance of decaying within a specified time period, say an hour.

If the radioactive material decays, a hammer breaks the flask and the cat dies. If it does not decay, the contents of the flask are flushed safely away and the cat lives.

(This would be a barbaric experiment if it were real, but remember that this is only a thought experiment. No actual cats were harmed.)

If we were to leave the apparatus alone for a full hour, there is an equal probability that the cat lived or died.

Schrödinger explained that in the moment before we look inside the box to discover the outcome, the cat is both alive and dead. There is no objectively measurable resolution to the experiment…yet. The system exists in both states. Once we peek (or by any other means determine the fate of the kitty), the probability wave collapses.

When I first read of Schrödinger’s Cat in my physics class, I was befuddled. A cat is alive or dead, not both. I did not understand the idea of a probability wave that contained both possible states.

So I can understand completely if you are thinking, “Look, the dang cat is dead. Or not. And besides, this is not related to software AT ALL.”

Ah, but it is.

You see, in the moment we release software, before users* see it, the system exhibits the same properties as Schrödinger’s feline.

There is some probability that we have done well and our users will be delighted. There is another possibility: we may have missed the mark and released something that they hate. (Actually there are an infinite number of possibilities involving various constituents with varying degrees of love and hate.)

Until the actual users start using the software, the probability wave does not collapse. We do not know, cannot tell, the outcome.

For teams that believe they are building awesome stuff, the moment before users get their hands on our work is a magical time full of excitement and wonderment.

For teams that believe they are building a pile of bits not suitable for human usage, it is a time of fear and panic.

But both fear and excitement stem not from observable reality but rather from speculation.

We are speculating that the bugs that we know about and have chosen not to fix are actually as unimportant to our users as they are to us.

We are speculating that the fact we have not found any serious defects is because they don’t exist and not because we simply stopped looking.

We are speculating that we knew what the users actually wanted in the first place.

We are speculating that the tests we decided not to run wouldn’t have found anything interesting.

We are speculating that the tests we did run told us something useful.

None of it is real until it is in the hands of actual users. I don’t mean someone who will poke at it a bit or evaluate it. And I don’t mean a proxy who will tell you if the users might like it. I mean someone who will use it for its intended purpose as part of their normal routine. The experience those users report is reality. Everything else is speculation.

This is what teams forget in that heady moment just before release. They experience all their excitement or terror, confidence or insecurity, as real. But reality is meta-surprising: it surprises us in surprising ways.

And this is why Agile teams ship so often.

It’s not because Agile is about going faster. It’s because structuring our work so that we can ship a smaller set of capabilities sooner means that we can collapse that probability wave more often. We can avoid living in the land of speculation, fooling ourselves into thinking that the release is alive (or dead) based on belief rather than fact.

In short, frequent delivery means we live in reality, not probability.

Facing reality every day is hard. Ignorance is bliss, they say. But living in the land of comforting illusions and declared success is only blissful as long as the illusion lasts. Once the illusion is shattered, the resulting pain escalates with the length of time spent believing in a fantasy and the degree of discrepancy between our beliefs and the actual results. Given sufficient delusion and lengthy schedules, the fall to Earth can be downright excruciating.

I’ll take small doses of harsh reality over comforting illusions and the inevitable ultimate agony any day.

* I use the term “users” here to represent both users (the people who use the software) and customers (the people who decide to buy the software).

If you are buying yourself a game to play, you are both the user and the customer. In sufficiently enterprisey systems, the customer might never even see the software. In that situation the customer and users have very different concerns, so it’s a more complicated probability wave. After all, if the customers love it but the users hate it, was it a success or failure? I’ll leave that discussion as an exercise for the reader.


2nd Annual QA/Test Job Posting Study

This is a guest blog post by Daniel Frank, my assistant. Daniel took on the challenge of updating the QA/Test job study for 2011, just in time for making New Year’s resolutions. Enjoy! Elisabeth

It’s been a little over a year since Elisabeth published “Do Testers Have to Write Code,” the results of an in-depth survey of job ads that she and Melinda conducted to see if employers expect testers to program. The resounding conclusion, with 80% of tester job ads requesting some kind of programming skill, was “Yes.”

This year we wanted to see if things have changed, so I conducted the same study again. I also wanted to add a bit more granularity to the study, to see if there were any trends that were missed last time.

I screened the listings using the same basic guidelines as our previous study. That means I restricted my search to the US only. I counted a job only if it was described as a testing/QA position in the job title. I did not include recruiter listings, in order to avoid the risk of including duplicate jobs or even fake jobs used to gather pools of applicants.

Our final sample size this year is 164 jobs. That’s slightly fewer than last year. Why?

The lists were sparse. There just aren’t that many job ads out there. Many of the job ads I found were from recruiters or were repeats, with the same company listing the same position several weeks in a row.

The simple fact that I had a hard time finding the same number of ads as last year is interesting information all on its own. From an overall economic standpoint, the country is in no more of a slump than it was in 2010. So why are there fewer listings for testers? Could it be that Alberto Savoia, who recently declared testing dead, is correct? We’ll come back to that question later.

Back to the study…

Like last year, the majority of our jobs came from Craigslist (90) and LinkedIn (64). The rest of them came from a smattering of other sites.

The data includes an even higher proportion of jobs in California than last year: 102 of the listings were in CA, with the remainder divided in small chunks among 28 other states. Unsurprisingly, Texas, Massachusetts, and Washington are the three runners-up.

Last year there was some question of whether or not the sample was biased simply because we’re located in California. However, I took extra steps to try to get equal representation. The simple fact is that a search that might find 70 jobs when I filter the location for CA will result in 30 jobs or fewer if I filter for another area. If anything, I’d estimate that California is actually underrepresented.

I kept track of the job titles. By far the most popular title was “QA Engineer” (99 of the listings). 136 of the titles contained “QA,” compared with only 32 containing the word “Test.”

An interesting side note: when I searched for the word “test” in the body of job ads, I found far more developer positions than similar searches for “qa” did. It would seem that at the same time QA/Test positions are requiring more coding skills, developer positions are requiring more testing skills. That might be another interesting job ad survey project.

So how much coding are testers expected to do?

Of the 164 listings, 102 jobs say they require knowledge of at least one programming language, and 38 jobs indicate coding is a nice-to-have. That’s 140 out of 164, or 85.37% of the sample. That’s an even higher percentage than last year. It’s difficult to say if the 5% uptick represents a real increase in demand, but at the very least it’s fair to say that demand for testers who code remains high.

I used the same criteria that Elisabeth and Melinda used last year. That means that I counted a job as requiring programming if the job required experience in or knowledge of a specific language, or if the job duties mentioned a language. There were 7 jobs which listed broad experience requirements like “must be able to script in multiple languages,” which also counted as requiring programming.

There were some judgment calls to be made about what may or may not count as a programming language. For the purpose of the results here, I counted SQL or other relational database knowledge as a programming language in order to be consistent with last year. However, unlike last year, I tracked proficiency in relational databases separately. This will let me track specific trends more easily in future studies.

One of the questions Elisabeth wanted to answer last year was whether jobs with self-identified Agile organizations required testers to code more than other jobs. This year 46 of the 58 Agile job ads list programming skills as required or nice to have. That’s 79.31%, which is actually a lot less than last year’s 90%. However, this is one of those places where the small sample size has to be taken into consideration. In 2010, 49 out of 55 Agile jobs mentioned programming. Today, 46 out of 58 jobs mention it. Just a few jobs result in a 10% variation.

An enduring question about any kind of job is how much it pays. I saw even fewer mentions of pay this time around. Only 7 jobs listed it at all, and 5 of those were button-pushing game testing positions in the $10-$20/hour range. The other two ran around $85,000-$105,000. Most positions simply don’t provide salary information up front, so we cannot draw any real conclusions from these data points.

Just for fun, I also noted whenever a job requested a certification. In 164 jobs I found exactly 4 mentions of certification, and not a single one was required. 3 of them were vendor or technology certifications that had nothing to do with testing. And even in the single instance where a testing certification was a nice-to-have, it was the CSTE offered by QAI rather than the much more hyped ISTQB. So it would seem that testing certifications are not much in demand. The bottom line is that someone looking to improve their marketability would be much better served by upskilling to a new proficiency than by picking up an irrelevant certification.

And that’s about it for our study. If you’d like to dig through the raw data to look for any trends I may have missed, I’ll be happy to send it to you. Drop me a line.

Now back to the question about the number of QA/Test jobs out there. Could it be that there are fewer QA/Test positions? Was this just a matter of luck and timing, or is there a trend?

Alberto Savoia gave a talk titled “Test is Dead” at GTAC (dressed as the Grim Reaper). He may have used intentionally inflammatory hyperbole to make his point, but that doesn’t change the fact that he had interesting points to make.

Alberto points out that especially in web development, speed is paramount. Further, the biggest challenge isn’t in building “it” right, but in building the right “it.” So the goal is to get a minimum viable product out as quickly as possible, and get fast feedback from real users and customers. Traditional black box testing ends up taking a back seat in this type of development, and these projects often rely heavily on user feedback instead.

At STARWest 2011, James Whittaker of Google gave a talk titled “All That Testing is Getting in the Way of Quality” where he talked about the closest thing to a traditional testing role they have at Google. It’s called the “Test Engineer,” and they spend anywhere from 20%-80% of their time writing code. He also explains how Google utilizes their user bases to do almost all of their exploratory tests. As he puts it, “Users are better at being users than testers are, by definition.”

With James and Alberto’s talks firmly in mind, I can’t help but wonder if the difficulty I experienced in finding job ads that met my criteria is indicative of a sea-change in the industry rather than an anomaly. Could it be that we’re seeing a reduction in the number of QA/Test positions?

What do you think? Are you seeing fewer QA/Test positions in your organization or (if you’re looking) in your job search?


Checking Alignment, Redux

I’ve been writing a lot lately. Writing for long stretches leaves me mentally drained, nearly useless. The words dry up. I stop making sense. I find it increasingly difficult to form coherent sentences that concisely convey my meaning. Eventually I can’t even talk intelligibly.

I recall attending a party after a week of solid writing a few years ago.

“How are you?” my host asked when I arrived.

“Unh,” I muttered. “Good.”

“What have you been up to?” she inquired.

“Um. Writing.” I stopped talking and stared back at her expectantly.

I wanted to be social, but no more words would come. I stood there just staring at her. It didn’t even occur to me to ask how she was doing or what she was up to.

My host looked at me sideways, unsure how to respond to my blank stare. It wasn’t a Halloween party, and yet I was doing a passable impression of a zombie. How does one respond to zombified guests?

Anyway, my point is that I’m in one of those states now. And thus I may have great difficulty making myself understood. Producing words that fit together to express ideas is becoming increasingly difficult.

I’m guessing this is why I failed to explain myself well in my last post. Or at least I am inferring from the response to that last post that there is a gap between what I intended to say and what most people understood me to be saying.

I had three points that I wanted to make in my last post:

  1. It’s easy to speculate about the connection between actual needs, intentions, and implementation.
  2. Empirical evidence trumps speculation. Every single time.
  3. Testers are NOT the only people who gather that empirical evidence.

Given that’s what I meant to say, I certainly didn’t expect UTest, a testing services company, to like the post so much that they would tweet:

We couldn’t agree more! It’s all about the testing!

Yes, it is all about the testing. But—and this is a crucial BUT—it is not all about the testers.

In fact, much of the kind of testing that goes into ensuring alignment between intentions/implementation and actual need is something that testers have very little to do with, and it’s something that cannot ever be outsourced to a testing services company.

Let’s look at the sides of the triangle of alignment again:

Actual Need: the value our users and/or customers want.

Intentions: the solution we intend to deliver in order to serve the Actual Need. The product owner, product manager, business analyst, or designer is the one who typically sets the intentions. It’s their job to listen to the cacophony of conflicting requests and demands and suggestions in order to distill a clear product vision. For now let’s just call this person the product owner. They own the product vision and decide what gets built.

Implementation: the solution the team actually delivers.

So who makes sure that the intentions and implementation match the actual needs?

The best person to do this is usually the person who set the intentions in the first place: the product owner. They’re supposed to be steering the project.

If the product owner has no way of verifying that they asked for the right thing and can’t tell whether or not the resulting software delivers the expected value, the project is doomed.

Seriously, I’ve lived through this as a team member and also seen it from the sidelines. The person responsible for setting the intentions needs a way to tell whether the actual needs are being met. They need feedback on the extent to which the intentions they set for the team pointed us in the right direction. Otherwise we end up in a painful cycle of requirements churn that can ultimately end in organizational implosion if we hit the end of the runway before we deliver real value.

Michael Bolton’s story of getting out of the building and picking up sample checks on his lunch hour is fabulous. But to me, it’s not a story about testing. Rather, it’s a great story about how having multiple examples is key to truly understanding requirements.

Further, I’ll suggest that in this story Michael was acting as a Team Member rather than a Tester. The fact that Michael is a world-class tester is not the most salient part of the story. The important thing is that he noticed the team needed something and he went out of his way to get it.

It is important not to confuse Michael’s initiative as a team member with an exclusive job responsibility of testers. Michael took the initiative. That’s one of the reasons why he is a world-class tester. But picking up that sample check is something that a programmer could have done. Or the product owner. Everyone on a project can contribute to establishing a shared understanding of the full scope of the requirements. And everyone has a hand in gathering empirical evidence, not just testers.

Testers happen to be really good at gathering information. Teams need testers. But teams also need the testing mindset to be baked into the culture. Team members need to ask these key questions before taking action:

  • How will I know my efforts had the effect I intended?
  • How will I know my intentions were correct?
  • How will I know my results are delivering real value?

These questions are at the core of the test-first mindset. And the answer to these questions is never, “I’ll just ask the testers.”


Exploratory Testing in an Agile Context Materials

I’m giving a session at Agile2011 in Salt Lake City at 9AM Wednesday on Exploratory Testing in an Agile Context. The session itself will be entirely hands-on: we will explore a hand-held electronic game that I brought, while discussing how ET and Agile fit together hand-in-glove. However, I did produce materials for the session: a PDF that’s almost a booklet. Thought you all might like to see it.


Files shuffled around

When I moved my blog, I didn’t do a good enough job of verifying that all the assets moved over. Several folks have contacted me asking for their favorite content to be restored. Whoopsie!

Many many thanks to everyone who contacted me. Please accept my apologies both for breaking links and also for taking so long to fix the issue.

I’ve finally started putting things back to rights. However, the media uploader automatically put all the content I restored into the uploads folder for April 2011. And in the interest of getting the content back as quickly as possible, I’m leaving it there. That means the old PDF links don’t work, and unless I hear a great outcry I’m probably not going to spend the time to put everything back exactly where it was.

Instead, you can find the most requested items under “Quick Links” on the right side of the page.

If you notice something still missing that you want access to, please let me know. I’ll be happy to restore it and put a link under Quick Links to it.

Thanks!


Checking Invisible Elements

This week, I’m investing a bunch of hours on my side project. Today, I’m working on a feature where a field is supposed to remain invisible until a user enters a combination of values.

There are a variety of ways to test this code, including testing the JavaScript with something like Jasmine. However, in this case I particularly want an end-to-end test around this feature. And for me that meant using Cucumber with Capybara for my end-to-end tests.

I wanted to be able to say something in my Cucumber .feature file like:

And I should not see the "My Notes" field

However, my first attempt at implementing it didn’t work the way I expected. The “My Notes” field existed on the page but was hidden. When I called Capybara’s “has_css?” method, it found the field and reported it present. So my test was failing even though the system did exactly what I wanted it to. Whoopsie!

So now what?

After two hours of wrestling with Capybara and CSS selectors, I finally found a solution that I can live with. And since I know other people have had this problem, I thought I would share it here.

But first, a note: this particular technique won’t work on elements that have display: none applied directly through an inline style. It requires you to set display to none through a CSS class. (But setting attributes through CSS classes is a better design anyway, so I think this is a reasonable limitation.)

In my particular case, because I’m using jQuery UI, I’m using its .ui-helper-hidden class. You’ll need to figure out the class name that sets the display attribute to none in your application. The sample code below uses “.ui-helper-hidden” as the class name.

Here’s the helper method that I came up with:

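What follows is a minimal sketch of the approach rather than the exact gist. It assumes the app hides things via the .ui-helper-hidden class and that Capybara is driving the browser from Cucumber; the selector in the step definition is purely illustrative.

    # features/support/visibility_helpers.rb (a sketch, not the original gist)
    # An element counts as hidden if it exists in the DOM but only on or
    # under a node carrying the CSS class that sets display to none.
    module VisibilityHelpers
      HIDDEN_CLASS = "ui-helper-hidden" # assumption: the class your app uses

      def assert_hidden_by_class(selector)
        # The element must be present in the markup...
        unless page.has_css?(selector, :visible => false)
          raise "expected #{selector} to exist in the DOM"
        end
        # ...but only where the hiding class applies.
        hidden = page.has_css?("#{selector}.#{HIDDEN_CLASS}", :visible => false) ||
                 page.has_css?(".#{HIDDEN_CLASS} #{selector}", :visible => false)
        raise "expected #{selector} to be hidden" unless hidden
      end
    end
    World(VisibilityHelpers)

With that helper in place, the step behind the .feature line above becomes a one-liner:

    Then /^I should not see the "([^"]*)" field$/ do |field|
      assert_hidden_by_class("#my_notes") # illustrative: map the field name to your own selector
    end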

I hope that little helper method saves someone some time. If so, it was totally worth the 2 hours I spent today figuring out how to write it.


The ATDD Arch


It seems like everyone is suddenly talking about Acceptance Test Driven Development (ATDD) these days.

I have worked with several organizations as they’ve adopted the practice. And I’ve watched each struggle with some dimension or another of it. The concept behind the practice is so simple: begin with the end in mind. But in order to gain traction and provide value, ATDD requires massive, fundamental changes from the traditional organizational mindset where testers test, developers develop, product managers or business analysts write requirements documents, and each role works in its own little silo.

As one person said to me, “ATDD is moving some people’s cheese really hard.”

Sometimes when organizations contact me about helping them with ATDD, they start by talking about tools. They tell me they’ve selected a tool to do ATDD, or that they want me to help them with tool selection. They’re suffering from delayed feedback and slow manual regression cycles and they want to do ATDD because they see it as a path to automated acceptance tests. They think ATDD stands for “Automated Test During Development.”

What they don’t see is that ATDD is a holistic practice that requires the collaboration of the whole team. We collaborate on the front end by working together to define examples with expectations for stories, then articulate those examples in the form of tests. On the back end, when the team implements the story, testers and developers collaborate on connecting the tests to the emerging software so they become automated.

Handoffs don’t work with ATDD. The product owners don’t establish examples with expectations unilaterally; they work with developers and testers. The testers don’t create the tests unilaterally; they work with the product owner and developers. And when the team is ready to hook those tests up to the emerging software, there is no automation specialist just waiting to churn out reams of scripts. Instead, testers and developers collaborate to create the test automation code that makes the acceptance tests executable.
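To make that collaboration concrete, here is an entirely invented sketch in Cucumber form: the team writes the example together up front, and later testers and developers pair to wire it to the emerging code. Every name below is illustrative.

    # The example, defined collaboratively before implementation:
    #
    #   Scenario: Returning customer sees her saved shipping address
    #     Given a customer with a saved shipping address
    #     When she starts checkout
    #     Then her saved address is offered as the default
    #
    # The wiring, written later by a tester and developer pairing:
    Given /^a customer with a saved shipping address$/ do
      @customer = Customer.create!(:shipping_address => "10 Main St")
    end

    When /^she starts checkout$/ do
      @checkout = Checkout.start_for(@customer)
    end

    Then /^her saved address is offered as the default$/ do
      raise "wrong default address" unless @checkout.default_address == "10 Main St"
    end

The test lives in the team’s shared language first; the automation code arrives later, alongside the implementation.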

Starting an adoption of ATDD with the tools is like building an arch from the top. It doesn’t work.

The tools that support ATDD—FitNesse, Cucumber, Robot Framework, and the like—tie everything together. But before the organization is ready for the tools, they need the foundation. They need to be practicing collaborative requirements elicitation and test definition. And at a bare minimum they need to be doing automated unit testing and have a continuous automated build system that executes those tests.

It’s best if the engineering practices include full-on Continuous Integration, Collective Code Ownership, Pairing, and TDD. These practices support the kind of technical work involved with automating the acceptance tests. Further, they show that the team is already heavily test-infected and is likely to value the feedback that automated acceptance tests can provide.


The Agile Acid Test

A while ago I blogged about how I define Agile:

Agile teams produce a continuous stream of value, at a sustainable pace, while adapting to the changing needs of the business.

I’ve gotten a little flak for it. A handful of people informed me that there is only one definition of Agile and it’s in the values and principles expressed in the Agile Manifesto. The implication was that if my definition is different from the Manifesto’s, it must be wrong.

At Gary Brown’s urging, I reread the principles in the Manifesto. And I discovered that my “definition” is indeed in there. It’s in the principles: “…continuous delivery of valuable software…changing requirements…sustainable development…maintain a constant pace indefinitely.”

OK, so I’ll relent. Agile is defined by the Manifesto. And my “definition” is my Agile Acid Test.

Lots of organizations claim to be adopting Agile. Few have the courage and discipline to do more than pay lip service to it. Then they claim “Agile doesn’t work.” (My favorite take on this is Ron Jeffries’ “We Tried Baseball and it Doesn’t Work.”)

So, if a team tells me that they’re Agile, I apply my acid test to see if they’re really Agile. I ask:

How Frequently Do You Deliver?

When I say that Agile teams produce a continuous stream of value, I mean that they deliver business value in the form of shippable or deployable code at least monthly, and preferably more frequently than that. Shippable/deployable means ready for production. It’s done. There is nothing left to do. It is implemented, tested, and accepted by the “Product Owner.”

Some organizations are taking this to an extreme with continuous deploy. In those contexts, the time between when a developer checks in a line of code to the time when she can see her work in production is measured in minutes. Obviously continuous deploy isn’t necessarily appropriate in all situations. But even if you work in a context where continuous deployment to production doesn’t make sense, consider what continuous deployment to a testing or staging environment could do to shorten your feedback cycles.

In short, Agile teams deliver shippable product increments frequently. Delivering “almost done” or “done except tested” every month doesn’t cut it.

Could You Continue at This Pace Indefinitely?

“Sustainable pace” means that the team can continue to add capabilities to the emerging system at more or less the same velocity given no increases in team size.

There are two critical aspects to achieving a sustainable pace:

  1. people
  2. technical assets

Prior to working on Agile projects, I was accustomed to spending the last few weeks or months of any project in “Crunch Mode.” Everyone on the team would put in long hours (80- to 100-hour weeks were typical). We’d be hyped up on caffeine, stressed out, and cranky. But we’d do whatever it took to ship.

Having shipped, we’d celebrate our heroics. And then we’d go crash.

A few days later, we’d all drag ourselves back into the office. “This time we’ll do it right!” we would declare. We would spend buckets of time up front on planning, requirements, and design. And, let’s be honest, we were still exhausted, so we’d work at a slower pace. Inevitably, as the deadline loomed, we’d run short on time in the release and once again we’d be in Crunch Mode.

This is not a sustainable cycle. A few rounds of this and people are just too fried. Some leave for greener pastures, lured by the promise of higher pay and/or more sane schedules. Others “retire on the job.” The few remaining people who stay out of a sense of loyalty and who retain their work ethic find it impossible to get anything done because they’re surrounded by newbies and dead weight. Progress grinds to a screeching halt.

So caring for the people is the number one way to ensure work can continue at a sustainable pace.

But it’s not enough. The other side of sustainable pace is caring for the technical assets. Every time we take a shortcut, like copying and pasting huge swaths of code and not refactoring to remove duplication, shoving code somewhere expedient instead of putting it where it really belongs, or failing to write an automated test we know we really ought to write, we’re creating technical debt. As the technical debt mounts, the “interest” we pay on that debt also mounts.
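A tiny, invented illustration of that first shortcut (the names and the discount rule are made up):

    # The shortcut: the same pricing rule pasted into two methods.
    def invoice_total(items)
      items.sum { |item| item.price * (item.bulk? ? 0.9 : 1.0) }
    end

    def quote_total(items)
      items.sum { |item| item.price * (item.bulk? ? 0.9 : 1.0) } # second copy of the rule
    end

    # Paying down the debt: extract one authoritative home for the rule.
    def discounted_price(item)
      item.price * (item.bulk? ? 0.9 : 1.0)
    end

Every future change to the discount rule now happens in one place instead of two (or ten).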

Simple changes require touching multiple files. The code base becomes fragile. Eventually the team gets to the point that any change causes massive regression errors. For each new tiny bit of capability added, the team has to spend days playing “whack-a-bug” to get the features that used to work fine back to working. Once again, progress grinds to a screeching halt.

(Also note the connection between the human and technological aspects of sustainable pace: burnt out people tend to take more shortcuts.)

If the organization is not caring for the people, and the people are not caring for the technical assets, they will run into trouble. Maybe not today. Maybe not tomorrow. But soon, and for the rest of the life of that code base.

How Does the Team Handle Change?

I visited one team in the middle of a transition to Agile. The team was very pleased with their progress to date. They were delivering in 2-week sprints, and they were doing quite well at establishing and maintaining a sustainable pace.

But the kicker came when they showed me the project plan. They had every sprint laid out for the next 6 months. They were only a couple of sprints into the plan, but I could see trouble ahead. “What will happen if the requirements or priorities change?” I asked. The project manager squirmed a little. Promises had been made based on the master project plan. They weren’t allowed to deviate.

But change is inevitable. I don’t know the ending to that particular story, but my bet is that the project manager ended up redoing that Gantt chart a gazillion times before they shipped.

If the team is planning too far out, they won’t be able to adapt when, inevitably, priorities and needs shift. They’ll be able to continue delivering at a sustainable pace, but what they’re delivering will have substantially less value to the organization than it otherwise would.

Few Are Truly Agile

Often when I speak to an audience I ask how many people are on Agile projects. These days, no matter what audience I’m addressing, lots of hands go up. Agile is the new hot thing. All the cool kids are doing it. But when I ask audiences to self-assess on these three criteria, and then ask again how many are on an Agile project, hands stay down. Very few organizations are achieving this level of agility.

Not surprisingly, that means few organizations are really getting the benefits of Agile. In the worst cases, “Agile” is resulting in worsening quality, increased pressure, and more burnout. People on those projects are reporting that Agile is ruining their lives.

In such environments, Agile is often implemented as:

  1. Compress the schedule (because “Agile” means “faster,” right?)
  2. Don’t document anything (because “Agile” means no documentation, right?)
  3. Code up to the last minute (because “Agile” means we can change anything at any time, right?)

This is a recipe for pain: increasing levels of technical debt, burnout, chaos, and eventually inability to deliver followed by numerous rounds of Point the Finger of Blame. So yes, in these organizations, “Agile” (or the corrupted version in the form of a frAgile process) is indeed ruining lives.

My hope is that if you are in an environment like that, this Agile Acid Test can help you communicate with The Powers That Be to change minds about what Agile really means and what it looks like when done well.

Remember, just because someone says they’re doing “Agile” doesn’t mean they are. As Abraham Lincoln said, “If you call a tail a leg, how many legs does a dog have? Four. Because calling it a leg doesn’t make it a leg.”


Agile Transitions and Employee Retention

A question from my mailbox this morning (paraphrased):

Our organization is transitioning to agile. I often hear that not everybody will suit an agile team. I’m concerned that some of the non-agile-minded will drop out. How do we keep everyone on board?

My correspondent had heard statistics and advice like “20% of the people in your organization will not make the transition. Be prepared for some turnover.” And he’s right to be concerned. Agile transitions are not easy. No significant change is ever easy.

Since this is a question I hear often, and since my response to my correspondent applies to any organization in transition, I decided to post my response here.

I offer four observations:

1. People sometimes surprise us.

The person who seemed complacent, satisfied to stay in their little comfort zone, resistant to taking ownership, may turn out to be a great collaborative team member when given half a chance. I’ve seen it happen. By contrast, the “top performer” who seems so pro-active and who everyone is desperate to retain may turn out to be toxic in the new organization because she prefers the mantle of hero to true collaboration.

2. Leaving isn’t the worst thing in the world.

One of my absolute worst screwups as a manager was working too hard to “help” an employee who was not performing well.

He was on a performance improvement plan for months. Both of us were miserable about the situation. He’d been with the company for a while, and after many organizational changes ended up in my group. The organization had changed, and he wasn’t fitting in well in the new world order. No amount of training or coaching was helping.

When we finally mutually agreed that things weren’t working, he found another job at another company almost right away. The next time I ran into him at a conference he was brimming with happiness at his new success. His new organization loved him and he was thriving. His skills and temperament were a perfect fit there.

So while I thought I was being kind when I tried to give him every chance to succeed in my group, I was actually being cruel by prolonging his feeling of failure unnecessarily.

Similarly, at one of my clients, a QA Manager who had been resisting the transition to Agile ultimately left. Upper management was very, very nervous about what his departure would do to the QA group. But it turns out that everyone was better off.

Leaving isn’t the worst thing in the world, and sometimes it can be the best thing for all concerned.

3. Creating safety is more important than retaining individuals.

Transitioning to Agile inevitably results in increased visibility. That visibility can be incredibly scary, particularly in a political organization where people have historically practiced information hiding, and information hoarding, as a survival strategy.

Instead of trying to retain specific individuals, it’s more important that managers focus on making people feel safe. Much of creating safety is about not doing things: don’t use velocity as an assessment mechanism; don’t add pressure by blaming the team if they miss sprint targets; don’t foster a culture of competition within a team.

Even more important is what managers can actively do to promote safety: talk to individuals about their concerns; get whatever resources people say they need in order to be successful; reward collaboration over individual achievement.

4. Treat people well.

The people in the organization are humans, not fungible “resources.” They deserve support and compassion. As long as managers treat people as people consistently throughout the transition, it will all be OK, even if some people decide that the new organization isn’t a good fit for them.
