But Do You Know if It's Any Good?

Originally published on Computerworld

“This new system is driving me crazy!” Janet, the hotel desk attendant, muttered as she punched at the keys. She looked back at me, flashing her best customer smile. “Sorry, it will be a minute.”

She returned to scowling at the keyboard. Apparently the system finally accepted her input; she looked up at me with a satisfied expression. There was a pause as we waited for the system to respond. A long pause.

To fill the time, she asked, “So Ms. Hendrickson, what do you do?”

“I work with software development organizations to improve software quality.”

“OH!” she exclaimed. “I wish you were at corporate. I don’t know what they were thinking. This new software was supposed to be an improvement, but it’s much worse than the old system. It’s slow, and I can’t figure out what it wants from me half the time.”

I involuntarily began imagining the process at corporate.

A 15-person Steering Committee directed a five-person Requirements Task Force to analyze the business and user requirements. The Requirements Analysts sent out surveys, pored through help desk call records, and even interviewed a few users. They produced an 83-page tome that they handed off to the Designers.

A three-person Design team wrote a specification answering the requirements. The 96-page specification was nominally written in English, but because of the amount of jargon used it required a translation guide. The Design team sent it out for review with a deadline for comments. The specified date passed with no comments from the Steering Committee or the Requirements Analysts (who were off to new assignments so they couldn’t spare any more time for this project anyway). The specification went to the Programmers.

The Programmers implemented to the specification. There were a few things that were very difficult to do, so they compromised. It would be no big deal if users had to enter a few more keystrokes to access that information, right?

Then the Testers were given two weeks to test it. It took most of the first week to figure out how the new software worked. They found a few bugs, but no show-stoppers. The Programmers fixed a few things and the software was deployed to the field.

That’s where Janet comes in. Janet doesn’t know anything about the ins and outs of creating software. She probably doesn’t want to know. She just wants to serve her customers well. And this software is not helping.

Back at corporate, the Steering Committee, Requirements Analysts, Designers, Programmers and Testers are congratulating themselves on a solid release. What they don’t see is Janet’s pain.

All this flashed through my mind in an instant. I looked back at Janet. “Have you called corporate to tell them what you think?” I asked.

“What good would that do?” Janet sneered. “I’ll wait on hold for 25 minutes before getting to someone at the help desk. And they’re never much help. No, I’ll deal with it. Maybe it will get easier. They’re sending me to training next week.”

So the feedback loop is broken. The team back at corporate has no mechanism to find out whether the software is any good. Oh, sure, they’ll detect catastrophic problems that cause servers to go down. But they won’t see the little things that cause long queues at the front desk of the hotel.

If we interviewed the team that created the system, they’d say: “This is our best release ever. We did all the right things. We analyzed requirements and wrote specifications before writing the software. We tested the software before we deployed it. How could the result be wrong?”

How indeed?

Perhaps important nuances were lost in the requirements and specifications verbiage. Perhaps the ship criterion, “no showstopper bugs,” could indicate either “solid code” or “not tested.” And perhaps the lack of a feedback loop from the field means they have no way of knowing how the users like the new system. “We’ve been deployed for a month and had only five calls!” the team crows. Like a broken pipe, they see only the trickle of complaints that makes it through and miss the flood of complaints leaking away.

Of course, all this happened in my imagination. But I’ve seen it happen in reality. Ironically, organizations that control their software development process tightly don’t necessarily serve their users any better than organizations that cobble something together and throw it over the wall. It’s easy to become so tied up in process that we forget the reason for building the software in the first place. Unless we close the feedback loop, we don’t know whether what we’ve produced is really any good.

Just ask Janet.