Defining Agile: Results, Characteristics, Practices

I think it’s important to define “Agile” when I talk about “Agile Testing.”

Agile is one of those capitalized umbrella terms, like Quality, that means many things to many people. And given that Agile Testing involves testing in an Agile context, it’s hard to talk about it if we have not established a shared understanding of the term “Agile.”

I define Agile in terms of results. Specifically, Agile teams:

  • Deliver a continuous stream of potentially shippable product increments
  • At a sustainable pace
  • While adapting to the changing needs and priorities of their organization


(A tip o’ the hat is due to the various sources that inspired my definition, including the APLN’s Declaration of Interdependence for the phrase “continuous flow of value”, Scrum for the phrase “potentially shippable product increment”, XP for the core practice of “Sustainable Pace”, and Jim Highsmith, plus too many other people and sources to mention, for the idea of adapting to changing needs.)

Teams that are consistently able to achieve those results typically exhibit the following characteristics:

  • A high degree of Communication and Collaboration
  • Fast Feedback, actively sought and received
  • Mechanisms that support Visibility, so everyone knows what’s going on at any given time
  • A high degree of Alignment, so everyone is working toward the same goals
  • A shared Definition of Done in which work is Implemented, Tested, and Explored before being Accepted by the Product Owner
  • A relentless Focus on Value

And teams that manifest these characteristics typically have adopted a combination of Agile management and engineering practices, including:

  • Prioritized Backlog
  • Short Iterations (or Sprints)
  • Daily Stand-ups (or Scrums)
  • Integrated/Cross-Functional Team
  • Continuous Integration
  • Collective Code Ownership
  • Extensive Automated Tests
  • etc.

Too many people equate practices (e.g. Prioritized Backlog) and methods (e.g. Scrum) with Agile. But that’s backwards. Agile practices and methods increase the odds of achieving Agility, but they’re not a guarantee. The practices serve the desired outcome, not the other way around.

From the Mailbox: Fully Automated GUI Testing?

Robert Small wrote me with a question (which he kindly gave me permission to post here, along with my answer):

My GUI developers are driving me nuts! They want to “fully automate” all testing for the GUI. I tried to explain that you cannot automate ease of use (usability) or look and feel and the like. They retort that I can’t give them a clear definition of usability due to the subjective nature of the topic. Advice?

My response:

I understand your frustration.

And I can also see that both you and the developers are right. I suspect you’re talking past each other, and the problem is with the word “Test”: you’re each using the same word but giving it two different meanings.

Let me explain…

First, translate “fully automate all testing for the GUI” as “automatically check that the GUI meets expectations.”

Expectations of a GUI may include: times when controls should be grayed out or invisible; circumstances under which a click should result in one behavior or another; interactions or affordances that should be consistent throughout the UI; or perhaps accessibility guidelines that are part of the acceptance criteria. We can automate checks for these kinds of concrete, explicit expectations.
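To make that concrete, here is a minimal sketch of what one such automated check might look like, using Selenium WebDriver in Python. Everything in it is hypothetical: the URL, the element IDs, and the expectation itself (that Submit stays disabled until the required email field is filled in).

```python
# A minimal sketch of an automated GUI check using Selenium WebDriver.
# The URL, element IDs, and the expectation being checked are all
# hypothetical examples, not from any real application.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/signup")  # hypothetical page

    # Explicit expectation: Submit is disabled while the form is empty.
    submit = driver.find_element(By.ID, "submit")
    assert not submit.is_enabled(), "Submit should be disabled on an empty form"

    # Explicit expectation: filling in the required field enables Submit.
    driver.find_element(By.ID, "email").send_keys("user@example.com")
    WebDriverWait(driver, 5).until(lambda d: submit.is_enabled())
finally:
    driver.quit()
```

A check like this passes or fails mechanically. It tells us nothing about whether the form feels pleasant to fill in, which is exactly the gap the next activity covers.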

Then translate “test for ease of use, look & feel, etc.” as “explore the current implementation to discover how it feels when used in practice.”

Until we use the system, we can’t know how it will feel. We can guess. We can speculate. But we can’t know. Exploring the emerging system gives us insight into how well it meets over-arching, subjective, abstract quality goals like “easy to use.”

Checking and Exploring yield different kinds of information. Checking tells us how well an implementation meets explicit expectations. Exploring reveals the unintended consequences of meeting those explicitly defined expectations and gives us a way to uncover implicit ones. (Systems can work exactly as specified and still represent a catastrophic failure or a PR nightmare. Just ask Facebook.)

As these translations show, you and the developers are talking about two different activities. They’re talking about Checking: verifying explicit, concrete expectations. You’re talking about Exploring: discovering the capabilities, limitations, and risks in the emerging system.

The developers are right: Checking can, and should, be automated.

And you’re right: Exploring is inherently a creative, human-centric activity requiring keen observation and good judgment. We can use automation to support exploration, but we cannot automate the whole process of exploring.

Of course there is a relationship between Checking and Exploring: the information we discover when Exploring may yield new things that need Checking in the future.
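For example (again, an entirely hypothetical scenario): suppose an exploratory session reveals that pasting an email address with surrounding whitespace causes the signup form to reject valid input. Once we decide the form should tolerate that, the discovery becomes a new explicit expectation we can pin down with an automated check:

```python
# Hypothetical regression check added after an exploratory session revealed
# that pasted email addresses with surrounding whitespace were rejected.
# The URL and element IDs are illustrative, not from a real application.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/signup")

    # New explicit expectation: stray whitespace around the email is tolerated.
    driver.find_element(By.ID, "email").send_keys("  user@example.com  ")
    driver.find_element(By.ID, "submit").click()

    # The form should confirm success rather than show a validation error.
    WebDriverWait(driver, 5).until(
        lambda d: d.find_element(By.ID, "confirmation").is_displayed()
    )
finally:
    driver.quit()
```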

However, the industry as a whole still lumps both Checking and Exploring under the more general term “Testing.” The result is disagreements like this one, where two groups of people talk past each other because each sees only one side of the testing equation.

The bottom line is that the team as a whole needs the information, the feedback, afforded by both Checking and Exploring. Attempting to argue for one over the other, as though it’s an either-or choice, creates a false dilemma. The question is not which approach is right, but rather how to ensure we consistently do both.

(Oh, and by the way, this discussion of Checking and Exploring is related to the section on Exploratory Testing that I wrote for The Art of Agile Development by James Shore and Shane Warden. I admit I’m biased, since I contributed a section, but I recommend the book.)