Robert Small wrote me with a question (which he kindly gave me permission to post here, along with my answer):
My GUI developers are driving me nuts! They want to “fully automate” all testing for the GUI. I tried to explain that you cannot automate ease of use (usability) or look and feel and the like. They retort that I can’t give them a clear definition of usability due to the subjective nature of the topic. Advice?
I understand your frustration.
And I can also see that both you and the developers are right. I suspect you’re talking past each other. The problem is with the word “Test.” I think that you and the developers are both using the same word, but giving it two different meanings.
Let me explain…
First, translate “fully automate all testing for the GUI” as “automatically check that the GUI meets expectations.”
Expectations of a GUI may include: times when controls should be grayed out or invisible; circumstances under which a click should result in one behavior or another; interactions or affordances that should be consistent throughout the UI; or perhaps accessibility guidelines that are part of the acceptance criteria. We can automate tests for these kinds of concrete, explicit expectations.
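To make that concrete, here is a minimal sketch of what an automated check for one such expectation might look like. The form fields and the enablement rule ("Submit stays grayed out until all required fields are filled") are hypothetical, invented purely for illustration; a real suite would check this through the actual UI layer:

```python
# Hypothetical view-model rule: the Submit control stays disabled
# (grayed out) until every required field has a non-empty value.

def submit_enabled(form: dict, required: tuple = ("name", "email")) -> bool:
    """Return True when all required fields contain a non-blank value."""
    return all(form.get(field, "").strip() for field in required)

# Automated checks against this explicit, concrete expectation:
assert not submit_enabled({})                    # empty form: disabled
assert not submit_enabled({"name": "Ada"})       # email missing: disabled
assert submit_enabled({"name": "Ada", "email": "ada@example.com"})  # enabled
```

A check like this verifies an explicit expectation, nothing more: it can tell you the button enables at the right moment, but not whether the form is pleasant to fill in.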
Then translate “test for ease of use, look & feel, etc.” as “explore the current implementation to discover how it feels when used in practice.”
Until we use the system, we can’t know how it will feel. We can guess. We can speculate. But we can’t know. Exploring the emerging system gives us insight into how well it meets over-arching, subjective, abstract quality goals like “easy to use.”
Checking and Exploring yield different kinds of information. Checking tells us how well an implementation meets explicit expectations. Exploring reveals the unintended consequences of meeting the explicitly defined expectations and gives us a way to uncover implicit expectations. (Systems can work exactly as specified and still represent a catastrophic failure or a PR nightmare. Just ask Facebook.)
As these translations show, you and the developers are talking about two different activities. They’re talking about Checking: verifying explicit, concrete expectations. You’re talking about Exploring: discovering the capabilities, limitations, and risks in the emerging system.
The developers are right: Checking can, and should, be automated.
And you’re right: Exploring is inherently a creative, human-centric activity requiring keen observation and good judgment. We can use automation to support exploration, but we cannot automate the whole process of exploring.
Of course there is a relationship between Checking and Exploring: the information we discover when Exploring may yield new things that need Checking in the future.
However, because the industry as a whole still lumps both Checking and Exploring under the more general term Testing, disagreements like this keep happening: two sets of people end up talking past each other because each sees only one side of the testing equation.
The bottom line is that the team as a whole needs the information, the feedback, afforded by both Checking and Exploring. Attempting to argue for one over the other, as though it’s an either-or choice, creates a false dilemma. The question is not which approach is right, but rather how to ensure we consistently do both.
(Oh, and by the way, this discussion around Checking and Exploring is related to the section I wrote on Exploratory Testing in The Art of Agile Development by James Shore and Shane Warden. I admit I’m biased, since I wrote a section, but I recommend the book.)