
Exploratory Testing at Unity

February 28, 2013 in Technology | 8 min. read

Hello everybody, my name is Claus Petersen and I am a fairly recent addition to the Unity family. I am in charge of leading the Software Test Engineers in QA.

Last week I finished reading the book ‘Exploratory Software Testing’ by James Whittaker. Not coincidentally, at the same time we concluded the first round of exploratory testing (E.T.) sessions that I have been a part of here at Unity. I thought I would share my thoughts on the book as well as on our current and future use of E.T. *Spoiler Alert* … We have not seen the last of E.T. at Unity.

 

If you are interested in software testing and haven’t read the book, I encourage you to do so. It’s a relatively quick and easy read, and even though the chapters vary greatly in usefulness, it’s never less than interesting.

The first two chapters – ‘The Magic of Software’ and ‘The Case for Manual Testing’ – seem to be aimed at either novice testers getting into the craft or developers who believe that processes, engineering practices, defect prevention and automated testing (while all excellent) have made or will make manual testing obsolete. James presents at least one damning piece of evidence to the contrary – Windows Vista – and the fact that he worked at Microsoft during the development of Vista lends credence to his case.

The next four chapters are the core of the book. Chapters 3 and 4 are concerned with “testing in the small” and “testing in the large” respectively.

The former – testing in the small – is about the actual nuts and bolts of testing: hands on keyboard and eyes on screen. It does not contain test techniques as such; it is more about useful ways to think about software and testing. There are lots of good terms and concepts. A select few:

  • Atomic vs. abstract input (specific input vs. groups of similar inputs)
  • Significance of input order and input combinations
  • Legal vs. illegal inputs (analogous to confirmatory vs. destructive mindsets)
  • A good definition of the concept of state (“A state of software is a coordinate in state space that contains exactly one value for every internal data structure”)
  • The importance of realistic user data and environment

None of this is revolutionary, but it is useful for explaining the problems that James is trying to address in the book. However, as a source of concrete advice and test techniques, James’ own book ‘How to Break Software: A Practical Guide to Testing’ is much better.
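
To make a couple of those terms concrete, here is a minimal sketch in Python – using pytest and a made-up parse_port() helper, neither of which comes from the book or from our codebase – of atomic vs. abstract inputs and legal vs. illegal inputs:

```python
import random

import pytest


def parse_port(value: str) -> int:
    """Hypothetical function under test: parse a TCP port number."""
    port = int(value)  # non-numeric strings are illegal input and raise ValueError
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


# Atomic inputs: specific, hand-picked values (the confirmatory mindset).
@pytest.mark.parametrize("raw, expected", [("80", 80), ("1", 1), ("65535", 65535)])
def test_atomic_legal_inputs(raw, expected):
    assert parse_port(raw) == expected


# Illegal inputs: the destructive mindset – we expect the software to refuse them.
@pytest.mark.parametrize("raw", ["0", "-1", "70000", "http", ""])
def test_atomic_illegal_inputs(raw):
    with pytest.raises(ValueError):
        parse_port(raw)


# Abstract input: a whole class of similar inputs, sampled instead of enumerated.
def test_abstract_legal_input_class():
    for raw in (str(random.randint(1, 65535)) for _ in range(100)):
        assert 0 < parse_port(raw) < 65536
```

The example itself is beside the point; the distinction is that the first two tests enumerate specific coordinates in the input space, while the last one reasons about a whole region of it.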

Chapter 4 – ‘Exploratory Testing In the Large’ – is where the concept of exploratory testing “tours” is presented. James’ idea is to take different tourist metaphors and apply them to exploratory testing. For instance, the “guidebook tour” is about following the manual, a tour through the “historical district” examines legacy code and features, and the “taxicab tour” explores all routes to a specific location or state. The concept of tours is the book’s main contribution to the world of software testing. The metaphor of software testing as exploration or travelling through terra incognita has often been used, not least in the “American” context-driven school of software testing. The tourist metaphor complements this view well. It is an easily grokkable way of framing and guiding the test activity.

Chapter 5 is an elaboration on some of the tours and chapter 6 presents some real-world examples of the tours in use. Both are required reading to understand the concept.

Even though the actual tours are presented in the preceding three chapters, chapter 7 – ‘The Five Pain Points of Software Testing’ – is perhaps my favourite. It is one of the best summations of the fundamental challenges of software testing that I have seen:

  • Aimlessness: Testing without a goal. Testing for the sake of testing. In my experience, this is the most widespread problem in the field of software testing: not properly defining what needs to be tested, when to test, and how to test. What are the questions that you seek to answer with your testing?
  • Repetitiveness: How do we make sure we inject enough variation into our testing so we don’t run into the pesticide paradox? How do we maintain our manual regression test suites?
  • Transiency: How do we make sure that our testing is representative of real-world usage of the software?
  • Monotony: The fact that testing can be mind-numbingly, life-drainingly boring (he actually means “checking” – the difference between “testing” and “checking” is a topic for another blog post)
  • Memorylessness: For testing in the small, how can we persist and relate what actually happened during a test session so that we can use that info at a later point? For testing in the large, it has to do with the concept of coverage – how can we tell what we have tested, and how do we get access to historical data about this so we can track trends over time?

The last three chapters can easily be skipped. Chapter 8 is musings about the future of software testing, none of which seem particularly insightful or relevant to me. He spends a lot of time talking about the reusability of test cases. The space and importance he allocates to this seems a little at odds with his points in the early chapters about the importance of manual testing with sentient testers. The last two chapters are collections of James’ blog posts. Good reads – James’ colourful style and affection for British pubs really come through here. But many of the points presented are covered elsewhere in the book.

 

Exploratory testing tours are not a free lunch. I had actually encountered the concept of Whittaker’s tours before I started at Unity. First impression: “not impressed”. To me they seemed a little superficial and cute. We had tried running a couple of tours. Some of the obvious ones – supermodel, crime-spree, collectors etc. That was fine, but we didn’t find much that we wouldn’t have found using our normal methods. So we kind of moved on.

Because the concept of testing tours is so immediately graspable (which is good), it’s easy to fall into the trap of simply running the most basic and vanilla versions of the tours. But if your preparation consists of nothing but “inspect and interact with all UI-elements” (for the supermodel tour), then the results will reflect that. If the tours you pick exercise the same parts of the application that you would have tested anyway – well – your results will reflect that.

The sad truth is that the more you prepare for a test session, tour or not, the more you will get out of it (to a point, of course). This also seemed to be one of the general takeaways from our internal evaluation of the E.T. sessions last week. To me, this is perhaps best illustrated by five minutes of James’ talk on testing tours (from 37 to 41 minutes in). It shows some of the work James’ team at Google did in preparation for testing Chrome OS back in 2009. They didn’t just flip out the Landmark Tour and go to town. Instead they broke down the software into attributes, components and capabilities (an ACC breakdown). They filled out spreadsheets, talked to developers and hosted risk sessions. And THEN they decided on the tools they needed to test this – some of it automated testing, some of it carefully chosen E.T. tours.
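
For illustration only, an ACC-style breakdown can be sketched as data along these lines (the attributes, components, capabilities and risk scores below are invented for the example – they are not Google’s Chrome OS breakdown, nor anything we actually use at Unity):

```python
# Toy ACC (attributes, components, capabilities) breakdown with invented entries.
# component -> list of (capability, attributes it supports, perceived risk 1-5)
ACC = {
    "Asset importer": [
        ("Imports a large scene without blocking the editor", ["Fast", "Stable"], 4),
        ("Re-imports only assets that changed on disk", ["Fast"], 3),
    ],
    "Build pipeline": [
        ("Produces identical builds from identical input", ["Accurate"], 5),
        ("Reports actionable errors on failure", ["Helpful"], 2),
    ],
}

# Rank capabilities by perceived risk to decide where to spend effort first,
# whether that effort is automation or a carefully chosen E.T. tour.
ranked = sorted(
    ((risk, component, capability)
     for component, capabilities in ACC.items()
     for capability, _attributes, risk in capabilities),
    reverse=True,
)

for risk, component, capability in ranked:
    print(f"risk {risk}: {component} - {capability}")
```

In practice this lives in spreadsheets and risk sessions rather than in code, but the idea is the same: enumerate what the product has and does, attach risk, and only then decide which tours (or which automation) to point at it.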

If, for whatever reason, you have no preparation, the tours in their basic form can be used to good effect. Certainly they are much better than completely ad-hoc testing. But they are not a silver bullet, which was the impression I got from how they were initially presented to me.

 

The way I see it, exploratory testing tours are a great tool for overcoming the five pain points of testing outlined in James Whittaker’s book:

  • Aimlessness because it frames the test activity and gives it a purpose
  • Repetitiveness because different tours are different lenses that examine the software in different ways. And because, by the nature of exploratory testing, no two sessions are ever exactly the same
  • Transiency can be addressed by choosing tours that emulate real-world usage and because tours combine well with use cases/user scenarios
  • Monotony because exploratory testing is a challenging mental activity that requires human thought and reasoning
  • Memorylessness because the concept of tours provides a vocabulary to talk about test sessions, which can otherwise be an elusive activity. And because the output of different tours can be tracked and compared to previous and other sessions

My goal is to integrate exploratory testing more with the way we work. Not big bang overnight, but rather slowly and surely. For starters, I am sure we could benefit from having one or more tours as part of at least the full test pass for each area. There is some work to do before this can happen. We need to do our homework and figure out our concepts for describing coverage. And then we need to just start doing it: trying out different tours and seeing what gives results and what doesn’t. Maybe we will eventually even come up with some tours of our own.

I also see uses of exploratory testing that do not necessarily fit into the tours concept. This could be exploratory testing used as part of early feedback on new features pre-alpha. One thing is certain – we have not seen the last of exploratory testing at Unity.
