Industrial Strength Exploratory Testing - part 3

Let's pick up the myth-busting from where we left off.

Myth 2: There is no way to measure exploratory testing

I have heard this often from test managers: "How do we measure testers' productivity in exploratory testing?" Scripted testing lends itself to measures such as the rate of tests executed per hour or the test-pass progress percentage, but how do we measure progress or productivity in exploration?

First up, there is no absolute metric that works as a silver bullet for everyone. One could easily argue against any metric if the situation does not suit measuring that particular component. For instance, I could say the number of bugs found per hour in exploratory testing is a metric, but it is a function of how buggy the software originally was, what kind of software it is, the amount of churn that went into it, and so on. So, first we need to come up with metrics that are meaningful to measure on a specific team. Once you determine that, it should be easy to wire those metrics up with the tools you use to do exploratory testing.
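To make this concrete, here is a minimal sketch of how a bugs-found-per-hour metric might be computed from exploratory session records. The field names and session data are purely illustrative, not from any real tool.

```python
# Hypothetical sketch: computing a simple "bugs found per hour" metric
# from exploratory session records. Field names are illustrative.

def bugs_per_hour(sessions):
    """Aggregate bug-find rate across exploratory sessions."""
    total_bugs = sum(s["bugs_found"] for s in sessions)
    total_hours = sum(s["duration_minutes"] for s in sessions) / 60.0
    return total_bugs / total_hours if total_hours else 0.0

sessions = [
    {"charter": "Login tour", "duration_minutes": 90, "bugs_found": 4},
    {"charter": "Checkout tour", "duration_minutes": 120, "bugs_found": 3},
]
print(round(bugs_per_hour(sessions), 2))  # 2.0
```

As the text notes, a raw rate like this only becomes meaningful once you account for context such as how buggy the area was to begin with.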

For instance, on my team we measure progress in exploratory testing like any other task in the daily standup meeting. We report on time spent, time remaining, adjustments made if any based on bug density or risk. We look at metrics like tour effectiveness based on bug priorities or numbers found per tour, code coverage across tours to assess if we have holes in testing, user story complexity to bugs ratio to see if we have spikes in certain stories etc. Since TFS is the underlying data repository for all our testing, it is easy for us to pull this data in reports from TFS.
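A "tour effectiveness" measure of the kind mentioned above could be sketched as follows: weight each bug by its priority and compare tours by their weighted scores. The priority weights, field names, and tour data here are assumptions for illustration, not the values used on any real team.

```python
# Illustrative sketch of "tour effectiveness": weight bugs found on each
# tour by priority and rank tours. Weights and data are assumptions.

PRIORITY_WEIGHT = {1: 5, 2: 3, 3: 1}  # P1 bugs count the most

def tour_effectiveness(bugs_by_tour):
    """Return (tour, weighted bug score) pairs, highest score first."""
    scores = {
        tour: sum(PRIORITY_WEIGHT[p] for p in priorities)
        for tour, priorities in bugs_by_tour.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

bugs = {
    "Money tour": [1, 2, 2],   # one P1 and two P2 bugs
    "Landmark tour": [3, 3],   # two P3 bugs
}
print(tour_effectiveness(bugs))
# [('Money tour', 11), ('Landmark tour', 2)]
```

The same shape of query could in principle be fed from whatever repository holds your bug data; the aggregation itself is what matters.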

An interesting visualization we tried was a heat map built on code coverage, which looked something like this:

The colors in the components indicate how well covered a specific component is in terms of code coverage. You can track coverage while exploring specific requirements or user stories of your application, spot where code coverage is lacking, and use that to direct your future testing efforts. Similar heat maps could be built on bug density or bug counts; for instance, your map could show in red those stories with a high bug density in terms of bugs per line of code, to inform future testing.
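The coloring idea behind such a heat map can be sketched in a few lines: bucket each component's coverage percentage into a traffic-light color. The thresholds and component names below are assumptions, not the values the team actually used.

```python
# Hypothetical sketch of the heat-map coloring described above: map each
# component's code-coverage percentage to a traffic-light color.
# Thresholds are assumptions, not the team's real values.

def coverage_color(percent):
    """Bucket a coverage percentage into a heat-map color."""
    if percent >= 80:
        return "green"
    if percent >= 50:
        return "yellow"
    return "red"

coverage = {"Login": 85, "Checkout": 62, "Reporting": 34}
heat_map = {name: coverage_color(p) for name, p in coverage.items()}
print(heat_map)
# {'Login': 'green', 'Checkout': 'yellow', 'Reporting': 'red'}
```

Swapping the input from coverage to bugs per line of code gives the bug-density variant of the map with the same bucketing logic.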

In summary, there are various ways that you could measure and direct your exploration efficiently. Choose metrics wisely!

Comments

  • Anonymous
    November 03, 2011
    As always, good work!
  • Anonymous
    December 21, 2011
    Hi Anu,
    Great post! Can you provide me some more information about how you built this visualization? Really interested in that!
    [Anu] Hmmm... long answer - that should be a post of its own - will publish one soon.
    Thanks!
  • Anonymous
    February 17, 2012
    "Anu-tations" - Nice play on words on the title. Excellent post also.
  • Anonymous
    August 08, 2012
    Do we have that heat map shipped with latest VS 2012?