Better Bug Reports
When we report a bug, our hope is that the bug is fixed. But, of course, we know that isn’t always the case, which is why there are usually several alternative resolutions a developer or project manager may choose for a bug, such as postponed, won’t fix, and by design. It is unfortunately quite common to see a tester metaphorically explode into passionate fits of outrage when one of their bugs is resolved as postponed, won’t fix, or by design. It is unfortunate because these tantrums often involve the tester hurling personal insults (e.g. “How can the developer be so stupid not to fix this bug?”), decrying product quality (e.g. “If we don’t fix this bug this product will totally suck!”), and playing the whiny customer card (e.g. “We will lose customers if we don’t fix this bug.”). Yes, in my early years I was also guilty of these sorts of irrational outbursts of hyperbole when a bug I thought was important was resolved as anything other than fixed. But, of course, I quickly learned that such sophistical speculation rarely resulted in the bug being fixed, and mostly lessened my credibility with developers and managers.
The other day I was speaking with a tester who was a bit miffed because the developer had resolved a few of her bugs as by design and won’t fix, and she asked how she could ‘fight’ these resolutions. “Well,” I began, “getting people to change their minds usually involves negotiation and the logical presentation of facts in a non-judgmental way. Sometimes you will succeed, and sometimes you will not. As testers we surely want all our bugs to be fixed; however, from a practical standpoint that may not always be the case, especially if the bug is subjective.” I previously wrote about 10 common problems with bug reporting, but in this case I proceeded to discuss a few strategies I use to advocate for bugs.
Make it easy for the developer to fix the bug
At a minimum, a tester must provide a description of the problem, the environmental conditions in which the problem occurred (if it is localized to a specific environment), the fewest exact steps needed to reproduce the bug, and the actual results versus the expected results. Occasionally a screenshot may be beneficial, but mostly when there is a contrasting example. I will also point the developer to my test, especially if it is automated. Providing the developer an automated mechanism to reproduce a problem reduces a lot of overhead. Of course, in this case I am talking about an automated test case that runs in a few seconds, or an automated script that helps the developer reproduce the problem quickly.
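To make that concrete, here is a minimal sketch of the kind of automated repro a tester might attach to a bug report. Everything in it is hypothetical: the bug number, the formatting function, and the expected value are placeholders, and the “product code” is stubbed inline only so the example is self-contained and runnable.

```python
import unittest


def format_total(amount):
    """Stand-in for the code under test (hypothetical).

    In a real report this would be the product's own function; it is stubbed
    here only to keep the sketch self-contained. The defect it simulates:
    no thousands separator is emitted in the formatted total.
    """
    return "%.2f" % amount


class Bug1234ReproTest(unittest.TestCase):
    """Illustrative automated repro attached to a (hypothetical) bug report."""

    def test_total_keeps_thousands_separator(self):
        # Step 1: format a value just past the point where the defect appears.
        actual = format_total(1234.50)
        # Expected result (per the hypothetical spec): "1,234.50"
        # Actual result observed:                      "1234.50"
        self.assertEqual("1,234.50", actual)


if __name__ == "__main__":
    unittest.main()
```

The point is that the developer can run the test in a few seconds, watch it fail, and then watch it go from red to green once the fix is in, rather than re-creating the scenario by hand from the written steps.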
Provide specific contradictions to specified and/or implied requirements or standards
Of course, if the product design or functionality deviates from stated requirements, pointing this out in a non-confrontational way is a no-brainer. The key here is that our argument must be non-confrontational, because sometimes we may misinterpret the requirements, and sometimes the requirements may change without us being aware of those changes. There are also occasional deviations from implied requirements, such as UI design guidelines, as a result of the introduction of new technologies or changes in how customers use the product based on usability studies. Other implied standards include competing products or previous versions of the product. In any case, when arguing for a bug fix based on specified or implied requirements, I recommend using a compare-and-contrast approach to better illustrate the problem as I perceive it.
Provide concrete examples of customer impact
This is really important! Providing a real-world scenario that clearly illustrates how the bug will manifest itself to the customer, along with corroborating evidence from customers, presents a strong case in favor of a bug fix. There are several useful repositories of customer feedback testers can use to bolster their point of view, such as newsgroups, popular blogs, trade journal reviews of past or similar products, and product support reports; at Microsoft we also have Watson and SQM data. Using ‘real-world’ constructive feedback is often more meaningful than an internal mutiny by a portion of the test team.
Know your primary target customer profile
Testers often like to think we are representative of our customers. However, this may not always be the case. (It has always puzzled me why testers seem to think they have some greater affinity to the end-user customer than others on the product team.) Yes, it is important that testers understand who the primary target customer is for the current project or release, and that is why many teams have detailed personas of primary, secondary, and sometimes even tertiary customer audiences. Of course, if we are in the commercial software business we want our customer base to be as large as possible. But as the number of customers increases, so does the diversity of what they value, and as they say…you can never please everyone! So, when defending your position to fix a particular bug, it is always better to frame the discussion from the point of view of the primary customer persona rather than from your own personal bias.
Use your brain, not your emotions
Passion has long been an admired trait in software testers. However, unbridled passion fraught with antagonistic accusations can be detrimental to a successful bug resolution (and sometimes even to a career). Some bugs obviously need to be fixed, while others may depend on several mitigating (and competing) factors such as where you are in the software lifecycle, business impact, primary customer impact, risk, etc. It is largely agreed that the primary role of testers is to provide information, but that means we must also gather the pertinent information and present it logically, within the relevant context, to the management team (or decision makers). Remember…reckless rants rarely render reasonable results!