Myth Busting Testing and Test Automation
I've been reading a lot lately, and wow, there are so many opinions on the advantages and disadvantages of writing test automation. I thought I'd share my observations about the common misconceptions that anyone in the software development business (engineer or manager; development, test, or PM) needs to understand about testing in general and test automation in particular.
Myth #1: There is a set of tests that I would execute manually but would never write test automation for, because I'd only ever run them once.
If you would only run a test once, then you've written the wrong test, or you haven't started testing soon enough.
I am going to say it again (because I can, it's my blog, and because it's important):
If you would only run a test once, then you've written the wrong test, or you haven't started testing soon enough.
Quite simply, if there is a test that you'd only run one time, your test design is wrong. The test should be more generalized, perhaps using data sampling to provide variability. Well-designed tests are the key to testing, whether that testing is automated or manual.
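To make that concrete, here is a minimal sketch (not from the original post) of how a one-off manual check might be generalized into a data-driven automated test. The discount_price function, the pytest framework, and the sampled values are all assumptions for illustration; the point is that the same test logic runs over a table of inputs instead of a single hard-coded case.

```python
import pytest

# Hypothetical function under test: applies a percentage discount to a price.
def discount_price(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Instead of one hard-coded check (e.g. "discounting $100 by 10% gives $90"),
# the same test is generalized over a table of sampled inputs, including
# boundary values, so it stays worth running on every build.
@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.00, 10, 90.00),    # the original one-off case
        (100.00, 0, 100.00),    # no discount
        (100.00, 100, 0.00),    # full discount
        (19.99, 15, 16.99),     # non-round price
        (0.00, 50, 0.00),       # free item
    ],
)
def test_discount_price(price, percent, expected):
    assert discount_price(price, percent) == expected
```

Adding another row to the table is far cheaper than writing (or manually re-running) a new test, which is what makes the generalized version worth keeping around.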
OK, Adam, assume that I believe you about writing more general tests, but what do you mean about testing soon enough? I thought writing test automation too soon was a bad thing: it breaks all the time, and the maintenance cost is too high.
My point here is that if you've waited to test a component until a clean run of all your tests means it's ready to ship, then you haven't been involved in the development process early enough. I think this is a great topic for a more detailed future post, so for now I'll focus on the test design part of the problem.
A quote from James Bach, in Test Automation Snake Oil:
Once a specific test case is executed a single time, and no bug is found, there is little chance that the test case will ever find a bug, unless a new bug is introduced into the system. If there is variation in the test cases, though, as there usually is when tests are executed by hand, there is a greater likelihood of revealing problems both new and old. Variability is one of the great advantages of hand testing over script and playback testing.
And another from Brian Marick, in When Should a Test Be Automated?:
The fact that humans can’t be precise about inputs means that repeated runs of a manual test are often slightly different tests, which might lead to discovery of a support code bug. For example, people make mistakes, back out, and retry inputs, thus sometimes stumbling across interactions between error-handling code and the code under test.
While I completely agree with what James and Brian are saying regarding variability, I completely disagree that you can't have the same variability in your automated testing as in your manual testing. It's harder to design and implement test systems that solve the variability problem and yet remain reproducible, but hey, that's one of the reasons we build software: to solve complex problems. It does mean that the typical techniques James mentions (script-and-playback testing) are not useful mechanisms for more effective test automation.
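As one illustration (again, not from the original post) of what "variability that is still reproducible" could look like, the sketch below drives a test with pseudo-random inputs but records the seed so any failing run can be replayed exactly. The sort_numbers function, the TEST_SEED environment variable, and the input ranges are hypothetical; property-based testing tools formalize the same idea.

```python
import os
import random
from collections import Counter

# Hypothetical code under test: should return its input sorted ascending.
def sort_numbers(values):
    return sorted(values)

def test_sort_numbers_with_random_variation():
    # A fresh seed each run gives manual-style variability; logging it (and
    # honoring TEST_SEED) makes any failing run exactly reproducible.
    seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    rng = random.Random(seed)
    values = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]

    result = sort_numbers(values)

    # Check properties that must hold for any input, not one hard-coded case.
    assert all(a <= b for a, b in zip(result, result[1:])), \
        f"output not ordered (replay with TEST_SEED={seed})"
    assert Counter(result) == Counter(values), \
        f"output is not a permutation of the input (replay with TEST_SEED={seed})"
```

The design choice that matters is the seed: fresh on every run, you get the variation James and Brian describe; the moment a run fails, you pin the seed and the test becomes as deterministic as any scripted check.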
Now, there may be a set of tests that are too expensive to automate, but choosing not to automate because of expense is an entirely different reason, and one that I will discuss another time.
Here's another great quote from James Bach's Test Automation Snake Oil:
One day, a few years ago, there was a blackout during a fierce evening storm, right in the middle of the unattended execution of the wonderful test suite that my team had created. When we arrived at work the next morning, we found that our suite had automatically rebooted itself, reset the network, picked up where it left off, and finished the testing. It took a lot of work to make our suite that bulletproof, and we were delighted. The thing is, we later found, during a review of test scripts in the suite, that out of about 450 tests, only about 18 of them were truly useful. It's a long story how that came to pass (basically the wise oak tree scenario) but the upshot of it was that we had a test suite that could, with high reliability, discover nothing important about the software we were testing. I've told this story to other test managers who shrug it off. They don't think this could happen to them. Well, it will happen if the machinery of testing distracts you from the craft of testing.
Again, I agree entirely with the sentiment, but I completely disagree when he indicates that automated testing was the culprit. This is not a test automation issue; it's a test design issue. You could just as easily have run 450 poorly designed manual tests that would tell you nothing useful about the code under test.
That it's harder to write well-designed automated tests is a given, but heck, as I stated above, why do we write any software? To solve highly complex problems that would take us longer to do manually. Test automation, while having its own set of unique problems, is still, at the end of the day, just software that attempts to verify the state of another piece of software at a given point in time.
When tests are correctly designed, and you are involved in the process early enough, there should never be a test that you’d only run once.
Comments
- Anonymous, June 24, 2005
  Visual Studio Team System User Education - Process Planning Guide
  David has written a nice guideline...

- Anonymous, July 04, 2005
  Adam Ulrich is busting testing myths here and here.
  Eric Jarvi on:
  VSTS Tip: making sense...
- Anonymous, August 03, 2005
  I think my team - much of Microsoft, in fact - is going about testing all wrong.
  My team has a mandate...