Why we automate
I have never really understood why so many people external to Microsoft seem to be against Microsoft's strategy of increasing the amount of automation we rely on to test our products. Test automation has become a sine qua non at Microsoft for many valuable reasons. Although there are some uninformed managers who propose the nonsensical notion of 100% automation, Microsoft as a company does not have some magical goal of 100% automation (whatever that would even mean).
I have come to the conclusion that these (mostly external) naysayers ("people who tend to be negative, who have difficulty thinking outside of the box, who go around, consciously or unconsciously, squelching dreams, ideas and visions") simply do not, cannot, or perhaps refuse to comprehend the complexity of the systems and problem space we deal with at Microsoft, or how we design and develop our automated tests (in general, we do not write scripted tests). Or perhaps the anti-automation folks are SOS (stuck on stupid) and believe that any test (including an automated one) that does not find a bug is not a reasonable test, or that executing such a test does not provide reasonable value to an organization.
So, for those of you who are not afraid of being replaced by automation (if you are afraid a machine will take your job, then it probably will), and for those of you who have an open mind and know that real testing is not a simple choice between scripted and exploratory, and that successful testing requires a myriad of techniques, methods, and approaches, read on and I will discuss some of the reasons why we automate at Microsoft.
- Increasingly complex test matrices. Whether you like it or not, the Windows operating system proliferates throughout the world, and the number of different Windows versions in common use is staggering. Although we no longer support Windows 9x (including ME), there is still support for Windows NT 4.0, Windows 2000, Windows XP, Windows 2003 Server, and Windows Vista. Then add in the various combinations of service packs and hotfixes released for each of these, test environments composed of upgrade scenarios (Windows XP upgraded to Windows Vista), approximately 30 language versions, and 16-bit, 32-bit, and 64-bit architectures, and the number of operating system platforms alone becomes mathematically staggering. But instead of writing separate automated tests for each combination of environment parameters, our testers ideally design a single test that detects the OS platform and environment, and develop the test so it can decide how to achieve its objective based on platform profiling and other design techniques. For example, a test for defrag has to take into account that the German version of the OS does not contain the defrag utility (long story), and a security test on the French version must consider that the French version does not contain 128-bit encryption. One automated test instead of an army of button pushers doing the same thing on 5+ operating environments and 30 languages just makes sense!
- Sustained maintenance. Microsoft supports many of its products for 7 to 10 years, so the more test automation we can hand off to our sustained engineering teams, the less cost the company has to bear in the long term. Do these regression tests find a lot of defects? Perhaps not, but they do provide confidence that changes to the code base introduced by hotfixes and service packs do not adversely impact previous functionality. They also eliminate the burden of having to maintain an army of testers for the entire shelf life of the product.
- Increased breadth of coverage. Automation is distributed across the network; it is not run on one or two desktops or on a few lab machines. Many product teams have extensive test harnesses that are capable of scheduling tests, configuring environments, and then distributing tests to hundreds of machines in a lab, or even to idle machines on the network. This way we run test automation not only on pre-defined environments, but also on work machines and other systems. Test automation is running 24 hours a day, collecting and providing valuable information.
- We don't rely on automation to find bugs; we use test automation to provide information. Sometimes automated tests can expose unexpected or unpredictable behavior, but we know that automated tests do not find a great many defects after the design and development phase of that automated test. However, many of our products have daily builds, and instead of manually rerunning a battery of tests to ensure changes from a previous build have not affected previously tested functionality, an automated test is more efficient. The minefield analogy sometimes applied as an argument against regression testing only really makes sense if the test is being executed on a static code base. But if you test in a real world where you might get daily or even weekly builds of complex systems, then you probably realize that executing a set of tests after someone changes the minefield has some probability of exposing a defect along the same path, and if it doesn't, it still provides valuable information.
- Increased job satisfaction! This is a big side benefit. I know some people are afraid of automation, and some people may lack the skill to design and develop effective automation, but professional testers realize it is a huge challenge to design and develop effective test automation for complex systems. To quote one tester, "the hardest code I ever wrote was the code needed to test other code."
There are many more justifiable reasons why increased automation simply makes sense at Microsoft, but this should give you some idea of why without automation our jobs would be much more difficult than they already are.