The authors give you a top-level description of why automated software testing is highly desirable, along with detailed guidelines for implementing it. The tone is realistic, making you aware of the many issues associated with the topic.
For one thing, you are cautioned to avoid the blandishments of a vendor who might suggest that her product will meet all your testing needs. In the authors' experience, there is no single tool that covers all major operating systems. The book also advises you to look at open-source software: there is a surprising amount of good material freely available that you might want to check before considering proprietary products.
The book mentions many reasons for automation, including manual-tester fatigue, but also the fact that some things are very difficult to test manually. Often this is because manual testing operates at the GUI level, while bugs can lurk deep in the code, perhaps in computational blocks.
This leads to the related point that the people who write automated tests often have a different skill set from manual testers. The latter might not be programmers; the former should be, with access to the source code (white-box or grey-box testing), because that knowledge tells them which automated tests to write to exercise the critical aspects.
Naively, given the book's nature, we might expect it to say automate everything in sight. But the book's credibility is enhanced by its explanation that this is simply not economically feasible; the authors estimate that 40-60% of tests are worth automating. Table 6.2 in the book is a list of questions that can be applied to each test to suggest whether it is suitable for automation. Roughly, tests that will be run often are strong candidates for automation.
The book also strongly recommends extensive unit testing. This is the lowest level of testing, and bugs caught here have the best payoff in terms of minimising the cost to fix. Unit testing supports a tight software development loop ("agile" as opposed to "waterfall"-like, though the book doesn't use these terms). Moreover, unit testing often cannot be done at the GUI level anyway, if the units are computational routines. So punting by not having automated unit tests and expecting manual tests to find bugs in these units later is a very bad idea. Of course, the book also describes higher-level tests such as regression and functional tests. But first do the unit tests.
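To make the point concrete, here is a minimal sketch of a unit test exercising a computational routine directly, below the GUI level. The book is tool-agnostic; the routine `discounted_price` and the use of Python's `unittest` module here are my own illustrative assumptions, not an example from the book.

```python
import unittest

def discounted_price(price, rate):
    """A small computational routine: apply a fractional discount."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

class TestDiscountedPrice(unittest.TestCase):
    """Unit tests hit the routine directly; no GUI is involved."""

    def test_typical_discount(self):
        self.assertEqual(discounted_price(100.0, 0.25), 75.0)

    def test_no_discount(self):
        self.assertEqual(discounted_price(59.99, 0.0), 59.99)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(100.0, 1.5)

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False)
```

A bug in the rounding or range check would be caught here cheaply, long before a manual tester could stumble on its symptoms through the GUI.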
My professional background had been as a Software Engineer and Manager in application development; however, I recently became the Manager for my company's automation and performance testing team. I wanted to get a good overview of implementing automated testing, and this looked like the best of the books available for my purpose. I considered the author's previous book Automated Software Testing, but it was written in 1999. I wanted something that would talk about more current tools available so this 2009 offering seemed to better suit my needs.
Like most technology books, this book is written in a very organized manner. The first four chapters are a good overview of the what and why of automated testing, along with information about developing a business case and common myths. The section on the business case is fairly involved, showing how to compute ROI for automated testing. You may be able to get by with something simpler than this, but it's a good starting point. The remaining six chapters give more details for executing an automated software testing effort, from requirements and tools to processes and staffing guidelines. I found the chapters on the automated software testing process and staffing guidelines the most helpful. The process recommendations are lightweight, but I agree with the authors that testing automation *is* software development.
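The book's ROI treatment is considerably more detailed, but the core shape of such a calculation can be sketched in a few lines. The function name and all dollar figures below are invented for illustration and are not taken from the book's model:

```python
def automation_roi(tool_cost, dev_cost, maintenance_cost, manual_cost_saved):
    """Simplified ROI: (savings - investment) / investment.

    The book's model is more involved (factoring in test-run frequency,
    environment setup, training, and so on); this captures only the
    basic idea that automation must save more than it costs.
    """
    investment = tool_cost + dev_cost + maintenance_cost
    return (manual_cost_saved - investment) / investment

# Hypothetical figures: $5k tool, $20k test development, $5k upkeep,
# versus $60k of manual-testing effort saved over the same period.
roi = automation_roi(5_000, 20_000, 5_000, 60_000)
print(f"ROI: {roi:.0%}")  # prints "ROI: 100%"
```

Even a toy version like this makes the reviewers' point: automation only pays off for tests whose repeated manual cost exceeds the investment, which is why the book doesn't recommend automating everything.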
The authors write from the perspective of a defense contractor, and this is important to understand. In that environment projects are typically standalone and large, but this will not be the case for all readers. I work in the IT department of a for-profit company, and my automated software testing team operates in a shared-service model to support the highest-priority projects. Whereas a defense contractor typically buys hardware, software, tools, etc. for each program as part of their bid, my team uses a consistent development stack and reuses a consistent hardware environment. We may add tools or hardware as new situations come up, but things are fairly stable overall. We also have different titles, roles, and responsibilities than those defined in chapter 10. These differences don't change the applicability of the concepts, but they do require me to translate from the defense-contractor mindset.
The appendices give additional checklists and some detailed information on tools in the marketplace. The tool information will become dated soon, but it's probably good for another year or so. The authors also give a lot of links to web sites throughout the book, and I like it when readers are pointed to additional information for continued learning.
Overall, I couldn't ask for a lot more given what I was looking for. I am now in a better position to work with the experienced people on my team and be confident in my ability to understand the key issues and considerations. Those looking for more hands-on information may be left wanting. There is not a lot in the way of examples, but that is difficult to do without slanting the book toward specific tools. I think the authors assume a certain level of experience in the software developers who will be doing the actual implementation, and that those developers can translate the concepts into code. Please feel free to ask questions in the comments section if there is an area I have not addressed.
This book presents a comprehensive treatment of the domain of software testing automation. The first part defines and describes test automation, proposing a business case for automation and discussing the pitfalls that should be avoided. The second part is a roadmap for test automation. It gives six keys for software testing automation payoff: (1) know your requirements, (2) develop a strategy, (3) test your tools, (4) track progress and adjust, (5) implement the process, and (6) put the right people in the process. Four appendixes complete the book. They provide a process checklist, explain how automation applies to various testing types, discuss tool evaluation, and give a case study.
The fact that the authors have worked with the defense industry might have affected the way the book was conceived and written: with structure and rigor. The discussions, recommendations, references, and tool suggestions nonetheless apply to every software testing situation, not only to organizations that are strongly process-oriented. The aim of the book is to be a guide that can help implement automated software testing successfully, and it certainly achieves its objective.