The authors give you a top-level description of why automated software testing is highly desirable, along with detailed guidelines for implementing it. The tone is very realistic, making you aware of the many issues associated with the topic.
For one thing, you are cautioned to avoid the blandishments of a vendor who might suggest that her product will meet all your testing needs. In the authors' experience, there is no single tool that covers all major operating systems. The book also advises you to look at open source tools. There is a surprising amount of good software freely available that you might want to check before considering proprietary products.
The book mentions many reasons for automation. These include manual tester fatigue, but also the fact that some things are very difficult to test manually. Often this is because manual testing happens at the GUI level, while bugs can lurk deep in the code, perhaps in computational blocks.
This leads to the point that the people writing automated tests often have a different skill set from manual testers. The latter might not be programmers. The former should be, with access to the source code (white box or grey box testing), because this gives them the knowledge to write automated tests that exercise critical aspects of the code.
Naively, given the book's nature, we might expect it to say automate everything in sight. But the book's credibility is enhanced by its explanation that this is simply not economically feasible. The estimate is that 40-60% of tests can be automated. Table 6.2 in the book is a list of questions that can be applied to each test to suggest whether it is suitable for automation. Roughly, tests that will be run often are strong candidates for automation.
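The idea of scoring each test against a set of suitability questions can be sketched in a few lines. The criteria below are hypothetical illustrations of the kind of question involved, not the book's actual Table 6.2 content:

```python
# Illustrative automation-suitability check. The criteria are
# made-up examples, NOT the book's actual Table 6.2 questions.

def automation_score(test):
    """Count how many suitability criteria a test meets."""
    criteria = [
        test["runs_per_release"] >= 5,    # repeated often enough to repay the effort
        test["is_stable"],                # feature under test rarely changes
        test["is_deterministic"],         # same inputs always give the same result
        not test["needs_human_judgment"], # e.g. visual look-and-feel checks
    ]
    return sum(criteria)

smoke_test = {"runs_per_release": 20, "is_stable": True,
              "is_deterministic": True, "needs_human_judgment": False}
usability_test = {"runs_per_release": 1, "is_stable": False,
                  "is_deterministic": False, "needs_human_judgment": True}

print(automation_score(smoke_test))      # → 4, a strong automation candidate
print(automation_score(usability_test))  # → 0, better left manual
```

A smoke test that runs on every build scores high; a one-off exploratory usability check scores low and stays manual, which matches the book's rough "run often" heuristic.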
The book also strongly recommends extensive unit testing. This is the lowest level of testing, and bugs caught here have the best payoff in terms of minimising the cost to fix. It implies a tight software development loop; "agile" as opposed to "waterfall"-like, though the book doesn't use these terms. Besides, unit testing often cannot be done at the GUI level anyway, if the units are computational routines. So punting by not having automated unit tests and expecting manual tests to find bugs in these units later is very bad. Of course, the book also describes higher-level tests like regression and functional tests. But first do the unit tests.
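As a concrete illustration of testing a computational routine below the GUI, here is a minimal unit test sketch; the function and its cases are hypothetical, not taken from the book:

```python
# A minimal unit test for a computational routine that a GUI-level
# manual test would struggle to exercise directly. The routine and
# its test cases are hypothetical illustrations.
import unittest

def discounted_price(price, discount_pct):
    """Apply a percentage discount; reject out-of-range inputs."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)

class TestDiscountedPrice(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discounted_price(80.00, 25), 60.00)

    def test_boundary_values(self):
        self.assertEqual(discounted_price(80.00, 0), 80.00)
        self.assertEqual(discounted_price(80.00, 100), 0.00)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(80.00, 150)
```

Run with `python -m unittest`. Note how the boundary and invalid-input cases would be tedious to reach reliably through a GUI, but take seconds here.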
Although this book is not oriented towards agile software development, it's still a solid resource for anyone new to test automation. It's pragmatic, practical, clearly written, easy to understand. I especially like the six "Keys" for automation payoff. The authors explain the reasons for automating - it might seem obvious to some but many newbies don't see all the potential benefits. The book also debunks common automation myths. There's a lot of emphasis on ROI, which is often overlooked.
Where the advice I give on automation differs from this book is in making it a whole-team effort, rather than the test team's alone, but that's easier to do in an agile setting. The authors do talk about things like interviewing stakeholders and getting people with the right skills, and these are all so important.
I wish the book had a section on continuous integration and automated build process. I think in another few years nobody will question the need for this, any more than people currently question the need for automated source code management. Whereas a few years ago nobody in my conference tutorials was doing CI, nowadays about a third of the people are. I think it's so critical to have a way to continually run all the automated regression tests every time new code is checked in. The book makes a passing reference to this, and it does mention test automation at different levels starting at the unit level, but it doesn't explain why you need a build process and how to set one up.
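The core CI idea described above, rerunning the whole regression suite on every check-in, can be sketched crudely. A real team would use a CI server rather than this illustrative polling loop, and the commands assume a git repository with a pytest suite under `tests/`:

```python
# A bare-bones sketch of continuous integration: run the full automated
# regression suite whenever a new revision is checked in. Illustrative
# only; a real CI server replaces this polling loop.
import subprocess
import time

def should_build(last_rev, new_rev):
    """Build only when the revision has changed since the last run."""
    return new_rev != last_rev

def current_revision():
    """Ask version control for the latest revision id."""
    out = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_regression_suite():
    """Run every automated test; a nonzero exit code means failure."""
    result = subprocess.run(["python", "-m", "pytest", "tests/"])
    return result.returncode == 0

def watch(poll_seconds=60):
    last = None
    while True:
        subprocess.run(["git", "pull", "--quiet"])
        rev = current_revision()
        if should_build(last, rev):     # new code was checked in
            ok = run_regression_suite()
            print(f"revision {rev[:8]}: {'PASS' if ok else 'FAIL'}")
            last = rev
        time.sleep(poll_seconds)
```

Even this toy version captures the payoff: a broken check-in is flagged within one polling interval instead of being discovered weeks later by a manual tester.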
Nevertheless, it's a great resource, and will give readers a good grip on the fundamentals of test automation. I get so frustrated when people think it's impossible to automate, or that they have to hire some expensive consultant to get it done. This book will enable teams to be much more successful. It is a good overview of all the different areas where automation can help a team tremendously.
Just be sure to also buy a book that tells you how to set up continuous integration and automated builds, such as _Pragmatic Project Automation_ by Mike Clark, or _Continuous Integration_ by Paul Duvall, Andy Glover and Steve Matyas. Or _Ship It_ by Jared Richardson and William Gwaltney.
My professional background was as a Software Engineer and Manager in application development; however, I recently became the Manager for my company's automation and performance testing team. I wanted to get a good overview of implementing automated testing, and this looked like the best of the books available for my purpose. I considered the authors' previous book Automated Software Testing, but it was written in 1999. I wanted something that would talk about more current tools, so this 2009 offering seemed to better suit my needs.
Like most technology books, this book is written in a very organized manner. The first four chapters are a good overview of the what and why of automated testing, along with information about developing a business case and common myths. The section on the business case goes into some depth on how to compute ROI for automated testing. You may be able to get by with something simpler than this, but it's a good starting point. The remaining six chapters give more details for executing an automated software testing effort, from requirements and tools to processes and staffing guidelines. I found the chapters on the automated software testing process and staffing guidelines the most helpful. The process recommendations are lightweight, but I agree with the authors that testing automation *is* software development.
The authors write from the perspective of a defense contractor, and this is important to understand. In this environment projects are typically standalone and large in nature, but this will not be the case for all readers. I work in the IT department of a for-profit company, and my automated software testing team operates in a shared service model to support the highest priority projects. Whereas a defense contractor typically buys hardware, software, tools, etc. for each program as a part of their bid, my team uses a consistent development stack and reuses a consistent hardware environment. We may add tools or hardware as new situations come up, but things are fairly stable overall. We also have different titles, roles, and responsibilities than those defined in chapter 10. These differences don't change the applicability of the concepts, but they do require me to translate from the defense contractor mindset.
The appendices give additional checklists and some detailed information on tools in the marketplace. The tool information will become dated soon, but it's probably good for another year or so. The authors also give a lot of links to web sites throughout the book, and I like it when readers are pointed to additional information for continued learning.
Overall, I couldn't ask for a lot more given what I was looking for. I am now in a better position to work with the experienced people on my team and be confident in my ability to understand the key issues and considerations. Those looking for more hands-on information may be left wanting. There is not a lot in the way of examples, but this is difficult to provide without slanting the book toward specific tools. I think that the authors assume a certain level of experience for the software developers who will be doing the actual implementation, and that those developers can translate the concepts into code. Please feel free to ask questions in the comments section if there is an area that I have not addressed.
"Implementing Automated Software Testing" is meant for software test professionals and managers. The authors also list developers and project managers in the target audience. If an organization has developers/pms in a dual role, this makes sense. The book really is written from a QA viewpoint.
At least one of the authors has done work for the Department of Defense and the other two sound like they have worked closely with it. The writing style reminds me of the CMM documents - a government research paper style leaks through. This isn't a bad thing - I thought it was a very good book - just something to be prepared for.
I particularly liked the distinction between Automated Software Testing and playback/record testing. The book really walks you through setting up an Automated Software Testing program. It contains recipes (which are more like requirements) covering each of the phases and how to respond to roadblocks. There is a whole chapter dedicated to myths and best practices. For someone setting this up, there are checklists and a job description (skills and roles) needed for each of the phases.
Overall, this book is like a field guide for someone about to start an Automated Software Testing program. My only criticism is that it is acronym heavy - remember the government paper comment - and could have used a glossary.
This book presents a comprehensive treatment of the domain of software testing automation. The first part defines and describes test automation, proposing a business case for automation and discussing the pitfalls that should be avoided. The second part is a roadmap for test automation. It gives six keys for software testing automation payoff: 1. Know your requirements; 2. Develop a strategy; 3. Test your tools; 4. Track progress and adjust; 5. Implement the process; 6. Put the right people in the process. Four appendixes complete the book. They provide a process checklist, explain how automation applies to various testing types, discuss tool evaluation, and give a case study.
The fact that the authors have worked with the defense industry might have affected the way the book was conceived and written: with structure and rigor. The discussions, recommendations, references, and tool suggestions apply, however, to every software testing situation and not only to organizations that are strongly process oriented. The aim of the book is to be a guide that helps implement automated software testing successfully, and it certainly achieves its objective.
(Full disclosure: I got this book for review at no cost for me)
"Implementing Automated Software Testing" walks the reader through the each of the phases necessary for going from a manual testing environment to an automated one. The book is written mostly in general terms, making it useful for most environments, regardless of the type of applications being tested.
The book reads like a compilation of tips accumulated over years of experience by the authors. It can almost be described as a vaccination: if you don't read the book, you may well make the mistakes it tries to prevent; if you do read it, you'll instinctively know how to make good decisions and what to do next.
Content is organized in a very logical manner. It starts by defining why AST is beneficial and how to introduce it to the current environment, including presenting the business case. From there it moves to the main subject, implementation, followed by results measurement and personnel roles in the project.
Overall, the book is very well written. Points are made quickly and in simple terms. The page layout and fonts make the book easy to read.
The book is a very valuable tool for managers and team leads of software QA. If read before starting an AST project, it will save the reader from many of the common mistakes made by many.
Wanna know what's hot in test automation? Just do a Google search on the term "Implementing Automated Software Testing" (IAST), and you'll find out quickly. With approximately 3 million search results, this is clearly a very popular term, but not only that, it is now the title of a new book written by Elfriede Dustin, Thom Garrett and Bernie Gauf. We at the Automated Testing Institute aren't sure if the title is coincidentally identical to one of the most popular test automation terms, or if it was a genius marketing ploy to draw more attention to the book; whatever the reason, it is definitely aptly named. This successor to Automated Software Testing (AST) - a 1999 book also coauthored by Dustin, and also a term with approximately 30 million Google search results (wink, wink) - wastes no time in picking up where its predecessor leaves off. While AST largely focuses on how test automation may fit into the overall software development and testing lifecycles, IAST lends more of its attention to the test automation effort itself and the details of managing an automated software testing effort.
IAST does an excellent job covering the following Automation Body of Knowledge version 1 (ABOK v1) skill categories (visit [....] for more information on the ABOK): * Skill Category 1: Automation's Role in the Software Testing Lifecycle (STLC) * Skill Category 2: Test Automation Types and Interfaces * Skill Category 3: Automation Tools * Skill Category 4: Test Automation Framework Design * Skill Category 5: Automation Framework Design * Skill Category 12: Automated Test Reporting
This review will therefore describe the book and discuss how these categories are addressed. IAST is divided into three major segments: * Part I: What Is Automated Software Testing and Why Should We Automate? (Chapters 1 - 4) * Part II: How to Automate: Top Six Keys for Automation Payoff (Chapters 5 - 10) * Appendices (Appendix A - D)
Part I does an excellent job addressing Skill Category 1 of the ABOK in that it addresses the `what' and `why' of test automation. Chapter 1 sets the tone of the book with a "unified" definition of test automation; a definition that also works to distinguish test automation from manual software testing. This chapter, along with its reference to Appendix B, also addresses ABOK Skill Category 2 with a discussion of the "typical testing types that lend themselves to automation". Chapters 2 through 4 round out Part I of IAST and continue to address ABOK Skill Category 1 with topics including: reasons for automating, making the business case for test automation with return on investment (ROI) calculations, and how to avoid typical test automation pitfalls. The latter half of Chapter 4 also addresses ABOK Skill Category 3 by discussing tool evaluation and selection. This discussion is aided by Appendix C, which expands into tools and tool requirements not only for automating tests, but also for automating processes such as requirements management, defect tracking and security testing.
Part II of IAST graduates from the `what' and `why' of test automation and delves into the `how' of test automation. Not `how' in terms of how to develop scripts, but `how' in terms of how to create, implement and monitor an automated test framework; and these are items found in Skill Categories 4, 5 and 12 of the ABOK. Part II of IAST begins with Chapter 5, which discusses how to define requirements for your test automation effort. Chapters 6 and 7 build on this by discussing how to take these requirements and develop a compatible strategy and framework. Chapters 8 and 9 cover the next logical steps of defining metrics to track test automation progress, and the implementation of the automated software test framework processes. Then finally, Chapter 10 closes the main portion of the book with a broad discussion of the skills required for test automation implementation.
In summary, IAST is strong with respect to test automation from a macroscopic level. This makes it a great resource for managers, leads, and anyone that is responsible for or will take part in defining and implementing a test automation effort. [Review originally posted in the August 09 Issue of the Automated Software Testing Magazine - [...]]