Implementing Automated Software Testing is a systematic examination of the why and how of large-scale automation of big, complex systems.
While the back cover says "Whether you're a test professional, QA specialist, project manager, or developer, this book can help you bring unprecedented efficiency to testing - and then use AST to improve your entire development lifecycle", I'm not sure this book is for everyone.
New QAers or testers, those testing smaller-scale systems, or those on projects with smaller budgets and shorter timelines might find the book's recommendations overwhelming. It seems clear to me that this book is best for those charged with determining a test automation approach for very large-scale, long-term systems with large budgets, such as those used by the Department of Defense and larger commercial efforts.
From the Preface:
"We at IDT have identified a boilerplate solution, strategies, and idea, all provided in this book, that can help increase the chances of your automated testing success."
I agree. All of the authors and all of the contributing authors are IDT employees. This is IDT's boilerplate solution. As with all boilerplate solutions, it may not fit your specific situation.
If it does, you will find a wealth of information. And even if it doesn't you can still find useful nuggets of information about how some of these "big-project" teams automate their tests. For example, I particularly liked Chapter 4 - Why Automated Software Testing Fails and Pitfalls to Avoid. I believe this chapter would be useful for anyone about to embark on a test automation project.
This book isn't for everyone, but everyone can get some value out of it. What I mean by that rather confusing statement is that folks working in Agile environments will likely want to throw the book across the room while folks in more bureaucratic environments like CMMI or other waterfall environments will likely get a great deal of value from the book.
I'm an Agile fanatic, and I had a difficult time dealing with the book's approach, which emphasizes spending large amounts of time creating documentation such as requirements traceability matrices, detailed test plans, etc. My preferred approach is to have testers working side-by-side as part of a team, creating specifications from user stories/requirements and moving those right into automated test suites via tools like Selenium, Cucumber, or RSpec.
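To illustrate that workflow, here's a minimal sketch of a user story turned directly into an executable check. The `ShoppingCart` class and its methods are hypothetical stand-ins for a system under test, and I've used plain Ruby assertions rather than RSpec or Cucumber to keep it self-contained - the idea is the same: the story's acceptance criteria become the test.

```ruby
# User story: "As a shopper, I can add items to my cart and see the total."
# ShoppingCart is a hypothetical system under test for this sketch.
class ShoppingCart
  def initialize
    @items = []
  end

  # Add a named item with a price to the cart.
  def add(name, price)
    @items << [name, price]
  end

  # Sum the prices of everything in the cart.
  def total
    @items.sum { |_, price| price }
  end
end

# The story's acceptance criteria, expressed as executable checks:
cart = ShoppingCart.new
cart.add("book", 25.00)
cart.add("pen", 2.50)
raise "total mismatch" unless cart.total == 27.50
puts "story satisfied: cart total is #{cart.total}"
```

In an RSpec or Cucumber suite the same acceptance criteria would live in `describe`/`it` blocks or Gherkin scenarios, but the translation from story to running test is just this direct.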
That said, I did indeed get some good value from the book. I found the discussions on making hard decisions about what to test very worthwhile reading: teams can easily vaporize large amounts of time creating large suites of brittle, unmaintainable automated tests. This book has several really good chapters on using business cases to drive return on investment (ROI) decisions for testing, understanding automated test pitfalls, and adjusting your testing as you progress through your project.
Additionally, one of the book's high points was its coverage of building the test team: "Put the Right People on the Project - Know the Skill Sets Required." This is a great chapter which emphasizes starting the search by focusing on how to interview test team members - and how those testers' skills differ greatly from those of other members of the team.
The book's very academic, dry tone makes for some difficult reading, and few concrete examples are used until very late in the book. Having spent many years either in the DOD or working for DOD contractors, I quickly sensed that much of the book was targeted at folks working in those environments - too many dry acronyms are scattered through the book, adding to the difficulty in reading.
The lack of examples using real tools frustrated me. While the appendices contain some examples of comparing various tools, the book doesn't actually show how a real world testing environment would use those tools. One appendix, eight or nine pages in length, is touted as a "Case Study" but falls short, in my opinion.
Overall it's a decent book. The dry tone and lack of real-world environments are balanced out by the excellent coverage of team skills and the emphasis on selecting how and what you test.