The authors give a top-level description of why automated software testing is highly desirable, along with detailed guidelines for doing it. The tone is very realistic, making you aware of many issues associated with the topic.
For one thing, you are cautioned to avoid the blandishments of a vendor who might suggest that her product will meet all your testing needs. In the authors' experience, there is no single tool that covers all major operating systems. The book also advises you to look at open source and freeware tools. There is a surprising amount of good stuff freely available, and you might want to check it out before considering proprietary products.
The book mentions many reasons for automation. These include manual tester fatigue, but also the fact that some things are very difficult to test manually, often because manual testing happens at the GUI level while bugs may lurk deep in the code, perhaps in computational blocks.
This leads to the point that the people who write automated tests often have a different skill set from manual testers. The latter might not be programmers; the former should be, with access to the source code [white box or grey box testing], because that knowledge tells them which automated tests to write to exercise the critical aspects of the code.
Naively, given the book's nature, we might expect it to say automate everything in sight. But the book's credibility is enhanced by its explanation that this is simply not economically feasible. The authors estimate that 40-60% of tests are suitable for automation. Table 6.2 in the book is a list of questions that can be applied to each test to suggest whether it is suitable for automation; roughly, tests that will be run often are strong candidates, as the sketch below illustrates.
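As a rough illustration of that screening idea, here is a toy scoring function in Python. The criteria and threshold are my own invention for illustration, not the book's actual Table 6.2:

```python
# Hypothetical sketch: score a test's suitability for automation.
# The criteria are illustrative, not the book's Table 6.2.
CRITERIA = [
    "Will it run frequently (e.g., every regression cycle)?",
    "Are the requirements/UI stable enough to avoid constant rework?",
    "Are the expected results deterministic?",
    "Is it tedious or error-prone to execute manually?",
    "Must it run on multiple configurations or platforms?",
]

def automation_score(answers):
    """answers: one boolean per criterion; returns a 0.0-1.0 score."""
    assert len(answers) == len(CRITERIA)
    return sum(answers) / len(CRITERIA)

# Example: runs every cycle, stable, deterministic,
# but easy to do manually and single-platform.
score = automation_score([True, True, True, False, False])
print(f"suitability: {score:.0%}")  # 60% -> likely worth automating
```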
The book also strongly recommends extensive unit testing. This is the lowest level of testing, and bugs caught here have the best payoff in terms of minimising the cost to fix. It implies a tight software development loop ("agile" as opposed to "waterfall"-like, though the book doesn't use these terms). Plus, unit testing often might not be doable at the GUI level anyway, if the units are computational routines, so punting by not having automated unit tests and expecting manual tests to later find bugs in these units is very bad; see the sketch below. Of course, the book also describes higher-level tests like regression and functional tests. But first do the unit tests.
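To make that concrete, here is the kind of low-level test I mean, written with Python's unittest against a hypothetical computational routine that a GUI-level manual test could exercise only indirectly:

```python
import unittest

def moving_average(values, window):
    """Hypothetical computational routine buried well below the GUI."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

class MovingAverageTest(unittest.TestCase):
    def test_basic_window(self):
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_bad_window_rejected(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2, 3], 0)

if __name__ == "__main__":
    unittest.main()
```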
Although this book is not oriented towards agile software development, it's still a solid resource for anyone new to test automation. It's pragmatic, practical, clearly written, and easy to understand. I especially like the six "Keys" for automation payoff. The authors explain the reasons for automating; it might seem obvious to some, but many newbies don't see all the potential benefits. The book also dispels the common automation myths. There's a lot of emphasis on ROI, which is often overlooked.
Where my advice on automation differs from this book's is in making automation a whole-team effort rather than the test team's alone, but that's easier to do in an agile setting. The authors do talk about things like interviewing stakeholders and getting people with the right skills, and these are all so important.
I wish the book had a section on continuous integration and an automated build process. I think in another few years nobody will question the need for this, any more than people currently question the need for automated source code management. Whereas a few years ago nobody in my conference tutorials was doing CI, nowadays about a third of the people are. It's critical to have a way to automatically run all the automated regression tests every time new code is checked in, as sketched below. The book makes a passing reference to this, and it does mention test automation at different levels starting at the unit level, but it doesn't explain why you need a build process or how to set one up.
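To show how little machinery the core idea requires, here is a generic check-in hook of my own devising (the build and test commands are illustrative assumptions, not anything from the book):

```python
# Minimal sketch of what a CI server does on every check-in.
# The build and test commands are illustrative assumptions.
import subprocess
import sys

def run(cmd):
    print("running:", " ".join(cmd))
    return subprocess.call(cmd)

def on_checkin():
    if run(["make", "build"]) != 0:  # rebuild from a clean tree
        sys.exit("BUILD BROKEN - notify the team")
    if run(["python", "-m", "pytest", "tests/"]) != 0:  # full regression suite
        sys.exit("REGRESSION FAILURE - notify the team")
    print("check-in OK")

if __name__ == "__main__":
    on_checkin()
```

A real CI server (the books below cover setting one up) adds scheduling, notification, and build history on top of exactly this loop.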
Nevertheless, it's a great resource, and will give readers a good grip on the fundamentals of test automation. I get so frustrated when people think it's impossible to automate, or that they have to hire some expensive consultant to get it done. This book will enable teams to be much more successful. It is a good overview of all the different areas where automation can help a team tremendously.
Just be sure to also buy a book that tells you how to set up continuous integration and automated builds, such as _Pragmatic Project Automation_ by Mike Clark, _Continuous Integration_ by Paul Duvall, Andy Glover, and Steve Matyas, or _Ship It_ by Jared Richardson and William Gwaltney.
Implementing Automated Software Testing is a systematic examination of the why and how of large-scale automation of big, complex systems.
While the back cover says "Whether you're a test professional, QA specialist, project manager, or developer, this book can help you bring unprecedented efficiency to testing - and then use AST to improve your entire development lifecycle", I'm not sure this book is for everyone.
New QAers or testers, those involved with testing smaller-scale systems, or those involved in projects with smaller budgets and shorter timelines might find the book's recommendations overwhelming. It seems clear to me that this book is best for those charged with determining a test automation approach for very large-scale, long-term systems with large budgets, such as those used by the Department of Defense and larger commercial efforts.
From the Preface:
"We at IDT have identified a boilerplate solution, strategies, and idea, all provided in this book, that can help increase the chances of your automated testing success."
I agree. All of the authors and all of the contributing authors are IDT employees. This is IDT's boilerplate solution. As with all boilerplate solutions, it may not fit your specific situation.
If it does, you will find a wealth of information. And even if it doesn't, you can still find useful nuggets about how some of these "big-project" teams automate their tests. For example, I particularly liked Chapter 4 - Why Automated Software Testing Fails and Pitfalls to Avoid. I believe this chapter would be useful for anyone about to embark on a test automation project.
This book isn't for everyone, but everyone can get some value out of it. What I mean by that rather confusing statement is that folks working in Agile environments will likely want to throw the book across the room, while folks in more bureaucratic CMMI or waterfall environments will likely get a great deal of value from it.
I'm an Agile fanatic, and I had a difficult time dealing with the book's approach, which emphasizes spending large amounts of time creating documentation such as requirements traceability matrices and detailed test plans. My preferred approach is to have testers working side by side as part of a team, creating specifications from user stories/requirements and moving those right into automated test suites via tools like Selenium, Cucumber, or RSpec, as in the sketch below.
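For instance, a story like "a visitor can search the catalog and see results" translates almost directly into an automated check. Here is a rough Selenium sketch in Python (the URL, field name, and CSS class are invented for illustration):

```python
# Hypothetical Selenium check derived from a user story:
# "A visitor can search the catalog and see results."
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://shop.example.com")   # invented URL
    box = driver.find_element(By.NAME, "q")  # invented field name
    box.send_keys("widgets")
    box.submit()
    results = driver.find_elements(By.CSS_SELECTOR, ".result")
    assert results, "expected at least one search result"
finally:
    driver.quit()
```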
That said, I did indeed get some good value from the book. I found the discussions on making hard evaluations about what to test very worthwhile reading: teams can easily vaporize large amounts of time creating large suites of brittle, unmaintainable automated tests. This book has several really good chapters on using business cases to drive return on investment (ROI) decisions for testing, understanding automated test pitfalls, and adjusting your testing as you progress through your project.
Additionally, one of the book's high points was on building the test team: "Put the Right People on the Project - Know the Skill Sets Required." This is a great chapter, which emphasizes starting the search by focusing on how to interview test team members - and how those testers' skills differ greatly from those of other members of the team.
The book's very academic, dry tone makes for some difficult reading, and few concrete examples appear until very late in the book. Having spent many years either in the DOD or working for DOD contractors, I quickly recognized that much of the book seems targeted at folks working in those environments - too many dry acronyms are scattered through the book, adding to the difficulty of reading it.
The lack of examples using real tools frustrated me. While the appendices contain some examples of comparing various tools, the book doesn't actually show how a real world testing environment would use those tools. One appendix, eight or nine pages in length, is touted as a "Case Study" but falls short, in my opinion.
Overall it's a decent book. The dry tone and lack of real environments are balanced out by the excellent coverage of team skills and the emphasis on selecting how and what you test.
My professional background had been as a Software Engineer and Manager in application development; however, I recently became the Manager of my company's automation and performance testing team. I wanted a good overview of implementing automated testing, and this looked like the best of the books available for my purpose. I considered the authors' previous book, Automated Software Testing, but it was written in 1999; I wanted something that would cover more current tools, so this 2009 offering seemed to better suit my needs.
Like most technology books, this one is written in a very organized manner. The first four chapters are a good overview of the what and why of automated testing, along with information about developing a business case and common myths. The section on the business case goes into some depth on how to compute ROI for automated testing; you may be able to get by with something simpler, but it's a good starting point (see the sketch below). The remaining six chapters give more details for executing an automated software testing effort, from requirements and tools to processes and staffing guidelines. I found the chapters on the automated software testing process and staffing guidelines the most helpful. The process recommendations are lightweight, but I agree with the authors that test automation *is* software development.
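As a taste of the arithmetic involved, here is my own simplified version of that kind of calculation; the formula and figures are illustrative, not the book's actual worksheet:

```python
# Simplified automation-ROI sketch; the formula and figures are
# illustrative, not the book's actual worksheet.
def automation_roi(manual_cost_per_run, runs, dev_cost, maint_per_run):
    manual_total = manual_cost_per_run * runs
    automated_total = dev_cost + maint_per_run * runs
    savings = manual_total - automated_total
    return savings / automated_total  # ROI as a fraction of the investment

# Example: a test costing 4 hours to run manually, executed 50 times,
# vs. 40 hours to automate plus 0.5 hours of upkeep per run.
print(f"ROI: {automation_roi(4, 50, 40, 0.5):.0%}")  # 208%
```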
The authors write from the perspective of a defense contractor, and this is important to understand. In that environment projects are typically standalone and large in nature, but this will not be the case for all readers. I work in the IT department of a for-profit company, and my automated software testing team operates in a shared-service model to support the highest-priority projects. Whereas a defense contractor typically buys hardware, software, tools, etc. for each program as part of its bid, my team uses a consistent development stack and reuses a consistent hardware environment. We may add tools or hardware as new situations come up, but things are fairly stable overall. We also have different titles, roles, and responsibilities than those defined in chapter 10. These differences don't change the applicability of the concepts, but they do require me to translate from the defense contractor mindset.
The appendices give additional checklists and some detailed information on tools in the marketplace. The tool information will become dated soon, but it's probably good for another year or so. The authors also give a lot of links to web sites throughout the book, and I like it when readers are pointed to additional information for continued learning.
Overall, I couldn't ask for much more given what I was looking for. I am now in a better position to work with the experienced people on my team and be confident in my ability to understand the key issues and considerations. Those looking for more hands-on information may be left wanting: there is not a lot in the way of examples, but that is difficult to do without slanting the book toward specific tools. I think the authors assume a certain level of experience in the software developers who will be doing the actual implementation, and trust that they can translate the concepts into code. Please feel free to ask questions in the comments section if there is an area I have not addressed.
"Implementing Automated Software Testing" is meant for software test professionals and managers. The authors also list developers and project managers in the target audience. If an organization has developers/pms in a dual role, this makes sense. The book really is written from a QA viewpoint.
At least one of the authors has done work for the Department of Defense and the other two sound like they have worked closely with it. The writing style reminds me of the CMM documents - a government research paper style leaks through. This isn't a bad thing - I thought it was a very good book - just something to be prepared for.
I particularly liked the distinction between Automated Software Testing and playback/record testing. The book really walks you through setting up an Automated Software Testing program. It contains recipes (which read more like requirements) for each of the phases, and guidance on how to respond to roadblocks. There is a whole chapter dedicated to myths and best practices. For someone setting this up, there are checklists and a job description (skills and roles) for each of the phases.
Overall, this book is like a field guide for someone about to start an Automated Software Testing program. My only criticism is that it is acronym-heavy - remember the government-paper comment - and could have used a glossary.
This book presents a comprehensive treatment of the domain of software testing automation. The first part defines and describes test automation, proposing a business case for automation and discussing the pitfalls that should be avoided. The second part is a roadmap for test automation. It gives six keys for software testing automation payoff: 1. Know your requirements; 2. Develop a strategy; 3. Test your tools; 4. Track progress and adjust; 5. Implement the process; 6. Put the right people in the process. Four appendices complete the book. They provide a process checklist, explain how automation applies to various testing types, discuss tool evaluation, and give a case study.
The fact that the authors have worked with the defense industry may have affected the way the book was conceived and written: with structure and rigor. The discussions, recommendations, references, and tool suggestions apply, however, to every software testing situation, not only to organizations that are strongly process oriented. The aim of the book is to be a guide that helps you implement automated software testing successfully, and it certainly achieves its objective.
(Full disclosure: I got this book for review at no cost to me.)
"Implementing Automated Software Testing" walks the reader through the each of the phases necessary for going from a manual testing environment to an automated one. The book is written mostly in general terms, making it useful for most environments, regardless of the type of applications being tested.
The book reads like a compilation of tips accumulated over the authors' years of experience. It can almost be described as a vaccination: if you don't read the book, you might still avoid some of the mistakes it tries to prevent, but if you do read it, you'll instinctively know how to make good decisions and what to do next.
Content is organized in a very logical manner. The book starts by defining why AST is beneficial and how to introduce it into the current environment, including how to present the business case. From there it moves to the main subject, implementation, followed by results measurement and personnel roles in the project.
Overall, the book is very well written. Points are made quickly and in simple terms. The page layout and fonts make the book easy to read.
The book is a very valuable tool for managers and team leads in software QA. Read before starting an AST project, it will save the reader from many common mistakes.