Automation, and the shift to Quality Engineering

At this point, the role of manual testing should be limited to figuring out how to automate. Automation is a mandatory deliverable of each Sprint; it's folly to rely on manual testing or to defer automation to a later Sprint. That said, it can be a challenge to write automated tests for code that is in flight, especially in the first week of the Sprint, so for those days manual testing is an effective way to explore the best ways to automate. To me, that's the beginning, middle, and end of manual testing, barring some incredibly compelling reason why automation is physically impossible for the User Stories at hand. There are plenty of tools in the marketplace to facilitate automation: tools to drive user interfaces (like Selenium), tools to drive test infrastructure (like Robot Framework), and tools to automate deployment (like Puppet, Chef, and Ansible). These are only a few examples, but in general there is no shortage of decent tools that make automation achievable as a current, directly and immediately relevant deliverable in each Sprint.
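To make "automation as a Sprint deliverable" concrete, here is a minimal sketch of what shipping a feature with its tests in the same Sprint might look like. The `apply_discount` function and its rules are hypothetical stand-ins for whatever a User Story delivers; the point is that the basic-functionality and negative-condition checks land as runnable code, ready for CI, not as a manual test plan:

```python
import unittest

# Hypothetical function under test: stands in for the logic a User Story delivers.
def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # Automated check of basic functionality -- classic QA's first question,
    # now answered on every commit instead of once per manual test pass.
    def test_basic_functionality(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    # Automated negative conditions, delivered in the same Sprint as the code.
    def test_negative_conditions(self):
        with self.assertRaises(ValueError):
            apply_discount(-1.0, 20)
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Once a suite like this exists, any CI system can run it on every merge, which is what turns "did it regress?" from a manual chore into a continuous, automatic answer.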

There is a net implication of this, beyond the obvious benefit of writing tests once and running them continuously: Quality Assurance (QA) is obsolete, and Quality Engineering (QE) is the new paradigm. QA was about finding out whether the basic functionality works, looking at some negative conditions, and finding out whether the code regressed, to the extent possible in the allotted time. In classic QA we learned about basic functionality after the code was developed, while the development itself was the real focal activity.

I strongly advocate the term QE, and the shift in what it implies. Quality Engineers have real influence on Design, User Stories, Sprint contents, the thought patterns of the Developers, and more. Quality Engineers think about whether the system can be deployed as documented; they measure elapsed time and level of difficulty to customer first success; they measure the system's ability to meet customer SLAs; they examine the real-world scenarios the system will encounter, its scalability, its performance under unexpected loads, its robustness in the face of failures (and what those failures might be), and its endurance over time. They are aware of Sprint metrics, and they have influence over the prioritization of User Stories, the Technical Backlog, and Sprint Planning. They are active, influential members of the Feature Team, who sometimes draw the line and say "we will add no new functionality until what we already have works"; we don't want to build on a house of cards.

I see QA as an old paradigm, in which people were very busy chasing the question "does the system run pretty well under good conditions?" The prevalence of good automation capabilities is what enables the job to transform into Quality Engineering.

A Microservices Architecture allows for the QE model that you describe (the decomposition of monolithic applications into single-feature programs that run independently but are well orchestrated). Continuous development/continuous integration, documentation, and testing are built into the development process. In a worst-case scenario, if what you build does not work well, don't fix/re-test/re-deploy it: scrap the whole thing and start over. No feature blocking, no houses of cards.

There's still a need for verifying that the overall product works cleanly when all new functionality has been integrated into the product. If Team A creates functionality X and tests it thoroughly, and Team B creates functionality Y and tests it thoroughly, what happens when X and Y both end up in mainline code and start to interact? Will the tests that each team developed also cover that overlap? Now multiply that by a dozen new pieces of functionality. There's still a place for system-level testing outside of the development teams for products requiring that level of integration. Maybe that's customer level testing, as Barry Graham noted below, or maybe it's some other beast, but there's still a need there that I don't think is quite fleshed out in the modern Agile world.

While I agree with your comments on QE, I must confess I'm not there yet with the "QA is obsolete" argument, at least not as you've defined it here. Maybe we'll get there, but I still see too many cases where things built by well-intentioned people work exactly as (badly as) they were designed, rather than as they should. At the end of the day, someone has to test the application just like the customer will, and automation of the "ilities" is a real challenge.

