Test Automation in Agile

I have never ever seen Functional Regression Test Automation work well in Agile. Now there is an opening statement! But it is true, I really haven't. Many places try to follow their interpretation of the Agile Manifesto, thinking it makes them Agile and all is good. But a constant stream of issues and Production defects, cost blowouts and high personnel (especially test/QA-related!) turnover leave stakeholders confused as to why it doesn't work.

The Issue

As I, and others, have documented in many QA- and test-related articles, the standard Scrum process and Definition of Done (DoD) are flawed. In many places the DoD states something like 'DoD - Story is complete. All Acceptance Criteria pass. Test Automation written'. This is probably the single biggest cause of the issues above.

This does not work for two primary reasons:

  1. There is a misunderstanding of what 'Test Automation' means. Is it Unit Tests (which are often automated and frequently use a TDD/BDD-based approach), Functional Regression Tests, or Integration Tests? The lack of understanding of what these phases are is usually covered up by chucking in some meaningless clichés - such as the Test Triangle or Pyramid - which folk do not question as they don't want to appear daft or unknowledgeable. 'Test Automation' in the above can mean Unit Tests (we use Cucumber/SpecFlow so the Given/When/Thens need to be automated) or a Functional/Integration mishmash.
  2. The Automation phase of a story is tacked on at the end. The Story took 6 days to write and make available to test, leaving 4 days to write, debug and deploy the test automation. A quick, flaky script is knocked up with perhaps a couple of tests at most. Coverage is negligible, especially in the boundary/edge and cross-functional areas where the defects tend to lurk until Production, when they come up and bite.

Why doesn't it work?

It doesn't work because the Functional Regression Test coverage is very poor, and that poor coverage is simply not visible. Every Story has a test or two associated with it, so why are we still getting hit with a lot of UAT/Production issues? It is because there is negligible real test coverage. If a piece of functionality has not been exercised and verified then it has not been tested, and if there is an issue in that piece of functionality it will not be found. No amount of trendy terminology and cliché-driven testing can change that.

A typical Story may well result in a few tens of functional tests, and if each test needs to be automated (as it should be), having a couple of days to write/debug/execute them is not going to cut it: an Automation Framework is a software project in its own right. Some tests may take a long time to write because they trigger the need to add library code (possibly quite complex), or require some investigation into how best to implement the test; all of which is independent of the original Story.

This all was, and is, a constant frustration. And not only mine. Most - if not all! - experienced software test automation engineers I have chatted with about this fully agree (some have even left the industry in frustration!). However, I very recently heard of a typically sized project within a financial institution where this was actually working. And it has given me that Eureka moment.

The Fix

The fix was simple. They changed the Definition of Done and moved Functional Regression Test Automation development into its own Agile project, separate from the main project(s) (actually there were three separate projects running on the same Application, just in different areas).

The Definition of Done in the main projects was changed to 'DoD - Story is complete. All Acceptance Criteria pass. Regression Test Suite passes (or failures are explained and accepted)'. No Functional Test Automation development was performed as part of the Application development projects. As part of the grooming process, the team decided which areas of the Regression Test Suite would be executed; mostly the full regression suite did not have to be run - a risk-based approach.
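That risk-based selection can be sketched in a few lines. A minimal Python sketch, assuming each regression test is tagged with the functional areas it exercises (the test names and area tags here are illustrative, not details from the project described):

```python
# Minimal sketch of risk-based regression selection: each test in the
# suite carries tags for the functional areas it covers, and grooming
# supplies the areas impacted by the sprint's Stories.

REGRESSION_SUITE = {
    "test_login_happy_path": {"auth"},
    "test_password_reset": {"auth", "email"},
    "test_statement_export": {"reporting"},
    "test_payment_limits": {"payments"},
}

def select_tests(impacted_areas):
    """Return the regression tests whose tags intersect the impacted areas."""
    return sorted(
        name for name, tags in REGRESSION_SUITE.items()
        if tags & impacted_areas
    )

# A Story touching authentication pulls in every auth-tagged test:
selected = select_tests({"auth"})
```

In a real framework the tagging would live in the tests themselves (for example pytest markers or TestNG groups), but the principle is the same: grooming chooses the areas at risk, and the suite resolves them to concrete tests.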

When each Story was completed, its associated Acceptance Criteria and functional tests (which had been executed manually) were copied to the Regression Automation project's ToDo list. The Regression Automation project ran as Kanban, taking items from the ToDo list and building the associated automated regression tests into the Regression Test Suite. The framework they used gave the Business real-time visibility of the test coverage, which allowed the main project team members and stakeholders to make decisions about software release, resourcing etc.
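The hand-off from the Application projects to the Automation project is essentially a queue. A minimal sketch of that flow (the story IDs, test names and data shape are my own illustration, not details from the project described):

```python
from collections import deque

# Sketch of the hand-off: when a Story is Done, its manually executed
# functional tests are copied onto the Automation project's Kanban ToDo list.

todo = deque()

def story_done(story_id, manual_tests):
    """Copy a completed Story's tests to the automation backlog."""
    for test in manual_tests:
        todo.append({"story": story_id, "test": test})

def pull_next():
    """An automation engineer pulls the next item to automate.
    FIFO here; in practice the BAs could reprioritise the queue."""
    return todo.popleft() if todo else None

story_done("STORY-101", [
    "login succeeds with valid credentials",
    "login is rejected after three bad attempts",
])
item = pull_next()
```

The point of the sketch is the decoupling: the Application projects only ever append to the list, and the Automation team pulls from it at its own pace.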

In the Application projects, each Story was fully implemented and tested before being stamped as Done. The execution of the Regression Test Suite ensured the implementation had not broken anything else (as far as the Regression Test coverage allowed - which was visible) before merging into the main development branch.
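That visibility is worth spelling out: because the Automation project lags the Application projects, anyone deciding on a release needs to see which Stories have automated regression coverage and which are still pending. A toy sketch of such a report (the story IDs are illustrative):

```python
# Toy sketch of real-time coverage visibility: Stories whose regression
# tests are automated vs. Stories still on the automation ToDo list.

automated = {"STORY-98", "STORY-99"}
pending = {"STORY-101", "STORY-102"}

def coverage_report():
    """Summarise automation coverage for release/resourcing decisions."""
    total = len(automated) + len(pending)
    percent = 100.0 * len(automated) / total if total else 0.0
    return {
        "automated": sorted(automated),
        "pending": sorted(pending),
        "percent_automated": percent,
    }

report = coverage_report()
```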

The same Regression suite also supplied the BVT/Smoke tests, and the full regression suite was executed - overnight - on full application deployments to higher environments such as UAT, Performance Test and Staging. They were also able to re-use assets (such as libraries) when building automated regression tests and performance tests; this added a lot of value in cost savings!

Why did it work?

The reason it worked was simple. It recognised that the Automated Functional Regression Test Suite was a separate project, in its own right, from the main project(s). Its inputs were the requirements/tests/functionality of the main project(s) and its output was the ability to perform automated regression testing of the projects (as well as providing assets for other areas).

I questioned the lag between functionality being built and tested into the main application and its associated regression tests being built. However, that didn't matter, as the coverage was visible and the BAs had input into the prioritization of automated tests. In addition, the Automation team was able to create Stories/Tasks that tackled Tech-Debt and framework-building issues, which ensured a robust and highly maintainable framework containing truly cross-functional regression tests.



