Artificial intelligence, machine learning, and the future of software testing
This is the second article in my series about how the focus of development testing is shifting toward quality, and the technologies that enable organizations to work with quality at the forefront. The first article is ‘The shift from quality assurance to quality engineering, and why it matters’, and the third covers intelligent reporting using machine learning.
Software testing is a field that’s seen rapid, large-scale transformation in recent years. For many software testers, the role has evolved into ‘software development engineer in test’ (SDET), and the discipline as a whole is shifting towards ‘quality engineering’.
We can’t predict the future of quality engineering exactly, but it’s evident that the pace of transformation will get faster still, with new technologies acting as the catalyst. What we can be certain of, however, is that artificial intelligence (AI) and machine learning (ML) are two of the key technologies that will carry quality engineering forward.
So, let’s take a look at how AI and ML can help turn software engineers into intelligent engineers by radically transforming quality engineering processes and tools.
The traditional quality engineering process
Most quality teams have already moved from manual testing to fully automated testing using behavior-driven development (BDD) frameworks, coupled with the language of their choice.
Automation teams spend most of their time coding and maintaining their scripts, with very limited control over the quality process beyond the code itself. However, quality is a continuous process – it shouldn’t be treated as the final step in the software development lifecycle. Quality engineers need to broaden their horizons to deliver better-quality products, and that means embedding quality processes and tools throughout the software development lifecycle.
The new vision for quality engineering
The new world of quality engineering will rely on intelligent tools and utilities, using machine learning techniques and working in tandem to achieve quality goals.
Here are some examples of how these intelligent tools and processes will work:
An automated creation utility
The new process for quality engineering will start as soon as UX designs are ready and the software team is identifying requirements. The quality engineers can use a spidering library (such as Scrapy) to create an automation script that navigates through the UX design, along with test data and locators.
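To make the idea concrete, here is a minimal sketch of such a creation utility. The article names Scrapy, but to stay self-contained this sketch uses only Python’s standard-library `html.parser`; the `driver.find_element(CSS, ...)` step strings it emits are hypothetical Selenium-style calls, not the author’s actual tool.

```python
from html.parser import HTMLParser

class ElementSpider(HTMLParser):
    """Collects interactive elements and candidate locators from a page."""
    INTERACTIVE = {"a", "input", "button", "select"}

    def __init__(self):
        super().__init__()
        self.locators = []

    def handle_starttag(self, tag, attrs):
        if tag not in self.INTERACTIVE:
            return
        attrs = dict(attrs)
        # Prefer stable attributes (id, then name) when generating a locator.
        if "id" in attrs:
            self.locators.append(f"#{attrs['id']}")
        elif "name" in attrs:
            self.locators.append(f"{tag}[name='{attrs['name']}']")
        else:
            self.locators.append(tag)

def generate_script(html: str) -> list[str]:
    """Turn a crawled page into skeleton automation steps."""
    spider = ElementSpider()
    spider.feed(html)
    return [f'driver.find_element(CSS, "{loc}").click()' for loc in spider.locators]

page = "<form><input id='email'><input name='pwd' type='password'><button>Go</button></form>"
for step in generate_script(page):
    print(step)
```

A real utility would crawl every screen reachable from the UX design and pair each generated step with test data, but the locator-extraction core would look much like this.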
A test data generator
Once the automated script is ready, the team will design a test data combination generator which can identify the minimum amount of test data that will cover all test scenarios. This test data will be fed to the automated script to maximize effectiveness and coverage.
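One common way to realize such a generator is pairwise (all-pairs) testing: instead of every combination of parameter values, generate a near-minimal set of rows that still covers every pair of values. This greedy sketch is an illustration of that technique, not the author’s actual tool:

```python
from itertools import combinations, product

def pairwise(params: dict[str, list]) -> list[dict]:
    """Greedy all-pairs: repeatedly pick the candidate row that covers
    the most still-uncovered parameter-value pairs."""
    names = list(params)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a] for vb in params[b]
    }
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    rows = []
    while uncovered:
        def covered(row):
            return {p for p in uncovered
                    if row[p[0][0]] == p[0][1] and row[p[1][0]] == p[1][1]}
        best = max(candidates, key=lambda r: len(covered(r)))
        uncovered -= covered(best)
        rows.append(best)
    return rows

cases = pairwise({"browser": ["chrome", "firefox"],
                  "os": ["linux", "windows"],
                  "role": ["admin", "guest"]})
print(f"{len(cases)} rows cover all pairs (exhaustive would need 8)")
```

The saving grows quickly: with more parameters and values, all-pairs coverage needs a tiny fraction of the exhaustive combinations, which is exactly the “minimum test data, maximum coverage” trade-off described above.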
A self-healing utility
The next step is to generate an automatic healing utility for the automation script. This gives test cases self-healing capabilities to ensure they don’t fail if locators change, keeping the automation suite stable and robust. This capability incorporates two algorithms:
- A robust locator generator algorithm, which creates multiple locators for the same web element.
- A weight distribution algorithm, which ensures an element still has working locators attached even if one fails; the locators can be re-weighted after each iteration.
The combination of automatic tests, a test data generator and the self-healing capability will give you highly stable test cases that provide maximum coverage.
An automated test case prioritization utility
This utility assigns each test case a random initial weight, and a separate reward function prioritizes the high-value test cases. This list can then be used to execute tests in order of priority. You can also use the utility to prioritize regression test packs, selecting the test cases that’ll cover the most defects based on the build commit’s code changes and each test case’s failure history.
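A reward function along these lines might combine failure history with overlap between a test’s covered files and the commit’s changes. Every name, field, and coefficient below is an illustrative assumption:

```python
def prioritize(tests, changed_files):
    """Order tests by a simple reward: historical failure rate plus
    overlap between the test's covered files and the commit's changes."""
    def reward(test):
        fail_rate = test["failures"] / max(test["runs"], 1)
        overlap = len(set(test["covers"]) & set(changed_files))
        return 2.0 * overlap + fail_rate   # overlap dominates, history breaks ties
    return sorted(tests, key=reward, reverse=True)

tests = [
    {"name": "test_checkout", "failures": 4, "runs": 10, "covers": ["cart.py"]},
    {"name": "test_login",    "failures": 1, "runs": 10, "covers": ["auth.py"]},
    {"name": "test_search",   "failures": 0, "runs": 10, "covers": ["search.py"]},
]
order = [t["name"] for t in prioritize(tests, changed_files=["auth.py"])]
print(order)  # test_login runs first: it touches the changed file
```

In the full vision the random initial weights would be tuned by the reward function over many runs; the hand-set coefficients here just show the shape of the scoring.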
Automated result analysis using ML and auto-defect logging
ML-based result analysis tools can classify failed test cases into predefined buckets (for example, environmental, application, or automation issues). Once it’s trained, the algorithm can automatically predict which bucket any new failures will fall into. This eliminates manual script failure analysis – saving your team time and effort.
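A production tool would train a proper text classifier on historical failure messages; this standard-library sketch approximates the idea with bag-of-words overlap against a few hand-labelled examples (the buckets match the article, but the training messages are invented for illustration):

```python
from collections import Counter

# Tiny hand-labelled "training set" of past failure messages per bucket.
TRAINING = {
    "environment": ["connection timed out", "dns lookup failed", "502 bad gateway"],
    "application": ["assertion failed expected 200 got 500", "null pointer in checkout"],
    "automation":  ["no such element", "stale element reference", "locator not found"],
}

def classify(message: str) -> str:
    """Assign a failure message to the bucket whose training
    vocabulary it shares the most words with."""
    words = Counter(message.lower().split())
    def score(bucket):
        vocab = Counter(" ".join(TRAINING[bucket]).lower().split())
        return sum(min(words[w], vocab[w]) for w in words)
    return max(TRAINING, key=score)

print(classify("stale element reference while clicking pay"))
```

Once a new failure lands in the “automation” bucket, for example, the tool can route it straight to the script owner – or to an auto-defect logger for the “application” bucket – without anyone reading the raw logs.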
A Git commit error prediction model
This algorithm predicts possible error-prone commits based on Git commit history and code changes, alerting the team to review the code changes in at-risk commits.
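A full model would be trained on the repository’s own history; this sketch substitutes a hand-tuned heuristic to show the shape of such a predictor. The feature weights, the `HOTSPOTS` set, the threshold, and the commit fields are all assumptions:

```python
HOTSPOTS = {"payment.py", "auth.py"}  # files that keep appearing in past bug-fix commits

def risk_score(commit):
    """Naive risk heuristic: large diffs, many touched files, and files
    with a history of fixes all raise the chance a commit is error-prone."""
    churn = commit["lines_added"] + commit["lines_deleted"]
    hot_files = sum(1 for f in commit["files"] if f in HOTSPOTS)
    return 0.01 * churn + 0.5 * len(commit["files"]) + 1.0 * hot_files

commits = [
    {"sha": "a1b2", "files": ["readme.md"], "lines_added": 3, "lines_deleted": 1},
    {"sha": "c3d4", "files": ["payment.py", "cart.py"], "lines_added": 180, "lines_deleted": 40},
]
flagged = [c["sha"] for c in commits if risk_score(c) > 1.5]
print(flagged)  # the large payment-touching commit gets flagged for review
```

In a trained version, the weights would come from fitting these same features (churn, file count, hotspot history) against which past commits actually introduced defects.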
Application debugging and log analysis tools
These automatically traverse application logs and debug applications to find code changes that cause errors. This tool, integrated with the automated result analysis tool, forms a robust model to predict which code will cause errors, so you can fix failures faster.
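As a small illustration of the log-traversal half: the sketch below walks a Python-style traceback in an application log and keeps only the frames that point into files touched by the latest build. The integration with changed-file data is an assumed touchpoint, not a description of an existing tool.

```python
import re

# Matches CPython traceback frame lines: File "<path>", line <n>
ERROR_LINE = re.compile(r'File "([^"]+)", line (\d+)')

def suspect_changes(log: str, changed_files: set[str]) -> list[tuple[str, int]]:
    """Return (file, line) traceback frames that fall in recently changed files."""
    frames = [(f, int(n)) for f, n in ERROR_LINE.findall(log)]
    return [frame for frame in frames if frame[0] in changed_files]

log = '''Traceback (most recent call last):
  File "app/cart.py", line 88, in add_item
  File "app/payment.py", line 42, in charge
ValueError: amount must be positive'''

print(suspect_changes(log, {"app/payment.py"}))
```

Feeding these suspect frames into the result-analysis classifier above is what would let the combined model point from a failed test straight to the code change that likely caused it.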
These machine learning-led tools and processes won’t just help you test software more effectively; they’ll also help you prevent faulty code from reaching your code repository – improving software quality and shortening the feedback cycle.
They can transform quality engineers into the intelligent testers of tomorrow, ready to face every challenge and deliver quality at speed.
Read my next article to learn more about how we can leverage ML for smart automated reporting.
If you have thoughts about this topic, I’d love to hear them in the comments section.
If you found this post interesting, it would be great if you could hit the ‘like’ button, or feel free to share with your colleagues.
By Indra Prabha Sharma, Director of Quality Engineering at Publicis Sapient. She believes that quality engineering offers major scope for creativity and innovation throughout the development process – across industries and disciplines.