Artificial Intelligence and Software Testing
The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display.
The key is to let the machines do what they’re good at and let the humans leverage their creativity and judgment.
- Focus on creative and business-specific test inputs and validations. Think of email address values that a machine with access to thousands of possible email test inputs still wouldn’t try. Verify that cultural- or domain-specific expectations are met. Think of test cases that will break the machine processing for your specific app (e.g., negative prices, disconnecting the network at the worst possible time, or simulating possible errors).
- Record these human decisions in a way that later helps to train the bots. Schematized records of inputs and outputs are better than English text descriptions in paragraph form; see the sketch after this list.
- Focus on the qualitative aspects of software testing that are specific to your app and your customers.
- Leave the exhaustive testing to AI. Leave tapping every button, inputting obvious valid and invalid data into text fields, etc. to the machines.
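To make the idea of a schematized record concrete, here is a minimal sketch in Python. The TestRecord dataclass and its field names are hypothetical, not part of any existing tool; the point is that structured input/output/verdict data is easier to feed into later bot training than prose.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TestRecord:
    """One human testing decision, captured as structured data a bot can learn from."""
    feature: str         # area of the app under test
    input_value: str     # the exact input the human tried
    expected: str        # what the human expected to happen
    actual: str          # what the app actually did
    verdict: str         # "pass", "bug", or "feature"
    notes: str = ""      # optional free-text context

# A creative, domain-specific case a generic input generator is unlikely to try.
record = TestRecord(
    feature="signup",
    input_value="o'connor+promo@example.co.uk",
    expected="account created, confirmation email sent",
    actual="validation error: invalid email",
    verdict="bug",
    notes="apostrophe and plus sign are valid in email local parts",
)

# Structured JSON like this is far easier to train on than a paragraph of prose.
print(json.dumps(asdict(record), indent=2))
```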
AI research is divided into subfields that focus on specific problems, specific approaches, the use of particular tools, or particular applications.
Current Problem Statement 1 - Frequency of Release: Competition pushes every app team to move faster and to adopt agile, lean, and continuous-build environments. Manual testing isn’t fast enough anymore, and today’s test automation is expensive, slow, and often breaks when you need it most.
Future AI: Bots can generate 100 times the test coverage of most test teams. Even better, with a little AI mixed in, the bots could automatically discover new features and test new behaviors. If a change in the app is too complex for a bot to classify on its own, it simply sends a before-and-after picture to a human to make the bug-or-feature decision.
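As a rough sketch of that escalation step, a bot might diff two screenshots and only involve a human when the change is too large to dismiss. The example below uses Pillow; the file paths and the thresholds are assumptions for illustration.

```python
from PIL import Image, ImageChops

def changed_fraction(before_path: str, after_path: str) -> float:
    """Return the fraction of pixels that differ between two screenshots."""
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    if before.size != after.size:
        return 1.0  # layout changed size: treat as fully different
    diff = ImageChops.difference(before, after).convert("L")
    changed = sum(1 for px in diff.getdata() if px > 0)
    return changed / (diff.width * diff.height)

def review(before_path: str, after_path: str) -> None:
    fraction = changed_fraction(before_path, after_path)
    if fraction == 0.0:
        print("No visual change: nothing to report.")
    elif fraction < 0.01:
        print("Tiny change: likely safe, log it for the record.")
    else:
        # Too ambiguous for the bot: hand both screenshots to a person.
        print(f"{fraction:.1%} of the screen changed; "
              "sending before/after images for a bug-or-feature decision.")

review("build_41/login.png", "build_42/login.png")
```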
Current Problem Statement 2 - Performance: Improved app performance is the number one priority of app teams today. They can’t improve what they can’t measure, and the best solutions today measure performance in noisy production environments, depend on SDKs, and require teams to look at raw data and charts to figure out what is slow. Worse, performance regressions are often caught weeks after the offending code change.
Future AI: Automated test bots could test the performance of every action in an app, many times over, and catch regressions within minutes of each new build. Rather than charts, the bots could take easy-to-understand pictures of the slowest parts of your app and show them to the app team.
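One way a bot might flag such a regression is sketched below: compare the median timing of each action in the new build against a stored baseline and report anything that slows down past a chosen threshold. The action names, timings, and the 20% threshold are illustrative assumptions, not measurements from any real tool.

```python
from statistics import median

# Hypothetical per-action timings (milliseconds), gathered by running each
# action several times against the baseline build and the candidate build.
baseline = {"launch": [410, 425, 418], "login": [220, 231, 225], "search": [95, 102, 98]}
candidate = {"launch": [430, 422, 428], "login": [340, 355, 349], "search": [97, 101, 99]}

THRESHOLD = 0.20  # flag anything more than 20% slower than baseline

def find_regressions(baseline_runs, candidate_runs, threshold):
    """Return actions whose median time grew by more than the threshold."""
    regressions = []
    for action, base_times in baseline_runs.items():
        new_times = candidate_runs.get(action)
        if not new_times:
            continue  # action missing from the new build; handled elsewhere
        base_med, new_med = median(base_times), median(new_times)
        if new_med > base_med * (1 + threshold):
            regressions.append((action, base_med, new_med))
    return regressions

for action, before, after in find_regressions(baseline, candidate, THRESHOLD):
    print(f"{action}: {before:.0f} ms -> {after:.0f} ms  (possible regression)")
```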
Current Problem Statement 3 - Automation Specialists: Many teams can’t afford legions of test automation engineers, or the infrastructure they need. Most teams can’t wait six to eighteen months for an automated test suite to be coded up and running. Most interestingly, there is far more demand for software test development engineers than there are engineers to fill those roles.
Future AI: AI-powered bots could start basic testing of an app right away. Machines are far less expensive than hiring a team to write and maintain basic test code. Machines can also provide test coverage and execute in parallel, enabling all this work to be done in just minutes.
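As a rough illustration of that parallelism, the sketch below fans a set of basic UI checks out across worker threads with Python's standard concurrent.futures module. The check names and the run_basic_check function are hypothetical stand-ins for real device or emulator actions.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def run_basic_check(check: str) -> str:
    """Stand-in for a bot tapping a button or submitting a form on a real device."""
    time.sleep(random.uniform(0.1, 0.3))  # simulate the time one UI action takes
    return f"{check}: ok"

checks = [f"tap button {i}" for i in range(1, 21)] + ["submit empty form", "submit valid form"]

# Run every basic check concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(run_basic_check, c) for c in checks]
    for future in as_completed(futures):
        print(future.result())
```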