Reduce your API test maintenance
Maintaining API test cases across multiple environments, load levels, and test types (e.g., smoke, functional, regression) has traditionally required writing and maintaining separate tests, and test maintenance is expensive. If tests run against three lower environments, you typically have at least three different tests you are trying to keep in sync. When a new API is added to a microservice, you have to modify and validate all three tests at a minimum. If you also run different load levels, for example for build tests versus load tests, that adds still more tests to maintain.
We have developed a method of using one API test and modifying it at run time to support multiple environments, test types, and load levels. Information that changes between environments is held in variables so that it can easily be changed through configuration files; these variables include, but are not limited to, the number of users, the environment URL, the execution time, and the results location. We also use test section constructs to turn portions of the test on and off, so the same test can serve as a smoke test or a full functional test. Shell scripts specify the configuration files and the runtime variables, such as the output location, needed for each environment, test type, and load level. Because the shell scripts and configuration files are specific to an environment and load level, we usually do not have to alter them when we add new APIs to the test.
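As an illustration, here is a minimal sketch of that setup. The article does not name the test tool, so this assumes a JMeter-style test plan; the file names, property names, and values are placeholders:

    #!/usr/bin/env bash
    # run_test.sh -- pick a configuration file and runtime overrides per environment.
    # Everything here is illustrative; property names and paths are placeholders.

    ENV="${1:-dev}"            # environment to target, e.g. dev | qa | staging
    TEST_TYPE="${2:-smoke}"    # smoke | functional | regression
    RESULTS_DIR="results/${ENV}/$(date +%Y%m%d-%H%M%S)"
    mkdir -p "${RESULTS_DIR}"

    # One properties file per environment holds the variables the test reads,
    # e.g. number of users, environment URL, execution time:
    #   conf/dev.properties:
    #     base.url=https://dev.example.com
    #     users=5
    #     duration.seconds=300
    #     enable.full.functional=false
    CONFIG="conf/${ENV}.properties"

    # Single shared test plan; the config file and -J overrides specialize it
    # at run time, so a new API only has to be added in one place.
    jmeter -n -t api_test.jmx \
           -q "${CONFIG}" \
           -Jtest.type="${TEST_TYPE}" \
           -Jresults.dir="${RESULTS_DIR}" \
           -Jjmeter.save.saveservice.output_format=xml \
           -l "${RESULTS_DIR}/results.xml"

A smoke run against the QA environment would then be ./run_test.sh qa smoke, with no change to the test plan itself.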
The test produces an XML output file with the details of each API call. The shell script then generates CSV files from the XML output containing statistics such as average response time, failure count, and number of executions. The results are stored in cloud storage for later retrieval, and the CSV files are imported into a spreadsheet where a comparison against previous runs is generated. The spreadsheet creation is automated using robotic process automation.
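A sketch of that post-processing step, assuming the XML output contains one JMeter-style <httpSample> element per line with t (elapsed milliseconds), s (success flag), and lb (label) attributes; the bucket name and the simplistic attribute parsing are illustrative only:

    #!/usr/bin/env bash
    # summarize.sh -- roll the XML results up into a per-API CSV of statistics.
    # Assumes one <httpSample .../> per line with t= (ms), s= (true/false), lb= (label).

    RESULTS_XML="${1:-results.xml}"
    STATS_CSV="${2:-stats.csv}"

    awk '
      /<httpSample/ {
        # Pull attribute values out of the sample element.
        match($0, /lb="[^"]*"/);  label = substr($0, RSTART+4, RLENGTH-5)
        match($0, / t="[0-9]+"/); ms    = substr($0, RSTART+4, RLENGTH-5)
        count[label]++; total[label] += ms
        if ($0 ~ /s="false"/) failures[label]++
      }
      END {
        print "label,executions,failures,avg_response_ms"
        for (l in count)
          printf "%s,%d,%d,%.1f\n", l, count[l], failures[l], total[l]/count[l]
      }
    ' "${RESULTS_XML}" > "${STATS_CSV}"

    # Push the CSV to cloud storage for later retrieval (bucket name is a placeholder).
    aws s3 cp "${STATS_CSV}" "s3://example-test-results/${STATS_CSV}"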
The results are also written to a database for more complex comparisons and dashboards.
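As one possible sketch of that step, the same per-run CSV could be bulk-loaded into a relational table; the article does not name the database, so PostgreSQL, the connection string, and the table layout below are all assumptions:

    # Load the per-run statistics into a results table for dashboards and
    # run-over-run comparisons. Table and connection details are placeholders.
    psql "postgresql://tester@db.example.com/apitests" <<'SQL'
    CREATE TABLE IF NOT EXISTS api_run_stats (
        run_id          text,
        label           text,
        executions      integer,
        failures        integer,
        avg_response_ms numeric
    );
    SQL

    psql "postgresql://tester@db.example.com/apitests" \
         -c "\copy api_run_stats(label, executions, failures, avg_response_ms) FROM 'stats.csv' CSV HEADER"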
We have deployed this solution in a container running in a cloud environment, with a microservice wrapper that exposes API calls to trigger and monitor the tests. This enables easy automation with any development and deployment pipeline.
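A pipeline could then trigger and poll a run over HTTP along these lines; the endpoints, payload fields, and host are hypothetical, since the wrapper's actual API is not described here:

    # Trigger a run from a CI/CD pipeline (endpoints and fields are hypothetical).
    RUN_ID=$(curl -s -X POST https://testrunner.example.com/runs \
                  -H 'Content-Type: application/json' \
                  -d '{"environment": "qa", "testType": "smoke"}' | jq -r '.runId')

    # Poll until the run finishes.
    until [ "$(curl -s "https://testrunner.example.com/runs/${RUN_ID}" | jq -r '.status')" != "RUNNING" ]; do
      sleep 30
    done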
Because this can be deployed in any cloud environment, it can be used to compare latency from any location.
Our thanks to Jason Mah, Justin Talarek, and Sachin Avasthi for helping us get the microservice created and deployed.