Part 6 - Enabling Performance Testing with Generative AI: The Power of Design and Execution in Action

In my previous article (Part 5, Synthetic Data with Gen AI), I talked about harnessing synthetic data creation with Gen AI for business-centric performance workloads. We came up with a template for creating performance workloads that keeps all stakeholders (business and tech teams) in mind. The example was a typical retail e-commerce client.

Building on our previous discussions, let's now delve deeper into how generative AI can revolutionize the process of designing and executing performance test scripts.

Designing Test Scripts with Generative AI

Let's revisit our e-commerce platform scenario. Recall that we have already created a series of synthetic workloads that closely mimic anticipated user behavior and system interactions. Now we need to turn these workloads into executable test scripts.

Consider one of our synthetic workloads: users navigating from the landing page to the product page, then adding an item to the cart, and finally proceeding to checkout. We'd want our performance test to simulate this exact sequence of actions.
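Whatever tool we eventually pick, this journey boils down to an ordered list of HTTP steps that the test script must replay. A minimal, tool-agnostic sketch of that idea (the endpoint paths below are hypothetical placeholders, not JPetStore's real URLs):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    method: str   # HTTP method the virtual user issues
    path: str     # endpoint relative to the site root (placeholder values)

# The agreed-upon synthetic workload, expressed as an ordered journey.
CHECKOUT_JOURNEY = [
    Step("GET", "/"),                # landing page
    Step("GET", "/products/{id}"),   # product page
    Step("POST", "/cart"),           # add an item to the cart
    Step("POST", "/checkout"),       # proceed to checkout
]

def journey_paths(journey):
    """Return the ordered endpoints a performance script must replay."""
    return [step.path for step in journey]
```

Capturing the workload as data like this keeps the business-agreed journey separate from any one tool's scripting syntax.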

Picking the right tool

But first, we need to figure out which tool to use. Typically, a team would have settled on a tool by this point; for the sake of this article, let's assume the team is not sure what the best option is. I would recommend evaluating candidate tools against the following parameters:

  1. Ease of Use: The tool should be intuitive and easy to use, with a user-friendly interface. It should provide clear error messages and have comprehensive documentation.
  2. Scripting Language: Consider the scripting language used by the tool. It should ideally use a language that your team is already comfortable with. Additionally, the language should be powerful and flexible enough to handle complex testing scenarios.
  3. Support for Protocols: The tool should support the protocols that your application uses. This includes web protocols like HTTP/HTTPS, as well as any specific protocols used by your application (like WebSocket, MQTT, etc.).
  4. Load Generation Capacity: The tool should be able to generate a high load to stress test your application adequately. This involves considering both the number of concurrent users it can simulate and the geographical distribution of the load.
  5. Test Results Reporting: Look for tools that offer detailed reporting and analytics. They should provide information like response time, throughput, errors, and resource utilization, among others.
  6. Integration: Check if the tool can integrate with other tools in your software development lifecycle, such as continuous integration/continuous deployment (CI/CD) tools, monitoring tools, or version control systems.
  7. Support and Community: A tool with a large community and good support can be invaluable. This can be in the form of documentation, forums, tutorials, or direct support from the tool's developers.
  8. Scalability: As your application grows, the tool should be able to scale and meet increased testing requirements.
  9. Cost: Some tools are open-source and free, while others can be quite costly. Consider the tool's cost against your budget and the features it offers.
  10. Maintenance: Consider how much effort it takes to maintain test scripts when the application changes. Some tools offer features like automatic script update when the application changes, which can save considerable time and effort.
  11. Multi-Browser Testing: If you are testing a web application, check if the tool supports multi-browser testing. This allows you to verify how your application performs across different browsers.

Here is what a simple comparison matrix might look like. Please keep in mind that these results are subjective and based on a deliberately simple comparison.

[Figure: Perf tool comparison matrix]

Personally, I like Python, so Locust is usually my tool of choice.

Creating boilerplate code for your test framework

After picking your tool of choice, you need to create some sort of test framework that makes it easy for you and your team to design, create, edit, execute, report on, and maintain your test suites. This can be a daunting task for many, especially given the bandwidth and timeline pressure teams face to get things done. What better way than to use Gen AI to help design this for us?

Using the workload that we agreed upon with the business (above), we can feed it into Gen AI and have it create a project structure for us with the tool of our choice. We can instruct it to follow clean, secure coding practices and SOLID principles so the resulting boilerplate is easy to use. To demonstrate the use of Gen AI for this purpose, I am using the following dummy e-commerce site, which is intended for performance testing practice.
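In practice, "feeding the workload into Gen AI" means assembling a prompt from the agreed journey, the sitemap, and the coding constraints. A sketch of how that prompt might be built programmatically (the wording, constraints, and helper name are all illustrative assumptions, not a prescribed prompt format):

```python
# Hypothetical helper that turns the agreed workload and sitemap into a
# scaffolding prompt for a Gen AI assistant. All strings are illustrative.
WORKLOAD = "landing page -> product page -> add to cart -> checkout"

def build_scaffold_prompt(tool, workload, sitemap_urls):
    constraints = [
        "follow clean, secure coding practices and SOLID principles",
        "keep secrets out of source control (use environment variables)",
        "separate test data from test logic",
    ]
    return (
        f"Generate a {tool} project structure for a performance test "
        f"of this user journey: {workload}.\n"
        f"Known pages from the sitemap: {', '.join(sitemap_urls)}.\n"
        f"Constraints: {'; '.join(constraints)}."
    )
```

Keeping the prompt assembly in code means the same constraints are applied every time the team regenerates or extends the scaffold.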

[Figure: JPetStore - a dummy performance testing site]

I fed in the sitemap along with my required performance workload to get a complete project structure using Locust.

[Figure: Perf framework structure]

I could go one step deeper and have Gen AI develop the code for the files themselves. However, I shy away from using Gen AI for coding, as there are still open questions about the provenance of the code in its training data. Given those open questions, I would recommend not using it to write code for production systems.

The next step involves generating test data for the tests, running the tests, and presenting the results back to the stakeholders.

I will cover this in my next article. Until then, I look forward to your thoughts and to any discussion this sparks.
