How I Took Full Control of Concurrency in Playwright Using an API-Driven Batch Runner

Concurrency issues in Playwright are often treated as a UI problem. In my case, the fix had nothing to do with the UI — it was about controlling API execution intentionally.

Instead of relying only on Playwright workers, I built a small Batch Runner that lets me control:

  • How many times an API action runs
  • How many executions run concurrently
  • How much delay exists between executions

All of this is controlled directly from the test class.


The Problem I Wanted to Solve

When running API setup and validation in parallel, I needed:

  • Predictable concurrency (not “as fast as possible”)
  • Zero shared data collisions
  • The ability to stress certain endpoints safely
  • Control over execution order and rate

Playwright workers alone weren’t enough for this level of control.


The Solution: A Custom Batch Runner

I introduced a lightweight Batch Runner that executes an async producer function with:

  • A configurable concurrency limit
  • Optional delay between executions
  • Guaranteed result ordering

Conceptually, it works like a controlled worker pool.
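The worker-pool concept can be sketched in a few lines of TypeScript (a simplified illustration, not the full class from this article): a shared cursor hands out indices, and each "worker" is just a promise that keeps pulling the next index until none remain.

```typescript
// Minimal worker-pool sketch: `limit` promises share one cursor and pull
// indices until the batch is exhausted. JS is single-threaded, so `next++`
// needs no locking.
async function pool(
  count: number,
  limit: number,
  task: (i: number) => Promise<void>
): Promise<void> {
  let next = 0;
  const workers = Array.from({ length: limit }, async () => {
    while (true) {
      const i = next++;
      if (i >= count) break;
      await task(i);
    }
  });
  await Promise.all(workers);
}
```

Because at most `limit` tasks are awaited at any moment, the pool itself is the concurrency cap; no external queueing library is needed.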


Why This Matters in Real Tests

From the test layer, I can now do things like:

  • Run the same API call N times
  • Control whether it runs sequentially or concurrently
  • Throttle execution to avoid backend overload
  • Simulate real-world traffic patterns

Example use cases:

  • Creating multiple users safely
  • Verifying backend behavior under controlled load
  • Running performance-style validations without external tools

All without touching the UI.
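For the "creating multiple users safely" case, the key is deriving unique test data from the execution index, so concurrent runs never collide on shared records. A minimal sketch (the payload shape and the idea of a per-run id are my assumptions, not from the article):

```typescript
// Hypothetical sketch: every execution builds its payload from its own index
// plus a per-run id, so no two concurrent calls ever share an email.
function uniqueUserPayload(index: number, runId: string) {
  return {
    email: `user-${runId}-${index}@example.test`,
    name: `Load Test User ${index}`,
  };
}

const runId = Date.now().toString(36);
const payloads = Array.from({ length: 10 }, (_, i) => uniqueUserPayload(i, runId));
console.log(new Set(payloads.map(p => p.email)).size); // 10 distinct emails
```

Inside the batch, this would look something like `BatchRunner.run(10, (i) => createUserAPI(uniqueUserPayload(i, runId)), { concurrency: 3 })`, where `createUserAPI` is a hypothetical API helper.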


API-First, UI-Second

The key design decision was this:

  • The API prepares and validates state.
  • The UI only verifies behavior.

Because concurrency is handled at the API layer:

  • UI tests remain stable
  • Parallel execution becomes safe
  • Flakiness caused by shared data disappears

Playwright becomes a consumer of a clean backend state, not a creator of it.


Performance Without Guessing

This approach also gives me:

  • Full control over execution count
  • Adjustable concurrency per test
  • Optional delays for rate-limited systems

Which means I can:

  • Run lightweight performance checks
  • Identify slow endpoints
  • Compare sequential vs concurrent behavior

All inside the test framework.
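As a rough illustration of "compare sequential vs concurrent behavior": the same batch of simulated 50 ms calls finishes in very different wall time depending on the concurrency limit. The pool logic is inlined here so the snippet stands alone; the 50 ms endpoint and the timings are illustrative, not measurements from the article.

```typescript
// Inlined mini-runner, used only to time a batch of simulated API calls
// at different concurrency limits.
const wait = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

async function timeBatch(count: number, concurrency: number): Promise<number> {
  const start = Date.now();
  let next = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (true) {
      const i = next++;
      if (i >= count) break;
      await wait(50); // stand-in for a real endpoint call
    }
  });
  await Promise.all(workers);
  return Date.now() - start;
}

(async () => {
  const sequential = await timeBatch(6, 1); // roughly 6 x 50 ms
  const concurrent = await timeBatch(6, 3); // roughly 2 waves of 50 ms
  console.log({ sequential, concurrent });
})();
```

Comparing the two numbers for the same endpoint is a cheap smoke signal for slow or contention-sensitive endpoints, without bringing in a dedicated load-testing tool.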


Test Class Example:

await BatchRunner.run(
  10, // execution count
  async (index) => {
    return await createAppointmentAPI(index);
  },
  {
    concurrency: 3,
    delayMs: 200
  }
);

What this gives me:

  • Run an API 10 times
  • Only 3 executions at once
  • A 200 ms pause after each call (per worker)
  • Full control from the test layer
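One way to convince yourself the cap really holds is to track in-flight executions. This standalone sketch inlines the pool logic rather than importing the class, and the 20 ms pause stands in for the API call plus any configured delay:

```typescript
// Count how many executions are in flight at once and record the peak,
// to confirm the concurrency cap is respected.
const pause = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

async function measureMaxInFlight(count: number, concurrency: number): Promise<number> {
  let next = 0, inFlight = 0, maxInFlight = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (true) {
      const i = next++;
      if (i >= count) break;
      inFlight++;
      maxInFlight = Math.max(maxInFlight, inFlight);
      await pause(20); // stand-in for the API call + delayMs
      inFlight--;
    }
  });
  await Promise.all(workers);
  return maxInFlight;
}

(async () => {
  console.log(await measureMaxInFlight(10, 3)); // prints 3: never more than 3 at once
})();
```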

BatchRunner Class:


export type BatchRunOptions = {
  concurrency?: number;
  delayMs?: number;
};

const sleep = (ms: number) => new Promise<void>(res => setTimeout(res, ms));

// Runs the function, then (optionally) pauses before the caller continues.
async function runWithDelay<T>(fn: () => Promise<T>, delayMs: number): Promise<T> {
  const out = await fn();
  if (delayMs > 0) await sleep(delayMs);
  return out;
}

export class BatchRunner {
  static async run<T>(
    count: number,
    producer: (index: number) => Promise<T>,
    options: BatchRunOptions = {}
  ): Promise<T[]> {
    if (count <= 0) return [];

    const concurrency = Math.max(1, options.concurrency ?? 1);
    const delayMs = Math.max(0, options.delayMs ?? 0);

    // Sequential path: one execution at a time, in index order.
    if (concurrency === 1) {
      const out: T[] = [];
      for (let i = 0; i < count; i++) {
        out.push(await runWithDelay(() => producer(i), delayMs));
      }
      return out;
    }

    // Concurrent path: a fixed pool of workers shares a cursor. Results are
    // written by index, so ordering is preserved regardless of completion order.
    const results: T[] = new Array(count);
    let next = 0;

    const workers = Array.from({ length: concurrency }, async () => {
      while (true) {
        const i = next++;
        if (i >= count) break;
        results[i] = await runWithDelay(() => producer(i), delayMs);
      }
    });

    await Promise.all(workers);
    return results;
  }
}

