How I Took Full Control of Concurrency in Playwright Using an API-Driven Batch Runner
Concurrency issues in Playwright are often treated as a UI problem. In my case, the fix had nothing to do with the UI — it was about controlling API execution intentionally.
Instead of relying only on Playwright workers, I built a small Batch Runner that lets me control:
- the total number of executions
- how many run concurrently
- an optional delay between calls
All of this is controlled directly from the test class.
The Problem I Wanted to Solve
When running API setup and validation in parallel, I needed control over how many requests were in flight at once, how quickly they were fired, and results that came back in a predictable order.
Playwright workers alone weren't enough for this level of control.
The Solution: A Custom Batch Runner
I introduced a lightweight Batch Runner that executes an async producer function with:
- a fixed execution count
- a concurrency limit
- an optional delay between calls
Conceptually, it works like a controlled worker pool.
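The core of that pool is a shared index that each worker advances until the work runs out. A minimal sketch of the idea (plain TypeScript; `pool` is an illustrative name, not part of the framework):

```typescript
// N workers race over a shared counter: each claims the next index,
// runs the task, and stores the result at that index, so the output
// stays in submission order even when completion order varies.
async function pool<T>(
  count: number,
  workers: number,
  task: (i: number) => Promise<T>
): Promise<T[]> {
  const results: T[] = new Array(count);
  let next = 0;
  await Promise.all(
    Array.from({ length: workers }, async () => {
      while (true) {
        const i = next++; // claiming is safe: JS is single-threaded between awaits
        if (i >= count) break;
        results[i] = await task(i);
      }
    })
  );
  return results;
}
```

Because the claim happens synchronously before the `await`, no two workers ever run the same index, and the `results` array preserves order without any sorting.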
Why This Matters in Real Tests
From the test layer, I can now run a batch of API calls with an exact execution count, cap how many are in flight, and pace them with a delay.
Example use cases:
- bulk-creating test data (e.g. appointments) before a UI check
- throttling setup calls to stay under API rate limits
- hitting the same endpoint repeatedly to surface concurrency issues
All without touching the UI.
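The data-seeding case might look like the sketch below. `createAppointmentAPI` and `seedAppointments` are hypothetical stand-ins here (in a real suite the producer would be an HTTP call), and the runner is inlined so the sketch is self-contained:

```typescript
type Appointment = { id: number; patient: string };

// Hypothetical API client: a real suite would issue an HTTP request here.
async function createAppointmentAPI(index: number): Promise<Appointment> {
  return { id: index + 1, patient: `patient-${index}` };
}

// Seed `count` appointments with at most `concurrency` calls in flight.
async function seedAppointments(
  count: number,
  concurrency: number
): Promise<Appointment[]> {
  const results: Appointment[] = new Array(count);
  let next = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (true) {
      const i = next++;
      if (i >= count) break;
      results[i] = await createAppointmentAPI(i);
    }
  });
  await Promise.all(workers);
  return results;
}
```

The UI test then only has to assert that the seeded records appear, never create them.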
API-First, UI-Second
The key design decision was this: all the heavy lifting (data creation, parallel load, validation setup) happens through API calls, and the UI only verifies the outcome.
Because concurrency is handled at the API layer, tests run faster, flakiness drops, and parallel load never has to go through the browser.
Playwright becomes a consumer of a clean backend state, not a creator of it.
Performance Without Guessing
This approach also gives me hard numbers: how long a batch of API calls takes at a given concurrency and delay.
Which means I can compare concurrency levels empirically, spot endpoints that degrade under parallel load, and tune setup speed with data instead of guesses.
All inside the test framework.
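One way to get those numbers is to time the same batch at different concurrency levels. A hedged sketch (`timeBatch` and the simulated latency are illustrative, not part of the framework):

```typescript
const sleep = (ms: number) => new Promise<void>(res => setTimeout(res, ms));

// Run `count` simulated API calls of ~latencyMs each with a concurrency cap,
// and return the wall-clock duration so levels can be compared empirically.
async function timeBatch(
  count: number,
  concurrency: number,
  latencyMs: number
): Promise<number> {
  const start = Date.now();
  let next = 0;
  await Promise.all(
    Array.from({ length: concurrency }, async () => {
      while (true) {
        const i = next++;
        if (i >= count) break;
        await sleep(latencyMs); // stand-in for a real API call
      }
    })
  );
  return Date.now() - start;
}
```

With, say, 10 calls of roughly 50 ms each, concurrency 1 lands near 500 ms while concurrency 5 lands near 100 ms, which makes the trade-off visible instead of guessed.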
Test Class Example:
await BatchRunner.run(
  10, // execution count
  async (index) => {
    return await createAppointmentAPI(index);
  },
  {
    concurrency: 3,
    delayMs: 200
  }
);
What this gives me: 10 executions in total, never more than 3 in flight at once, and a 200 ms pause after each call.
BatchRunner Class:
export type BatchRunOptions = {
  concurrency?: number;
  delayMs?: number;
};

const sleep = (ms: number) => new Promise<void>(res => setTimeout(res, ms));

// Run the producer, then pause for delayMs before handing back the result.
async function runWithDelay<T>(fn: () => Promise<T>, delayMs: number): Promise<T> {
  const out = await fn();
  if (delayMs > 0) await sleep(delayMs);
  return out;
}

export class BatchRunner {
  static async run<T>(
    count: number,
    producer: (index: number) => Promise<T>,
    options: BatchRunOptions = {}
  ): Promise<T[]> {
    if (count <= 0) return [];

    const concurrency = Math.max(1, options.concurrency ?? 1);
    const delayMs = Math.max(0, options.delayMs ?? 0);

    // Fast path: strictly sequential execution.
    if (concurrency === 1) {
      const out: T[] = [];
      for (let i = 0; i < count; i++) {
        out.push(await runWithDelay(() => producer(i), delayMs));
      }
      return out;
    }

    // Worker pool: each worker claims the next index until the work runs out.
    const results: T[] = new Array(count);
    let next = 0;
    const workers = Array.from({ length: concurrency }, async () => {
      while (true) {
        const i = next++;
        if (i >= count) break;
        results[i] = await runWithDelay(() => producer(i), delayMs);
      }
    });
    await Promise.all(workers);
    return results;
  }
}
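One design consequence worth noting: because the workers are joined with Promise.all, a single rejected producer call fails the whole batch rather than returning partial results. A minimal illustration (the `run` function here is a trimmed-down re-statement for the sake of a self-contained example, not the full class):

```typescript
// Trimmed-down runner: same worker-pool shape, no delay handling.
async function run<T>(
  count: number,
  producer: (i: number) => Promise<T>,
  concurrency = 1
): Promise<T[]> {
  const results: T[] = new Array(count);
  let next = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (true) {
      const i = next++;
      if (i >= count) break;
      results[i] = await producer(i);
    }
  });
  await Promise.all(workers); // any rejection here rejects the whole batch
  return results;
}

// One failing producer makes the batch surface the error loudly.
run(5, async i => {
  if (i === 3) throw new Error(`producer ${i} failed`);
  return i;
}, 2)
  .then(() => console.log("batch succeeded"))
  .catch(err => console.log(`batch failed: ${(err as Error).message}`));
// prints "batch failed: producer 3 failed"
```

For test setup this fail-fast behavior is usually what you want: a half-seeded backend is worse than a clearly failed setup step.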