The Coordination Layer Is Next

For the past three years, the automation conversation has centred on a single question: which individual jobs will AI replace? Copywriters, coders, customer service reps, translators. The framing assumes that management, the connective tissue between workers, remains stubbornly human. That assumption is now breaking down.

The signals arriving this week suggest something more structural. We are not merely automating tasks. We are automating the act of coordination itself.

The CEO as a Prompt

When a major tech CEO announces he is building an AI agent to help him run his company, and that he eventually wants every employee and external partner to have one too, the implications ripple outward fast. This is not delegating email triage or calendar management. This is delegating decision-making, resource allocation, and strategic prioritisation to an agent layer that sits between the executive and the organisation.

Think about what a CEO actually does, stripped of mystique. They synthesise information from dozens of sources. They make judgment calls under uncertainty. They communicate priorities downward and sideways. They allocate capital and attention. Every one of those functions is now within reach of a well-orchestrated agent system.
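That triage loop is easy to make concrete. The sketch below is a minimal, hypothetical illustration of one slice of it: signals arrive from many sources with model-estimated urgency and impact scores, and an agent layer ranks them for attention. The `Signal` type, the field names, and the urgency-times-impact heuristic are all assumptions for illustration, not anyone's actual system.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # e.g. "support-queue", "finance" (hypothetical feeds)
    summary: str     # one-line synthesis of the raw information
    urgency: float   # 0.0-1.0, assumed to come from an upstream model
    impact: float    # 0.0-1.0, assumed to come from an upstream model

def prioritise(signals: list[Signal], top_n: int = 3) -> list[Signal]:
    """Rank signals the way an executive triages: urgency weighted by impact."""
    return sorted(signals, key=lambda s: s.urgency * s.impact, reverse=True)[:top_n]

inbox = [
    Signal("support-queue", "Churn spike in EU accounts", urgency=0.9, impact=0.8),
    Signal("finance", "Q3 cloud spend 12% over budget", urgency=0.5, impact=0.6),
    Signal("hiring", "Offer declined by a staff engineer", urgency=0.3, impact=0.4),
]

for s in prioritise(inbox):
    print(f"{s.source}: {s.summary}")
```

The point is not the arithmetic; it is that once attention allocation is expressed this way, an agent can run it across a thousand signals at once.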

The interesting part is not that one CEO is doing this. It is that the pattern will be impossible to resist once it works. If an AI-augmented executive can process ten times the information, respond to emerging problems in minutes rather than days, and maintain consistent strategic logic across a thousand decisions, then every executive without that augmentation is operating at a structural disadvantage.

This is the management equivalent of the spreadsheet. Before spreadsheets, financial modelling required teams of analysts working for weeks. After spreadsheets, one person could do it in an afternoon. We did not eliminate financial analysis. We eliminated the bottleneck of coordination around it. The same compression is coming for executive function itself.

Recruiting Bots Like Senior Engineers

In open-source software, a parallel shift is underway. Developers are now actively optimising their projects to attract capable AI agents, treating them like desirable contributors who need to be courted. The projects that make themselves legible to AI, with clear documentation, well-structured codebases, and machine-parseable contribution guidelines, are getting more and better AI-generated pull requests.

This inverts a decades-old dynamic. Open-source projects have always competed for human contributors. Maintainers wrote "good first issue" labels and mentorship programmes to lower the barrier for new developers. Now they are writing for a different audience entirely. The "good first issue" is becoming the "good first prompt."

The implication for businesses is direct. If your systems, processes, and documentation are not legible to AI agents, you are not just missing a productivity tool. You are becoming invisible to an increasingly powerful labour force. The companies that structure their operations for agent consumption will attract capabilities that disorganised competitors simply cannot access.
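"Legibility to agents" sounds abstract, but it reduces to checkable properties. Here is a small sketch of what an audit might look like: a heuristic that checks a repository for the kinds of files the passage describes. The specific file names and the scoring are assumptions chosen for illustration, not an established standard.

```python
from pathlib import Path

# Hypothetical heuristic: documents that make a repository legible to agents.
AGENT_LEGIBILITY_SIGNALS = {
    "README.md": "top-level orientation",
    "CONTRIBUTING.md": "contribution guidelines an agent can parse",
    "ARCHITECTURE.md": "a map of the codebase",
    "AGENTS.md": "instructions aimed explicitly at coding agents",
}

def legibility_report(repo: Path) -> dict[str, bool]:
    """Report which legibility signals a repository exposes."""
    return {name: (repo / name).exists() for name in AGENT_LEGIBILITY_SIGNALS}

def legibility_score(repo: Path) -> float:
    """Fraction of signals present, between 0.0 and 1.0."""
    report = legibility_report(repo)
    return sum(report.values()) / len(report)
```

A real audit would go further, checking structure and freshness rather than mere existence, but even this crude version makes the competitive claim testable.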

The Wholesale Replacement Pattern

Then there is the blunt end. A major cloud data company laid off its entire technical writing team of around 70 people this past week, replacing them wholesale with AI. Not augmenting. Not reshuffling. Replacing.

This is notable not because it is the first such move, but because of what it reveals about the replacement pattern. Technical writing is a coordination function. Tech writers do not build the product. They translate between the people who build and the people who use. They are, in the purest sense, a human middleware layer.

When a company decides that AI can handle the entire middleware layer for a function, it signals that the coordination cost of maintaining that human team now exceeds the coordination cost of managing AI output. The economics have flipped.

Watch for this pattern to repeat across any role whose primary function is translation between systems or teams: project managers, business analysts, internal communications, compliance documentation, vendor management. These are not low-skill roles. They are high-skill coordination roles. And coordination is precisely what large language models are converging on.

The Blue-Collar Hedge

Meanwhile, young professionals are doing something that would have seemed absurd five years ago: pivoting away from knowledge work toward trades. Aspiring firefighters, electricians, plumbers. Not because these jobs pay more (though some do), but because they involve physical presence, real-time judgment in unstructured environments, and a resistance to remote automation that no amount of model capability can currently overcome.

This is rational behaviour in the face of a coordination-layer collapse. If the value of being a human intermediary between systems is dropping toward zero, then the value of being a human who physically manipulates the world is rising by comparison. The hedge is not against AI generally. It is against the specific compression of knowledge-work coordination.

But this hedge has a shelf life. Humanoid robots are already being rented out for retail and event work in China, built on general-purpose platforms. Autonomous trucks are being trialled on long-haul freight routes, and driverless taxis already carry paying passengers in several cities. The physical world is not permanently safe from automation. It is merely on a later timeline.

The Recursive Accelerant

What makes this moment different from previous waves of automation is the recursive element. Models are now participating in their own training and improvement. When a frontier lab announces that its latest model is "deeply participating in its own evolution," that is not marketing language. It is a description of a feedback loop that has no obvious ceiling.

Recursive self-improvement means the coordination layer does not just get automated once. It gets automated, then the automation improves itself, then the improved automation automates the next layer up. Each cycle is faster than the last.

For businesses, this creates an uncomfortable strategic reality. The window for "wait and see" is not measured in years. It is measured in model generations, and those generations are now arriving quarterly.

What This Means

The companies that will thrive in this environment are not the ones that replace their workers fastest. They are the ones that redesign their coordination architecture to be agent-native from the ground up. That means:

Documentation and processes structured for machine consumption, not just human readability.

Decision frameworks that can be executed by agents, not just understood by managers.

Integration layers that allow AI agents to participate as first-class team members.

Measurement systems that evaluate outcomes regardless of whether a human or agent produced them.
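Two of those points, agent-executable decision frameworks and producer-blind measurement, can be sketched together. Below, a hypothetical refund policy is expressed as data rather than a slide deck, so either a manager or an agent can execute it, and outcomes are scored without recording who produced them. Every rule, threshold, and field name is illustrative.

```python
# A decision framework as data: each step pairs a condition with an action.
# The $100 threshold and the actions are illustrative assumptions.
FRAMEWORK = [
    {"rule": "small refund", "when": lambda t: t["amount"] <= 100, "action": "auto_refund"},
    {"rule": "large refund", "when": lambda t: t["amount"] > 100, "action": "escalate"},
]

def decide(ticket: dict) -> str:
    """Execute the framework: first matching rule wins."""
    for step in FRAMEWORK:
        if step["when"](ticket):
            return step["action"]
    return "escalate"  # safe default if no rule matches

def evaluate(outcomes: list[dict]) -> float:
    """Score resolution rate without asking whether a human or agent acted."""
    return sum(o["resolved"] for o in outcomes) / len(outcomes)
```

Once the framework lives in a form like this, "understood by managers" and "executable by agents" stop being different requirements.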

The coordination layer was the last thing we expected to automate, because it felt like the most human part of work. Judgment. Synthesis. Prioritisation. Communication. But it turns out these are precisely the capabilities that scale most naturally with intelligence.

The management layer is not being replaced. It is being absorbed into the infrastructure. And infrastructure, once it works, becomes invisible. The question is not whether your company will have an AI coordination layer. It is whether you will build it deliberately or discover it has been built around you.
