The Evolution of Shared Language in AI Agent Development

Something interesting is happening in AI agent development…

We’re converging on a shared vocabulary, much of it new.

Not because anyone mandated it, but because the abstractions are real. 

Here’s how the language has evolved across six distinct phases, and what the latest shift tells us about where the field is headed.

Phase 1: LLM Primitives (2020–2022)

The earliest vocabulary was borrowed directly from ML research…prompts, completions, tokens, temperature, few-shot. 

Everything was stateless and single-turn. 

The metaphor was raw material: you shaped the output by crafting the input.

Phase 2: Chains & Composition (2022–2023)

Arguably, it was LangChain that popularised a new layer of terms… chains, prompt templates, output parsers, retrievers and more.

And the now-ubiquitous RAG (retrieval-augmented generation).

The metaphor shifted to plumbing… piping data through sequential steps.

The breakthrough was treating LLM calls as composable units rather than standalone interactions.
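That breakthrough can be sketched in a few lines of plain Python. This is an illustration of the idea, not any framework’s actual API: `fake_llm` stands in for a real model call, and the helper names are invented.

```python
# A chain pipes each step's output into the next step.
# `fake_llm` is a stand-in for a real LLM completion call.

def prompt_template(text: str) -> str:
    return f"Summarise: {text}"

def fake_llm(prompt: str) -> str:
    # A real implementation would call a model API here.
    return "SUMMARY: " + prompt.removeprefix("Summarise: ")

def output_parser(completion: str) -> str:
    return completion.removeprefix("SUMMARY: ").strip()

def chain(*steps):
    """Compose callables left to right into a single callable."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Three composable units become one reusable pipeline.
summarise = chain(prompt_template, fake_llm, output_parser)
result = summarise("LLM calls as composable units")
```

The point is the shape, not the stubs: once an LLM call is just a callable, it can be piped, swapped and reused like any other component.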

Phase 3: Agency & Tools (2023–2024)

The vocabulary shifted again with agents, tool calling, function calling, memory and planning.

The metaphor moved from plumbing to delegation…

An LLM deciding what to do, not just processing input. 

OpenAI’s function calling API and LangChain’s agent abstractions converged on remarkably similar concepts from different directions.
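The shared concept is easy to see in miniature. The sketch below mirrors the shape of schema-described tools and structured tool calls; it is illustrative Python, not OpenAI’s or LangChain’s actual API, and `fake_model_decides` hard-codes what a real model would infer.

```python
# Tools are described by a schema the model can read; the model returns
# a structured call; the harness dispatches it. All names illustrative.

TOOL_SCHEMAS = {
    "get_weather": {
        "description": "Look up current weather for a city",
        "parameters": {"city": {"type": "string"}},
    },
}

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API

TOOL_IMPLS = {"get_weather": get_weather}

def fake_model_decides(user_message: str) -> dict:
    # A real model would pick a tool and arguments from TOOL_SCHEMAS;
    # here the decision is hard-coded for illustration.
    return {"tool": "get_weather", "arguments": {"city": "Paris"}}

call = fake_model_decides("What's the weather in Paris?")
result = TOOL_IMPLS[call["tool"]](**call["arguments"])
```

Different frameworks name the pieces differently, but all of them reduce to this loop: describe, decide, dispatch.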

Phase 4: Orchestration & Multi-Agent (2024–2025)

As single agents hit their limits, the language became organisational…

graphs (LangGraph), handoffs, human-in-the-loop, guardrails, supervisors.

The metaphor was organisation… teams, workflows, oversight. 

We started talking about agents the way we talk about teams of people.

Phase 5: Protocols & Interop (2025)

The push toward shared protocols marked a turning point. 

MCP (Model Context Protocol), CUA (Computer Use Agents), tool schemas, A2A (Agent-to-Agent) and standardised APIs moved the field from vocabulary to specification.

This is where shared language stops being informal convention and becomes enforceable contract.

Phase 6: Harness Engineering (2026+)

And now we’re entering the latest phase… 

The emerging vocabulary?

Harnesses, hooks, contextual prompting, context engineering, model orchestration…

All of these reflect a fundamental shift in how we think about building with AI agents.

Harness engineering is about the control surface…the boundary layer between the human (or system) and the agent. 

It’s not about what the agent can do… that’s largely solved.

It’s about how you shape, constrain and direct what the agent actually does in practice.

Harnesses are the scaffolding that wraps an agent: the system prompts that set its personality and boundaries, the hooks that fire before and after tool calls, the context management that decides what the agent sees and when.
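The hook idea in particular can be sketched concretely. A minimal illustration, not any specific harness’s API; the hook names and the `search` tool are invented.

```python
# Hooks are callbacks that fire before and after every tool call,
# giving the harness a place to log, block, or rewrite what the
# agent does. All names here are illustrative.

def run_tool(tool, args, pre_hooks=(), post_hooks=()):
    for hook in pre_hooks:
        args = hook(tool.__name__, args)      # hooks may rewrite arguments
    result = tool(**args)
    for hook in post_hooks:
        result = hook(tool.__name__, result)  # hooks may rewrite results
    return result

audit_log = []

def log_call(name, args):
    audit_log.append((name, args))  # observe every tool invocation
    return args

def redact(name, result):
    return result.replace("secret", "[redacted]")  # scrub outputs

def search(query: str) -> str:
    return f"results for {query} (secret internal note)"

out = run_tool(search, {"query": "harness"},
               pre_hooks=[log_call], post_hooks=[redact])
```

The agent never changes; the harness decides what reaches it and what leaves it.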

Right-sizing captures a crucial insight… not every task needs the most powerful model, the largest context window, or the most autonomous agent.

The skill is matching capability to requirement, using a fast, cheap model for routine classification and reserving the heavy machinery for genuinely complex reasoning.
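Right-sizing ultimately reduces to a routing function. A hedged sketch, where the model names, costs and complexity heuristic are all invented for illustration:

```python
# A toy router: send short, routine tasks to a cheap model and reserve
# the expensive one for complex reasoning. A production router might
# use a classifier or confidence scores instead of this crude heuristic.

MODELS = {
    "small": {"cost_per_call": 0.001},
    "large": {"cost_per_call": 0.05},
}

def route(task: str) -> str:
    needs_reasoning = "step by step" in task.lower() or len(task) > 200
    return "large" if needs_reasoning else "small"

cheap = route("Classify this ticket: billing or technical?")
heavy = route("Work through this contract clause step by step and flag risks.")
```

Even this toy version makes the trade-off explicit: capability is spent where the task demands it, not by default.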

This phase is significant because it signals maturity. 

When a field starts optimising for control and efficiency rather than raw capability, it’s moving from experimentation to engineering.

The convergence pattern

What’s remarkable is that competing frameworks keep arriving at the same concepts even when they use slightly different terminology. 

This convergence suggests the abstractions aren’t arbitrary… they reflect genuine structure in the problem space.

The evolution of shared language in AI agents isn’t just linguistic trivia. 

It’s a map of what the field has learned about what actually matters when building systems that think, act, and collaborate.


Chief AI Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.



