Missing Context
If you’re new here: XcessAI is my weekly brief on AI and business for decision-makers. Subscribe above to get the next edition.

Context Is the Real Bottleneck in AI

Welcome Back to XcessAI

By now, most organisations have seen impressive AI demos.

Models summarise documents, write code, answer questions, generate plans. In controlled settings, the outputs look intelligent.

And yet, when these same systems are deployed inside real organisations, something breaks.

Outputs become inconsistent. Recommendations feel naïve. Edge cases multiply. Trust erodes.

The instinctive explanation is that the models aren’t good enough yet.

That diagnosis is wrong.

The real constraint is not intelligence. It is context.


The illusion of intelligence

Modern AI systems perform exceptionally well in environments where the problem is clearly defined, the inputs are clean, and the objective is unambiguous.

That’s why benchmarks look strong.

According to Stanford’s AI Index, model performance on standardised tasks has improved dramatically year after year, with error rates collapsing across language, vision, and reasoning benchmarks.

But those benchmarks test capability, not deployment.

They measure whether a model can answer a question, not whether it understands the environment in which that answer will be used.

Intelligence without context does not feel smart. It feels unreliable.


What people call “AI failure”

When executives describe AI initiatives that disappoint, the complaints are familiar:

  • “It hallucinated.”
  • “It didn’t understand the situation.”
  • “The answer changed when we phrased the question differently.”
  • “It worked in the pilot, then broke at scale.”

These are rarely intelligence failures.

They are context failures.

The model is responding correctly to the information it has, but the information it needs was never encoded.


Context is not data

This distinction is critical, and often misunderstood.

Data is:

  • documents
  • records
  • transactions
  • historical facts

Context is:

  • constraints
  • incentives
  • risk tolerance
  • prior decisions
  • ownership and accountability
  • what must not happen

Context answers questions like:

  • What has already been agreed?
  • What is politically or legally constrained?
  • What outcome is acceptable but suboptimal?
  • Who bears the downside if this goes wrong?

AI systems ingest data. Organisations operate on context.

And context is rarely explicit.


Why context is so hard to encode

Context resists formalisation for structural reasons.

It is:

  • distributed across systems
  • embedded in processes
  • held in people’s heads
  • shaped by incentives and power, not documentation

Most of the context that governs real decisions is informal: budget realities, historical scars, unspoken trade-offs, and institutional memory.

Prompting tries to compress this into language.

That works briefly, but it breaks down at scale.

Prompting is not a solution to missing context. It is a workaround.


Why pilots work and deployments fail

This explains a pattern many organisations recognise.

Pilots succeed because:

  • scope is narrow
  • risk is low
  • humans actively supervise
  • context is manually supplied

Deployments fail because:

  • context fragments across teams
  • accountability becomes unclear
  • edge cases explode
  • incentives collide

McKinsey’s most recent AI surveys reflect this gap clearly: while over half of large organisations report experimenting with AI, only a minority report material, enterprise-wide impact.

The problem is not ambition. It is execution under real organisational complexity.

AI works in controlled environments. Organisations are not controlled environments.


Where context actually lives

If context isn’t in prompts or datasets, where is it?

It lives in:

  • governance frameworks
  • approval processes
  • budget constraints
  • regulatory boundaries
  • escalation paths
  • informal norms

In other words, context lives in the operating model.

And operating models are slow to change.

This is why simply “adding AI” to existing workflows rarely works. The intelligence layer improves, but the contextual substrate does not.


Why CFOs feel the problem early

This is also why CFOs often become sceptical before enthusiasm fades elsewhere.

From a finance perspective, the pattern is familiar:

  • AI spend increases
  • coordination costs rise
  • productivity gains lag
  • payback periods extend

According to PwC’s global surveys, fewer than one in three executives report that AI investments have delivered measurable financial benefits at scale.

Without context, AI increases activity before it increases output.

And increased activity without output is a cost problem, not a technology problem.


Context is now the scaling constraint

As models continue to improve, intelligence becomes cheaper, faster, and more accessible.

That shifts the bottleneck.

The limiting factor is no longer:

  • model capability
  • benchmark performance
  • prompt quality

It is:

  • integration
  • governance
  • ownership
  • feedback loops
  • constraint definition

In other words, execution.

The bottleneck has moved from intelligence to context.


What changes when organisations recognise this

Organisations that make progress with AI don’t obsess over tools.

They focus on:

  • where decisions actually live
  • who owns outcomes when something goes wrong
  • how constraints are enforced, not just documented
  • how feedback from real use flows back into the system

In practice, this often shows up in small but telling shifts.

For example, instead of asking an AI system what the best decision is, teams define which decisions the system is allowed to support, and which remain human-owned.

Instead of feeding models more data, they surface hard constraints explicitly: budget ceilings, regulatory boundaries, approval thresholds, and risk tolerances that previously lived only in people’s heads.

Instead of treating errors as model failures, they track where context was missing: a prior commitment the system didn’t know about, a downstream dependency it couldn’t see, or an incentive it wasn’t aware of.
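The three shifts above amount to making decision logic explicit instead of implicit. As a minimal sketch only (all names, decision types, and thresholds here are hypothetical, not a real framework or the author's method), this is what encoding decision boundaries, hard constraints, and context-gap tracking might look like:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPolicy:
    ai_supported: set            # decision types the system may support
    human_owned: set             # decision types that stay with people
    budget_ceiling: float        # hard constraint: what must not happen
    approval_threshold: float    # amounts above this need human sign-off
    context_gaps: list = field(default_factory=list)  # missing context, logged

    def route(self, decision_type: str, amount: float) -> str:
        """Decide who handles a decision, enforcing explicit constraints."""
        if decision_type in self.human_owned:
            return "human"
        if decision_type not in self.ai_supported:
            # Unmodelled decision type: record the context gap and escalate,
            # rather than treating it as a model failure.
            self.context_gaps.append(decision_type)
            return "escalate"
        if amount > self.budget_ceiling:
            return "reject"            # hard constraint, never AI-overridable
        if amount > self.approval_threshold:
            return "human-approval"    # AI drafts, a human signs off
        return "ai"

policy = DecisionPolicy(
    ai_supported={"discount", "reorder"},
    human_owned={"layoff", "pricing-strategy"},
    budget_ceiling=50_000.0,
    approval_threshold=10_000.0,
)

print(policy.route("discount", 2_000.0))       # within limits: AI handles it
print(policy.route("discount", 20_000.0))      # above threshold: human approval
print(policy.route("layoff", 0.0))             # always human-owned
print(policy.route("vendor-switch", 5_000.0))  # unmodelled: escalate, log the gap
```

The point of the sketch is not the code itself but what it forces: the budget ceiling, the approval threshold, and the list of human-owned decisions all had to be written down before the system could run.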

None of this requires better models.

It requires organisations to make their own decision logic visible.

This is not a technology shift. It is an organisational one.

AI does not fail because it lacks intelligence. It fails because it is deployed into systems that do not surface their own context.


Naming the phase

We are at the beginning of execution reality.

Intelligence scales quickly. Context does not.

And in complex organisations, that difference determines whether AI becomes leverage, or just another layer of noise.

Until next time,

Stay adaptive. Stay strategic. And keep exploring the frontier of AI.

Fabio Lopes

XcessAI


💡Next week: I’m breaking down one of the most misunderstood AI shifts happening right now. Stay tuned. Subscribe above.


Read our previous episodes online!

