The Variable Missing From Amit Soni's AI Value Creation Equation

Amit Soni put a rigorous framework in front of a lot of people who needed one.

His word equation, Exponential Growth = (Talent Density × AI Leverage) ^ Learning Velocity, cuts through the noise. It's precise, it's memorable, and it reframes the AI value creation conversation away from tool adoption and toward organizational capacity. If you lead a PE portfolio company or run a technology organization, that framing matters.

I've been thinking about it since I read it. And I think it's almost exactly right.

But there's a variable missing that determines whether the exponent does anything at all.

The Equation Works. Until It Doesn't.

Soni's framework holds up under scrutiny. Talent density and AI leverage are multiplicative inputs: if either approaches zero, the product collapses regardless of the other. That's correct. And treating Learning Velocity as the exponent is the real insight. Small improvements to learning rate compound in ways that linear inputs cannot.

The problem is what's left unexamined.

Learning Velocity is presented as a measurement, a rate of organizational adaptation that you can observe and optimize. But it isn't a free variable you can simply dial up. Learning Velocity is the output of something else. And until you name that something else, the equation doesn't tell you how to move the number that matters most.

What produces Learning Velocity? Not training programs. Not access to better tools. Not even talented people, or at least not talent alone.

Learning Velocity is produced by the willingness to experiment and the tolerance for experiments that fail.

Soni points toward the mechanisms: fast cycles, visible data, experiments that die quickly. But he treats them as operational choices rather than outputs of something deeper. The question his framework doesn't answer: what determines whether an organization is willing to run experiments at all?

The Sub-Equation Soni's Framework Needs

Here's the amendment I'd propose:

Learning Velocity = (Experiment Rate × Failure Tolerance) / Fear of Judgment

This isn't decorative. Each term does real work.

Experiment Rate is how frequently a team generates novel attempts: new approaches, runs that deviate from the standard process, hypotheses tested in production. It's a volume metric. You can't learn fast if you're not trying things fast.

Failure Tolerance is the organization's appetite for experiments that don't work. Not recklessness. The equation is a product, not a sum. Failure tolerance without experiments is just inertia. But experiments without failure tolerance produce a specific pathology: teams that prototype forever, polish endlessly, and never ship anything that could be wrong.

Fear of Judgment is the denominator. It doesn't add to learning. It divides it. When individuals believe that visible failure leads to social or professional consequences, they stop running experiments. They run safe plays instead. They optimize for looking competent rather than for becoming more capable. As Fear of Judgment approaches infinity, Learning Velocity approaches zero.

No amount of AI leverage recovers from that.
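The proposed sub-equation can be rendered numerically. This is a minimal sketch, not a measurement method: the function name and the unitless 0-to-2 scores are my own illustrative assumptions, chosen only to show how the denominator dominates.

```python
def learning_velocity(experiment_rate: float,
                      failure_tolerance: float,
                      fear_of_judgment: float) -> float:
    """Learning Velocity = (Experiment Rate x Failure Tolerance) / Fear of Judgment.

    All inputs are illustrative, unitless scores (roughly 0 to 2),
    not real metrics. A small positive floor on fear_of_judgment
    avoids division by zero.
    """
    return (experiment_rate * failure_tolerance) / max(fear_of_judgment, 1e-9)

# Hold experimentation capacity constant; raise only fear of judgment.
for fear in (0.5, 1.0, 2.0, 10.0):
    lv = learning_velocity(experiment_rate=1.2,
                           failure_tolerance=1.2,
                           fear_of_judgment=fear)
    print(f"fear of judgment = {fear:>5}: learning velocity = {lv:.2f}")
```

With identical experimentation inputs, learning velocity falls as fear rises, exactly as the division predicts: the same team, same tools, and a steadily shrinking exponent.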

The Psychological Safety Problem Is Not Soft

The research is not ambiguous.

Amy Edmondson's work at Harvard Business School, developed over more than 25 years and dozens of organizational settings, establishes psychological safety as the primary predictor of team learning behaviors. Her most cited finding isn't about happiness or engagement. It's about error reporting: teams with higher psychological safety didn't make fewer mistakes. They reported more of them. Because they reported more, they learned faster, adapted earlier, and outperformed teams that appeared to be running clean.

Google's Project Aristotle, which analyzed 180 internal teams over two years, found psychological safety to be the most important of the five dynamics the study identified, more important than individual talent levels or team composition.

The variable isn't soft. It's the load-bearing one.

Here's the implication Soni's framework doesn't fully surface: you can have exceptional Talent Density and best-in-class AI Leverage and still produce near-zero Exponential Growth, because high Fear of Judgment collapses the exponent before the multiplication ever happens.

This is not a technology problem. It's a culture problem. And it's the specific culture problem that PE-backed portfolio companies, under intense performance pressure, are most likely to create.

Why This Matters Specifically for PE-Backed Organizations

Soni's target audience is operating partners and portfolio leadership. That makes the omission more consequential, not less.

Private equity environments are not neutral on failure tolerance. Performance pressure is explicit, timelines are compressed, and leadership accountability is high. Those conditions are not inherently corrosive. Pressure creates focus, and focus can accelerate learning. But the type of pressure matters.

Fear of judgment, specifically fear that visible failure will be interpreted as incompetence rather than as information, is the precise failure mode that performance-oriented environments tend to produce. Leaders demonstrate competence by being right. Teams learn to surface wins and suppress struggles. Experiments that could generate insight get killed before they generate data, because the data might be unflattering.

The result isn't a team with low AI leverage. It's a team with high AI leverage and nowhere to apply it productively, because the organizational immune system is rejecting the learning loop.

Experiment Rate falls. Failure Tolerance compresses. Fear of Judgment rises. Learning Velocity sinks toward 1.0, and past that, toward zero.

And when the exponent is 1.0, even strong talent density and AI leverage produce linear returns. You've eliminated the compounding entirely.
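The collapse of the exponent is easy to see in a toy calculation. The numbers below are illustrative assumptions, not benchmarks; the point is only the shape of the curve.

```python
def projected_growth(talent_density: float,
                     ai_leverage: float,
                     learning_velocity: float) -> float:
    """Exponential Growth = (Talent Density x AI Leverage) ^ Learning Velocity.

    Inputs are unitless, illustrative scores. When learning_velocity
    is 1.0 the exponent is inert and growth equals the bare product.
    """
    return (talent_density * ai_leverage) ** learning_velocity

# Strong inputs either way: talent 3.0, AI leverage 2.0 (product 6.0).
for lv in (1.5, 1.0, 0.5):
    print(f"learning velocity = {lv}: growth = {projected_growth(3.0, 2.0, lv):.2f}")
```

At a learning velocity of 1.5 the same inputs return roughly 14.7; at 1.0 they return exactly the product, 6.0; below 1.0 the exponent actively dampens them. High fear of judgment doesn't just stop the compounding. Past the linear point, it taxes the inputs themselves.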

What Operating Partners Should Do About It

The diagnostic question isn't "how do we increase Learning Velocity?" That's tautological. The diagnostic question is: what's the current Fear of Judgment level, and who set it?

In most cases, the answer is leadership. Not through explicit policy: almost no organization has a written rule against failure. Through the pattern of responses to visible mistakes. How does the executive team respond when a project misses? Does the post-mortem look for lessons or look for someone to hold accountable? Is being wrong in a meeting treated as a data point or a credibility event?

Three signals worth tracking:

  • Does failure travel up the org chart? If bad news reaches leadership quickly, Fear of Judgment is low enough for the signal to move. If bad news arrives late and polished, the fear is suppressing the information.
  • Are experiments being run in the open or in hiding? Teams with high Fear of Judgment prototype in stealth, revealing work only when it's defensible. Low Fear of Judgment teams run experiments visibly and invite early feedback.
  • What gets celebrated versus what gets tolerated? Organizations that only celebrate wins teach their people to manufacture wins rather than generate learning. The highest-velocity teams celebrate instructive failures explicitly.

None of this is operationally complex. All of it requires leadership to go first.

The Amended Framework

Soni's equation remains correct. I'd extend it:

Exponential Growth = (Talent Density × AI Leverage) ^ Learning Velocity

where: Learning Velocity = (Experiment Rate × Failure Tolerance) / Fear of Judgment

The original equation tells you what compounds. The sub-equation tells you what enables the compounding. Both matter. You can't optimize the exponent without understanding what generates it.

The organizations that will unlock the full value of AI-native ways of working are not the ones with the best talent or the best tools. They're the ones that have built cultures where trying something that doesn't work is treated as a contribution, not a liability.

That's not a mindset exercise. It's an organizational design choice. And it starts at the top.


This article is written in response to Amit Soni's "The Math Behind AI-Driven Value Creation," published on Substack at wordequations.substack.com. Soni adapted this framework from OpenAI Academy's original equation. I recommend reading his original piece. The framework is worth your time.
