Trust Isn't a Communications Problem
Trust: Claimed or Assessed?

Open any AI press release this week and count how many times you see the word "trust."

It's in the product announcements. It's in the governance frameworks. It's in the executive keynotes, the analyst reports, and the regulatory proposals. Every company building AI systems wants you to know that trust is central to their approach. Trust is their priority. Trust is embedded in their process.

And almost none of them can tell you what they mean by it.

This isn't a criticism of any single company. It's a diagnosis of what happens when an essential concept gets used so broadly that it loses the precision needed to evaluate its worth. "Trust" in the AI conversation has become a reassurance word — something organizations say to signal good intentions rather than describe a measurable condition. It functions like "quality" did in the manufacturing era or "innovation" does now: everyone claims it, nobody defines it, and the word does more work as marketing than as evaluation.

The result is that the most important concept in AI deployment has become the least specific. And that's not a semantic problem. It's a structural one.


When I wrote about trust as currency earlier in this newsletter series, I argued that trust operates as an investment; it's something earned, spent, and squandered through specific architectural and business-model choices. The Trust Exchange between people, providers, and policies is either balanced or it isn't. That's a measurable condition, not a feeling.

But here's what I've observed since: the broader conversation about trust in AI hasn't developed that precision. If anything, the word has gotten more diluted, not less.

When a company says "we're building trustworthy AI," what does that actually mean? Does it mean the system is transparent about how it makes decisions? Does it mean users can modify its behavior when it doesn't serve their goals? Does it mean the business model aligns provider incentives with user interests? Or does it mean the company added a trust-and-safety team and published a set of principles?

These are not the same thing. But without diagnostic language to distinguish between them, they all get filed under "trust" — and the word stops doing any analytical work at all.

This is the gap I keep coming back to. Organizations aren't failing to care about trust. They're failing to evaluate it with any specificity. And that failure isn't caused by negligence; it's caused by the absence of a shared vocabulary precise enough to make trust assessable rather than just claimable.


I think about this the way I think about the difference between saying a building is "safe" and having structural engineering criteria to assess whether the foundation can hold the load. No one would accept "we prioritize safety" as a substitute for a structural assessment. But that's exactly what's happening with trust in AI. The word is doing the work of the assessment without the assessment actually taking place.

The Six Pillars exist, in part, to restore that precision. Trust-Centered Design doesn't ask whether an organization values trust. It asks whether the system's architecture treats trust as a renewable or extractable resource. It evaluates along specific dimensions — whether users can understand and modify system behavior, whether incentives align structurally rather than aspirationally, whether people can exit the relationship without losing their data or digital identity.

Those aren't abstract questions. They're the difference between a trust claim and a trust condition.


This is the conversation I'll be having this Friday with Dr. Mike Smith, PCC, Dr. Sergei Kladko, PhD, and Alina Timofeeva at the Women Lead Congress Ethical AI & Leadership Series. The panel topic is "Trust is the New Currency in the AI Era," and if that framing sounds familiar, it should. It's the question I've been working through for more than a year now, and it's the question most organizations are still answering with sentiment rather than structure.

The distinction I'll be bringing to that conversation is the one I keep returning to in this newsletter: trust in AI isn't a communications problem to be managed. It's a structural condition to be designed. And until the vocabulary catches up to the stakes, we'll keep hearing organizations claim trust while the architecture underneath tells a different story.

Trust collapses into a buzzword when companies treat it as a messaging asset instead of a behavioral condition. In practice, users don't trust AI because a vendor says they should; they trust it when the system consistently reduces uncertainty, behaves predictably under edge cases, and doesn't force them into verification mode. That's why vague trust language backfires: it signals persuasion instead of reliability. The only version that matters is operational trust, where the product's behavior earns delegation without users having to talk themselves into it.
