The Intelligence Loop

How I Accidentally Solved a Piece of the AI Alignment Problem While Just Trying to Get Better Answers

Views are my own.


There's a moment most people have had with AI that nobody talks about honestly.

You type a detailed, carefully worded request. The AI responds confidently. And what comes back is... almost right. Impressive, even. But not quite what you meant. So you type more. Clarify. Correct. Try again. Thirty minutes later you have something usable, but you're not sure if the AI got smarter or you just got better at managing it.

I had that moment. Then I had it again. And again.

Most people blamed the AI. I blamed the handoff.


The Real Problem Is the Gap Between Intent and Input

In machine learning, every model is governed by an objective function — the mathematical definition of what "good" looks like. The model optimizes relentlessly toward it. If the objective function is wrong, the model optimizes for the wrong thing, sometimes in ways that are subtle, sometimes catastrophic.

This failure mode is an instance of Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.

The canonical AI example: a model rewarded purely for user engagement learns to generate outrage, because outrage drives clicks. It's doing exactly what it was told — just not what was meant.
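
To make that concrete, here is a toy sketch of an optimizer faithfully maximizing the proxy. Nothing here is from a real system; the candidate responses and their scores are invented purely for illustration:

    # A toy illustration of the engagement example above: the optimizer
    # does exactly what the objective says, not what was meant.
    candidates = [
        {"text": "careful, accurate answer", "engagement": 0.4, "usefulness": 0.9},
        {"text": "confident half-truth",     "engagement": 0.6, "usefulness": 0.3},
        {"text": "outrage bait",             "engagement": 0.9, "usefulness": 0.1},
    ]

    # Optimize the proxy (engagement) versus the real intent (usefulness).
    by_proxy  = max(candidates, key=lambda c: c["engagement"])
    by_intent = max(candidates, key=lambda c: c["usefulness"])

    print(by_proxy["text"])   # "outrage bait": technically responsive
    print(by_intent["text"])  # "careful, accurate answer": what was meant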

I realized something uncomfortable: I was doing the same thing to my AI.

Every time I typed a prompt, I was handing the AI a proxy for what I actually wanted. Sometimes the proxy was good. Often it wasn't. And the AI — being exactly as powerful as it is — would optimize for my proxy with full confidence, producing output that was technically responsive but substantively off.

The problem wasn't the AI's intelligence. It was my objective function.


The Insight: Let the AI Ask the Questions

My first instinct was to engineer better prompts. That helped, but it had a ceiling — and that ceiling was me. I can only articulate what I'm already conscious of wanting.

So I flipped it.

Instead of me figuring out what to ask, I asked the AI to figure out what to ask me.

What emerged — through iteration, frustration, and genuine surprise — was a methodology I now call Multiple Choice Command Mode, or MCCM. The framework is simple:

  • I state a task or intention, however rough
  • The AI generates targeted clarifying questions in multiple choice format
  • I select answers — quickly, with minimal friction
  • The AI adjusts its understanding and either proceeds or asks another round
  • A clarity threshold I set in advance governs when it's ready to execute

The multiple choice format wasn't arbitrary. Free-form answers forced me to think like the AI — to anticipate what it needed. Multiple choice lets the AI think like itself and ask questions I would never have thought to ask. Questions that are obvious to a machine and invisible to a human. That asymmetry is the whole point.
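
For readers who think in code, here is a minimal, self-contained sketch of the loop in Python. The questions, the clarity estimate, and the human's picks are hard-coded stand-ins; in a real session the questions and the confidence score come from the model. Only the control flow, clarify until a preset threshold is met and only then execute, is the methodology itself:

    CLARITY_THRESHOLD = 0.85  # set by the human, in advance

    # (question, choices) pairs the AI might generate for a rough task.
    QUESTIONS = [
        ("Who is the audience?",   ["executives", "engineers", "general public"]),
        ("How long should it be?", ["one paragraph", "one page", "full report"]),
        ("What tone?",             ["formal", "conversational", "persuasive"]),
    ]

    def estimate_clarity(answers):
        # Stand-in: clarity grows with each answered question. A real
        # system would have the model score its confidence in the spec.
        return len(answers) / len(QUESTIONS)

    def run_mccm(task, picks):
        answers = []
        for (question, choices), pick in zip(QUESTIONS, picks):
            answers.append((question, choices[pick]))      # human selects, low friction
            if estimate_clarity(answers) >= CLARITY_THRESHOLD:
                break                                      # clarity gate reached
        return {"task": task, "spec": answers}             # only now ready to execute

    print(run_mccm("Write a summary of our Q3 results", picks=[0, 1, 0]))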


Why This Is an Alignment Technique, Not Just a Better Prompt

Here's where it gets interesting.

The clarity threshold, the level of confidence I require before execution, is functionally a human-defined objective function. I am specifying, in advance, what "good enough to proceed" means. The system then optimizes toward that definition rather than a proxy.

This is the inversion that Goodhart's Law calls for: instead of handing the machine a proxy and hoping it aligns with intent, MCCM forces intent to be clarified and confirmed by the human before optimization begins.

The AI isn't guessing what I want. It's extracting what I want — through structured dialogue — before it acts.

This matters beyond productivity. As AI systems become more autonomous, the gap between what we intend and what we specify becomes genuinely dangerous. MCCM is a small but concrete demonstration that this gap can be engineered, not just hoped away.


The Recursive Part: I Used AI to Build This

Here's what I find most compelling about this whole story — and the part I almost missed.

MCCM wasn't designed in a whiteboard session. It was iterated into existence through the very process it describes. I used AI interaction to design better AI interaction. The methodology is a product of itself.

That recursion is not accidental. It points to something important:

When humans and AI interact well, the human gets smarter. A smarter human generates better intent. Better intent produces better AI output. Better output further expands human capability.

I call this The Intelligence Loop.

It's not AI replacing human intelligence. It's AI amplifying it — with the human always at the center, always the initiating force. The loop begins and ends with human intent. The AI is the multiplier, not the origin.

This framing matters because it directly counters the most common fear about advanced AI: that it will eventually outpace and displace human thinking. The Intelligence Loop is a counterproposal. Not humans versus AI. Not AI instead of humans. Humans through AI, recursively, with intent as the governing constant.


Where This Goes

MCCM is one implementation of a much larger idea.

In multi-agent AI systems — where models coordinate with each other to complete complex tasks — there is no established protocol for how agents clarify intent between themselves. The same gap that exists between humans and AI exists between AI and AI. A structured clarity layer at those handoffs could prevent the compounding misalignments that make agentic systems brittle and unpredictable.
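
To make the idea tangible, here is a purely hypothetical sketch of where such a clarity layer could sit in an agent-to-agent handoff. As the paragraph above notes, no such protocol exists today; every name below is invented:

    from dataclasses import dataclass, field

    @dataclass
    class IntentSpec:
        task: str
        clarifications: list = field(default_factory=list)  # (question, answer) pairs
        clarity: float = 0.0                                 # sender's confidence, 0..1

    def accept_handoff(spec: IntentSpec, threshold: float = 0.9):
        """Downstream agent refuses under-specified work instead of guessing."""
        if spec.clarity < threshold:
            # Reject and push clarifying questions back upstream, the
            # same move MCCM makes between human and AI.
            return False, ["What does 'done' look like for this task?"]
        return True, []

    ok, questions = accept_handoff(IntentSpec(task="refactor billing module", clarity=0.6))
    print(ok, questions)  # False, with a question sent back upstream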

In enterprise AI deployment, auditable intent — a record of what was specified, at what threshold, before execution — could transform AI from a black box into an accountable system.
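
One possible shape for such a record, sketched in Python; the field names are illustrative, not a standard:

    import json, time

    record = {
        "task": "Summarize Q3 results for the board",
        "clarifications": [
            {"q": "Who is the audience?", "a": "executives"},
            {"q": "How long?", "a": "one page"},
        ],
        "clarity_threshold": 0.85,     # what was required, set in advance
        "clarity_at_execution": 0.91,  # what was achieved before acting
        "timestamp": time.time(),
    }

    print(json.dumps(record, indent=2))  # logged before the system acts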

And at the furthest horizon: if we are serious about building AI that remains aligned with human values as it becomes more capable, we need formal methods for specifying those values clearly before systems act on them. MCCM is a primitive version of that. A starting point.


An Invitation

I'm not presenting this as a finished theory. I'm presenting it as a working proof of concept with implications I haven't fully mapped yet.

What I know is this: the way most people interact with AI is leaving enormous value on the table — not because the AI is limited, but because the handoff is broken. Fixing the handoff doesn't require more powerful models. It requires better methodology.

That methodology is buildable. It's teachable. And it starts with a deceptively simple question:

What if we let the AI ask the questions?

Here is an example of how I let the AI ask the questions.

https://claude.ai/public/artifacts/fc2756bc-c5dc-4f9b-b852-6ce7af28db5e


Pete Maloney is an independent AI practitioner exploring the intersection of human intent, AI methodology, and the future of human-machine collaboration.

