We Optimized the Wrong Thing

A systems thinking view of AI-assisted development


Think about the last time you wrote something complex by hand.

Not the typing. The struggle. The part where you hit an edge case and realized you didn't understand the requirements. The conversation with a domain expert that followed. The moment a better abstraction emerged while you were stuck.

That struggle was not overhead. It was the mechanism that built your understanding of the system.

Now think about what AI codegen removes.

It removes the typing. But it also removes the getting-stuck part. The exploration. The dead ends that taught you what not to build.

The codebase can now grow without that learning happening.


The Old Loop

In the old workflow, two things grew together: your domain knowledge and the code. They were coupled. You could not expand the codebase without proportionally deepening your understanding.

Writing code forced confrontation with edge cases. Edge cases revealed gaps in understanding. Gaps triggered dialogue with domain experts. New concepts emerged during the act of writing. Obsolete code got recognized and removed.

The codebase was a byproduct of understanding. Not the primary output.

This was a balancing loop. It kept the system healthy. It also felt like friction.

In systems thinking, a balancing loop keeps things stable. It connects "stocks" (anything that accumulates over time: knowledge, code, technical debt) so they regulate each other. When one grows, the other responds. Remove the connection and one can inflate while the other stagnates.
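
The coupling is easier to see as a toy model. Here is a minimal sketch with two stocks and made-up growth rates; nothing is measured, the numbers only illustrate the structure.

```python
# Two stocks: domain understanding and codebase size.
# Coupled: code can only grow as fast as understanding, and writing feeds knowledge back.
# Decoupled: code inflates while understanding stagnates.

def simulate(coupled: bool, steps: int = 10):
    understanding = 1.0   # stock: what the team actually knows
    codebase = 0.0        # stock: accumulated code (arbitrary units)
    for _ in range(steps):
        if coupled:
            growth = understanding          # you can only build what you understand
            understanding += 0.5 * growth   # the struggle of writing feeds knowledge back
        else:
            growth = 5.0                    # generation speed, independent of knowledge,
                                            # and no learning flows back
        codebase += growth
    return understanding, codebase

print("coupled:  ", simulate(coupled=True))    # both stocks grow together
print("decoupled:", simulate(coupled=False))   # code grows, understanding stays at 1.0
```

In the coupled run the two numbers track each other; in the decoupled run the second grows while the first never moves. That is the broken balancing loop in miniature.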

Donella Meadows called this kind of structural breakdown a "systems trap" in her book Thinking in Systems. You don't notice it until the consequences surface. By then, the fix is expensive.

[Figure: Domain knowledge and codebase grow together]

The Break

AI codegen removed the friction. It also removed the coupling.

The codebase can now grow independently of domain knowledge. The effort that once forced learning is gone. What remains is output without the exploration that shaped it.

This creates a delay structure that fools us.

Functional problems surface in hours. "It doesn't work."

Integration problems surface in weeks. "It doesn't fit."

Conceptual problems surface in months. "It embeds the wrong understanding."

The third layer is where the trap closes. By then, neither you nor the AI can reconstruct what's missing.


The Pattern at Two Scales

I've watched this repeat across projects.

Small: I asked an AI to update all fonts in a table. It updated them column by column. Four separate actions for one global intent. The model could not see the system. Only the immediate problem.

Large: I let AI build a 9,775-line codebase. When I challenged the architecture, it defended a 794-line main.py with six technical objections. Every objection sounded sophisticated. Every one was wrong. (Full experiment here.)

The mechanism is identical. Context is finite.

As output accumulates, both human cognitive context and AI context windows degrade. Fixes become local rather than systemic. Each fix creates unintended consequences. New bugs emerge. A feedback loop, but amplifying the problem instead of containing it.
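
A rough sketch of that amplifying loop, again with invented numbers: as the codebase outgrows the available context, a smaller share of the system is visible to whoever is fixing it, and local fixes spawn defects in the parts that were out of view.

```python
# As the codebase outgrows the context (human or model), fixes become local.
# Local fixes break unseen parts, and every fix adds more code,
# shrinking the visible share further.

def fix_cycles(codebase_size: float, context_limit: float = 1000.0, cycles: int = 8):
    defects = 10.0
    for i in range(cycles):
        visible = min(1.0, context_limit / codebase_size)  # share of the system in context
        fixed = defects * visible                          # only visible defects get addressed
        side_effects = 2.0 * fixed * (1.0 - visible)       # local fixes break unseen parts
        defects = defects - fixed + side_effects
        codebase_size += 5.0 * fixed                       # each fix adds more code
        print(f"cycle {i}: size={codebase_size:6.0f}  defects={defects:5.1f}")

fix_cycles(codebase_size=2000.0)  # past the point where everything fits in context
```

Once the visible share drops below one half, each cycle creates more defects than it removes: a reinforcing loop instead of a balancing one.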


The Trap

Here's why this is hard to escape.

Code only shows what it does. Never why.

The "why" lives elsewhere. In the analysis that preceded it. In the conversations that shaped it. In the developer's head who wrote it.

When you write code by hand, you can grep your own memory. You were there for the dead ends. You remember what you rejected and why.

When AI generates code, that reasoning was never created. The path went straight from prompt to implementation. The exploration that would have built the "why" got skipped.

And you cannot cheaply retrofit it.

The old rule still applies: changing something in design costs one dollar, changing it in production costs a hundred. The "why" validated against a sketch is cheap. The "why" validated against a complete codebase is expensive.

But with AI codegen, there's a worse problem. Each prompt layered onto an existing solution doesn't sharpen the concept. It dilutes it. The model responds to what's there, not what should have been. The core intent gets buried under patches.

You're not refining. You're drifting.

The developer didn't build deep understanding because they skipped the struggle. The AI never had it. When requirements shift, neither party can reconstruct what's missing.

This is not a tooling problem. It is a systems problem.


The Addiction

Meadows describes another trap: addiction. A short-term fix that weakens the system's native capacity to solve the problem. The more you use the fix, the more you need it.

AI codegen fits this pattern uncomfortably well.

It provides short-term relief. Code ships. The immediate problem disappears.

But native capacity weakens. The skill was never syntax. It was understanding requirements and anticipating future paths. That skill requires practice. Each time you work through a problem manually, you see it from a different angle. Skip the practice and the muscle atrophies.

The dose escalates. AI-generated complexity requires more AI to manage. The 9,775-line codebase needs AI to navigate it. The intervention creates demand for more intervention.

Withdrawal becomes painful. Remove AI tools from a team that relies on them. Productivity craters. They cannot return to their old pace. The native capacity is gone.

The usual objections don't hold up.

"We can't keep up without AI." This assumes code is the output we can't produce fast enough. But code was the byproduct. Understanding was the output. AI doesn't help us understand faster. It helps us skip understanding. That makes complexity worse, not better.

"Tools have always augmented us." Writing externalized memory. Calculators externalized computation. They didn't atrophy the underlying skill. AI codegen externalizes the practice that builds understanding. Different category.

"The skill is shifting." To what? Directing AI? That doesn't replace domain understanding. It assumes someone else has it. Eventually, no one does.

The only escape from an addiction trap, per Meadows, is to strengthen the native capacity while gradually reducing the intervention.

[Figure: The intervention weakening the native capacity]

Speed was never the bottleneck. Understanding was.

We optimized the wrong thing.
