Beyond Code: The Next Layer

Every abstraction transition felt like loss from the inside. From the outside, it looks like progress.

We may be in one right now.

But it’s stranger than the previous ones.

The Clean Story

There’s a story about programming languages that everyone in software knows, even if they’ve never heard it stated explicitly.

Each generation of abstraction made the one below it invisible.

Humans started by writing machine code directly. Assembly made that bearable. C made assembly a compiler concern. Higher-level languages, managed runtimes, garbage collection, each step moved humans further from the machine and closer to the problem.

The pattern felt consistent. A new level of abstraction emerges, the level below gets demoted, expertise migrates upward.

“You can’t trust a compiler to write better assembly than a skilled human” was a real argument, made seriously, by serious people. Then it became obviously wrong. Then it became something people said about the transition after that.

So the tempting read of the current moment is simple: we’re adding a new rung. Intent sits above code the way code sits above assembly. The abstraction level rises again. Code gets demoted. Expertise migrates upward.

Clean. Legible. Consistent with history.

But the story breaks down if you look closely.

After a certain point, the levels stopped stacking neatly. Python doesn’t become C. Java bytecode isn’t assembly with friendlier syntax. The layers coexist. They compile downward, but they aren’t just translations of each other.

C didn’t disappear. It just stopped being where most human attention lives.

Which means the real question isn’t whether code disappears. It almost certainly won’t.

The real question is whether code remains the surface where human judgment lives. Whether it stays the place where expertise concentrates. Where decisions get made.

And there’s a harder version of that question underneath.

Because all previous layers were about representation. Different ways of expressing instructions for a machine to execute. Binary, assembly, C, bytecode, all of them describe how something should run, at different levels of abstraction.

Intent is different.

Intent is about meaning. What you actually want. Why it matters. What guarantees the system needs to provide.

That’s not obviously a higher-level programming language. It might not belong on the same ladder at all.

The ladder metaphor helps orient us. But it may also be pointing in the wrong direction.

That uncertainty is worth holding, rather than resolving too quickly.

The Engineers Who Were Already There

The transition isn’t coming from nowhere.

If you look carefully at what the best engineers have always done, you find something uncomfortable: they were already doing intent capture.

Informally. Incompletely. Often in ways that made them look slow.

The engineer who reads a ticket and comes back with questions instead of an estimate. Who challenges the assumption embedded in the requirement instead of implementing it. Who says, “I can build this, but I don’t think this is what you actually want.”

The product manager who sits with a request long enough to realize that what was said and what was meant are different things, and resolves that gap before anything gets written down.

These people were doing the work of intent interrogation long before anyone named it. They were the ones catching wrong assumptions early. The ones who saw that a two-sentence ticket contained a dozen unstated decisions that needed to be surfaced.

And organizations spent years managing that behavior out.

The mechanism was simple. Once work is measured in story points and tickets closed, anything that happens before a ticket exists becomes invisible by definition. The engineer who asks three questions slows the sprint down. The engineer who accepts the ticket and starts coding moves it forward. The system rewards the second.

The interrogation step, the part where intent is clarified, challenged, sometimes rewritten entirely, didn’t just fail to be measured. It became locally irrational.

So it got removed. Not because anyone intended that outcome. But because the incentives made it the correct short-term decision.

There’s a clean proof of what happens when you remove that layer. And it predates LLMs by decades.

The Outsourcing Lesson

In the 2000s, a large part of the industry ran an experiment.

The premise was simple: coding is a separable layer. You write a complete specification, hand it off, and get working software back. Clean division of labor. The expensive people think. The cheaper people implement.

It didn’t work.

Not universally, not without exceptions. But as a general model, it was far more painful and expensive than expected. Projects slipped. Rework was constant. The final system often didn’t match what was intended.

The failure wasn’t about talent. It was structural.

The spec was never actually complete. And it was never complete because the people who could have completed it had always just written the code instead. The domain knowledge, the understanding of why the system needed to behave a certain way, what edge cases mattered, what constraints were real, lived in the same place as the implementation.

The translation from intent to code was lossy. But the translator was trustworthy, because they understood both sides.

Force the separation, and the gap becomes visible. The implementation follows the spec faithfully. The spec is incomplete. The result is wrong.

Coding was never the separable layer. The domain knowledge and the implementation were entangled. And that entanglement was doing real work.

LLMs expose the same gap. The difference is speed, and silence.

An offshore team could push back. A confused engineer could ask for clarification. The feedback loop was slow, but it existed. The gap surfaced as friction.

An LLM doesn’t push back. It reads the incomplete intent, fills the gaps with plausible assumptions, and produces code that looks correct. The gap doesn’t show up during implementation. It shows up later, when an edge case fails in production.

The metric improves. The system degrades.

The lesson wasn’t that separation is impossible. It was that separation without explicit intent fails.

LLMs break the old equilibrium. They separate execution from understanding by default, but without fixing the thing that made the separation hard in the first place.

So the question shifts. Not whether intent can be separated from implementation. But what happens if it finally is.

When the Primary Artifact Shifts

Assume the intent problem gets solved well enough.

Not perfectly. But well enough that explicit, structured intent becomes the artifact humans own, and software generation becomes a downstream step.

When the primary artifact shifts, everything organized around the previous one shifts with it.

Testing stops being about code correctness. It becomes about spec completeness. A failing test is no longer a code bug, it’s a spec failure. The question changes from “what did the code do wrong?” to “what did the spec fail to account for?” The entire discipline reorients around that shift.
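To make that reorientation concrete, here is a minimal sketch, with all names and the refund policy invented for illustration, of what a spec-completeness check might look like. Instead of asserting what one function returns, it sweeps the input space for cases the stated intent never addressed, so a failure points at a gap in the spec rather than a bug in the code.

```python
# Hypothetical example: explicit intent for a refund policy,
# interrogated for completeness rather than unit-tested line by line.
# All names and rules here are illustrative, not from any real system.

from dataclasses import dataclass

@dataclass
class RefundIntent:
    """Explicit, structured intent: what the system must guarantee."""
    max_days: int = 30           # refunds allowed within 30 days
    restocking_fee: float = 0.1  # 10% fee after the first 14 days

def refund_amount(price: float, days_since_purchase: int,
                  intent: RefundIntent) -> float:
    """Generated implementation (stands in for LLM output)."""
    if days_since_purchase > intent.max_days:
        return 0.0
    if days_since_purchase > 14:
        return price * (1 - intent.restocking_fee)
    return price

def check_spec_completeness(intent: RefundIntent) -> list[str]:
    """A 'test' that interrogates the spec, not the code.

    Each probe is an input the two-sentence ticket never mentioned.
    A surprising result is a spec gap, not a code bug.
    """
    gaps = []
    # The intent says nothing about negative prices, so the
    # implementation happily produces a negative refund.
    if refund_amount(-10.0, 5, intent) != 0.0:
        gaps.append("intent says nothing about negative prices")
    # The intent never pins down whether day 14 itself is fee-free.
    if refund_amount(100.0, 14, intent) != 100.0:
        gaps.append("day-14 boundary is ambiguous in the intent")
    return gaps

print(check_spec_completeness(RefundIntent()))
```

Run against this sketch, the check reports the negative-price gap: the implementation is faithful to the intent, and the intent is what's incomplete.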

Seniority changes too, but not in the direction most people assume. Good senior engineers already had domain expertise. The craft layer just allowed them to encode it implicitly: in edge cases handled, abstractions chosen, decisions they refused to defer. The code carried the understanding.

If code becomes invisible, generated, unread, unmaintained, that channel disappears. The knowledge doesn’t go away. But it has to become explicit. Articulated as constraints and guarantees that a system can work from.

That’s a different skill. Not because the knowledge wasn’t there. But because making implicit knowledge explicit is harder than embedding it in code. And it’s a skill the profession has systematically undervalued, because until now there was always a way to avoid it.
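As a small illustration of that difference, with every name and rule invented for the purpose, the same piece of domain knowledge can live implicitly inside an edge case or explicitly as a named constraint that downstream tooling can read:

```python
# Hypothetical contrast: the same domain knowledge, implicit vs. explicit.

# Implicit: the 'why' is buried in an edge case a senior engineer
# happened to handle. Nothing downstream can read the reasoning.
def normalize_email_implicit(email: str) -> str:
    local, _, domain = email.partition("@")
    if domain.lower() == "gmail.com":
        local = local.replace(".", "")  # why? tribal knowledge
    return f"{local.lower()}@{domain.lower()}"

# Explicit: the constraint is a named, inspectable artifact.
# A generation or verification step can derive behavior from it.
CONSTRAINTS = [
    {
        "id": "email-dot-insensitive",
        "applies_to": {"gmail.com"},
        "rule": "dots in the local part are not significant",
        "because": "Gmail ignores dots; duplicates break dedup",
    },
]

def normalize_email_explicit(email: str) -> str:
    local, _, domain = email.partition("@")
    domain = domain.lower()
    for c in CONSTRAINTS:
        if c["id"] == "email-dot-insensitive" and domain in c["applies_to"]:
            local = local.replace(".", "")
    return f"{local.lower()}@{domain}"

print(normalize_email_explicit("John.Doe@Gmail.com"))  # johndoe@gmail.com
```

Both functions behave identically. The difference is that only the second version leaves an artifact a reviewer, a test generator, or a code generator could interrogate without reading the implementation.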

Architecture shifts as well, in a more structural way. Conway’s Law, the observation that systems mirror the org structure of the teams that build them, isn’t a law of nature. It’s a consequence of code being the coordination surface. If systems are generated more directly from intent and constraints, that coupling loosens. Teams coordinate through constraints rather than architecture. The result might look unfamiliar, not because it’s more complex, but because it’s less shaped by organizational boundaries.

What Engineers Actually Love

There’s something missing from most conversations about AI and engineering.

The assumption is that what engineers do is write code. But that was never quite true.

For most engineers and especially the ones who become good, the code was never the point.

The point was the problem. Understanding something complex enough to build a solution. Taking something ambiguous and making it precise. Seeing a system that doesn’t yet exist and figuring out how it must behave.

Code was the tool for getting there. A good tool: writing code forces clarity, surfaces what you don’t understand, exposes decisions you deferred. For many engineers, implementation was how they thought.

But the tool was never the reward. The reward was the understanding.

This is why the ladder framing only gets you so far. Previous transitions moved us up a ladder of representations, different ways of telling a machine what to do. This shift isn’t just another step up. It’s a shift sideways. From representation to meaning. From describing execution to defining intent.

If engineering is defined as writing code, and AI writes code, then the conclusion is straightforward: engineers are being replaced.

But if engineering is about understanding problems and defining systems clearly enough that they can be built, then something else is happening. The layer being automated is the translation. The step between understanding and execution.

The engineers best positioned for what comes next aren’t the ones who were fastest at writing code. They’re the ones who slowed down before the keyboard. Who asked why before asking how. Who saw the implicit decisions and refused to leave them unresolved.

The ones who were told they were overthinking.

It turns out they weren’t. They were doing the work that the next layer makes unavoidable.

 

We thought we were optimizing for speed.

We might have been optimizing away the part that matters most.

Part 1 looks at why code might not be the right artifact anymore. Part 2 digs into why the tool that would fix this is so hard to build. Links below if you want to start from the beginning.

Part 1: https://www.garudax.id/pulse/beyond-code-rethinking-software-age-ai-vincent-ysmal-oi4te/
Part 2: https://www.garudax.id/pulse/beyond-code-tool-nobody-wants-build-vincent-ysmal-jqmhe/?trackingId=OFbMXx1PSjit%2F3uxVd4yrg%3D%3D