The Code Will Write Itself
https://www.vicert.com/success-stories

The Shift Happening Now

Something is changing in how we build software, and it's bigger than a new tool or framework.

For decades, the core activity of software development has been writing code. You think about what you want the system to do, then you translate that intent into instructions that machines can execute. The translation is the work. The code is the artifact.

But what if the translation became automatic? What if you could express what you want at a higher level—describing behaviors, defining constraints, specifying outcomes—and have the code generated for you?

This isn't hypothetical anymore. AI can read descriptions of software and produce working implementations. The implementations need verification. The technology has limits. But the direction is clear: the source of truth is moving up, and code is becoming output.

Understanding this shift matters, because it changes not just how we build things, but what skills we need, what problems we're solving, and how we think about quality.

A Pattern from History

This has happened before.

In the early days of computing, programs were sequences of numbers—machine code entered directly into memory. Then came assembly language, which let programmers write instructions in something closer to words. Then came compiled languages, which let them express ideas without managing registers. Each transition moved programmers further from the machine and closer to their intent.

The pattern has a shape: what we consider the source keeps rising.

At each level, the previous level becomes generated output. We don't read the machine code our compilers produce. We don't inspect the assembly. We trust the translation and work at a higher level.

If AI can reliably translate natural descriptions into code, the pattern continues. The description becomes the source. The code becomes output, the way assembly is today—something that exists, that you can inspect if needed, but that isn't the primary representation of what you're building.

This is disorienting for anyone who has spent their career writing code. Code wasn't just a skill; it was the work itself. But the assembly programmers probably felt similarly about registers. The work continues, at a higher level.

An Industry Built on a Cost Problem

Here's where this gets concrete.

The mobile development industry has spent billions of dollars on frameworks that let you write one codebase for multiple platforms. React Native, Flutter, and their predecessors exist because writing the same app twice—once for iOS, once for Android—is expensive.

That expense is real: two teams, two implementations, two sets of bugs, two maintenance burdens. Cross-platform frameworks attack this by enabling code sharing. The trade-off is that you give up some native quality.

But notice: the frameworks exist because duplication is expensive, not because shared code is inherently better.

If AI can generate a native iOS app from a description, and separately generate a native Android app from the same description, the economics change. You're not paying for implementation twice. You're paying for specification once. The duplication happens at the code level, but code is cheap to generate. The expensive human work—figuring out what to build—isn't duplicated.

You don't get one codebase for two platforms. You get two native codebases for the cost of specifying once.
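As a hedged illustration of what "paying for specification once" could look like, here is a minimal sketch in Python. The structure, field names, and the idea of a `login_screen_spec` are entirely hypothetical, not any real tool's schema; the point is that the shared artifact is the spec, while each platform's code is generated from it.

```python
# Hypothetical platform-neutral specification: the artifact a team would
# actually maintain. Everything here (keys, structure) is illustrative.

login_screen_spec = {
    "purpose": "Authenticate a returning user",
    "inputs": {
        "email": {"type": "string", "format": "email", "required": True},
        "password": {"type": "string", "min_length": 8, "required": True},
    },
    "behaviors": [
        "On success, navigate to the home screen",
        "On failure, show an inline error and keep the email field populated",
        "Disable the submit action while a request is in flight",
    ],
    "constraints": [
        "Never log the password",
        "Lock the account after 5 consecutive failed attempts",
    ],
}

def required_fields(spec):
    """Logic that operates on the spec itself stays shared; only the
    platform implementations are duplicated, and those are generated."""
    return sorted(
        name for name, rules in spec["inputs"].items() if rules.get("required")
    )

print(required_fields(login_screen_spec))  # ['email', 'password']
```

In this model, an iOS generator and an Android generator would each consume the same dictionary; the duplication lives in their output, not in anyone's working hours.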

This doesn't make cross-platform frameworks wrong. They solved a real problem with the tools available. But the problem itself may be dissolving, and that changes what solutions make sense going forward.

Native Without Identical

If one specification can produce multiple native apps, a subtle but important question emerges: should those apps be the same?

For years, cross-platform meant identical. The goal was pixel parity—an app that looked and felt exactly the same on iOS and Android. This made sense when shared code implied shared implementation. If you have one codebase producing both apps, similarity is natural.

But identical isn't always good.

iOS users expect different interactions than Android users. They've learned different gestures, different navigation patterns, different visual languages. An app that feels native on one platform often feels foreign on the other—not broken, just off.

When code is generated rather than shared, you can let each platform be fully itself. The iOS version uses iOS patterns. The Android version uses Android patterns. They don't look the same, and they're not supposed to.

What matters isn't sameness but alignment: both apps serve the same purpose, enforce the same rules, produce the same outcomes. They're unified by intent, not implementation.

This reframes what multi-platform means. Not "write once, run anywhere" but "specify once, generate native everywhere." Not pixel parity but purpose parity.

The Remaining Gate

So we have a technology that can translate descriptions into code, an economic shift that could dissolve the cross-platform cost problem, and a design philosophy that makes sense when code is generated. What's stopping this from being the new normal?

Trust.

Right now, developers don't trust AI-generated code enough to skip review. And they're right not to—AI makes mistakes, sometimes subtle ones. The code compiles and runs but has edge-case bugs that only surface in production.

We've crossed similar thresholds before. We don't verify our calculator results by hand. We don't read compiler assembly output. We don't check GPS routes against paper maps. Each of these represents a point where we decided the cost of verification exceeded the risk of error.

AI code hasn't reached that point. The track record is thin, the failure modes are subtle, the errors sometimes significant. We're still in a verification phase, building intuition about where AI gets things right and where it goes wrong.

But the threshold is visible. As error rates drop and testing improves, verification becomes less necessary. At some point—probably sooner than feels comfortable—developers will start trusting generated code the way they trust compiled code: not perfectly, but enough.

When that happens, the bottleneck moves definitively from writing code to specifying what the code should do.

Where This Leaves Us

Four shifts, interconnected:

The source rises. Code moves from being what we write to being what we generate. Specifications become the artifact we care about.

The cost problem dissolves. Cross-platform frameworks addressed expensive duplication. When duplication is cheap, that solution becomes less relevant.

Sameness gives way to alignment. When platforms can each have native implementations, we can optimize for quality on each rather than consistency across both.

Trust becomes the gating factor. The technology is arriving. Whether we use it depends on when we decide to rely on it.

These aren't predictions about some distant future. They're descriptions of a shift that's happening now, in stages, across the industry.

The Question That Remains

Here's what it comes down to:

If code becomes generated output, the scarce skill isn't writing code. It's specifying what the code should do.

This is harder than it sounds. Clear specification requires understanding what you actually want—not just the happy path, but the edge cases, the error conditions, the constraints that must hold, the behaviors that would be invalid. It requires thinking at the level of intent, not implementation.
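One way to make "thinking at the level of intent" concrete is to express a specification as executable properties: statements about outcomes that any implementation, generated or handwritten, must satisfy. The sketch below is hypothetical—`apply_discount` stands in for a generated function, and the stub implementation exists only so the properties can run—but it shows the shape of a spec that covers the happy path, the edge cases, the constraints, and the invalid behaviors the paragraph above describes.

```python
# Hedged sketch: behavior specified as properties, not procedure.
# 'apply_discount' is a stand-in for a function that would be generated;
# we stub one implementation here only so the properties can execute.

def apply_discount(price_cents, percent):
    """Stand-in implementation; in this model it would be generated."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price_cents - (price_cents * percent) // 100

def check_properties(impl):
    # Happy path: a known example.
    assert impl(10_000, 25) == 7_500
    # Edge cases: the boundaries must hold.
    assert impl(10_000, 0) == 10_000
    assert impl(10_000, 100) == 0
    # Constraint: the result is never negative and never exceeds the input.
    for price in (0, 1, 999, 10_000):
        for pct in (0, 1, 50, 99, 100):
            result = impl(price, pct)
            assert 0 <= result <= price
    # Invalid behavior is rejected, not silently accepted.
    try:
        impl(10_000, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range percent must be rejected")

check_properties(apply_discount)
print("all properties hold")
```

Writing `check_properties` is the intent-level work: it says nothing about how the discount is computed, only what must be true of any correct answer.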

We've always needed this skill. Product managers and designers work in this space. But programmers often started with a vague idea and figured out the details while writing code. The code was the specification, evolved through implementation.

When code is generated, that crutch disappears. You have to know what you want before you ask for it, or you'll generate the wrong thing.

The developers who thrive won't be those who master the syntax of a particular language. They'll be those who can articulate clearly what a system should do, who can define constraints and behaviors precisely, who can think at the level of purpose rather than procedure.

The code will write itself.

The question is whether we can clearly say what it should do.

That's always been the hard part. It's becoming the only part that matters.

AI First Development
We’re entering a new phase of our work.

Not “AI as a feature.” Not “AI strategy” on a slide. But building systems that are AI-native from the start. Agent-based, modular, personalized, fast to ship, and surprisingly powerful.

These builds don’t follow the old rules.
They don’t take 18 months.
They don’t need 40-person teams.
They don’t require pristine data or perfect architecture.

They just need a problem worth solving and a team that knows how to work with models, data, context, and users together. That shift - moving from classes and queues to prompts and retries - is what lets us tackle problems traditional systems couldn’t touch.
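The "prompts and retries" pattern above can be sketched in a few lines. This is a hedged, generic illustration: `call_model` is a stand-in for any provider's client, and the names and JSON contract are invented for the example, not a real API.

```python
# Hedged sketch of the "prompts and retries" pattern: call a model,
# validate the structured output against a contract, retry on failure.
# 'call_model' stands in for a real provider client; names are illustrative.

import json

def call_model(prompt):
    """Stand-in for a real model call; returns a JSON string."""
    return json.dumps({"summary": "ok", "confidence": 0.9})

def generate_with_retries(prompt, max_attempts=3):
    last_error = None
    for attempt in range(max_attempts):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
            # Validate the contract instead of trusting the output.
            assert isinstance(data.get("summary"), str)
            assert 0.0 <= data.get("confidence", -1.0) <= 1.0
            return data
        except (json.JSONDecodeError, AssertionError) as err:
            last_error = err  # In practice: tighten the prompt, try again.
    raise RuntimeError(
        f"no valid output after {max_attempts} attempts"
    ) from last_error

result = generate_with_retries("Summarize the release notes")
print(result["summary"])  # ok
```

The design choice is the interesting part: where a traditional system would enforce correctness with types and queues, this style enforces it with a validation loop around a probabilistic component.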

After decades of building software, the work feels new again. Not just because of the technology, but because of what’s now possible. The ideas we used to talk clients out of are back on the table - and we can ship them, fast.

It’s not magic. It still takes architecture, data access, compliance, security, UI, and edge-case handling.

