The Hidden Cost of ‘LLM-Native’ Programming
Introduction
I started using Claude Code back in May 2025, when it was still in beta. At the time, I shared some early thoughts, and ended that post with a quote from Jeff Dean: “Within a year, we’ll have AI systems performing at the level of junior engineers.”
Fast forward to today: not only has that prediction played out, but some are now declaring "the end of programming as we know it."
In this post, I reflect on my recent experience building a Claude Code Skill, and what that experience suggests about how programming may evolve from here.
Converting Confluence articles to Obsidian (the context engine)
I wanted to set up Obsidian locally as a context engine for my Claude Code-based AI workflows, but most of my knowledge base lives in Confluence. So I needed a way to convert those articles into Obsidian notes.
Naturally, I built an AI agent (Skill) to perform this conversion. Like most modern agents, it was just a markdown file describing steps in natural language. No traditional code. Just instructions.
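Claude Code Skills are defined by a markdown file (SKILL.md) with YAML frontmatter. A hypothetical sketch of what such a skill might look like (the name, description, and step wording here are illustrative, not the author's actual file):

```markdown
---
name: confluence-to-obsidian
description: Convert a Confluence article into an Obsidian note
---

# Confluence to Obsidian

1. Fetch the article from the Confluence REST API.
2. Validate the response.
3. Skip articles that duplicate an existing note.
4. Classify the article's content.
5. Choose tags based on the classification.
6. Write the note into the Obsidian vault as Markdown.
```

Note that every step is written in natural language, so the LLM interprets and executes each one at runtime.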
And it worked. I could migrate Confluence articles into Obsidian with minimal effort.
The token cost
Then I asked a simple question: what does this actually cost? I measured a few runs, and the per-article cost was modest on its own. But scale that to thousands of articles and you're looking at real money.
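The scaling math is simple to sketch. The numbers below are purely illustrative assumptions, not the author's actual measurements or current API prices:

```python
# Illustrative back-of-envelope cost math. All three inputs are
# assumed values for the sketch, not measured figures.
tokens_per_article = 50_000   # assumed total input+output tokens per run
price_per_million = 3.00      # assumed $ per 1M tokens
articles = 5_000              # assumed size of the knowledge base

cost = tokens_per_article * articles * price_per_million / 1_000_000
print(f"${cost:,.2f}")  # -> $750.00 under these assumptions
```

Even with conservative per-article numbers, the total grows linearly with the archive size, which is what makes the "LLM does everything" design expensive at migration scale.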
So I dug deeper.
Out of the six steps my agent performed, only one actually required an LLM: content classification. Everything else (validation, API calls, deduplication, tagging, file writing) was just software. But in the agent setup, the LLM was doing all of it.
Expensive. And unnecessary.
V2
So I rebuilt my agent. V2 was just a simple Python program. Same workflow, same outcome, but now the LLM handled only the one step that required it: classification.
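A minimal sketch of what such a V2 might look like, assuming a pipeline like the one described above. Every function name here is hypothetical, and the fetch and classify steps are stubbed (in a real program they would be a Confluence REST call and a single small LLM API call, respectively):

```python
# Hypothetical V2 sketch: the workflow as plain Python, with the LLM
# reserved for the single step that needs it (classification).
import hashlib

def fetch_article(page_id: str) -> dict:
    # Deterministic: a plain Confluence REST call (stubbed for the sketch).
    return {"id": page_id, "title": "Example", "body": "..."}

def is_duplicate(body: str, seen: set) -> bool:
    # Deterministic: content-hash deduplication. No LLM needed.
    digest = hashlib.sha256(body.encode()).hexdigest()
    if digest in seen:
        return True
    seen.add(digest)
    return False

def classify(body: str) -> str:
    # The ONE step that genuinely needs an LLM. In a real program this
    # would be a single, small model call; stubbed here.
    return "engineering"

def write_note(article: dict, tag: str) -> str:
    # Deterministic: render an Obsidian note with YAML frontmatter.
    return f"---\ntags: [{tag}]\n---\n# {article['title']}\n\n{article['body']}"

def migrate(page_ids):
    seen: set = set()
    notes = []
    for pid in page_ids:
        article = fetch_article(pid)
        if is_duplicate(article["body"], seen):
            continue  # skip duplicates without spending any tokens
        notes.append(write_note(article, classify(article["body"])))
    return notes
```

The design point is that tokens are spent only inside `classify`; validation, deduplication, and file writing cost nothing per run, no matter how many articles you process.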
The result? Far fewer tokens, and faster, more predictable runs.
Reflections
This led to a broader realization: while programming as we know it may be ending, what’s emerging in its place is a proliferation of natural language–based programming.
We're increasingly treating LLMs as the default execution engine. And with that, two distinct patterns are starting to emerge: Approach 1, where the LLM executes the entire workflow from natural language instructions, and Approach 2, where conventional code runs the workflow and calls the LLM only for the steps that genuinely need it.
Most agents today, and probably in the future, will follow Approach 1, because it lowers the barrier to automation dramatically. You no longer need to be a programmer to build useful systems; you just need intent.
But this approach comes with real tradeoffs: hallucinations, slow execution, high cost, context bloat, and weak guarantees, to name a few.
There is a second-order effect that's more interesting: as these agents run repeatedly, we'll start identifying stable patterns. And once a pattern stabilizes, we'll "compile" it into code, reducing token usage, improving performance, and eliminating many of these issues, including hallucinations.
In other words: Agents may become the prototyping layer for software.