Is code evolving or just mutating?

AI can generate code, arguably at a pretty decent quality. That’s not news anymore. The question that’s been forming in my head all week is different: how do we decide what should go into production? Writing code is not the hard part (arguably, it never was). The hard part is making sure the right code ships and the wrong code doesn’t. Right now, that selection problem is becoming the defining challenge of AI-assisted development, and last week definitely showed it.

Focus on code reviews

Last week, Anthropic announced that Claude Code now has a Code Review feature. When a PR opens, Claude dispatches a team of agents to hunt for bugs. Many people point out the irony of Claude reviewing its own code (insert Spider-Man meme). Boris Cherny, who created Claude Code, responded to one of the tweets with an interesting point: the more tokens you throw at a coding problem, the better the result. In other words, one agent can cause bugs while another catches them.
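
Boris’s “more tokens, better result” argument can be illustrated with a toy sketch (everything here is invented for illustration; this is not how Claude Code works internally): if each independent review pass only catches a fraction of the bugs, unioning the findings of many passes makes it increasingly unlikely that any bug slips through every one of them.

```python
import random

def review_pass(diff: str, seed: int) -> set[str]:
    """Hypothetical single review pass: returns the issues it happens to spot.
    Stand-in for one model call; each pass misses some bugs at random."""
    bugs = {"off-by-one", "null-deref", "race-condition", "sql-injection"}
    rng = random.Random(seed)
    return {b for b in bugs if rng.random() < 0.6}  # each pass catches ~60%

def aggregate_review(diff: str, passes: int) -> set[str]:
    """Union the findings of several independent passes: spending more
    tokens raises the chance every bug is caught at least once."""
    found: set[str] = set()
    for seed in range(passes):
        found |= review_pass(diff, seed)
    return found
```

With a 60% catch rate per pass, the chance a given bug survives 50 passes is about 0.4^50, which is effectively zero; the agents don’t need to be perfect, just independent.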

A feature release like this one is always an interesting signal. As I mentioned in my last newsletter, code reviews seem to be the next big challenge in AI adoption. With code velocity at an all-time high, manual reviews just don’t cut it anymore. Code review systems look like what comes next.

In fact, Claude is not the first tool to tackle this problem. Martian recently released a benchmark of code review tools. The tools were put on trial against real codebases and ranked on how thorough and precise they are. I highly recommend checking out the results.

I think code reviews are indeed an interesting problem space. They do seem to be the current bottleneck and potentially a great way to keep bugs at bay. But I also feel there’s more to it. Code review is not just about catching bugs. It has always been about knowledge transfer, about mentorship, about building a shared understanding of the codebase. I wonder what happens to all of that when an AI reviews your code. Will the knowledge stay in the model’s context window and then be gone? I wonder how the part where the team gets smarter will happen in the AI era.

Is code evolving, or just mutating?

Itamar Friedman, CEO of Qodo, published a piece that reframes the entire AI coding conversation through the lens of evolution. His argument is simple but profound: code generation is just mutation. Models write functions, agents generate pull requests, systems produce entire features - but from an evolutionary perspective, that’s just creating variation. What creates progress is selection. Evolution requires three ingredients: mutation, selection, and persistence. Without selection, mutations accumulate. With selection, improvement compounds.

He points out that software engineering has always had selection loops — tests, code review, CI pipelines, governance mechanisms. We just never described them that way. And now AI is dramatically increasing the mutation rate. Agents can understand unfamiliar codebases, propose architectural refactors, implement entire features. The rate of code production is skyrocketing. But the selection layer is not scaling at the same speed. The bottleneck in software development is moving from writing code to verifying it and selecting what survives.
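Itamar’s three ingredients can be sketched as a toy loop (the code and the scalar “quality” measure are mine, purely illustrative, not anything from his piece): mutation proposes variants, selection decides what persists. With a selection step, improvement compounds; without one, the system just drifts.

```python
import random

def mutate(quality: float, rng: random.Random) -> float:
    """Generate a variant: a random perturbation of the current quality."""
    return quality + rng.uniform(-1.0, 1.0)

def evolve(generations: int, selection: bool, seed: int = 0) -> float:
    """Toy model of Itamar's framing: with selection, only improvements
    persist and quality compounds; without it, mutations just accumulate."""
    rng = random.Random(seed)
    quality = 0.0
    for _ in range(generations):
        variant = mutate(quality, rng)
        if selection:
            quality = max(quality, variant)  # keep only what survives review
        else:
            quality = variant                # everything ships
    return quality
```

The `selection=True` run is monotonically non-decreasing, so more generations can only help; the `selection=False` run is a random walk that ends up wherever the mutations happen to take it.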

This landed hard for me. In my previous newsletter I talked about how quality engineering is evolving, and Itamar’s framing gives it a language I’ve been missing. We’re not just “testers” or “quality engineers”. We’re the selection layer. And if that layer doesn’t keep up with the mutation rate, systems don’t evolve — they drift.

The hot dog problem

Mo Bitar posted a video called “I was a 10x engineer. Now I’m useless” and it hit me harder than I expected. Mo describes what happened when he used ChatGPT to deploy his entire product without looking at the code. It worked. And he hates it.

His analogy is perfect: he made a hot dog. It looks like food, it tastes like food, the transaction is complete. But he can’t sell it because he has no emotional connection to it. He didn’t earn it. He didn’t suffer for it. And that suffering, that struggle, that’s what used to make us better at our craft.

Mo’s video is honest, and it asks an important question: what do you do when you love to code? The activity and the craft of coding don’t seem to be in as high demand as they used to be. This new AI era takes something away from those who loved it. On the other hand, I believe there is a path forward; the goalposts have just moved. This tweet by Franziska seems to suggest an interesting problem space for engineers: instead of using AI to make your work faster, you engineer AI systems.

The problem with AI demos

Vidhya Ranganathan wrote a piece called “Production Telemetry Is the Spec That Survived” that I think should be required reading for anyone deploying AI agents on existing codebases. She introduces a framework that distinguishes between greenfield systems (new, clean, well-specified), brownfield (evolving, messy), and what she calls “blackfield” - legacy systems under heavy load where the original intent is lost, documentation has rotted, and business rules hide in undocumented conditionals.

AI coding tools are great at greenfield. They struggle with brownfield. And they fail at blackfield, because they infer specifications from code patterns, creating implicit specs that fail silently when they contradict accumulated production behavior. The only honest specification left in these systems lives in production telemetry: traffic patterns, error rates, usage data.

I think this has always been a great pointer for testers on which tests to write first. But it’s also a smart approach when adopting new tools and testing proofs of concept for services that demo nicely but leave you wondering about real-world usage.

OpenAI acquires PromptFoo

This connects to OpenAI acquiring Promptfoo, an AI security startup that specializes in red-teaming and vulnerability testing for AI systems. Promptfoo serves about 25% of Fortune 500 companies and has 130,000 developers using it monthly. OpenAI is integrating it into their Frontier platform to make security testing a built-in part of how teams ship AI agents.

The fact that OpenAI felt the need to buy a company whose entire job is testing whether AI systems are safe tells you something about where we are. We’re building agents that write code, review code, and deploy code, and we’re only now starting to seriously ask: but who tests the agents?

Great questions to be asked about AI

Hank Green and Cal Newport sat down for a conversation about AI that I think captures the current moment better than most. Hank’s approach is to catalog every legitimate concern - addiction, manipulation, hallucination, labor displacement, economic bubbles, children’s exposure - and resist the urge to collapse them into a single narrative. Each concern has its own severity and its own likelihood. They’re separate problems.

Cal Newport introduced a concept: “progress laundering.” Advances in one AI technology, like language models, get unfairly attributed to completely different domains like protein folding or robotics. These are separate technologies with separate trajectories, but the narrative treats them as one unstoppable wave. It’s a useful framing because it explains why the discourse feels so overwhelming. We’re not dealing with one problem. We’re dealing with dozens of separate problems being marketed as one.

The whole conversation is great, but what surprised me (but makes perfect sense) was Cal’s take on current AI models. He claims that we’ll probably end up with smaller, specialized systems that do specific things well - which, in a way, loops back to where we started. Specialized models. Specialized agents. Selection systems that keep the good mutations and discard the rest. Instead of having one know-it-all model like GPT-5.4, many will focus on models that are really good at specialized tasks.

But that’s a prediction, not a certainty, so we’ll see where we eventually end up.

I’d love to hear how this is landing for you. Has your team started using AI code reviews, or are you still doing them manually? Do you see yourself as the selection layer, or does that framing feel off? And if you’re someone who loves the craft of coding - how are you making peace with the hot dog era? Hit reply, I’m genuinely curious where everyone is at right now.

