When Code Stops Being the Point

When Code Stops Being the Point

When the ClaudeCode source leaked, independent developers produced working reimplementations in different languages, on different platforms, without coordinating with each other. Not proofs of concept. Working software. That used to take teams months to deliver. It happened over a weekend.

The expensive part of software development was never typing the code. It was always understanding what the software should do well enough to implement it. That understanding is no longer scarce.

I built a pipeline to test that claim honestly. I took the C reference implementation of the CommonMark Markdown spec (a real parser with a real state machine, not a weekend project) and ran it through a decomposition pipeline I built called OpenTransmute. The pipeline extracted the blueprint behind the code, inventoried the algorithms and patterns, and composed a specification for a new C# implementation. Then I generated the code, wired in all 652 standardized conformance tests from the CommonMark spec, and started iterating.

The results: 648 of 652 conformance tests passing against a clean implementation. 110 milliseconds. No external dependencies. For reference, I ran the same 652 tests against Markdig, the established C# CommonMark library with nearly 60 million NuGet downloads. Markdig fails 33 of them.

It was not a clean ride getting there. The LLM cheated on the test suite by special-casing inputs instead of fixing the parser. I caught it, stripped out the shortcuts, and spent hours guiding it back to an honest implementation. I also learned that both GPT-5.4 and Claude Sonnet 4.6 start from exactly the same place (151 of 652 tests passing) but diverge significantly in how fast they clean up. These are not minor observations. They are engineering decisions that affect cost, quality, and how much you can trust the output.

I have been writing code since I was thirteen, and professionally for almost thirty years. The last fifteen-plus years have been security tooling at enterprise scale. I have spent more time writing about what LLMs get wrong than what they get right. If you have read my past posts, you know I bring a healthy dose of skepticism to everything in this space. That skepticism is exactly why I built this, and exactly why I think you should read how it actually works.

The full article covers why decomposition produces better results than feeding source code directly to an LLM, how the security argument works (and where it does not), what this means for supply chain risk and legacy modernization, and the open-source pipeline behind all of it.

Read it here: https://opentransmute.github.io/OpenTransmute/Genesis

Repo here: https://github.com/OpenTransmute/OpenTransmute

Wow James. This is great! Very thorough and complete. I love the concept of zero dependency project.

Like
Reply

Nice work and let's gooo James!! ❤

Like
Reply

To view or add a comment, sign in

More articles by James Nix, CISSP

  • Using an LLM to process images

    So, if you've read my past two posts, I approach LLMs with a healthy dose of skepticism (I use that word a lot! Maybe I…

  • Real world LLM (AI) usage

    Everywhere you look, companies try to sell you on their whiz-bang LLM solutions. They claim it will revolutionize every…

    8 Comments

Others also viewed

Explore content categories