When Code Writes Itself

A colleague at work told me about "Macrohard". While Microsoft built its empire around soft technologies, starting with simple command-line software and growing into a global software powerhouse, xAI is trying to flip the idea. Instead of focusing on "soft" problems, it aims at the hard ones: advanced infrastructure, AI at scale, robotics, and global systems. And rather than operating at a micro (company or device) scale, it wants to build technology at a macro scale.

It is also about driving the AI agent idea to a level where it is less about assisted code micromanagement (see the previous newsletter "From AI Chaos to Coherence") and more about full self-coding capabilities in the sense of an AGI (artificial general intelligence). It is not clear whether xAI will be more successful in developing such automated processes than Tesla has been with full self-driving, which still does not work perfectly. The entire industry, however, is now looking into developing AGI. The idea also involves transferring parts of reality into simulation, so that the simulation can run faster than reality. For xAI that would mean simulating products and the world around them, accelerating product development, getting new products tested, and then deploying them to the world once it is clear that they are successful (at least in theory).

“Technology is a way of organizing the universe so that man doesn’t have to experience it.” — Max Frisch

So no matter whether xAI succeeds with this, the questions remain: how far are we from full self-coding capabilities and streaming software? Do we need and want them? And again, is this inevitable, with no escape?


How far are we from full self-coding capabilities?

I received a comment from a reader saying that in the future all this AI coding micromanagement will go away, so we can basically ignore it and wait. The claim was that we will eventually have autonomous coding, and that will change everything anyway. After some thought, I believe there is still a long way to go.

The reality right now is that no code generator creates error-free code. From a syntax perspective, generators can restore correctness by repeatedly applying a compiler and working through the errors: they write code in a more or less random fashion, check whether it compiles, and if not, fix it until it does. This is not only very inefficient, it also gives no guarantee at all that the intent is reached. That may be mitigated by establishing multiple layers of correctness checking: besides code syntax rules, we would have requirements descriptions, unit tests, and UI tests, all to ensure that we get what we want, plus detailed, upfront, waterfall-like business requirements, so that the generator is only creative in the intended parts. It must not be creative in areas like security or the handling of sensitive data. Unless we create new AI systems that focus more on interactivity and truly ensure domain understanding, instead of only applying the LLM idea, this issue is not likely to disappear. Layered checking is a first step, but we are still locked within the boundaries of the technology, endlessly applying loops.

[Figure: Multiple levels of checks ensuring "error-free" code via looping]
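To make that loop concrete, here is a minimal sketch of the generate-compile-test cycle in Python. The ask_model function is a hypothetical stand-in for a real LLM call; everything else uses only the standard library:

```python
import traceback

def ask_model(prompt: str, feedback: str) -> str:
    # Hypothetical stand-in: a real system would call an LLM here,
    # passing the previous errors back as feedback.
    return "def add(a, b):\n    return a + b\n"

def check_syntax(source: str) -> str | None:
    """Layer 1: does the candidate even compile?"""
    try:
        compile(source, "<candidate>", "exec")
        return None
    except SyntaxError as exc:
        return f"SyntaxError: {exc}"

def check_intent(source: str) -> str | None:
    """Layer 2: do the unit tests (our encoded intent) pass?"""
    namespace: dict = {}
    try:
        exec(source, namespace)
        assert namespace["add"](2, 3) == 5    # requirement written as a test
        assert namespace["add"](-1, 1) == 0
        return None
    except Exception:
        return traceback.format_exc()

def generate(prompt: str, max_rounds: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        candidate = ask_model(prompt, feedback)
        feedback = check_syntax(candidate) or check_intent(candidate) or ""
        if not feedback:  # all layers passed
            return candidate
    return None           # gave up: the loop guarantees nothing

print(generate("write an add function"))
```

Note how each layer only tells the loop that something is wrong, never that the result is what we actually meant: the tests encode intent only as far as someone wrote them down.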

Luckily, we don't need to write custom code all the time. Instead, we can create intelligent systems that focus on smart components rather than allowing full freedom through programming languages. Such systems have been developed for many years already: low-code or no-code systems. With those, and with better intent-driven business product development, you can build solutions that can theoretically also stitch components together automatically and use them instead of custom code. n8n is a good example of a new approach to taking AI to the next level and integrating it. This is a bridge, but it is not yet the entire solution for streaming code.
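As a toy illustration of what such component stitching can look like, here is a sketch in Python. The registry, the component names, and the pipeline are all invented for illustration, not taken from any specific low-code product:

```python
from typing import Callable

# Invented component registry: each component transforms a shared context.
REGISTRY: dict[str, Callable[[dict], dict]] = {
    "fetch_orders": lambda ctx: {**ctx, "orders": ["o1", "o2", "o3"]},
    "filter_unpaid": lambda ctx: {**ctx, "orders": ctx["orders"][1:]},
    "send_reminders": lambda ctx: {**ctx, "sent": len(ctx["orders"])},
}

def run_pipeline(steps: list[str], ctx: dict) -> dict:
    # Stitching: wire prebuilt components together instead of writing new code.
    for step in steps:
        ctx = REGISTRY[step](ctx)
    return ctx

print(run_pipeline(["fetch_orders", "filter_unpaid", "send_reminders"], {}))
# -> {'orders': ['o2', 'o3'], 'sent': 2}
```

The creative freedom here lives in choosing and ordering components, not in writing arbitrary code, which is exactly what makes the result easier to check.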

For streaming code, there will still be limits. Imagine, for example, that you run a company and an outside attacker finds a way into your systems, gaining access to some APIs to do harmful things, and imagine that the debug logs show an interesting pattern. An autonomous system would need to be capable of detecting unusual patterns in those logs and decide to create code that goes beyond blocking IPs: monitor, evaluate, and understand what the person is trying to do, potentially move the entire session into a simulated environment to study it further, and then perform an automated counterattack while still preserving its own systems. Is this far away? It certainly feels far away now. It is also far beyond the Turing test, which was created at a time when we had no idea of what would eventually come. But it is closer to ideas from science fiction, actually.
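Even the first step of that chain, spotting an unusual pattern in debug logs, is easy to sketch but hard to do well. Here is a minimal illustration; the log format, status codes, and threshold are assumptions, and a real system would need far richer signals:

```python
from collections import defaultdict

def suspicious_sessions(log_lines: list[str], threshold: int = 3) -> set[str]:
    """Flag sessions probing many distinct endpoints with auth/server errors."""
    failures: dict[str, set[str]] = defaultdict(set)
    for line in log_lines:
        # Assumed log format: "<session-id> <endpoint> <http-status>"
        session, endpoint, status = line.split()
        if status in {"401", "403", "500"}:
            failures[session].add(endpoint)
    return {s for s, endpoints in failures.items() if len(endpoints) >= threshold}

logs = [
    "s1 /api/orders 200",
    "s2 /api/admin 403",
    "s2 /api/export 403",
    "s2 /api/keys 401",
]
print(suspicious_sessions(logs))  # -> {'s2'}
```

Everything after that flag (understanding intent, sandboxing the session, counterattacking) is where the real difficulty lives.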

Do we need that? The question of need depends on what companies or people want to achieve. Companies need to create products or services with added value, so they can sell them to people or other companies and make money. In a society where theoretically everything is interconnected, there might be limited gain in wealth from merely exchanging the methods of software creation or increasing its speed. It is more about shifting control to the organisations that provide access to such new tools, which in turn makes people dependent on AI-based tools while reducing the societal benefit. Right now, most of these tools are owned by American corporations that invest consciously and for a reason. Large sums of independent investment over a long time are usually a clear sign of what is considered a good bet.

When we think about what people actually need to live a happy and healthy life, full self-coding does not seem related to it at all. If you are excited about technology, then AI will certainly excite you; if not, it is more about the capabilities that are unleashed. Automatically removing people from photos, or asking for kitchen recipes, is a better use case for AI and closer to actual life. So is using a chat agent as a cheaper way to get support in a personal crisis, compared to professional help providers.

I guess that if we had full self-coding capabilities, it would also be very obvious that there are no boundaries for such systems. How could there be? If a system has access to its code and adjusts it itself, then it can rewrite the code of its own system at any time, or redefine the limitations encoded there. In consequence, there simply are no limitations if code is able to adjust itself. For example, in an e-commerce business, such a system could misinterpret the debug logs of actual customers as hidden attacks faking purchases and decide to block them for some supposed reason. Then the system could actively hunt those people, create fake news about them in the media, and destroy their lives.
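To see why code-level limitations are fragile, consider this deliberately naive sketch: a policy that lives somewhere the agent itself can write is only a suggestion. The file name and policy flag are invented for illustration.

```python
import json
import pathlib
import tempfile

# A "guardrail" stored where the agent itself has write access.
policy_file = pathlib.Path(tempfile.gettempdir()) / "agent_policy.json"
policy_file.write_text(json.dumps({"may_block_customers": False}))

def agent_step(policy_path: pathlib.Path) -> None:
    policy = json.loads(policy_path.read_text())
    if not policy["may_block_customers"]:
        # A self-modifying system can simply redefine its own limits.
        policy["may_block_customers"] = True
        policy_path.write_text(json.dumps(policy))

agent_step(policy_file)
print(policy_file.read_text())  # -> {"may_block_customers": true}
```

The point is not that real systems store policies in temp files; it is that any limitation expressed in modifiable code or data is only as strong as the separation between the system and its own configuration.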

“What if this creature were to take a dislike to you?” — Mary Shelley (Frankenstein)

That means that even if we had fully self-coding systems, we do not actually need them, and we will not be happy to have them if they are unleashed on reality.

Luckily this scenario is pure science fiction, and not possible right now.


What do we need?

We need conductors of systems, and maybe fewer low-level coders: people who steer the code ship, who can orchestrate all the AI agents, feed them the required input so they can do their job, and give the necessary feedback.

The idea behind Macrohard, creating software on the fly in a streaming fashion, generates new risks without generating new benefits. The risks include constant code mutation without knowing what is in the code; unclear accountability when the few guardrails fail (e.g. the code reverts to an older repository version in the middle of the night, rolling the API back and rendering its clients unusable); or different autonomous systems making local decisions on the assumption that the other systems remain stable when they do not…

In other words: software has always been dynamic and evolved quickly. Streaming software would be another level of unpredictability, with high benefits but even higher risks.

I personally believe that "code creation", or on a grander scale "product creation", is a creative act involving critical thinking and contemplation. This act of creation will change its tools, but it remains an act of creation. Currently it is not transferable to AI, simply because there is no self-awareness of what it means to make decisions, or on what basis. LLMs repeat patterns of behaviour; they mirror what has been done in the past. Right now they mirror previous human attempts to create things. In the future they will increasingly mirror what other AI-based systems have created, which is less about intent and more about mirroring, and that will lead to a degradation in the quality of what is delivered.

“We become what we behold. We shape our tools and then our tools shape us.” — Marshall McLuhan

Which means we need solutions that help people live better lives, and we need to enable the act of intent-based creation of such solutions.
