AI - The End of Closed Source Engineering Software
Here's a thought experiment I've been running in my head for a few months now, and the more I think about it, the more convinced I become: closed source software might be the worst long-term investment a company can make today. Not because closed source is bad software — some of it is excellent — but because the way we work with software is fundamentally changing, and closed systems are structurally unable to keep up with that change.
The catalyst, of course, is AI. Specifically, LLM-based coding agents.
The Asymmetry That Changes Everything
If you've spent any time with tools like Claude Code, Cursor, or GitHub Copilot, you already know the feeling: you point an AI agent at a codebase, and suddenly things that used to take days take hours. Refactoring, extending, building integrations, writing tests, understanding unfamiliar code — all of it accelerates dramatically when the AI can actually read, navigate, and reason about the source code.
Now here's the critical insight that too few people are talking about: this only works when the source code is available. An AI coding agent trying to extend a closed system is essentially working blindfolded. It can call APIs, sure. It can read documentation. But it can only extend the system where the vendor has chosen to allow it. It cannot understand the internals, cannot refactor, cannot deeply integrate, cannot fix bugs at the root level. The difference between what an AI agent can do with shared source versus closed source isn't incremental — it's categorical. It's like the difference between having a brilliant engineer who can see the machine and one who can only press buttons on a panel.
And this gap is only going to widen. Every month, these agents get more capable. Every month, the productivity multiplier for teams working with accessible source code grows. Closed source vendors are falling behind not because they lack talent or funding, but because they've chosen a model that structurally prevents their users — and AI — from unlocking the full potential of the software.
The Lock-In Gets Worse, Not Better
Here's what I'm seeing from the closed source side: instead of opening up, they're doubling down on lock-in. Their response to the AI revolution is to build their own AI integrations, their own copilots, their own walled gardens. They're essentially saying: "Don't worry, you don't need to connect our software to the broader AI ecosystem — we'll do the AI for you."
But think about what that means in practice. You become dependent not just on the vendor's software, but on the vendor's AI. Their priorities, their pace, their roadmap. When a new protocol like MCP — Anthropic's Model Context Protocol — emerges and enables AI agents to interact with tools in standardized, open ways, closed vendors either ignore it or take months to offer a limited, curated integration. Meanwhile, open and shared source systems can adopt these standards immediately, because anyone can build the bridge.
The speed difference is staggering. In the open ecosystem, a new capability can go from concept to implementation in days. In the closed ecosystem, it goes through product management, gets prioritized against a hundred other features, gets scheduled for a future release, and arrives — if it arrives at all — months or years later. AI doesn't wait for product roadmaps. The companies that can move at the speed of AI will outpace those that can't, and closed architectures are structurally slower.
The End of Big IT Teams
A friend of mine, Carl, recently put it even more bluntly: soon, IT companies with more than a handful of people won't be able to compete. And the more I think about it, the more I believe he's right. When AI agents can write, refactor, and ship code at the speed they're approaching, the bottleneck is no longer coding — it's vision, taste, and decision-making. What you need is a small team of visionary people who know exactly what to build and why. The coordination overhead of large development organizations — the meetings, the alignment, the handoffs, the politics — becomes a liability rather than an asset.
This has profound implications for the closed source model. Large vendors have traditionally justified their pricing and their closed architectures with the argument that they employ hundreds or thousands of engineers building things you couldn't build yourself. But if a team of five talented people with AI agents can move faster and build better solutions than a department of two hundred, that argument collapses. And those small, fast teams will overwhelmingly choose shared source platforms, because that's where AI gives them the maximum leverage. The closed vendor's army of engineers becomes a cost center competing against small teams that are structurally faster, more focused, and more adaptable.
The Investment Risk Nobody Is Calculating
Now let's talk about what happens to your investment when things go wrong — because in technology, things always eventually go wrong. Vendors get acquired. Companies pivot. Products get discontinued. Business models change.
If you've invested heavily in a closed source platform and the vendor disappears, your investment is gone. Not diminished — gone. You can't maintain it, you can't extend it, you can't hire someone to keep it alive. You're left with a binary that will slowly rot as the world around it moves on.
With shared source or open source, the calculation is completely different. Even if the company behind the software goes under, the code remains. You can fork it, maintain it, adapt it. You can hire developers — or point AI agents at it — to keep it evolving. For industrial machines with lifecycles of 15 to 20 years, this isn't a theoretical concern. It's risk management. It's protecting a long-term investment in a world where no vendor's survival is guaranteed.
The Real Value Was Never the Software Itself
There's one more angle that I think is especially important for engineering companies, and it's often overlooked: in most real-world engineering workflows, the core software is only part of the value. Often, it's not even the biggest part. The real investment lives in the custom automation, the scripts, the integrations, the workflows, and the toolchains that teams build around the software over years.
Think about how engineering teams actually work. They take a software platform, and then they spend months or years building their own layers on top of it — custom exporters, automated testing pipelines, integration scripts with their PLCs, data extraction tools, reporting systems. This is where the real competitive advantage lives, and this is where the source access question becomes existential.
With shared source, AI coding agents can work across the entire stack — the core platform and all the custom layers built around it. They can understand how your automation connects to the platform internals, optimize across boundaries, and help you build new capabilities that span the full system. With closed source, your AI agents hit a wall at the vendor's API boundary. They can see and improve your custom code, but the moment they need to understand or modify anything inside the platform itself, they're blind. The more you invest in customization — and the more critical that customization becomes to your operations — the more painful this limitation gets.
What We're Doing About It — Opening Digital Twins to AI
This is exactly why we've built realvirtual.io as a shared source platform from the beginning. Every customer gets full access to the complete C# source code — not because it's a nice marketing gesture, but because we believe it's the only honest way to build engineering software with decade-long lifecycles. Your team and your AI agents can read, understand, extend, and modify everything. There's no black box, no hidden layer, no ceiling imposed by us on what you can do with the platform you've invested in.
And it's why we've open-sourced the realvirtual MCP Server under the MIT license. It's a Unity package that implements Anthropic's Model Context Protocol, giving AI agents like Claude direct access to Unity digital twins — scenes, GameObjects, components, simulation control, drives, sensors, PLC signals, even robot inverse kinematics.
The idea is simple but powerful: instead of building a walled AI garden around our platform, we're opening it up to every AI agent that speaks MCP. Currently, Claude Code and Claude Desktop are the most capable MCP clients, but the protocol is open and any AI agent with an MCP client can connect. An AI agent can start a simulation, control conveyors, read sensor data, modify scenes, take screenshots, and reason about the entire digital twin — not through a limited API, but with full access to the system.
And because the whole thing is open source and built on Unity's open C# architecture, you can extend it in minutes. Add a C# attribute to any static method, and it becomes a tool that AI agents can discover and use. No Python changes, no server restart, no manual registration. The AI agent sees your new capability immediately.
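To make that concrete, here's a minimal sketch of what an attribute-based tool could look like. The attribute name McpTool, the method ConveyorTools.SetConveyorSpeed, and the reflection-based discovery shown here are illustrative assumptions, not the package's actual identifiers; the real attribute and registration mechanism live in the realvirtual MCP package on GitHub.

```csharp
using System;
using System.Linq;
using System.Reflection;

// NOTE: [McpTool] and the discovery code below are an illustrative sketch,
// not the actual identifiers of the realvirtual MCP package.
[AttributeUsage(AttributeTargets.Method)]
public class McpToolAttribute : Attribute
{
    public string Description { get; }
    public McpToolAttribute(string description) => Description = description;
}

public static class ConveyorTools
{
    // Once a static method carries the attribute, the MCP server can find it
    // via reflection and expose it to any connected AI agent as a callable tool.
    [McpTool("Set the target speed of a named conveyor in mm/s")]
    public static string SetConveyorSpeed(string conveyorName, float speedMmPerSecond)
    {
        // A real implementation would look up the drive on the named
        // GameObject and update its speed; here we only echo the request.
        return $"Conveyor '{conveyorName}' set to {speedMmPerSecond} mm/s";
    }
}

public static class ToolDiscoveryDemo
{
    public static void Main()
    {
        // How an attribute-based server might enumerate the available tools.
        var tools = typeof(ConveyorTools)
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Where(m => m.GetCustomAttribute<McpToolAttribute>() != null);

        foreach (var tool in tools)
            Console.WriteLine(
                $"{tool.Name}: {tool.GetCustomAttribute<McpToolAttribute>().Description}");
    }
}
```

The design point is that reflection does the registration work: decorating the method is the entire integration step, which is why a new capability becomes visible to the agent without touching the server itself.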
This is what shared source plus AI actually looks like in practice: not a vendor deciding what the AI can and cannot do, but the user and the AI working together with full access to the system. You can find it on GitHub at github.com/game4automation/io.realvirtual.mcp — we'd love for you to try it and tell us what you think.
The Uncomfortable Conclusion
I know this is a strong position, and I know there are people who will disagree. There are legitimate arguments for closed source — consistency, support, integrated experiences, quality control. I don't dismiss those.
But I would urge decision-makers not to be blinded by marketing budgets. Large closed source vendors spend millions on campaigns, events, analyst briefings, and partnership announcements that create an aura of inevitability around their platforms. It's easy to feel that choosing the big name is the safe choice. But safety and visibility are not the same thing. The safe choice is the one where your investment survives regardless of what happens to the vendor, where your team and their AI tools have full access to everything, and where you're building capability rather than dependency. That choice rarely comes with the biggest booth at the trade show.
When I look at where the technology is heading, I keep arriving at the same conclusion: the value of software is shifting from what the vendor builds to what the user can do with it. AI is the most powerful amplifier of user capability we've ever seen, and it works dramatically better when it can see and understand the full system. Closed source is a bet against that trend.
If you're making a long-term investment in software — especially in industrial software with decade-long lifecycles — ask yourself: is this a system that AI can fully work with? Can my team and their AI agents see, understand, and extend everything? Or am I buying a black box that will become increasingly limiting as the tools around it get more capable?
The answer to that question will define which investments survive the next decade and which ones become expensive lessons.
What's your experience? Are you seeing this shift in your own work? I'd love to hear how you're thinking about source access in the age of AI.