The Simulation Century and Quantum Computing
As I inch toward the publishing date for my Simulation Century book, I will begin posting some of the excerpts for the Simulation Century Faithful. I have been presenting this concept in a ninety-minute session at #ModSimWorld for the last fourteen years. Initially it was directed at my colleagues at Joint Forces Command as we attempted to simulate planet Earth for our DIME and PMESII (and kinetic) simulation games, and it has since branched out to include everything and everyone. In a meeting yesterday with state leaders here in North Carolina, I described my plan to simulate the state of North Carolina in all of its glory (and blemishes) for better governance and public health. Watch this space https://www.simnc.net/ for more on that in the (hopefully) near future. In the meantime, please enjoy this account of how I first encountered the HAL 9000-like very first quantum computer while I was at Lockheed Martin.
For those of you new to The Simulation Century, here is my liberal arts inspired thesis: The 20th Century was the first in which humanity could reflect upon past or current events by reviewing video footage. The 20th Century Canadian philosopher Marshall McLuhan developed a theory of media in which he said “the medium is the message”. He also said the media we use, in turn, make us. For McLuhan, it was the medium itself that shaped and controlled "the scale and form of human association and action[1]".
If the last century was about the moving image, I hold that this century is about simulation. It is the first time in human history when we can employ our newly created super tools: not only artificial intelligence and machine learning, but also rich 3D graphical simulations of our world in the form of digital twins. With AI-empowered digital twins we can not only model, simulate and reflect upon past events and current operations, but also model and simulate complex futures. With this time-traveling capability we can not only attempt to predict the future, but also shape and create the futures we want. This is the highest moral purpose of the age of AI in the simulation century.
The Author using the Lockheed Martin "Holowall" in Orlando, Lake Underhill, 2011.
The Quantum Leap
The future of simulation arrived in a strip mall in Los Angeles, next to a Mexican restaurant, housed in a featureless black monolith that sat silent in a nondescript room like an artifact from 2001: A Space Odyssey that had gotten lost on its way to Jupiter and ended up near USC instead.
This was 2011, and Lockheed Martin had just purchased the world's first commercial quantum computer from a Canadian company called D-Wave Systems. It had four qubits: quantum bits that could exist in superposition, simultaneously representing multiple states until observed. That sounds impressive until you realize that the smartphone in your pocket has billions of regular bits and can actually run useful applications. But those four qubits represented something profound: a fundamentally different approach to computation that might eventually let us simulate systems that were impossibly complex for classical computers.
I was running Virtual World Labs at Lockheed Martin at the time, and my work centered on creating high-fidelity simulations of everything from aircraft to hospitals to complex operational scenarios. When I heard that Lockheed had bought a quantum computer, I had to see it. Not because I expected it to be useful for anything we were currently doing; it was still an emerging and dubious field, and four qubits weren't going to simulate a fighter jet. But I wanted to understand what this technology might mean for the future of simulation.
The facility was deliberately underwhelming, the kind of anonymous strip mall space you'd drive past without noticing. Inside, the D-Wave machine looked like something HAL 9000 would have had shipped to its summer home. A large black cube, perfectly silent, giving no indication whatsoever that it was doing anything at all. No blinking lights, no whirring fans, no reassuring hum of cooling systems. Just an expensive monolith sitting there, reveling in its inscrutability. I found myself thinking of my friend Douglas Adams and his Improbability Drive aboard the Heart of Gold. I asked the engineers there if they could use this thing to calculate the answer to “Life, the Universe and Everything.” One of them just stared; the other said, “We already know it’s 42,” and gave me a wry grin.
The Thing One and Thing Two engineers gave me a tour, explaining the refrigeration systems required to keep the quantum processor at temperatures approaching absolute zero (colder than deep space) and the magnetic shielding necessary to isolate it from environmental interference. The entire apparatus was orders of magnitude more complex than the actual quantum processor it was protecting, which gave you a sense of how delicate and finicky quantum states were. Any interaction with the outside world, whether heat, vibration, stray electromagnetic fields, or someone opening the box and looking for Schrödinger’s cat, would cause decoherence, collapsing the quantum superposition and destroying the computation.
I asked what seemed like a reasonable question: "What programming languages do you use? What algorithms?"
Thing Two looked at me with the expression of someone about to deliver news that would reshape my understanding of computation. "Oh, it doesn't use traditional programming," he said. "We mathematically describe probability envelopes to get our results."
I told him they should install a fainting couch in the room if they were going to say things like that to visitors without warning.
But he was serious, and as he explained further, I began to understand just how alien quantum computation was compared to everything I'd spent decades working with. Classical computers, even the most powerful massively parallel supercomputers, operate on bits that are either zero or one, and computation proceeds through deterministic logical operations. You write an algorithm, the computer executes it step by step, and if you run the same algorithm with the same inputs, you get the same outputs every time (Monte Carlo simulations aside).
Quantum computers, at least the way D-Wave had implemented them, worked completely differently. You didn't write programs in the traditional sense. Instead, you mathematically described an optimization problem as an energy landscape, encoded it into the quantum processor, and let the system naturally evolve toward the lowest-energy state, which ideally would correspond to the optimal solution. You were, in effect, asking nature to solve the problem for you by finding the configuration that minimized energy, exploiting quantum effects like superposition and tunneling to explore the solution space in ways that classical computers couldn't.
This was quantum annealing, a specific approach to quantum computation that was well-suited to optimization problems but fundamentally different from what theorists called "universal quantum computing." The distinction mattered, though most people outside the field conflated the two.
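What "describing a problem as an energy landscape" looks like can be made concrete with a toy sketch. The coefficients below are hypothetical values chosen purely for illustration (this is not D-Wave's actual interface): a four-variable optimization problem is written as a QUBO-style energy function, and because four variables allow only sixteen configurations, we can find the minimum by brute force, which is exactly the job an annealer takes over when enumeration becomes impossible.

```python
import itertools

# Toy energy-landscape description of an optimization problem:
# E(x) = sum_i h[i]*x[i] + sum_{i<j} J[(i,j)]*x[i]*x[j], with x[i] in {0, 1}.
# The biases h and couplings J are illustrative, not from any real machine.
h = {0: 0.5, 1: -2.0, 2: 1.5, 3: -1.0}         # per-variable biases
J = {(0, 1): -1.0, (1, 2): 2.0, (2, 3): -1.5}  # pairwise couplings

def energy(bits):
    """Evaluate the energy of one configuration of binary variables."""
    e = sum(h[i] * bits[i] for i in h)
    e += sum(J[i, j] * bits[i] * bits[j] for (i, j) in J)
    return e

# Four variables means only 2**4 = 16 configurations, so we simply
# enumerate them all and keep the lowest-energy one.
best = min(itertools.product([0, 1], repeat=4), key=energy)
print(best, energy(best))  # -> (1, 1, 0, 1) -3.5
```

The annealer is handed only the biases and couplings; finding that lowest-energy configuration is what the physics does for you.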
Quantum annealing, which is what D-Wave's machines did (initially, at least, although I have spoken to D-Wave personnel at conferences who strenuously argued it was not just annealing), was like using quantum effects to roll a ball down a hilly landscape and find the lowest valley. The quantum processor could explore multiple paths simultaneously through superposition, and quantum tunneling let it occasionally pass through hills rather than having to climb over them, which meant it could potentially find better solutions than classical optimization algorithms that might get stuck in local minima. But quantum annealing machines were special-purpose devices, good at certain types of optimization problems but not capable of running arbitrary algorithms.
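The ball-rolling picture has a purely classical cousin, simulated annealing, which uses random thermal kicks where the quantum machine uses superposition and tunneling. The sketch below (illustrative landscape and parameters of my own choosing, not any real annealer's behavior) starts a walker in the shallower valley of a double-well landscape; the cooling schedule lets it occasionally hop uphill early on, so it can cross the central hill and settle into the deeper valley.

```python
import math
import random

def landscape(x):
    # Double-well energy landscape: a shallow valley near x = +1 and a
    # deeper (global) valley near x = -1, separated by a hill at x = 0.
    return (x * x - 1.0) ** 2 + 0.3 * x

def simulated_anneal(steps=20000, seed=1):
    rng = random.Random(seed)
    x = 0.96                    # start stuck in the shallow local valley
    best = x
    for k in range(steps):
        t = max(0.05, 1.0 - k / steps)   # temperature: high early, low late
        cand = x + rng.gauss(0.0, 0.5)   # propose a random hop
        d = landscape(cand) - landscape(x)
        # Downhill moves are always accepted; uphill moves sometimes,
        # the classical stand-in for tunneling through the hill.
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand
        if landscape(x) < landscape(best):
            best = x
    return best

x_best = simulated_anneal()
print(round(x_best, 2), round(landscape(x_best), 2))
```

A purely greedy downhill search started at the same point would never leave the shallow valley; the occasional accepted uphill move is the whole trick, just as tunneling is for the quantum version.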
True universal quantum computing, the kind that researchers at IBM, Google, and various universities have been working toward, is something more profound and more difficult to achieve. A universal quantum computer could run any algorithm that could be expressed in quantum terms, including Shor's algorithm for factoring large numbers (which would break most current encryption), Grover's algorithm for searching unsorted databases, and quantum simulation algorithms that could model other quantum systems with exponential speedup compared to classical computers.
The difference was analogous to the difference between a special-purpose aircraft, say a cargo plane optimized for carrying heavy loads, and a multi-role fighter that could perform air-to-air combat, ground attack, reconnaissance, and electronic warfare. Both were aircraft, both had their uses, but they weren't interchangeable, and one was vastly more difficult to build than the other.
Standing in that room looking at the black monolith, I understood that we were at the absolute beginning of understanding what quantum computers might eventually be capable of. Four qubits was nothing. Google would, by the end of the decade, demonstrate quantum supremacy with 53 qubits, performing a calculation in 200 seconds that would take the world's most powerful supercomputer 10,000 years. IBM was pursuing error-corrected quantum systems that might eventually scale to thousands or millions of qubits. But in 2011, we were still figuring out how to keep the quantum states stable long enough to do anything useful.
I suggested to the engineer that they should add some blinking lights to the machine, or at least a pinging sound, something to indicate that computation was happening. The silent black cube was technically impressive but theatrically disappointing. If you were going to show people the future of computing, it should at least look like it was doing something.
Engineer Thing One laughed and acknowledged that the lack of any visible activity was somewhat anticlimactic. "Maybe we should add an LED that blinks when we're running a calculation," he said. "Though honestly, the calculations finish so quickly you'd barely see it."
The visit stayed with me, not because of what the quantum computer could do at that moment (which was very little beyond demonstrating that quantum annealing could work in principle, and I still can’t discuss what it was being used for), but because of what it suggested about the future of simulation. Every complex system I'd ever tried to simulate, from virtual worlds populated by intelligent agents to climate models to molecular dynamics to modeling the population sentiment of Afghanistan, ran into the same fundamental constraint: exponential computational complexity.
As systems got larger and more interconnected, the computational resources required to simulate them grew exponentially. You could simulate ten atoms interacting quantum mechanically, but a hundred atoms required exponentially more computation, and a thousand atoms was impossible with classical computers. You could simulate a city with simplified models of human behavior, but if you wanted to model every individual person with realistic cognitive complexity, the computational requirements became prohibitive very quickly.
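For the quantum case, the wall is easy to quantify: a classical machine must store 2^n complex amplitudes just to hold the state of an n-qubit system, before doing any computation at all. A few lines of plain arithmetic (assuming the common convention of 16 bytes per amplitude, i.e. two 64-bit floats) show how fast that blows up.

```python
# Bytes needed just to store the full state vector of an n-qubit
# quantum system on a classical machine: 2**n amplitudes, 16 bytes each.
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_bytes(n):,} bytes")
```

Ten qubits fit in 16 kilobytes; thirty already need 16 gibibytes; fifty need roughly 18 petabytes. Every added qubit doubles the cost, which is why the classical approach runs out of road so abruptly.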
Quantum computers offered a potential way out of this trap, at least for certain classes of problems. If you wanted to simulate a quantum system (molecules, materials, chemical reactions), a quantum computer could do it naturally, because it was itself a quantum system. It wouldn't need to approximate the quantum behavior using classical algorithms that scaled exponentially; it would just be the quantum behavior, scaled up.
This had enormous implications for materials science, drug discovery, chemistry, cryptography, the weather and optimization problems that appeared everywhere from logistics to finance to machine learning. As I write this I have just come from the 2025 NATO modeling and simulation forum in Rome where I sat in on a briefing on quantum communication using entanglement that would be “impossible to break”.
Potentially, though this was much more speculative, quantum computing could enable new approaches to simulating complex systems that involved massive numbers of interacting agents, like economies or social networks or ecosystems. My SimState and SimNC.net dreams could become a reality.
But the timelines were uncertain and the technical challenges were formidable. Quantum computers were extraordinarily difficult to build and operate. They required near-absolute-zero temperatures, exquisite isolation from environmental noise, and error-correction schemes that required many physical qubits to create a single logical qubit that could reliably store information. The D-Wave machine I was looking at represented millions of dollars of engineering to create four qubits of quantum annealing capability. Scaling to thousands of error-corrected qubits that could run arbitrary quantum algorithms was a challenge that might take decades.
And even if we solved the engineering challenges, there was a deeper question: Which problems actually benefited from quantum computation? Not everything did. Classical computers were extremely good at what they did, and for many problems, including most of the simulations we were running at Virtual World Labs, classical approaches would remain superior for the foreseeable future. Quantum computers weren't faster classical computers; they were fundamentally different computational devices that happened to be exponentially better at certain specific tasks.
The art would be in identifying which simulations could benefit from quantum approaches and which couldn't. Quantum chemistry? Almost certainly yes. Simulating aircraft aerodynamics? Yep. The weather? Certainly. Agent-based models of human behavior for my SimState designs? Unclear. It would depend on whether the quantum effects in human cognition (if they existed at all) mattered for the behaviors you were trying to model, which was a controversial question in neuroscience.
As I left the facility and stepped back out into the Los Angeles sunshine, walking past the Mexican restaurant toward my rented Dodge Charger, I thought about how often the future arrived in unassuming packages. The first computers had been room-sized machines attended by teams of technicians, useful primarily for artillery calculations and code-breaking. The first transistor was a crude assemblage of gold foil and germanium that barely worked. The first integrated circuit was demonstrated to a mostly indifferent audience who didn't immediately grasp its significance.
And here was the first commercial quantum computer, four qubits in a strip mall, capable of solving toy optimization problems that classical computers could handle easily. Unimpressive in its immediate capabilities, but representing a fundamentally different approach to computation that might eventually transform entire fields of science and engineering.
The simulation century was built on our increasing ability to model complex systems with enough fidelity that the models became useful for understanding and predicting reality. Every increase in computational capability enabled higher-fidelity simulations, which enabled better understanding, which enabled better decisions. Quantum computing suggested that this progression might not hit the exponential wall we'd been worried about; that we might be able to continue increasing simulation fidelity for certain classes of problems even when classical computation reached its physical limits.
But it also suggested something more subtle: that as our computational tools became more exotic, our relationship with simulation would change. When you programmed a classical computer, you were giving it explicit instructions: do this, then this, then this, if, then, else. When you "programmed" a quantum annealer, you were describing a problem and letting physics find the solution. The machine wasn't following your instructions; it was exploring a solution space using quantum effects that didn't have classical analogues. It reminded me greatly of the massive epiphany of seeing machine learning work for the first time with Alex Kipman’s Microsoft Kinect project only two years earlier. Machine learning and neural nets, combined with quantum computing, could be so much more than a new button on the calculator. It could absolutely become the new scientific method and unlock an entirely new and stranger box of universal secrets. But, I digress.
This was profoundly different from how we'd thought about computation for the entire history of the field. It was less like giving orders and more like asking questions to an oracle whose internal workings were fundamentally mysterious. You could verify that the answers were correct, but you couldn't necessarily follow the reasoning that produced them.
And if quantum computers became powerful enough to simulate complex systems that classical computers couldn't, we'd face an interesting philosophical question: Did we understand a system if we could only simulate it on a quantum computer whose operations we couldn't fully comprehend or verify? (Like our new machine learning LLM progeny) Was a simulation that worked through quantum superposition and entanglement, processes that defied classical intuition, actually an explanation? Was it like Douglas Adams’ universal computer coming up with the answer “42”, but without the underlying question to explain what the answer meant to us poor dumb humans with our 200 Hz synapses who haven’t had an upgrade since the Pleistocene?
I didn't have answers to these questions in 2011, standing in that strip mall looking at a silent black cube that represented a technology barely past proof-of-concept. I still don't have complete answers in 2025, now that quantum computers have progressed to hundreds of qubits and are beginning to solve problems that classical computers genuinely struggle with.
But I knew then, as I know now, that the intersection of quantum computing and neural nets and simulation represented one of the most important frontiers in the simulation century. If we could harness quantum effects to model complex systems that were otherwise difficult or impossible to simulate, we'd cross a threshold where simulation became less about creating simplified models of reality and more about creating alternate realities that obeyed the same fundamental laws as our own.
The technology wasn't there yet. Four qubits in a strip mall wasn't going to simulate anything meaningful. But the trajectory was clear, even if the timeline remained uncertain.
I followed my visit up with a note to the D-Wave engineers seriously suggesting that they put some LEDs on the thing, make it look alive. I did the same for the cyber test range demo area at the Lake Underhill facility for Lockheed Martin in Orlando. As we delve deeper into the quantum and cyber realms, things get curiouser and curiouser, and it becomes harder to sell to muggles in the business and government world. Nobody got the memo, apparently; quantum computers remain impressively silent monoliths, doing their incomprehensible calculations in darkness and cold, giving no indication to the outside observer that anything interesting is happening at all.
Which was, in its own way, perfectly appropriate. The most profound computations have always been the ones you couldn't watch happening, because the interesting work occurred at a level of reality too small or too strange for human perception. We built tools to access those levels—telescopes for the very large, microscopes for the very small, and now quantum computers for the very weird.
The simulation century is teaching us that reality has more layers than we'd initially suspected, and if we want to model those layers accurately, we'll need tools as strange as the phenomena we are trying to understand.
Even if these tools are disappointingly quiet and look like they need James Cameron to transport them to Pandora with a bigger special effects budget only he could conjure.
Beyond Moore’s Law
Rose's Law, named after D-Wave founder Geordie Rose, observes that the number of qubits in quantum computers doubles approximately every year, a pace significantly faster than Moore's Law's doubling every eighteen to twenty-four months for classical computing. Rose proposed this in 2013 based on D-Wave's trajectory, suggesting that quantum computing power was scaling more rapidly than the classical computing revolution had. However, the comparison is complicated by the fact that not all qubits are created equal: D-Wave's quantum annealing qubits, the varying error rates of different architectures, and the distinction between physical qubits and error-corrected logical qubits mean that raw qubit count doesn't straightforwardly translate to computational power the way transistor count does in classical systems. Nevertheless, Rose's Law captured something real about the early exponential growth phase of quantum computing, even if the ultimate trajectory would depend on solving formidable engineering challenges around coherence times, error correction, and scalability. It was quantum computing's optimistic answer to Moore's Law: a prediction that this fundamentally different approach to computation might scale even faster than the technology it was meant to eventually supplement or replace, assuming we could overcome the technical obstacles that made every additional qubit exponentially more difficult to control than the last.
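The arithmetic behind the comparison is just compound doubling. The sketch below (with an arbitrary starting count of 100 units, purely for illustration) contrasts a one-year doubling period with a two-year one over a single decade.

```python
# Growth under a fixed doubling period: count after `years` years,
# starting from `start` units and doubling every `period` years.
def doubled(start, years, period):
    return start * 2 ** (years / period)

decade_rose = doubled(100, 10, 1)   # Rose's Law pace: double every year
decade_moore = doubled(100, 10, 2)  # Moore's Law pace: double every two years
print(int(decade_rose), int(decade_moore))  # -> 102400 3200
```

Halving the doubling period doesn't double the decade-end count; it squares the growth factor, which is why the two curves diverge so dramatically.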