Beyond the Algorithm Part 2: Redefining Memory, Processing and Execution as one

Last time, we exposed the cracks in our clockwork concept of computation: it fails at life. Brains rewire around injury, tumors evolve without programmers, and AI shatters under slight perturbations. Arguably, this is evidence that symbolic computation cannot capture the wild, self-sustaining storm of biology. Dr. Kimia Witte's "Computation as Organisation" gets at the root of why abstract computation can't hold as a metaphor for living systems: it mistakes execution for existence. Her paper dares to answer: if software concepts don't describe "living computation," what does?

If computation isn't software on hardware, what is it?

Redefining Organisation: The System Is the Message

The first step is to stop thinking about information as something stored in a system, like a file in a folder. In Witte's reformulation, the organisation of the system is the information.

So, what is "organisation"?

Organisation is the persistence of relational constraints that limit a system's possible behaviors.

This sounds abstract, so let's use an analogy. Think of an ant colony. There's no CEO ant giving orders. Instead, the colony's structure and function emerge from a simple set of rules governing how individual ants interact with each other and their chemical environment (pheromones). These rules are the "relational constraints." They limit what any individual ant can do, but in doing so, they enable the complex, intelligent behavior of the whole colony. The colony's organisation (its ability to find food, build nests, and defend territory) persists even as individual ants die and are replaced.

In this view, information isn't a symbol encoded in a pheromone trail. Information is relational invariance: a difference in the world (like a new food source) creates a difference in the system's organisation (a change in foraging patterns) that shapes its future possibilities.

Imagine a simple diagram where every possible state of a system is a node (a dot). The transitions between states are the edges (lines connecting the dots).

  • In a completely disorganized system, like a gas in a box, every state can transition to almost any other state. It's a chaotic mess of connections.
  • An organized system is one where many of those lines have been erased. The constraints "prune" the possibilities, creating a specific structure of allowed transitions. This structure is the organisation.
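The pruning picture can be made concrete in a few lines. The sketch below is purely illustrative (state counts and edge sets are invented): it compares the full possibility space of a "gas" with the drastically pruned one of an organized cycle.

```python
import itertools

def possibility_space(n_states, allowed):
    """Return (surviving transitions, maximum possible transitions).

    `allowed` is a set of (i, j) pairs: the transitions the system's
    constraints still permit. A fully disorganized system permits all
    n_states * (n_states - 1) transitions between distinct states.
    """
    total = n_states * (n_states - 1)
    return len(allowed), total

n = 5

# A disorganized "gas": every state can reach every other state.
gas = {(i, j) for i, j in itertools.product(range(n), repeat=2) if i != j}

# An organized system: constraints have erased most edges, leaving a
# single cycle. This remaining structure IS the organisation.
cycle = {(i, (i + 1) % n) for i in range(n)}

print(possibility_space(n, gas))    # (20, 20): no constraints
print(possibility_space(n, cycle))  # (5, 20): heavily constrained
```

The interesting quantity is the ratio: the more edges the constraints erase, the more specific, and the more "organized," the system's behavior becomes.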

Computation, then, is the process of navigating this constrained landscape. The system's behavior is guided not by a central program, but by the very shape of its own possibility space. Witte proposes we can experimentally identify this kind of organisation by observing a system's response to being pushed around:

  • Persistence: Does the system maintain its core relational structure over time, despite fluctuations in its components? (The ant colony persists even as ants come and go).
  • Recovery: When perturbed, does the system actively work to re-establish those constraints and return to its organized state? (The brain rewires itself after injury).
  • Structural Failure: If pushed too far, does the system collapse into a different, less organized state? (A healthy cell ecosystem, when its constraints break down, can become a cancerous mass).
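All three criteria can be exercised on even a toy dynamical system. The sketch below (an illustration, not an experiment from the paper) uses gradient descent on a double-well potential: the two wells stand in for two distinct organisations, small perturbations are absorbed (recovery), and a large push crosses the barrier into the other basin (structural failure).

```python
def relax(x, steps=200, lr=0.05):
    """Gradient descent on the double-well potential V(x) = (x^2 - 1)^2.

    The wells at x = +1 and x = -1 play the role of two distinct
    'organisations'; the dynamics actively restore whichever one the
    system currently occupies.
    """
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)   # dV/dx = 4x(x^2 - 1)
    return x

# Persistence: the organised state maintains itself.
healthy = relax(1.0)

# Recovery: a small perturbation is absorbed; the system returns.
recovered = relax(1.0 + 0.3)

# Structural failure: a large push crosses the barrier at x = 0 and the
# system settles into the other, qualitatively different stable state.
collapsed = relax(1.0 - 1.6)

print(round(recovered, 3), round(collapsed, 3))   # 1.0 -1.0
```

The same probe-and-observe logic scales up: perturb the system, then ask which basin of its possibility space it relaxes into.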

What does this mean for how we think about information? It means the memory of the system isn't in a specific location, but is embodied in its entire structure. The processing isn't a separate step, but is the natural unfolding of the system's dynamics within those constraints.

Computation as Enactment: The Dance of Matter

This leads to the most radical and powerful idea in the paper: computation is the ongoing enactment of organisation.

In our traditional model, memory (RAM/SSD), processing (CPU), and execution are physically and conceptually separate. But in a living system, they are one and the same.

  • Memory is the physical structure of the system, the web of constraints.
  • Processing is the change in that structure over time.
  • Execution is the material dynamics of the system itself.

Let’s go back to the brain. When you learn something new, you aren’t just writing a file to a biological hard drive. The very act of learning physically changes the connections between your neurons. The strength of synapses is altered, new pathways are formed. Your brain's structure is its memory. The "algorithm" for recalling that memory is simply the electrical activity flowing through that newly shaped structure. Memory, processing, and execution are completely integrated.
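A classic toy model of exactly this integration is a Hopfield-style associative memory with Hebbian learning. The sketch below is illustrative (the patterns and sizes are invented, and this is not Witte's model): the stored patterns live nowhere except in the weight structure, and recall is nothing more than activity flowing through that structure.

```python
import numpy as np

# Three mutually orthogonal ±1 patterns (Walsh functions). The "memory"
# will exist nowhere except in the connection weights themselves.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])

# Hebbian rule: units that fire together wire together. Learning IS a
# change of structure: W is the only place the patterns are stored.
W = patterns.T @ patterns

def recall(cue, steps=5):
    """Recall = activity flowing through the learned structure."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)   # threshold each unit
    return s

# Corrupt the first pattern by flipping one unit, then let the dynamics
# restore it: memory, processing, and execution are the same object.
cue = patterns[0].copy()
cue[0] = -cue[0]
print(recall(cue))   # the stored pattern re-emerges from the structure
```

Notice there is no address, no lookup, and no separate "recall algorithm": the weight matrix is simultaneously the memory and the processor.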

This is what Witte means by algorithms as internally embedded regularities. They aren't abstract instructions from outside; they are the stable patterns of behavior made possible by the system's own constraints. A whirlpool in a river is an algorithm of this kind. There is no "whirlpool code" running on the water; the whirlpool is a persistent, stable pattern that emerges from the constraints of fluid dynamics and the riverbed's geography.

Cellular signaling works the same way. A signal molecule doesn't carry a "message" in the symbolic sense. Its shape (a physical constraint) allows it to bind only to specific receptors, which in turn triggers a cascade of other binding events, changing the cell's internal organisation and, therefore, its behavior. The computation is the physical dance of molecules itself.
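A deliberately cartoonish sketch of the same idea, where the "shapes" and receptor table are invented for illustration and stand in for real molecular complementarity:

```python
# Toy lock-and-key cascade: a "message" is nothing but shape
# complementarity. Each receptor fires only when the incoming shape
# matches, and firing releases a new shape that reorganises the
# downstream state. All names here are illustrative, not real biology.

RECEPTORS = {            # shape accepted -> shape released
    "triangle": "square",
    "square": "star",
}

def cascade(signal, state):
    """Propagate binding events; the cell's 'behaviour' is simply the
    final organisation of its internal state."""
    while signal in RECEPTORS:
        signal = RECEPTORS[signal]
        state.add(signal)
    return state

print(sorted(cascade("triangle", set())))   # ['square', 'star']
print(cascade("circle", set()))             # set(): no binding, no computation
```

There is no interpreter anywhere in this loop: whether anything "happens" is decided entirely by whether shapes fit.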

This perspective reveals that the limits of computation for a given system aren't defined by processing speed or memory size, but by the nature of its organisation. How complex can its constraints get? How quickly can it reorganize in response to new information? Can it find new stable configurations (learn) or is it trapped in a rigid structure? These are the questions that matter.

A New Era of AI? Implications for Biotech and Cyber

So, what’s the payoff for a room full of engineers, researchers, and founders? If we take this "computation as organisation" framework seriously, it opens up entirely new frontiers.

For AI and Reinforcement Learning, it suggests a path away from brittle, data-hungry models. Instead of just optimizing the weights within a fixed neural network architecture, what if we could design systems that learn by modifying their own architecture? Systems that grow new connections and prune old ones, guided by the principle of maintaining a persistent, adaptive organisation in a complex environment. This is a new frontier for RL: not just learning a policy within a game, but learning the structure of the agent itself.
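One way to prototype this idea today is structural plasticity: alongside ordinary weight updates, periodically prune the weakest connections and grow new ones under a fixed wiring budget, so the architecture itself adapts. The sketch below is a hypothetical illustration of that rewiring step (class and parameter names are invented), not a method from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

class PlasticLayer:
    """A linear layer that learns its own wiring, not just its weights."""

    def __init__(self, n_in, n_out, density=0.3):
        self.W = rng.normal(0, 0.1, (n_out, n_in))
        self.mask = rng.random((n_out, n_in)) < density  # which synapses exist

    def forward(self, x):
        return (self.W * self.mask) @ x

    def rewire(self, frac=0.1):
        """Prune the weakest live synapses, grow an equal number elsewhere.

        Total wiring stays constant; only the structure changes. (A
        just-pruned site may immediately regrow; fine for a sketch.)
        """
        live = np.argwhere(self.mask)
        k = max(1, int(frac * len(live)))
        strengths = np.abs(self.W[self.mask])       # row-major, matches `live`
        for i, j in live[np.argsort(strengths)[:k]]:
            self.mask[i, j] = False                 # prune weakest
        dead = np.argwhere(~self.mask)
        for i, j in dead[rng.choice(len(dead), k, replace=False)]:
            self.mask[i, j] = True                  # grow fresh synapses
            self.W[i, j] = rng.normal(0, 0.1)

layer = PlasticLayer(8, 4)
before = layer.mask.sum()
layer.rewire()
print(layer.mask.sum() == before)   # wiring budget preserved: True
```

Related ideas already exist in the dynamic-sparse-training literature; the point here is the framing: the agent's "organisation" (its mask) is itself the object being learned.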

For Biotechnology, this framework provides a powerful new lens for understanding and manipulating living systems.

  • Cancer Adaptation: A tumor isn't just a rogue program; it's a system that has discovered a new, terrifyingly stable form of organisation that allows for rapid growth. By understanding the constraints that define this organisation, we could find novel ways to destabilize it and force a collapse back to a healthy state. Check out Concr's work in predictive bioinformatics.
  • Drug Delivery: We could design "smart" drug delivery systems not as pre-programmed nano-bots, but as chemical organisations that can "compute" their location in the body from local environmental signals, releasing their payload only when their relational constraints match the target environment. Pedro Correa de Sampaio @ Neobe Therapeutics is doing this with programmable, tumour-hunting bacteria!

For Cybersecurity, this reframing inspires some even wilder applications. Our digital security is built on a foundation of randomness, used to generate cryptographic keys. But what if our sources of randomness are flawed?

  • Biologically-Derived Cryptographic Keys: Forget pseudo-random algorithms. The chaotic, self-organizing dynamics of a living system (much like the emergent behavior of a Xenobot) is a potential source of high-quality, unpredictable entropy. Where flawed generators yield keys with less entropy than their length suggests, such a source could deliver full-entropy keys in fewer bits, well suited to securing resource-constrained IoT devices. Shannon Egan @ Deep Science Ventures is doing great hardware-based work here using MRAM.
  • Ecosystemic Fleet Defense: This reframing inspires a radical new approach to fleet defense. Hackers today don't hunt individual agents; they hunt for the master key. They find one flaw in an agent's core architecture and watch the entire defensive fleet - a digital monoculture of identical clones - collapse like dominoes. But what if the swarm wasn't a monoculture, but a true ecosystem of different computational "species"? Instead of clones, imagine a fleet of agents with fundamentally different architectures, each embodying a unique "organisation." They wouldn't need a shared, vulnerable API. Instead, their collaboration would be emergent: one agent's action alters the network environment, which the others perceive and react to according to their own nature. The result is a resilient, adaptive immune system for our digital infrastructure, where a threat to one is not a threat to all. Lewis Hammond @ Cooperative AI Foundation, https://kirancodes.me/ @ Basis Research Institute and Mateo H. Petel @ Stanford University are doing extraordinary work in multi-agent coordination.
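The entropy-harvesting idea in the first bullet can be sketched end to end. In the illustration below, a chaotic logistic map merely stands in for the physical entropy source; seeded deterministically like this it is NOT cryptographically secure, and a real design would sample and health-test a physical system. The point is the two-stage pipeline: harvest raw bits, then condition them into a uniform key with a hash.

```python
import hashlib

def chaotic_bits(seed, n_bits, r=3.99):
    """Harvest raw bits from a chaotic logistic map, x -> r*x*(1-x).

    A stand-in for a physical entropy source; the map's sensitivity to
    initial conditions mimics the unpredictability being harvested.
    """
    x, bits = seed, []
    for _ in range(n_bits):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def derive_key(bits):
    """Condition raw bits into a 256-bit key via hashing (SHA-256)."""
    raw = bytes(
        sum(b << i for i, b in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )
    return hashlib.sha256(raw).hexdigest()

key = derive_key(chaotic_bits(seed=0.123456, n_bits=512))
print(len(key))   # 64 hex chars = 256 bits
```

The conditioning step is standard practice for any physical entropy source: even a slightly biased raw stream becomes a uniformly distributed key after hashing, provided the input carries enough true entropy.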

Here's a clear comparison of the two views:

  • Traditional computation: software runs on hardware; information is stored data; memory (RAM/SSD), processing (CPU), and execution are separate components.
  • Computation as organisation: computation is the persistence and adaptation of an organized material system; information is relational structure; memory, processing, and execution are one unified process.

This isn't just theory. The paper provides experimentally accessible criteria. We can start building simple physical or chemical systems and test their ability to persist, recover, and fail. We can begin to quantify organisation and identify the "computational limits" that emerge from it. What's your take on these new possibilities?

Is the Future Self-Organizing?

This shift in perspective is as profound as it is practical. For too long, we've tried to force life into the box of our engineered logic. Can we now learn from nature's fundamentally different approach?

Kimia Witte's work challenges us to move beyond the metaphor of the computer and embrace the reality of the organism. The key takeaways are transformative:

  • Computation is a physical property, not an abstract one. It’s not about software running on hardware; it's about the persistence and adaptation of an organized material system.
  • Information isn't data; it's structure. The most important information in a system is its own organisation, which constrains its future possibilities.
  • Memory, processing, and execution are a unified whole. In living systems there is no von Neumann bottleneck: the algorithm, the data, and the processor are inseparable aspects of the same dynamic process.

By embracing this new framework, we can aspire to build technologies that are not just powerful, but natural: resilient, adaptive, and truly intelligent. We can move from programming machines to growing them.

The full paper is a must-read for anyone working on the cutting edge of deep tech: https://arxiv.org/abs/2601.11599

How does this reshape your work in AI or biotech? Do you see computation as organisation in the systems you study or build?

Comment below - let's discuss! Tag a colleague who is rethinking the future of AI.

More articles by Dr. Thane Campbell
