A big question in software is what happens to systems of record in a world of AI agents. Do they go away? Do they become mere databases? Or do they become more powerful? I’d argue they’re as powerful as ever, if not more so, in a world with 100X more interactions with software.

The purpose of your system of record (whether it’s ERP, CRM, ITSM, or a document management system) is to hold the data and manage the workflows around the most important areas of your business: your customer commitments, leads, revenue figures, inventory, IP, product research, supply chain, and more. Importantly, you want the data and workflows in these systems to operate in deterministic ways. When you ask a question like “what is my revenue,” you need the precise answer. When you move a lead from one stage to another, you can’t afford for it to get dropped. When you update your inventory, you can’t have it change inadvertently. Getting the data, permissions, access controls, business logic, and workflows right, every single time, is critical.

AI agents, on the other hand, operate in a world of non-deterministic actions. What makes them so powerful is that they can adapt to entirely new instructions on the fly, use judgment to perform actions, and operate on troves of unstructured information. When you ask an AI agent to research and summarize a set of documents, it will produce a slightly different answer every single time, and in most agent use cases this is a feature, not a bug. Just as you wouldn’t ask the world’s smartest human to memorize every piece of inventory you have, or every permission and access control governing the information each employee can see, you similarly won’t ask AI agents to do that in the future. This is where the separation of duties comes into play.
AI Agents will be doing non-deterministic actions (like generating a sales plan, responding to a customer, or writing code), and deterministic systems will be for remembering those actions and incorporating them across a variety of workflows. In fact, in a world of AI Agents running around doing autonomous tasks 24/7, in parallel, and at unlimited scale, the role that these systems of record play will likely be even more important. Getting this relationship down is going to be key to the future of the enterprise IT stack.
Comparing Emergent Systems and Deterministic Code
Explore top LinkedIn content from expert professionals.
Summary
Emergent systems (like AI agents) operate with flexibility and unpredictability, producing varied results based on changing inputs, while deterministic code follows fixed rules, ensuring the same input always delivers the same output. Comparing the two helps organizations understand when to rely on predictable automation versus adaptive, judgment-based AI, especially as business needs evolve and data complexity increases.
- Clarify your needs: Use deterministic systems for tasks that require precise, repeatable outcomes and rely on emergent systems when creativity, context, or nuanced decision-making is required.
- Combine strengths: Pair deterministic workflows for structured processes with AI agents as a backup for handling unexpected or complex scenarios, ensuring both reliability and adaptability.
- Set realistic expectations: Understand that AI-powered systems may produce different results for similar requests, requiring more testing and iteration compared to rules-based automation.
One thing I’ve learned building AI-powered automation for clients: deterministic and agentic automation do not ask for the same kind of patience. On paper, they both look like “a workflow.” In real life, they feel very different.

Deterministic automation is the rules-based kind. If X happens, do Y. If a field is blank, stop. If a deal is in Stage 3, notify this person. You reach for this when:
• The rules are clear
• The data is structured
• The same input should always produce the same output
You spend most of your time defining the logic and testing a few scenarios. Once it works, it’s usually stable until something upstream changes.

Agentic or AI-powered automation is different. You’re not just routing data. You’re asking for judgment. Things like:
• Summarize this email
• Decide which team should handle this request
• Draft a response that fits these guidelines
• Classify this lead based on what they wrote
Small input changes can shift the output. So the work changes too. You’re not just connecting steps. You’re shaping behavior. That means:
• More rounds of testing with real examples
• More time tightening prompts and instructions
• More clarity on what “good” looks like and how to make it repeatable
You do get to a stable version. It just takes more iteration to earn that stability.

Neither type is “better.” They just shine in different places.
• Clear rules, strict outcomes, predictable paths → deterministic
• Messy inputs, natural language, prioritization, judgment → agentic

The mistake is expecting agentic automation to behave like deterministic automation on day one. It can do more, but it also asks more from you: more patience, more examples, more care with inputs. Once you accept that, you start building with clearer expectations and a lot less frustration.
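The contrast above can be sketched in a few lines. This is a minimal illustration, not anyone’s production code: `call_llm` is a hypothetical stand-in for a real LLM API (stubbed here so the example runs offline), and the deal fields and team names are made up.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; a real model's answer would vary run to run."""
    return "billing"

# Deterministic automation: fixed rules, same input -> same output, every time.
def route_deal(deal: dict) -> str:
    if not deal.get("owner"):
        return "stop: missing owner"       # if a field is blank, stop
    if deal["stage"] == 3:
        return "notify: account-exec"      # if a deal is in Stage 3, notify
    return "no-op"

# Agentic automation: a judgment call; small input changes can shift the output.
def route_request(message: str) -> str:
    return call_llm(f"Which team should handle this request? Message: {message}")

print(route_deal({"stage": 3, "owner": "dana"}))        # always the same answer
print(route_request("I was charged twice this month"))  # depends on the model
```

The work splits the same way the post describes: `route_deal` is finished once its few branches are tested, while `route_request` needs rounds of prompt tightening and real examples before it stabilizes.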
-
Still running 2025 problems on 1995 logic? They’ll say, “Software is predictable, just follow the spec.” They’ll argue, “Executives can’t bet on ‘maybe.’” That mindset made sense when code was linear and deterministic.

🔴 Traditional (Deterministic) Software
1. Rules in → Rules out. Same input always yields the same output.
2. Fixed pathways. Flowcharts end where they start; nothing new is learned.
3. Change occurs by release cycle. Months of scoping, testing, and sign-offs before value appears.
4. Zero tolerance for ambiguity. Anything less than 100% certainty is marked as failure.

🟢 AI (Adaptive & Probabilistic)
1. Data in → Confidence scores out. Outputs are likelihoods, not guarantees, letting you act on emerging signals.
2. Continuous learning. Models refine themselves as new data streams in.
3. Real-time iteration. Small pilots pivot daily, compounding gains while old projects wait for approval.
4. Risk managed by guardrails. Governance shapes behaviour without strangling speed.

🚨 Where Linear Logic Fails Today
• Yesterday’s KPIs hard-code blind spots.
• Binary pass/fail gates kill novel insights.
• Waterfall sign-offs stall momentum while markets shift.
• Teams freeze when the model returns 73% instead of 100%.

✅ Why Probabilistic Leadership Wins
• Acts on confidence intervals while rivals chase certainty.
• Scales value through experimentation loops, not megaprojects.
• Embeds safeguards at the data and model layer, freeing talent to innovate.
• Turns “maybe” into competitive edge by learning faster than the environment changes.

Paper maps give one static route. GPS plots optional paths, updates with live traffic, and recalculates when reality intrudes. Only one model survives congestion. This isn’t a faster version of the old waterfall; it renders waterfall obsolete.

Prediction: Boards that demand linear absolutes will watch adaptive competitors erode their margins and poach their best people.
Ready to steer your organisation by probabilities instead of certainties—or content to follow a map that no longer matches the terrain?
-
Workflows vs. AI Agents II: Get the Best of Both Worlds

In my last post, I unpacked the differences between workflow systems and agentic systems and showed how both have propelled contact‑center AI forward. Each comes with clear pros, cons, and use‑case sweet spots. Today, I want to describe two patterns I’m seeing in real‑world deployments that capture the best of both worlds.

1. Workflow as a Tool for AI Agents
Think of refund or authentication flows: you need them to be reliable, precise, and deterministic; no imagination, no exceptions. The right approach is to wrap each of those flows in code and let the LLM call it only when the conversation reaches the correct step. It’s the same strategy an LLM uses when it calls a calculator: the model handles natural language, then hands off to deterministic code. Because these calls rarely exist in isolation, you also maintain a lightweight global‑state store, e.g. customer ID, authentication status (failed codeword, second attempt, need last 4 digits of SSN), open‑case number, refund amount, and so on. Both the agent and the workflow read from and write to that state, so every turn starts on the same page.

2. Agentic System as a Fallback‑and‑Healing Layer
Rule‑based workflows dominate high‑volume, repetitive back‑office tasks. An invoice‑processing pipeline is a classic example, because cost and reliability matter more than creativity. The problem is that even the most battle‑hardened workflow eventually hits an edge case: an OCR misreads a field, a vendor changes a PDF layout, or a UI update moves a button or turns a text field into a drop‑down box. When that happens, route the exception to an LLM‑powered agent. The workflow raises a “can’t‑proceed” flag and passes the partial context. The agent reasons through the anomaly: it asks a clarifying question, consults a knowledge base, rewrites the input, or tries to process the updated UI with a vision‑language action model. The agent writes the corrected data back to the global state, then nudges the original workflow to resume. In effect, the deterministic layer handles the 95% happy path, while the agentic layer patches the 5% that rule‑based code can’t anticipate, and every successful patch becomes new training data for further hardening.

In my next post, I will talk about test‑case management and evaluation to achieve determinism on top of underlying probabilistic models.
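Pattern 1 can be sketched as follows. This is a toy illustration under my own assumptions: `state`, `authenticate`, and `refund_tool` are hypothetical names, and the agent side (the LLM deciding when to call the tool) is left out so the deterministic half stands alone.

```python
# Lightweight global-state store shared by the agent and the workflow,
# so every conversational turn starts on the same page.
state = {"customer_id": None, "authenticated": False, "refund_amount": None}

def authenticate(customer_id: str, codeword_ok: bool) -> bool:
    # Deterministic flow: no imagination, no exceptions.
    state["customer_id"] = customer_id
    state["authenticated"] = codeword_ok
    return codeword_ok

def refund_tool(amount: float) -> str:
    # Wrapped in code; the LLM may only call this once the conversation
    # reaches the correct step, and the tool itself enforces the precondition.
    if not state["authenticated"]:
        return "refused: authenticate first"
    state["refund_amount"] = amount
    return f"refunded {amount:.2f}"

print(refund_tool(25.0))                   # refused: authenticate first
authenticate("C-1042", codeword_ok=True)
print(refund_tool(25.0))                   # refunded 25.00
```

The design point is that the agent decides *when* to call, but the deterministic tool decides *what actually happens*, exactly like the calculator hand-off in the post.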
-
When I first built agents, I realised we’re no longer just writing software; we’re designing reasoning systems. You will realise these things when you build agents:

1. Code Isn’t Deterministic Anymore
In classical programming, a function call is a contract: same input -> same output -> same world. LLMs broke that law. Now the compiler is stochastic, the runtime is non-deterministic, and the logic is emergent. This means software engineering is evolving into probabilistic systems design. You don’t guarantee outputs anymore; you bound uncertainty. It’s closer to control theory than traditional CS. You tune parameters, you calibrate temperature, you measure drift. In other words, you don’t “debug” an LLM; you align it.

2. Drawing parallels with current web-app development:
Prompting = instruction design (the new UX).
Retrieval = context injection (your dynamic knowledge base).
Memory = persistence of thought (the system’s long-term awareness).
Evaluation = emergent QA (the new testing framework).
Orchestration = reasoning topology (the system’s meta-logic).
We’ve gone from CRUD apps -> reactive apps -> cognitive apps. Traditional software: execute rules. LLM software: negotiate meaning.

3. Building reliable LLM systems means tackling:
Context fragmentation: how do you represent and recall 100k+ tokens efficiently?
Hallucination mitigation: how do you quantify “truthiness” in probabilistic text?
Model drift: what happens when model weights evolve or APIs change behavior?
Evaluation: how do you test logic that isn’t strictly deterministic?
LLMOps is not MLOps. It’s more like cognitive systems engineering. You’re not deploying a model; you’re deploying an evolving mindset.

4. The next 12 months will be about composable reasoning. Right now, chains and agents are linear. Tomorrow, they’ll be self-organizing graphs of specialized submodels, each trained, optimized, and dynamically routed by feedback loops:
Adaptive orchestration -> the system rewires its reasoning path in real time.
Symbolic + sub-symbolic fusion -> hybrid models that combine logic + language.
Autonomous reflection loops -> models that critique their own outputs.
This isn’t “prompt engineering” anymore. It’s reasoning architecture.

A recent article by Anthropic made me think about this: https://lnkd.in/g67JxrTz
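One concrete way to "bound uncertainty" rather than guarantee an output, as described above, is to sample the same prompt several times and vote. A minimal sketch, assuming a hypothetical `sample_model` that stands in for an LLM called with non-zero temperature (stubbed with a weighted coin so it runs offline):

```python
import random
from collections import Counter

def sample_model(prompt: str, rng: random.Random) -> str:
    # Stub for a temperature > 0 LLM call; weights are invented for illustration.
    return rng.choices(["refund", "escalate"], weights=[0.8, 0.2])[0]

def majority_vote(prompt: str, n: int = 15, seed: int = 0) -> tuple[str, float]:
    # Draw n samples, return the most common answer plus an empirical
    # confidence -- a bound on uncertainty, not a guarantee of correctness.
    rng = random.Random(seed)
    counts = Counter(sample_model(prompt, rng) for _ in range(n))
    answer, hits = counts.most_common(1)[0]
    return answer, hits / n

answer, confidence = majority_vote("Classify this ticket: 'charged twice'")
print(answer, round(confidence, 2))
```

With an odd `n` and two possible answers, the winning answer always carries an empirical confidence above 0.5; measuring how that confidence shifts over time is one simple way to watch for drift.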
-
Hi LinkedIn community, Jai Shree Krishna to everyone 🙏 We are witnessing one of the biggest paradigm shifts in software engineering: moving from deterministic systems to non-deterministic, agent-driven architectures.

For decades, software was built on a simple promise: predictability. If you write a function, it returns the same output every time. If you query a database, you get the exact same record. This is what we call System Design 1.0: structured, reliable, and fully controlled.

🚀 Enter System Design 2.0: The Age of AI Agents
Today, we are designing systems where outputs are not strictly predefined, decisions are made dynamically, and systems can “reason” instead of just execute. These are AI agents powered by Large Language Models (LLMs). Instead of writing step-by-step logic, we now define goals:
👉 “Help the user book a meeting”
👉 “Assist in debugging an issue”
👉 “Generate a personalized response”
The agent decides how to achieve it.

1) ⚖️ Deterministic vs Non-Deterministic Thinking
System Design 1.0 (Deterministic):
Fixed flow (A → B → C)
Same input = same output
Easy to debug and test
Works best for structured problems
System Design 2.0 (Non-Deterministic):
Goal-driven, not step-driven
Same input ≠ same exact output
Requires validation layers
Handles ambiguity like humans

2) 🧠 Core Components of Agentic Systems
To build reliable AI systems, we need new building blocks:
1. Inference Engine (The Brain): the LLM acts as the decision-maker, stateless by default; it processes context and decides the next action.
2. Memory Layer (Context Engine): short-term conversation history, long-term vector databases (semantic search). Helps the agent “remember.”
3. Tool Calling (Action Layer): APIs, DB queries, external services. The agent decides when and how to use tools.
4. Guardrails (Safety Layer): input validation, output filtering; prevent hallucinations and unsafe actions.

3) 🔁 The Cognitive Loop
Agent systems operate in a loop: Understand → Decide → Act → Observe → Repeat. This loop makes systems adaptive, but also introduces uncertainty.

4) ⚠️ The Biggest Mindset Shift
As engineers, we are no longer just writing logic; we are designing behavior under uncertainty. This means: you don’t control every step, you guide decisions instead of enforcing them, and you verify outputs instead of assuming correctness.

5) 🛠️ When Should You Use Agents?
Use AI agents when:
✅ The problem is ambiguous
✅ Inputs are unstructured (text, voice, intent)
✅ Flexibility is more important than precision

6) Avoid agents when:
❌ Exact correctness is critical (e.g., payments, banking logic)
❌ Latency must be extremely low
❌ The workflow is already well-defined

#SystemDesign #AI #AgenticAI #LLM #SoftwareEngineering #BackendDevelopment #FutureOfTech #MachineLearning #Developers #TechLeadership #Innovation #GenAI #LearningJourney
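The cognitive loop described above (Understand → Decide → Act → Observe → Repeat) can be sketched in a few functions. This is a deliberately tiny illustration: the `decide` step is a stub for the LLM, and the booking scenario and function names are invented for the example.

```python
def understand(observation: str) -> dict:
    # Understand: turn the raw observation into context.
    return {"goal_met": "booked" in observation, "last": observation}

def decide(ctx: dict) -> str:
    # Decide: stub for the LLM decision-maker -- stateless, context in, action out.
    return "stop" if ctx["goal_met"] else "call_calendar_api"

def act(action: str) -> str:
    # Act: the tool layer, where deterministic side effects live.
    return "meeting booked" if action == "call_calendar_api" else "idle"

def run_agent(observation: str, max_steps: int = 5) -> list[str]:
    trace = []
    for _ in range(max_steps):          # Understand -> Decide -> Act -> Observe
        ctx = understand(observation)
        action = decide(ctx)
        if action == "stop":
            break
        observation = act(action)       # observe the result, then repeat
        trace.append(observation)
    return trace

print(run_agent("user asked to book a meeting"))
```

Note the `max_steps` bound: because the loop's stopping condition depends on a model's judgment rather than fixed logic, a hard iteration cap is a typical guardrail against the loop never terminating.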
-
Historically, systems and tools were deterministic. AI, particularly LLMs, is probabilistic. That sounds technical. It is not. It is a major leadership and architecture shift.

Deterministic systems generally follow fixed logic to produce repeatable outcomes. Probabilistic systems do something different. They generate outputs based on patterns, weighting, context, and likelihood. Which means the same system can behave differently depending on framing, sequence, surrounding inputs, and operating conditions.

That is not a small difference. It means AI cannot just be “added” to an environment as if it were another piece of software. It changes the design burden. Because once the system is probabilistic:
structure matters more
decision architecture matters more
control paths matter more
human judgment matters more

Why? Because variation is no longer an exception. It is part of the nature of the system. And if the governing structure is weak, that variation does not stay contained. It shows up in outputs. In decisions. In workflows. In escalations. In trust.

This is why so many AI conversations still start too low in the stack. They start with the model. Or the use case. Or the interface. But the real question is more foundational:
What meta-structure is this system entering?
What definitions hold it together?
What decision rights surround it?
What escalation logic catches ambiguity?
What human judgment remains sacred at the point of consequence?

Because deterministic systems can tolerate a surprising amount of structural laziness. Probabilistic systems cannot. They expose it. Then they scale it. That is why the AI era is not just about intelligence. It is about architecture.

Tomorrow: drift.

#AI #Leadership #DecisionArchitecture #Governance #Coherence #DigitalTransformation
-
Computers used to exclusively follow rules; now they generate possibilities. Combining both approaches maximizes their potential. 👇

For as long as we’ve had computers, they’ve produced predictable outputs. But AI, in the form of LLMs, represents a new kind of unpredictable computing. The key to implementing useful AI solutions is making the most of both paradigms.

One of the oldest known computers is the Antikythera mechanism, an ancient device for calculating astronomical events. Given certain inputs, it computed positions based on logic hard-coded in its gears. Traditional software is kind of like that: it determines what to do based on pre-defined conditions. You give the computer input and get predictable outcomes. If a program produces unexpected results, it’s either because the programmer introduced randomness or because there are bugs. Both can be replicated by mirroring the exact conditions that led to the outcome. Because of this, traditional computation is *deterministic*.

Modern AI, such as LLMs, represents a new paradigm. If you’ve used ChatGPT or Claude, you know you seldom get the same results given the same input. Unlike traditional programs, LLMs don’t follow explicit instructions. Instead, they generate responses by weighting probabilities across a vast network of linguistic relationships. There can be many likely paths to a response. This is a new kind of *probabilistic* computing.

Much of what we value about computers is due to their predictability. That’s one reason why so many people find LLMs baffling or objectionable: probabilistic behavior breaks our mental models for how computers work. Probabilistic computing is good for some tasks but not others. Brainstorming is a good use case, since you’re explicitly asking for divergent thinking. On the flip side, math requires deterministic approaches. Prompt engineering is an attempt to constrain probabilistic processing to make LLMs behave more predictably. But it only goes so far: you can’t force LLMs to behave like traditional programs.

A better approach is building deterministic software that uses AI at particular junctures for specific tasks. An example is my approach to re-categorizing blog posts: a deterministic program iterates through files, offloading pattern matching to an LLM. The LLM is used only for the stuff probabilistic systems do well.

This new paradigm offers unprecedented opportunities. But taking advantage of probabilistic systems requires adding some determinism to the mix. You can’t ask ChatGPT to re-organize a website, but you can build scaffolding using traditional approaches that take advantage of what each does best. If you work with content, it behooves you to learn how to combine AI’s probabilistic approach with the traditional deterministic approach. That’s what I’ll be teaching in my hands-on workshop at the IA Conference in April. Join me there to learn how to do it; link in the first comment. 👇

#InformationArchitecture #IA #AI
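The blog-post re-categorization example above can be sketched roughly like this. The author’s actual implementation isn’t shown in the post, so everything here is assumed: `classify_with_llm` is a placeholder for a real LLM call (replaced by a crude keyword match so it runs), and the file names and categories are invented.

```python
def classify_with_llm(text: str) -> str:
    # Placeholder for the probabilistic step -- a real LLM call would go here.
    return "design" if "wireframe" in text.lower() else "strategy"

def recategorize(posts: dict[str, str]) -> dict[str, str]:
    # The deterministic scaffolding: iteration, ordering, and bookkeeping
    # are plain code; only the fuzzy pattern matching is offloaded.
    categories = {}
    for filename, body in sorted(posts.items()):
        categories[filename] = classify_with_llm(body)
    return categories

posts = {
    "2021-03-sitemaps.md": "Notes on sitemaps and strategy.",
    "2022-07-wireframes.md": "A wireframe walkthrough.",
}
print(recategorize(posts))
```

The division of labor mirrors the post’s point: the loop is boring and repeatable, and the judgment call lives in exactly one well-contained function that can be validated or retried.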
-
Are large language models (LLMs) more like natural systems than the deterministic machines we usually associate with computers?

🔵 What is a Natural System?
To explore this, we first need to define what a natural system is. Let’s start by understanding what makes natural language “natural.” Unlike formal languages, such as Python, lambda calculus, or first-order logic, which are designed with rigid syntactic rules, natural language evolves organically, without strict rules that can be “computed.” Formal systems, such as mathematics, ontologies, and programming languages, operate in a mechanistic, predictable manner. They’re akin to classical physics: governed by clear rules and producing deterministic outputs. In contrast, natural systems, such as biology, ecosystems, and human language, are adaptive, complex, and emergent. They evolve in ways that can’t be reduced to pre-programmed, rule-bound models.

🔵 So, What About LLMs?
At first glance, LLMs appear purely formal: they’re built on algorithms and mathematics and run on computers. We might assume they are formal systems, but the more we study them, the more they resemble natural systems in fascinating ways:
🔹 Non-Deterministic Outputs: Like nature, LLMs are inherently non-deterministic. Given the same prompt, they may produce different responses due to their probabilistic nature.
🔹 Distributed Representations: LLMs represent concepts across billions of parameters. No single neuron holds all the information; meanings are distributed across the network.
🔹 Emergent Capabilities: LLMs develop their abilities not through explicit programming but through exposure to vast amounts of text data.
🔹 Contextual Adaptation: LLMs adapt their outputs based on the input context during inference.
🔹 Self-supervised Learning: LLMs aren’t reliant on strict supervision. Instead, they learn patterns through self-supervised training.

🔵 Treating LLMs as Natural Systems
LLMs are still fundamentally engineered systems. Unlike living systems, they have no self-loop, no true world model, and no agency; they are probabilistic rather than anticipatory. But pragmatically, it might be useful to treat them as if they were natural systems. If we begin viewing LLMs as natural systems, this changes how we approach them. Machines are built to be deterministic and predictable; we can usually dissect their behaviour using reductionism. However, the complexity of LLMs makes it difficult to reduce them to simple, mechanistic rules. This suggests that instead of developing a “physics” for LLMs, a precise science of how they operate, we might need to approach them more like biologists. Furthermore, if we treat LLMs as natural systems, then we need to pair them with formal systems to carry out true deductive reasoning. What could that formal system be, you ask? Well, how about an Ontology? 😉

⭕ LLM + Ontology: https://lnkd.in/eJ7S22hF