AI and the Illusion of Control: A Collaborative Deep Dive with Jonar Marzan
Here’s a problem I keep seeing: cyber security teams are expected to make sense of AI after the fact.
And that’s because AI didn’t arrive with a well-laid-out plan. It arrived bundled with tools, vendor updates, and platforms we already rely on. One day, it was optional. The next, it was simply there.
Teams are still accountable for breaches, data loss, and reputational damage, but they no longer control many of the systems that shape that risk. AI widened that gap between accountability and control faster than most organisations are willing to admit.
This was the topic of my collaboration with Jonar Marzan. In our latest Cyber Strategy Collective essay, we discussed several structural issues and offered guidance on how to overcome them.
Adoption without a plan needs to stop
AI adoption inside large organisations is mostly tactical. Proofs of concept everywhere, with governance appearing late, usually once the board starts asking uncomfortable questions. What’s often missing is strategy: a clear answer to where AI fits, why it’s being used, and what level of risk is actually acceptable.
At the same time, control has shifted. Cloud already moved the boundary outward. AI completes that move. Vendors no longer just host data: they process it, learn from it, and embed new capabilities into products at a pace security teams can’t realistically gate.
That’s where the shared responsibility model starts to creak. Contracts and addenda don’t change who ends up explaining a breach. Accountability hasn’t moved; only authority has. And that’s a dangerous place to sit.
Back to the basics (again and again)
We keep talking about the basics at Chaleit because they are so important and often overlooked. The same applies to AI adoption.
Boards want to know what’s being done about AI. The more useful question is whether data governance, data security, and privacy are actually in place. AI doesn’t fix weak foundations. But it exposes them quickly.
Cyber teams are already overloaded. AI can help — but only if it’s used to reduce cognitive load, not replace judgement. In aviation, automation exists to give humans space to think, not to outsource responsibility. The same should be true in security.
What gives me some confidence is that none of this is unsolvable. But it does require discipline, strategy, and honest engagement with suppliers. Higher standards on the supply side. And a willingness to say no to shortcuts that look efficient but create long-term risk.
We unpack all of this — the challenges and the practical responses — in the full essay, which I hope you’ll read. If you’re wrestling with AI risk, accountability, or simply trying to bring clarity to a very noisy space, it’s worth the time.
Security done well isn’t about flashy tools. It still comes down to trust and getting the fundamentals right.