The Cost of "No Human in the Loop"

AI is a powerful engine for productivity, but it is a terrible unsupervised employee. The pressure to cut costs, modernize, and appease shareholders has led many enterprises to strip away human oversight, deploying AI autonomously. The assumption is that the technology is ready to fly solo. The reality, however, is that removing the "human in the loop" doesn't just automate workflows; it automates risk. When companies push AI directly to the front lines without guardrails, the financial and reputational blowback is swift and entirely self-inflicted.

"Most executives are desperate to figure out how to weave AI into their company's DNA, but the race to be first is resulting in a dangerous blind deployment approach."

Consider these recent, well-documented failures of blind enterprise AI deployment:

  • The Air Canada Defense: In 2022, Air Canada's unsupervised chatbot hallucinated a bereavement fare policy, incorrectly promising a customer a retroactive discount. When sued, the airline argued that the chatbot was "a separate legal entity that is responsible for its own actions." The tribunal rejected this remarkable defense in 2024, holding the enterprise fully liable for the outputs of its digital tool.
  • The $1 Chevrolet Tahoe: In late 2023, a California Chevy dealership used a ChatGPT-powered bot for customer service. Because the bot lacked human oversight or even basic business-rule validation (see the sketch after this list), users easily manipulated it into agreeing to sell a 2024 Chevy Tahoe (MSRP ~$76,000) for a single dollar, forcing the dealership to pull the system offline immediately.
  • The Copilot Intelligence Failure: In late 2025, West Midlands Police in the United Kingdom relied on an intelligence and risk assessment to support a decision to bar Maccabi Tel Aviv supporters from attending a Europa League match against Aston Villa. The assessment, submitted to the local Safety Advisory Group, was later found to contain false information generated with Microsoft Copilot, including a reference to a nonexistent Maccabi Tel Aviv match that no one independently verified before it informed the decision. Subsequent reviews by His Majesty's Inspectorate of Constabulary and the Home Affairs Select Committee concluded that the report contained multiple inaccuracies and outright false stories, reflected confirmation bias, and demonstrated inappropriate reliance on generative AI without human validation. The fallout included a public apology from senior police leadership, a temporary suspension of Copilot for intelligence work, and significant scrutiny from Parliament and affected communities.
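
A common thread in the first two failures is that free-form model text was allowed to become a binding commitment. A minimal defensive pattern, sketched below under assumed conditions (the names MSRP, MAX_DISCOUNT, and validate_offer are hypothetical stand-ins for a real pricing system), is to check any model-proposed offer against authoritative business rules before it ever reaches a customer:

```python
# A minimal sketch: validate model-proposed offers against business rules
# before they reach the customer. All names here are illustrative
# assumptions, not a real dealership's system.

MSRP = {"2024-tahoe": 76_000}   # authoritative price list, not the model's memory
MAX_DISCOUNT = 0.10             # business rule: at most 10% off MSRP

def validate_offer(vehicle: str, offered_price: float) -> bool:
    """Reject any model-generated offer that violates pricing rules."""
    floor = MSRP[vehicle] * (1 - MAX_DISCOUNT)
    return offered_price >= floor

# The $1 Tahoe never survives this check, however cleverly the bot was coaxed.
assert not validate_offer("2024-tahoe", 1.00)
assert validate_offer("2024-tahoe", 70_000.00)
```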

These public missteps highlight a glaring gap in corporate strategy: a fundamental misunderstanding of accountability. That misunderstanding brings us to a critical paradox in enterprise AI adoption.


"When a human makes a mistake, there is inherent accountability and liability. But if AI is doing it, the system comes with a built-in disclaimer that it makes mistakes. How can an enterprise rely on a tool that legally absolves itself of accuracy and responsibility?"

This is the exact question every corporate board should be asking. When you hire a human, they operate under a social and legal contract. If they act negligently, there are consequences: reprimands, retraining, or termination. The enterprise manages that exposure through vicarious liability. Foundation AI models, on the other hand, are probabilistic software, not people.


Every major tech company attaches an explicit warning to its AI: "AI can make mistakes. Check important info." When an enterprise ignores that disclaimer and deploys autonomously, the liability shifts entirely to the deployer:

  • Vendor Disclaimers: AI developers explicitly disclaim accuracy in their Terms of Service, and they certainly do not indemnify your use of the output. If an autonomous AI hallucinates and costs your company millions of dollars or damages a client relationship, the vendor is legally shielded. The liability rests entirely on the deploying enterprise.
  • The Illusion of Agency: An AI tool has no legal personhood or bank account. You cannot fire or sue an algorithm. The company that deploys the autonomous system assumes strict, unmitigated liability for whatever that system generates.

To truly grasp the absurdity of expecting accountability from an autonomous system, we don't even need to look at enterprise software. We just need to look at our public roads, where physical AI deployment is testing the absolute limits of the law.

"We are seeing autonomous driving systems commit traffic violations, but who exactly gets the ticket when there is no human in the driver's seat?"

The answer, for a long time, was nobody. Decades of traffic law were written with the fundamental assumption of a human operator. When you remove that human, the traditional legal framework shatters. We are watching regulators scramble in real time to figure out how to hold lines of code accountable for running stop signs.

The reality of autonomous vehicle (AV) liability is currently unfolding on city streets:

  • The Unticketable U-Turn: In San Bruno, California, police pulled over a Waymo robotaxi for an illegal U-turn. Because traffic citations legally must be issued to a human driver, the officers were entirely unable to write a ticket and simply had to let the empty car go.
  • The Legislative Scramble: To close this accountability void, California recently passed Assembly Bill 1777 (which takes effect in July 2026). This law finally allows law enforcement to issue "notices of noncompliance" directly to the AV companies, bypassing the need for a human driver and placing the liability firmly on the deploying corporation.

The throughline here is undeniable: whether it is a customer service bot promising a refund or a robotaxi running a red light, accountability cannot be outsourced to an algorithm.
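
To make "human in the loop" concrete rather than rhetorical, here is a minimal sketch of an approval gate for AI-generated customer replies. Everything in it is an assumption for illustration: Draft, needs_review, and the keyword list stand in for a real model client, policy engine, and review queue.

```python
# A minimal sketch of a human-in-the-loop gate for AI-generated replies.
# All names are hypothetical stand-ins for real systems.

from dataclasses import dataclass

# Terms suggesting the model is making a commitment on the company's behalf.
# In production this would be a classifier or policy engine, not a keyword list.
COMMITMENT_TERMS = ("refund", "discount", "price", "policy", "guarantee")

@dataclass
class Draft:
    customer_id: str
    text: str

def needs_review(draft: Draft) -> bool:
    """Route any reply that reads like a binding commitment to a human."""
    lowered = draft.text.lower()
    return any(term in lowered for term in COMMITMENT_TERMS)

def handle(draft: Draft, review_queue: list, outbox: list) -> None:
    """Send low-risk replies; hold anything resembling a commitment for approval."""
    if needs_review(draft):
        review_queue.append(draft)   # a human approves, edits, or rejects it
    else:
        outbox.append(draft)         # low-risk reply goes out automatically

# The Air Canada reply would have waited here for a person to check.
queue, outbox = [], []
handle(Draft("c-42", "Our bereavement policy lets you claim the fare retroactively."), queue, outbox)
assert queue and not outbox
```

The check itself is deliberately crude; the shape is the point. Anything that reads like a promise waits for a person, and the person, not the probabilistic model, is where accountability attaches.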


The Bottom Line: Risk Appetite Dictates Deployment

Ultimately, how you approach this depends entirely on your organization and your specific risk appetite. The fundamentals of risk management don't suddenly disappear simply because you're dealing with AI.

Consider the spectrum of risk: if you are a pharmaceutical organization, how willing are you to let AI autonomously prescribe medication or compound chemical substances without a human in the loop? Conversely, if you are a sci-fi writer, how much is really at stake in letting AI draft your next novel? The stakes are entirely different, and the oversight should be calibrated accordingly.
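
One way to operationalize that spectrum is an explicit, risk-tiered deployment policy. The sketch below is purely illustrative; the tiers and use-case mappings are assumptions to be calibrated against your own risk appetite and regulatory environment. The design choice it encodes is the important part: unknown use cases default to the strictest treatment, not the loosest.

```python
# A hedged sketch of a risk-tiered AI deployment policy. The tiers and
# use cases are illustrative assumptions, not a standard.

from enum import Enum

class Oversight(Enum):
    AUTONOMOUS = "no human review required"
    SPOT_CHECK = "sampled human review after the fact"
    HUMAN_GATE = "human approval required before any output ships"
    PROHIBITED = "do not deploy AI for this task"

# The same technology gets very different guardrails depending on
# what a wrong answer can cost.
POLICY = {
    "draft_fiction_outline":  Oversight.AUTONOMOUS,   # low stakes: a bad draft wastes time
    "customer_service_reply": Oversight.HUMAN_GATE,   # can bind the company (Air Canada)
    "intelligence_report":    Oversight.HUMAN_GATE,   # affects rights (the Copilot failure)
    "prescribe_medication":   Oversight.PROHIBITED,   # patient safety: keep humans in charge
}

def required_oversight(use_case: str) -> Oversight:
    """Unknown use cases default to the strictest treatment, not the loosest."""
    return POLICY.get(use_case, Oversight.PROHIBITED)

print(required_oversight("customer_service_reply").value)
# -> human approval required before any output ships
```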

It all comes down to risk, and risk management has been a cornerstone of security practice for a very long time. AI is no different. Enterprises must embrace AI to stay competitive, but the deployment must be strategic. The "human in the loop" is not a bottleneck to innovation; it is the ultimate corporate shield.

Co-Author: Nemi G

Reviewer: Jason James, CISA, CDPSE, CCISO


I like the thought exercise in this post. A company can either transfer, mitigate, or accept risk. The liability of a risk being exploited is joined at the hip with the risk itself: if you transfer a risk to someone else, they absorb it. The cybersecurity industry revolves around this system, and it might be worth consideration by other tech companies. When a business pays an MSSP for security protection, they're paying a specialist for expertise, but even more, they're paying to transfer the risk.

Strong piece. The key distinction is that “human in the loop” is not just a control mechanism. It is the point where accountability stays attached to the system instead of dissolving into the tool. The article’s core argument is that enterprises remain liable for what autonomous AI does, even when the vendor warns that the model can make mistakes. What makes this especially relevant is the link between deployment speed and organizational design. As the piece argues through examples like the Air Canada chatbot case and other public failures, the issue is rarely just model capability. It is whether the company put clear validation, review, and ownership around the decision path before automation touched the front line.

Here's what most people miss about AI liability: it doesn't begin at the point of failure. It begins at the deployment decision. Solid human-in-the-loop controls are great, but they don't erase a bad deployment rationale. If you picked the wrong tool, cut corners on validation, or used AI outside its lane, you own that outcome. Full stop.

I just rewrote the procurement contracts for a large government agency to include an entire section on cyber risk, including AI. Not only that, but we expanded the Vendor Risk Management assessment capabilities to address AI use in the development of vendor apps and in the delivery of their systems. At the end of the day that's just a small part of addressing AI risk, but it's a good place to start.

This is an important conversation that many organizations haven’t fully thought through yet. As AI adoption accelerates, the real challenge won’t be just capability; it will be governance, accountability, and understanding where responsibility ultimately lies.
