AI... Cloud... BYOD... SaaS Sprawl... When Will We Learn?
We’re repeating the same mistakes we've been making for decades.
We Have a Discipline Problem
The security industry has known how to manage access controls for decades. We’ve known how to classify data since before most of today’s practitioners entered the field. Policy enforcement is not an emerging concept. These are fundamentals.
And yet, according to the IBM 2025 Cost of a Data Breach Report, 97% of organizations that experienced an AI-related security incident lacked proper AI access controls. And 63% either had no AI governance policy or were still writing one.
97% isn't a slim majority. It's not a concerning trend. Nearly every organization that suffered an AI-related breach had failed to apply the same access controls they would apply to any other system in their environment.
This isn't a technology failure. This is a choice. And we’ve made this exact choice before.
We’ve Seen This Movie Before
Act One: Cloud. In the mid-2000s, organizations started moving workloads to the cloud. The speed was intoxicating. Spin up a server in minutes instead of weeks. No capital expenditure. No racking and stacking. IT loved the flexibility. Leadership loved the cost savings.
Nobody asked who was governing it.
Shadow IT exploded. Developers stood up AWS instances without telling security. Sensitive data landed in S3 buckets with public read access. The breaches followed. Misconfigured cloud storage became the punchline of every security conference for a decade.
The fundamentals that would have prevented most of those incidents?
Things we’d been doing on-premises for years. We chose not to apply them to the new technology because the new technology moved too fast.
Act Two: BYOD. Around 2010, employees started bringing personal devices to work. Phones. Tablets. Laptops. The productivity gains were real. Employees were more responsive, more mobile, more connected.
Again, nobody asked who was governing it.
Personal devices accessed corporate email, file shares, and business applications with no MDM, no encryption requirements, and no acceptable use policies. Lost phones became data breaches. Personal cloud backups became exfiltration vectors.
The fix? The same fundamentals. Access controls on corporate data. Device management policies. Data classification rules that determined what belonged on a personal device and what did not. We knew the playbook. We chose speed over discipline.
Act Three: SaaS Sprawl. By 2015, every department had its own SaaS stack. Marketing bought their own analytics platform. Sales brought in a CRM add-on. HR adopted a new recruiting tool. Each purchase was small enough to avoid IT procurement review.
Within a few years, the average enterprise had hundreds of SaaS applications, many containing sensitive customer or employee data, most operating outside the security team’s visibility. The term “shadow IT” entered the mainstream vocabulary.
Once again, the fix was not a new technology. The fix was governance. Vendor risk assessments. Access management. Data classification. Policy enforcement.
The same fundamentals. Every time.
Act Four: AI. Same Script. Higher Stakes.
Today, employees across every industry are adopting AI tools at a pace that makes cloud adoption look cautious.
An IBM-sponsored study found that 80% of American office workers are using AI in their roles, but only 22% rely exclusively on employer-provided tools. The rest are using personal accounts, free-tier platforms, and unapproved applications.
UpGuard’s research found that 81% of employees and 88% of security leaders report using unapproved AI tools. Security leaders. The people responsible for protecting the organization.
The pattern is identical. New technology ships. Everyone adopts it because the productivity gains are obvious. Nobody governs it because governance slows things down. The breach follows.
But this time, the stakes are higher.
Shadow AI isn't a misconfigured S3 bucket. Employees are pasting proprietary data, customer records, and competitive intelligence into third-party AI models. According to Menlo Security’s 2025 report, 57% of employees using AI tools are inputting sensitive data through personal accounts.
The IBM data confirms the financial impact. Shadow AI was a factor in 20% of breaches studied, adding an average of $670,000 to the cost. Those breaches also had longer lifecycles, higher rates of customer PII exposure (65% vs. the 53% global average), and greater intellectual property theft (40% vs. 33%).
The Irony: AI Is Also the Best Defense
The same IBM report found that organizations using AI and automation extensively throughout their security operations saved an average of $1.9 million in breach costs and reduced the breach lifecycle by 80 days.
AI is simultaneously the biggest ungoverned risk and the most effective cost reducer. The organizations that deploy AI with governance perform dramatically better. The organizations that let AI adoption outrun oversight get hammered.
This is the cost of discipline versus the cost of neglect.
We Already Know How to Fix It
If you’ve been in this industry long enough, you know what comes next. Because the fix for AI risk is the same fix we’ve applied to every technology cycle for the past three decades.
Access controls.
Data classification.
Policy enforcement.
Inventory and visibility.
Do you know which AI tools are in use across the organization? Not the ones you approved. The ones your employees are using. All of them. Only 34% of organizations with governance policies conduct regular audits for unsanctioned AI, according to the IBM report.
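The inventory step can start small: compare the AI-service domains showing up in your egress or proxy logs against a sanctioned allowlist, and flag everything else. A minimal sketch of that audit follows; every domain name and the log format here are illustrative assumptions, not tools or data from the report.

```python
# Hypothetical shadow-AI audit: flag AI-service domains seen in egress
# logs that are NOT on the sanctioned allowlist. All domains and the
# "user,domain" log format below are made up for illustration.

SANCTIONED_AI_DOMAINS = {
    "copilot.internal.example.com",   # employer-provided assistant (hypothetical)
    "approved-llm.example.com",
}

# Sanctioned tools plus known consumer/free-tier AI services (hypothetical).
KNOWN_AI_DOMAINS = SANCTIONED_AI_DOMAINS | {
    "chat.example-freetier.ai",
    "personal-assistant.example.io",
}

def audit_egress(log_lines):
    """Return the set of unsanctioned AI domains observed in the logs."""
    unsanctioned = set()
    for line in log_lines:
        # Assume each log line looks like "user,domain" (illustrative).
        _, domain = line.strip().split(",", 1)
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED_AI_DOMAINS:
            unsanctioned.add(domain)
    return unsanctioned

logs = [
    "alice,approved-llm.example.com",
    "bob,chat.example-freetier.ai",
    "carol,personal-assistant.example.io",
]
print(sorted(audit_egress(logs)))
# → ['chat.example-freetier.ai', 'personal-assistant.example.io']
```

A script like this is a starting point, not a control: it only sees the AI services you already know about, which is exactly why regular audits matter.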
None of this is new thinking. Every one of these controls existed before generative AI did. We apply them to endpoints, to cloud workloads, to SaaS applications, to mobile devices. We apply them to everything except the technology that is growing the fastest and handling the most sensitive data.
That's the discipline problem.
Why We Keep Choosing Not To
Every time this cycle repeats, the excuse is the same: governance slows things down.
And the excuse is partially true. Governance does create friction. Approval processes take time. Acceptable use policies require review. Access control decisions require thought.
But the alternative is not frictionless. The alternative is a $4.44 million global average breach cost. The alternative is $670,000 in additional costs from shadow AI. The alternative is 65% of your customers’ PII exposed instead of 53%.
The organizations that governed cloud early spent less on breach response later. The organizations that implemented MDM before the BYOD breach spent less on incident response. The organizations that managed SaaS procurement proactively spent less on vendor-related incidents.
Discipline is always cheaper than cleanup.
The Question You Need to Answer
Has your organization written an AI governance policy?
Not a plan to write one. Not a draft sitting in someone’s inbox. An actual policy, approved by leadership, distributed to employees, and enforced through technical and administrative controls.
If your answer is yes, you’re ahead of 63% of the organizations studied in the IBM report. Keep going. Audit it. Test it. Update it as the technology evolves.
If your answer is “in progress,” you need to finish. The gap between “in progress” and “enforced” is where breaches live.
If your answer is “not started,” you’re making the same bet that every organization made with cloud in 2008, with BYOD in 2012, and with SaaS in 2016. The bet is that the breach will happen to someone else first. That bet has never paid off.
Tell me where you stand: policy in place, in progress, or not started. And if you’re honest, tell me why.
We’ve always known how to do this. The question is whether we’ll choose to do it this time before the breach, instead of after.
About the Author
Jerod Brennen is a vCISO and Board Advisor at SideChannel, where he advises mid-market and enterprise organizations on cybersecurity strategy, AI governance, and risk quantification. With 25 years of experience spanning global retail (Abercrombie & Fitch), higher education (The Ohio State University), and identity governance (SailPoint, One Identity), Jerod translates technical risk into boardroom strategy. He is a CISSP, a LinkedIn Learning author, and a former music teacher who believes complex topics should be simple to understand.
Connect with Jerod on LinkedIn: linkedin.com/in/jerodbrennen
#AIGovernance #ShadowAI #CyberResilience #vCISO #RiskQuantification