Why AI Is The New Cybersecurity Battleground - Forbes AI has evolved from a tool to an autonomous decision-maker, reshaping the landscape of cybersecurity and demanding innovative defense strategies. Artificial intelligence has quickly grown from a capability to an architecture. As models evolve from backend add-ons to the central engine of modern applications, security leaders are facing a new kind of battlefield. The objective not simply about protecting data or infrastructure—it’s about securing the intelligence itself. In this new approach, AI models don’t just inform decisions—they are decision-makers. They interpret, respond, and sometimes act autonomously. That shift demands a fundamental rethink of how we define risk, build trust, and defend digital systems. From Logic to Learning: The Architecture Has Changed Historically, enterprise software was built in layers: infrastructure, data, logic, and presentation. Now, there’s a new layer in the stack—the model layer. It’s dynamic, probabilistic, and increasingly integral to how applications function. Jeetu Patel, president and chief product officer at Cisco, described this transformation to me in a recent conversation: “We are trying to build extremely predictable enterprise applications on a layer of the stack which is inherently unpredictable.” That unpredictability is not a flaw—it’s a feature of large language models and generative AI. But it complicates traditional security assumptions. Models don’t always produce the same output from the same input. Their behavior can shift with new data, fine-tuning, or environmental cues. And that volatility makes them harder to defend. AI Is the New Attack Surface As AI becomes more central to application workflows, it also becomes a more attractive target. Attackers are already exploiting vulnerabilities through prompt injection, jailbreaks, and system prompt extraction. And with models being trained, shared, and fine-tuned at record speed, security controls struggle to keep up. Runtime Guardrails and Machine-Speed Validation Given the speed and sophistication of modern threats, legacy QA methods aren’t enough. Patel emphasized that red teaming must evolve into something automated and algorithmic. Security needs to shift from periodic assessments to continuous behavioral validation. Agentic AI: When Models Act on Their Own The risk doesn’t stop at outputs. With the rise of agentic AI—where models autonomously complete tasks, call APIs, and interact with other agents—the complexity multiplies. Security must now account for autonomous systems that make decisions, communicate, and execute code without human intervention. #cybersecurity #AI #AgenticAI #dynamic #riskmanagment
How AI Will Shape Software Security
Explore top LinkedIn content from expert professionals.
Summary
Artificial intelligence is transforming software security by making systems smarter but also introducing new risks. AI can both help defend against cyber threats and be used by attackers, meaning organizations must rethink how they protect software and data as AI becomes a central part of their operations.
- Update security strategies: Adapt your cybersecurity approach to address the unique vulnerabilities introduced by AI-powered tools and agents.
- Monitor AI behaviors: Continuously observe how AI systems make decisions and interact with data to catch unexpected actions and potential breaches.
- Prepare for dual threats: Strengthen defenses knowing that AI can be used by both security teams and attackers, so staying ahead of evolving threats is essential.
-
-
I spent more time digging into the new NIST Cybersecurity Profile for AI... The document frames AI cybersecurity around three distinct focus areas. Not just securing AI systems. But understanding how AI changes cybersecurity as a whole. The first focus area is securing AI systems themselves. This includes protecting and understanding training data implications, safeguarding model artifacts, securing inference APIs, and preventing things like model theft, prompt injection, or adversarial manipulation. The second focus area is using AI to strengthen cybersecurity operations. Security teams are already experimenting with AI for threat detection, GRC, anomaly analysis, and automating investigation workflows. The third focus area is defending against attackers who are using AI. That last point is where things start to change the security landscape. AI can accelerate vulnerability discovery, generate convincing phishing campaigns, and automate reconnaissance in ways that were previously very manual. In other words, AI is now influencing both sides of the cybersecurity equation. Organizations have to secure the AI systems they deploy while also preparing for attackers who are increasingly augmented by AI themselves. That dual pressure is why AI security is quickly becoming part of mainstream cybersecurity strategy. It is not a niche governance topic anymore. It is becoming part of how modern security programs operate. #AI #GRCEngineering
-
𝐀𝐈 𝐢𝐬 𝐫𝐞𝐬𝐡𝐚𝐩𝐢𝐧𝐠 𝐜𝐲𝐛𝐞𝐫𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐚𝐭 𝐚 𝐟𝐮𝐧𝐝𝐚𝐦𝐞𝐧𝐭𝐚𝐥 𝐥𝐞𝐯𝐞𝐥, 𝐚𝐧𝐝 𝐏𝐫𝐨𝐣𝐞𝐜𝐭 𝐆𝐥𝐚𝐬𝐬𝐰𝐢𝐧𝐠 𝐛𝐲 Anthropic 𝐡𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬 𝐭𝐡𝐢𝐬 𝐬𝐡𝐢𝐟𝐭 𝐛𝐲 𝐦𝐨𝐯𝐢𝐧𝐠 𝐛𝐞𝐲𝐨𝐧𝐝 𝐭𝐫𝐚𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐬𝐜𝐚𝐧𝐧𝐢𝐧𝐠 𝐭𝐨 𝐬𝐲𝐬𝐭𝐞𝐦-𝐥𝐞𝐯𝐞𝐥 𝐯𝐮𝐥𝐧𝐞𝐫𝐚𝐛𝐢𝐥𝐢𝐭𝐲 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠. Learn why an AI model release was halted by its own creators and what it reveals about risks we are only beginning to understand. 𝐓𝐡𝐞 𝐚𝐫𝐭𝐢𝐜𝐥𝐞 𝐜𝐨𝐯𝐞𝐫𝐬: • How the model understands entire software ecosystems • Why decades-old vulnerabilities are now being rediscovered • How AI can simulate real attack paths, not just detect issues • What this means for ransomware, cloud security, and supply chains This isn’t just about finding more bugs. It’s about understanding how systems can break before attackers do. If you're working in cybersecurity, AI, or risk management, this shift is worth paying attention to. Would value your thoughts. #CyberSecurity #AI #CyberDefense #ThreatDetection #VulnerabilityManagement #ZeroDay #InfoSec #CloudSecurity #AISecurity #SecurityOperations #RiskManagement #DigitalSecurity #CyberResilience #SecurityInnovation
-
Criminals, Spies, and AI: A New Front in Cyber Warfare The use of AI in cybersecurity is rapidly changing the landscape, creating a new "arms race" between hackers and cybersecurity professionals. Here's a look at how different groups are leveraging this technology. AI and Malicious Actors Bad actors are increasingly incorporating AI into their cyberattacks. For example, Russian hackers have been caught using large language models (LLMs) to create malicious code for phishing campaigns, enabling them to automate the search for sensitive files on a victim's computer. Similarly, cybersecurity firm CrowdStrike has noted a growing trend of advanced adversaries, including Chinese, Russian, and Iranian state-sponsored groups, using AI to their advantage. The technology is making skilled hackers more efficient and effective, particularly in areas like social engineering and creating convincing phishing emails. AI in Cyber Defense The cybersecurity industry is also using AI to combat these threats. Google's security team, for instance, has used its Gemini LLM to hunt for software vulnerabilities. This process has already led to the discovery of at least 20 overlooked bugs in commonly used software, allowing companies to fix them before they can be exploited by criminals. While AI isn't yet finding entirely new types of vulnerabilities, it is significantly speeding up the process of discovering and patching known types of flaws. As Google's VP of Security Engineering, Heather Adkins, said, "It’s the beginning of the beginning." The use of AI in both offensive and defensive cybersecurity is still in its early stages, but it is clear that the technology is making a tangible impact, creating a faster, more complex, and more dynamic environment for everyone involved.
-
Something remarkable happened this week. Our AI security agent discovered and patched a zero-day vulnerability in Netty, one of the internet’s most widely used networking libraries (relied on by companies like Apple, Meta, and Google). The flaw, now assigned CVE-2025-59419, could have allowed attackers to forge emails that appeared to come from inside a trusted organization, bypassing every modern safeguard (SPF, DKIM, DMARC). Here’s what’s extraordinary: - No human found this bug. No human wrote the patch. - Our AI agent did. It autonomously analyzed live code, identified the root cause, generated a fix, and submitted it upstream. This is more than a single discovery. It’s a glimpse of what comes next. For decades, security has been reactive - humans chasing an ever-expanding attack surface. But the next chapter is autonomous defense: AI systems that find, fix, and fortify software at machine speed. Human expertise remains essential - but increasingly as orchestrators, not operators. The new frontier is collaboration between people and intelligent agents working in real time across the world’s software supply chain. Huge thanks to the Netty maintainers for their openness and partnership. And to every CISO, CIO, and security leader: the shift to autonomous security isn’t theoretical anymore. It’s happening. #AISecurity #ZeroDay #Cybersecurity #AutonomousDefense #AIagents #Netty #FutureOfSecurity
-
Anthropic 's Claude new code security capability didn’t introduce a better scanner — it introduced a new layer in AppSec. For years, we scaled detection: more tools, more alerts, more triage. But security never scaled at the same speed as software. What changed now is simple but structural — security reasoning moved into the developer workflow. AI doesn’t just find patterns, it explains risk, understands intent, and proposes secure alternatives. That shift compresses the distance between detection and remediation, which is where most AppSec friction has always lived. This doesn’t replace the AppSec stack, but it forces consolidation. Lightweight SAST, standalone review workflows, and parts of manual code assessment will increasingly become capabilities rather than products. The value moves upward — toward orchestration, governance, runtime validation, and decision quality. In other words, security is moving from tools to intelligence. From a CISO perspective, this is an operating model change, not a tooling trend. Teams that embed AI as a control layer will scale expertise without scaling headcount at the same rate. Teams that treat it as a developer feature will see incremental gains but miss the structural advantage. Within the next two years, most mature engineering organizations will run an AI reasoning layer inside their SDLC — formally or organically. The real risk is not adopting early. The real risk is adoption without design. AI-native code security doesn’t eliminate AppSec. It reveals which parts were process — and which parts were expertise. #AI #CyberSecurity #AppSec #DevSecOps #CISO #AIsecurity #Claude #SoftwareSecurity
-
Anthropic’s new Claude Code Security preview may be more than a product announcement. It seems to be a signal that AI‑driven vulnerability discovery is about to reshape the security market. The tool doesn’t just scan for known patterns, it reasons through code like a human researcher, tracing data flows and surfacing subtle, context‑dependent flaws that traditional scanners routinely miss. Perhaps that’s why Wall Street reacted so sharply. When AI can autonomously find and propose fixes for novel vulnerabilities, the economics of software assurance could change overnight. But the real wake‑up call isn’t the market dip—it’s the shift in expectations for defenders. Security teams drowning in backlog now face a future where AI agents can meaningfully compress the gap between discovery and remediation. And while this doesn’t replace the major cybersecurity, zero trust, or identity companies, it may force every vendor (and every CISO) to rethink what “good” looks like when AI can challenge its own findings, reduce false positives, and elevate only validated issues. The bar may have just moved and the industry won’t be able to pretend otherwise. I spoke to CSO Online about it here: https://lnkd.in/esJur4sW
-
Anthropic’s release of Claude Code Security fits a familiar pattern. Each technology era has a foundational platform — and that platform eventually builds security into itself. In the data center era, infrastructure companies such as Cisco, EMC, and IBM owned the core systems — and separate security leaders rose on top of them. In the cloud era, AWS and Azure became the operating layer — and companies like CrowdStrike and Wiz still emerged as security leaders. Now AI labs are becoming the operating layer for how software is created. When you power how code gets written, how workflows run, and how tools connect, you influence how risk enters the system. So it’s natural that AI platforms will build security into that layer. That raises the baseline for everyone. More bugs will be caught earlier. Safer patterns will be encouraged by default. What used to require separate scanning tools will increasingly be built in. But raising the baseline doesn’t eliminate the need for depth. Foundational platforms are built for the broadest set of customers. They can’t focus only on the most advanced threats. And they can’t fully serve as the independent judge of whether the systems they generate are truly secure. At the same time, AI is dramatically increasing how fast software gets built — which means more code, more complexity, and more attack surface. And underneath that sits decades of accumulated risk: accepted findings, renewed exceptions, systems no one has pressure-tested in years. So this moment is bigger than safer model outputs. AI companies now influence how quickly new risk is created. Security teams remain responsible for everything already there. History suggests this won’t shrink the security market. It will expand it.
Explore categories
- Hospitality & Tourism
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Employee Experience
- Healthcare
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Career
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development