AI in Cybersecurity

Explore top LinkedIn content from expert professionals.

  • View profile for Wendi Whitmore

    Chief Security Intelligence Officer @ Palo Alto Networks | Cyber Risk Translator | AI Security & National Security Leader | Former CrowdStrike & Mandiant | Congressional Witness | Keynote Speaker

    20,435 followers

    On behalf of Palo Alto Networks, I testified yesterday before the House Committee on Financial Services regarding the critical intersection of AI and cybersecurity. We are at a pivotal moment where AI is reshaping the financial sector, and my message to Congress focused on the critical reality that as we think about the future of cybersecurity legislation, we have to get security right. To do that, we need to distinguish between two very different challenges: 1️⃣ AI for Cybersecurity: The threat landscape has fundamentally changed. Attacks that used to play out over days are now happening in minutes. Our research at Palo Alto Networks shows that AI-driven tools can compress a ransomware campaign, from the initial breach to stealing data, into roughly 25 minutes. Human teams, no matter how skilled, cannot fight machine speed alone. Defenders are drowning in alerts and data. We have to use AI as a force multiplier to automate our defenses. It is the only way to flip the script and stay ahead of adversaries who are moving faster than ever. 2️⃣ Cybersecurity for AI: As we rely more on these powerful models within our financial institutions, we have to remember that AI systems themselves are targets. Attackers are trying to manipulate the models and poison the data we increasingly trust to make decisions. This is why we need a Secure AI by Design approach. Security cannot be a safety feature we bolt on after the fact. It has to be baked into the DNA of our AI infrastructure from day one, securing the supply chain, the data, and the models themselves from development through runtime. I’ve spent my career working to convince organizations, public and private, that we cannot have true innovation without security. By partnering with the public sector, together we can build a financial system that is resilient and secure enough to harness the power of AI. Jeanette Manfra Nicholas Stevens Tal Cohen Joshua Branch Eva Dudzik Mehlert Daniel Kroese Katie (Donnell) Strand

  • View profile for Marcel Velica

    Senior Security Program Manager | Leading Cybersecurity and AI Initiatives | Driving Strategic Security Solutions |

    59,801 followers

    The 10 AI Threats Quietly Putting Enterprises at Risk What most companies get wrong about AI security? Thinking it’s just a “tech problem.” It’s not. It’s a behavior problem. Enterprise AI is no longer just answering questions. It’s making decisions. Triggering actions. Accessing sensitive systems. And that changes everything. Here’s the part many teams underestimate: AI doesn’t need to be hacked… It just needs to be misguided. And the impact looks exactly like a breach. Here are 10 AI security threats every enterprise should be thinking about: Prompt Injection Attacks ↳ AI follows malicious instructions → data leaks or wrong actions Data Poisoning ↳ Bad data in training = corrupted outputs at scale Model Inversion ↳ Attackers pull sensitive data from responses Sensitive Data Leakage ↳ Poor context control exposes confidential info API Key & Credential Theft ↳ One stolen key = full system access Unauthorized Tool Invocation ↳ AI triggers actions it shouldn’t even have access to Supply Chain Vulnerabilities ↳ Third-party models can introduce hidden risks Model Drift ↳ AI silently becomes unreliable over time Excessive Autonomy ↳ Agents act beyond boundaries → real-world damage Compliance Violations ↳ AI outputs break regulations without warning What actually protects you isn’t just better models. It’s better control. • Input and output guardrails • Dataset validation pipelines • Access control and tool restrictions • Continuous monitoring • Human-in-the-loop for critical decisions Because here’s the reality: The more powerful your AI becomes… The smaller your margin for error gets. The companies that win with AI won’t be the fastest. They’ll be the most controlled. If you’re deploying AI today Are you treating it like a smart assistant… or like a potential insider with access to everything? Share it with your network. 📌 Follow Marcel Velica for more insights on AI, security, and real-world strategies. If you want short daily thoughts, quick threat observations, and real-time discussions, follow me on X as well →https://x.com/MarcelVelica

  • The National Institute of Standards and Technology (NIST) has released a draft of its “Cybersecurity Framework Profile for Artificial Intelligence” (open for public comment until Jan 30, 2026) to help organizations think about how to strategically adopt AI while addressing emerging cybersecurity risks that stem from AI’s rapid advance. Building on the #NIST Cybersecurity Framework 2.0, the Cyber AI Profile translates well-established risk management concepts into AI-specific cybersecurity considerations, offering a practical reference point as organizations integrate AI into critical systems and confront AI-enabled threats. The Cyber AI Profile centers on three focus areas: • Securing AI systems: identifying cybersecurity challenges when integrating AI into organizational ecosystems and infrastructure. • Conducting AI-enabled cyber defense: identifying opportunities to use AI to enhance cybersecurity, and understanding challenges when leveraging AI to support defensive operations. • Thwarting AI-enabled cyberattacks: building resilience to protect against new AI-enabled threats. The Profile complements existing NIST frameworks (CSF, AI RMF, RMF) by prioritizing AI-specific cybersecurity outcomes rather than creating a standalone regime.

  • View profile for Frank Roppelt

    Chief Information Security Officer (CISO)

    2,756 followers

    Today, NIST released the initial preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), a community profile built on NIST CSF 2.0 to help organizations manage cybersecurity risk in an AI-driven world. A key section of this draft is Section 2.1, which introduces three Focus Areas that explain how AI and cybersecurity intersect in practice: 1. Securing AI System Components (Secure) AI systems introduce new assets that must be secured; models, training data, prompts, agents, pipelines, and deployment environments. This focus area emphasizes treating AI components as first-class cybersecurity assets, integrating them into governance, risk assessments, protection controls, and monitoring processes. It reinforces that AI risk should not be siloed from enterprise cybersecurity risk management. 2. Conducting AI-Enabled Cyber Defense (Defend) AI is not just something to protect, it is also a powerful defensive capability. This area focuses on using AI to enhance detection, analytics, automation, and response across security operations. At the same time, it recognizes the risks of over-reliance on automation, model integrity concerns, and the need for human oversight when AI supports security decision-making. 3. Thwarting AI-Enabled Cyber Attacks (Thwart) Adversaries are increasingly using AI to scale phishing, evade detection, and automate attacks. This focus area addresses how organizations must anticipate and counter AI-enabled threats by building resilience, improving detection of AI-driven attack patterns, and preparing for a rapidly evolving threat landscape where AI is weaponized. Why This Matters Together, Secure, Defend, and Thwart provide a practical structure for aligning AI initiatives with existing cybersecurity programs. By mapping AI-specific considerations to CSF 2.0 outcomes (Govern, Identify, Protect, Detect, Respond, Recover), the Cyber AI Profile helps organizations integrate AI security into familiar risk management practices. This is a preliminary draft, and NIST is seeking public feedback through January 30, 2026. If your organization is building, deploying, or defending with AI, now is the time to review and contribute. 🔗 https://lnkd.in/e-ETZXH8

  • View profile for María Luisa Redondo Velázquez

    IT Cybersecurity Director | Tecnology Executive | Security Strategy and Digital transformation - Security Architecture & Operations | Cloud Expertise | Malware Analysis, TH and Threat Intelligence | Board Advisor

    9,716 followers

    📛 CVE 2025 32711 is a turning point Last week, we saw the first confirmed zero click prompt injection breach against a production AI assistant. No malware. No links to click. No user interaction. Just a cleverly crafted email quietly triggering Microsoft 365 Copilot to leak sensitive org data as part of its intended behavior. Here’s how it worked: • The attacker sent a benign-looking email or calendar invite • Copilot ingested it automatically as background context • Hidden inside was markdown-crafted prompt injection • Copilot responded by appending internal data into an external URL owned by the attacker • All of this happened without the user ever opening the email This is CVE 2025 32711 (EchoLeak). Severity 9.3 Let that sink in. The AI assistant did exactly what it was designed to do. It read context, summarized, assisted. But with no guardrails on trust boundaries, it blended attacker inputs with internal memory. This wasn’t a user mistake. It wasn’t a phishing scam. It was a design flaw in the AI data pipeline itself. 🧠 The Novelty What makes this different from prior prompt injection? 1. Zero click. No action by the user. Sitting in the inbox was enough 2. Silent execution. No visible output or alerts. Invisible to the user and the SOC 3. Trusted context abuse. The assistant couldn’t distinguish between hostile inputs and safe memory 4. No sandboxing. Context ingestion, generation, and network response occurred in the same flow This wasn’t just bad prompt filtering. It was the AI behaving correctly in a poorly defined system. 🔐 Implications For CISOs, architects, and Copilot owners - read this twice. → You must assume all inputs are hostile, including passive ones → Enforce strict context segmentation. Copilot shouldn’t ingest emails, chats, docs in the same pass → Treat prompt handling as a security boundary, not just UX → Monitor agent output channels like you would outbound APIs → Require your vendors to disclose what their AI sees and what triggers it 🧭 Final Thought The next wave of breaches won’t look like malware or phishing. They will look like AI tools doing exactly what they were trained to do but in systems that never imagined a threat could come from within a calendar invite. Patch if you must. But fix your AI architecture before the next CVE hits.

  • View profile for Sol Rashidi, MBA
    Sol Rashidi, MBA Sol Rashidi, MBA is an Influencer
    113,122 followers

    AI is not failing because of bad ideas; it’s "failing" at enterprise scale because of two big gaps: 👉 Workforce Preparation 👉 Data Security for AI While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI—because 70–82% of AI projects pause or get cancelled at POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer. So let’s make it simple - there are 7 phases to securing data for AI—and each phase has direct business risk if ignored. 🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data. Why It Matters: You can’t build scalable AI with data you don’t own or can’t trace. 🔹 Phase 2: Data Infrastructure Security - Ensuring data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled. Why It Matters: Unsecured data environments are easy targets for bad actors making you exposed to data breaches, IP theft, and model poisoning. 🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors. Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck—or on a bicycle—your choice. 🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.). Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn’t just tech debt. It’s reputational and regulatory risk. 🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying. Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It’s a business asset. You lock your office at night—do the same with your models. 🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm—who’s notified, who investigates, how damage is mitigated. Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers. 🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols. Why It Matter: Shipping models like software means risk comes faster—and so must detection. Governance must be baked into every deployment sprint. Want your AI strategy to succeed past MVP? Focus and lock down the data. #AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership

  • View profile for Martin Zwick

    Lawyer | AIGP | CIPP/E | CIPT | FIP | GDDcert.EU | DHL Express Germany | IAPP Advisory Board Member

    20,353 followers

    AI Agents Are the New Attack Surface! Are We Ready for That? AI agents powered by large language models (LLMs) introduce entirely new vulnerabilities across confidentiality, integrity, and availability. Here’s what’s new and why it matters: AI Agents execute actions: Unlike typical LLMs, agents interact with tools, systems, and APIs, meaning a hallucinated or adversarial output can change files, leak data, or flood networks. Session management is a blind spot: Most agents don’t isolate user sessions robustly. Result: chat histories bleed across users, leading to data leaks and misassigned actions. Model pollution is real: Malicious inputs can subtly "poison" fine-tuned models, degrading performance and trust without being obviously adversarial. Sandboxing isn’t optional: Experiments showed that 90 out of 95 malicious prompts were accepted by a state-of-the-art agent, with 80% successfully executed, unless sandboxed. Promising defense directions: Session-aware memory and formal monads for state tracking, Encryption-preserving inference (like FPETS and FHE) to process sensitive data safely or toolchain access controls that isolate file systems and limit network requests. 📣 Bottom line: The same autonomy that makes AI agents exciting also makes them dangerous. Without secure-by-design architectures, they could become powerful attack vectors. What security practices are you considering for deploying AI agents in your org?

  • View profile for Brij kishore Pandey
    Brij kishore Pandey Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    720,742 followers

    When AI Meets Security: The Blind Spot We Can't Afford Working in this field has revealed a troubling reality: our security practices aren't evolving as fast as our AI capabilities. Many organizations still treat AI security as an extension of traditional cybersecurity—it's not. AI security must protect dynamic, evolving systems that continuously learn and make decisions. This fundamental difference changes everything about our approach. What's particularly concerning is how vulnerable the model development pipeline remains. A single compromised credential can lead to subtle manipulations in training data that produce models which appear functional but contain hidden weaknesses or backdoors. The most effective security strategies I've seen share these characteristics: • They treat model architecture and training pipelines as critical infrastructure deserving specialized protection • They implement adversarial testing regimes that actively try to manipulate model outputs • They maintain comprehensive monitoring of both inputs and inference patterns to detect anomalies The uncomfortable reality is that securing AI systems requires expertise that bridges two traditionally separate domains. Few professionals truly understand both the intricacies of modern machine learning architectures and advanced cybersecurity principles. This security gap represents perhaps the greatest unaddressed risk in enterprise AI deployment today. Has anyone found effective ways to bridge this knowledge gap in their organizations? What training or collaborative approaches have worked?

  • View profile for Vaughan Shanks

    Helping security teams respond to cyber incidents better and faster | CEO & Co-Founder, Cydarm Technologies

    12,076 followers

    13 national cyber agencies from around the world, led by #ACSC, have collaborated on a guide for secure use of a range of "AI" technologies, and it is definitely worth a read! "Engaging with Artificial Intelligence" was written with collaboration from Australian Cyber Security Centre, along with the Cybersecurity and Infrastructure Security Agency (#CISA), FBI, NSA, NCSC-UK, CCCS, NCSC-NZ, CERT NZ, BSI, INCD, NISC, NCSC-NO, CSA, and SNCC, so you would expect this to be a tome, but it's only 15 pages! It is refreshing to see that the article is not solely focused on LLMs (eg. ChatGPT), but defines Artificial Intelligence to include Machine Learning, Natural Language Processing, and Generative AI (LLMs), while acknowledging there are other sub-fields as well. The challenges identified (with actual real-world examples!) are: 🚩 Data Poisoning of an AI Model: manipulating an AI model's training data, leading to incorrect, biased, or malicious outputs 🚩 Input Manipulation Attacks: includes prompt injection and adversarial examples, where malicious inputs are used to hijack AI model outputs or cause misclassifications 🚩 Generative AI Hallucinations: generating inaccurate or factually incorrect information 🚩 Privacy and Intellectual Property Concerns: challenges in ensuring the security of sensitive data, including personal and intellectual property, within AI systems 🚩 Model Stealing Attack: creating replicas of AI models using the outputs of existing systems, raising intellectual property and privacy issues The suggested mitigations include generic (but useful!) cybersecurity advice as well as AI-specific advice: 🔐 Implement cyber security frameworks 🔐 Assess privacy and data protection impact 🔐 Enforce phishing-resistant multi-factor authentication 🔐 Manage privileged access on a need-to-know basis 🔐 Maintain backups of AI models and training data 🔐 Conduct trials for AI systems 🔐 Use secure-by-design principles and evaluate supply chains 🔐 Understand AI system limitations 🔐 Ensure qualified staff manage AI systems 🔐 Perform regular health checks and manage data drift 🔐 Implement logging and monitoring for AI systems 🔐 Develop an incident response plan for AI systems This guide is a great practical resource for users of AI systems. I would interested to know if there are any incident response plans specifically written for AI systems - are there any available from a reputable source?

  • View profile for Amit Zavery

    President, CPO, and COO, ServiceNow; Board Member, Broadridge (NYSE:BR)

    48,785 followers

    We all know AI will continue to be the defining conversation for 2026, but what I’m hearing most often from leaders is: “How do we leverage AI without introducing untenable risk?” This year, we will see three defining shifts, all underpinned by the top priority for the CEO and the critical operational mandate for the CIO: security. AI is transforming the threat landscape faster than most organizations can adapt, and a reactive approach is a business risk. An AI-powered defense shield is the foundation for safe reinvention. It’s about real-time visibility, actionable insights, and closing the loop from discovery to remediation across IT, OT, and cloud silos. This strategic and operational imperative shapes our three key shifts: 📌 Proliferation of (Secure) AI Agents: Beyond chatbots to specialized agents embedded in every function - HR, IT, customer service - running autonomous workflows. They become proactive partners, but every connected asset they touch expands the attack surface. The CIO's mandate: ensure this happens securely, at scale. 📌 Deepening Industry Impact with Real-Time Protection: True transformation happens in mission-critical workflows. In healthcare, with thousands of connected devices managing patient data. In manufacturing, on smart factory floors. The CEO needs confidence that business reinvention can happen in their industry; the CIO needs a unified platform to see, decide, and act across it all. 📌 Expanding a Unified Security Posture: Our “ANY” strategy - connecting to any model, any data, any service - demands a unified view of risk. Observability, asset management, incident response… Risk doesn’t stay in silos; to manage it requires architecture that breaks down walls between IT, security, and operations. This is the year intelligent, secure automation becomes inseparable from business strategy. The organizations that thrive will be those that align the CEO's security-first vision with the CIO's execution, proactively seeing every asset, prioritizing every risk, and acting before an incident occurs. Here’s to a transformative - and secure - 2026. #AI #CyberSecurity #DigitalTransformation

Explore categories