This is a live example of how I am using AI to expedite cyber risk assessments at enterprise scale, in an enterprise Claude environment. Let's be honest: regardless of how competent we are as cyber professionals, AI can often analyse faster, process larger volumes of information, and maintain consistency across assessments, provided it is given the right context, governance artefacts, and structured prompts.

One of the persistent challenges in enterprise environments is the volume of cyber risk assessment requests. New integrations are introduced, new products are adopted, business units request policy exemptions, and security teams are expected to provide timely risk decisions. Using agentic AI, we can accelerate this process by enabling the model to retrieve the relevant governance artefacts, apply the organisation's risk framework and matrix, map appropriate controls, draft the assessment, and prepare a structured report for consultant review. The result is not automation for the sake of automation, but a practical way to reduce manual effort while maintaining governance and oversight.

Human expertise remains critical. The consultant still reviews the draft, validates the reasoning, and approves the final assessment. What changes is the speed and scalability of the process.

The next evolution is even more interesting. Instead of building isolated agents for individual tasks, we should focus on building reusable skills that allow a single AI capability to orchestrate an entire governed workflow. As Anthropic has suggested, the future is not about building more agents, but about building the right skills that can be composed into reliable systems. Cyber GRC is entering a phase where AI will not just assist professionals, but help organisations execute governance processes faster, more consistently, and at enterprise scale.

Thoughts?
#CyberSecurity #CyberRisk #GRC #AI #AgenticAI #Claude #CyberGovernance #RiskManagement #EnterpriseAI #AIinCyber #DigitalTransformation
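The governed workflow described above (retrieve artefacts, apply the risk matrix, map controls, draft, then consultant approval) can be sketched minimally. This is an illustrative assumption, not the author's actual implementation: the 5x5 matrix thresholds, the example control references, and the class names are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical 5x5 risk matrix: (likelihood, impact) -> rating.
# Thresholds are a common convention, not a specific framework.
RISK_MATRIX = {
    (l, i): ("Low" if l * i <= 4 else "Medium" if l * i <= 9
             else "High" if l * i <= 16 else "Critical")
    for l in range(1, 6) for i in range(1, 6)
}

@dataclass
class DraftAssessment:
    request: str
    likelihood: int                 # 1-5, proposed by the model
    impact: int                     # 1-5, proposed by the model
    mapped_controls: list = field(default_factory=list)
    status: str = "DRAFT"           # stays DRAFT until a human approves

    @property
    def rating(self) -> str:
        return RISK_MATRIX[(self.likelihood, self.impact)]

    def approve(self, consultant: str) -> None:
        # Human-in-the-loop gate: only an explicit reviewer action finalises.
        self.status = f"APPROVED by {consultant}"

draft = DraftAssessment("New SaaS integration", likelihood=3, impact=4,
                        mapped_controls=["ISO 27001 A.5.19", "ISO 27001 A.8.21"])
print(draft.rating, draft.status)   # High DRAFT
draft.approve("J. Consultant")
```

The point of the sketch is the shape of the oversight: the model fills in the draft fields, but the rating logic is deterministic and the status transition is reserved for the consultant.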
How Agentic AI Improves Security Operations
Summary
Agentic AI refers to artificial intelligence systems that act as autonomous agents, reasoning and making decisions in complex environments. These intelligent agents help streamline security operations by reducing manual workload, improving speed, and maintaining oversight, making cybersecurity workflows faster and more consistent.
- Accelerate risk assessments: Use agentic AI to quickly gather relevant data, map controls, and draft structured reports, allowing security professionals to focus on review and approval rather than manual compilation.
- Enable dynamic monitoring: Let agentic AI continuously analyze user behavior, detect anomalies, and synthesize threats across multiple data sources, keeping security teams ahead of evolving risks.
- Adopt zero trust principles: Always verify agentic AI actions, limit permissions, and monitor activity to protect sensitive systems, ensuring every operation is secure and accountable.
Are the magic words of AI-SOC "agentic" and "autonomous"? Triage, investigation, response, threat research, exposure management, malware analysis, and detection engineering are no longer islands. They behave like a connected reasoning graph, where each node feeds context to the next and pushes insights back into the system.

The vision for an Agentic SOC makes this shift explicit. Instead of mere AI assistance, this model introduces autonomous multi-agent systems that can analyze intent, break goals into subtasks, reason over evidence, and execute actions within guardrails: a collaborative system where humans lead and AI agents dynamically operate across the entire SOC surface.

The real breakthrough is independence. Agents can identify when data is missing, request enrichment, correlate telemetry across domains, surface new hypotheses, and push improvements back into detection engineering. SOC work stops being a sequence of bottlenecks and becomes a feedback loop that continuously strengthens itself. Every alert, every artifact, and every hunt becomes learning material for the system. Data management becomes the backbone. Detection engineering becomes the learning engine. Triage becomes a reasoning hub, and incident response becomes the actuator for decisions born from a network of specialized agents capable of real analytical depth.

I have dozens of them, but here are some principles for building an Agentic AI-SOC:
- Treat your telemetry as a trust contract. Agents cannot reason if the data is inconsistent, incomplete, or ungoverned.
- Operate detection engineering as a continuous reinforcement pipeline. Every investigation must feed back into what the SOC learns next.
- Give agents controlled autonomy. Let them correlate, enrich, hypothesize, and propose actions while humans own intent, boundaries, and oversight.
- One model to rule them all? Not quite: use different models at different tiers.
The teams that adopt this model will operate at a level of efficiency and insight that traditional SOCs cannot achieve. The shift has already started. The question is who adapts and who gets left behind. #security #cybersecurity
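The "connected reasoning graph" idea above can be sketched in a few lines: each SOC function is a node that annotates a shared context as work flows downstream, with an explicit feedback edge into detection engineering. The node names and the dictionary-based context are illustrative assumptions, not a real SOC platform.

```python
from collections import defaultdict

class SOCGraph:
    """Toy model of SOC functions as a directed graph over shared context."""

    def __init__(self):
        self.edges = defaultdict(list)   # node -> downstream nodes

    def connect(self, src: str, dst: str) -> None:
        self.edges[src].append(dst)

    def run(self, start: str, context: dict) -> dict:
        # Breadth-first walk: every node enriches the shared context,
        # so downstream nodes (and feedback edges) see earlier findings.
        queue = [start]
        while queue:
            node = queue.pop(0)
            context[node] = f"enriched by {node}"
            queue.extend(self.edges[node])
        return context

graph = SOCGraph()
graph.connect("triage", "investigation")
graph.connect("investigation", "response")
graph.connect("investigation", "detection_engineering")  # feedback edge

ctx = graph.run("triage", {"alert": "suspicious login"})
print(sorted(ctx))
```

The feedback edge is the structural point: investigation output lands in detection engineering's context automatically, instead of dying in a closed ticket.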
-
🚨 Agentic Workflow for Insider Threat Monitoring 🧠🛡️

As enterprise data grows in complexity, insider threats are no longer just anomalies; they are sophisticated patterns that demand intelligent, context-aware monitoring. This agentic AI architecture shows how we can combine machine learning (ML), large language models (LLMs), and rule-based automation to stay several steps ahead of potential security risks.

🔍 Key highlights of the workflow:

📥 Ingestion Layer: Seamlessly processes structured and unstructured security telemetry using Kafka, Amazon MSK, and Kinesis.

🧹 Preprocessing & Identity Mapping: A Data Cleaner and PII Redactor (ML) ensure privacy by scrubbing sensitive information, while an Identity Graph Builder (ML) connects disparate user activities across systems into a unified behavioral profile.

📊 Behavioral Analysis & Anomaly Detection: A Baseline Behavior Modeler (ML) establishes "normal" behavior for every identity, and an Anomaly Detection Agent (ML) flags deviations using ML guardrails for precision and accountability.

🤖 Agentic Intelligence (LLM + Rule Engine): A Threat Synthesizer Agent (LLM) reasons over anomalies and combines contextual signals from vector databases such as Pinecone, Weaviate, and Amazon OpenSearch. A SOAR Executor Agent triggers appropriate actions using pre-set rules, and a Feedback Interpreter & Learner (LLM) learns from analyst feedback to continuously improve threat detection.

🧠 LLM Infrastructure: Powered by Amazon Bedrock, OpenAI, and Claude 3 Sonnet, providing the scale and intelligence needed for complex, real-time decision making.

📈 Transparency & Explainability: Integration with SageMaker Clarify, EvidentlyAI, and Bedrock Guardrails ensures fairness, transparency, and compliance.

💬 Human-in-the-loop: Analysts can review and interact through tools like Slack, Jira, and a dedicated Analyst Interface for final verdicts or overrides.
🔐 This isn’t just automation—it's augmented security intelligence, capable of evolving with your threat landscape.
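To make the "Baseline Behavior Modeler" and "Anomaly Detection Agent" stages concrete, here is a deliberately simplified sketch: a per-identity statistical baseline with a z-score guardrail. A real deployment would use the ML components named above; the single feature (daily downloads) and the threshold of 3 standard deviations are assumptions for illustration.

```python
import statistics

class BaselineModeler:
    """Learns one identity's 'normal' from a history of a single metric."""

    def __init__(self, history: list[float]):
        self.mean = statistics.mean(history)
        self.stdev = statistics.stdev(history)   # sample standard deviation

    def z_score(self, value: float) -> float:
        return (value - self.mean) / self.stdev

def anomaly_agent(baseline: BaselineModeler, observed: float,
                  threshold: float = 3.0) -> dict:
    # Guardrail: only deviations beyond the threshold are flagged,
    # which keeps the downstream LLM synthesizer from drowning in noise.
    score = baseline.z_score(observed)
    return {"observed": observed,
            "z": round(score, 2),
            "anomalous": abs(score) > threshold}

# Hypothetical history: daily file downloads for one identity.
baseline = BaselineModeler([12, 15, 11, 14, 13, 12, 15])
print(anomaly_agent(baseline, 80))   # flagged: far outside the baseline
print(anomaly_agent(baseline, 14))   # within normal variation
```

The design point matches the post: the statistical stage decides *whether* something is unusual; the LLM stage then reasons over *why* it might matter.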
-
🚀 Agentic AI and the Need for Zero Trust Security

Over the past couple of days I got questions about the security side of agentic AI. When we talk about AI agents that can access business tools, sensitive databases, and internal APIs, security can't just be an afterthought; it has to be the starting point. That's where zero trust comes in.

Zero trust is not just a tech buzzword. It's a simple but powerful idea: don't automatically trust anything or anyone, inside or outside your company's systems. Always verify, every single time. So what does zero trust actually look like? Here are a few features that define it, and why they matter so much for agentic AI:

1️⃣ Never Trust, Always Verify: Every request, whether it's an AI agent trying to fetch data or a user logging in, must be checked and validated. Nothing is "trusted" just because it's inside the network or from a familiar system.

2️⃣ Least Privilege: Only grant access that's absolutely needed. If an AI agent just needs to read sales numbers, it shouldn't be able to edit or delete data. Permissions are tightly controlled and kept as limited as possible.

3️⃣ Continuous Authentication: It's not "log in once and you're good." Every action, every API call, every data request is checked. Tokens are short-lived, credentials are rotated, and the system is always asking, "Are you still allowed to do this?"

4️⃣ Micro-Segmentation: Even within your systems, different tools and data sources are separated into small segments. The AI agent has to prove it has the right to cross into each one; it's never an all-access pass.

5️⃣ Audit and Monitoring: Everything the agent does, what it accesses, what tools it uses, what data it pulls, is logged. This isn't just for compliance, but for spotting mistakes or suspicious behavior quickly.

6️⃣ No Hardcoded Secrets: Agents should never have passwords or API keys baked into their code. Use secure vaults or secret managers, and make sure everything is protected and easy to rotate.

Why is all this so relevant for agentic AI? Because these agents are smart and fast: they can access multiple tools in seconds and scale their actions without much human intervention. If you don't put strong controls in place, a small mistake or security gap can lead to a big problem.

So if you're building, deploying, or even just experimenting with agentic AI, start with zero trust. Treat every agent as you would an external visitor. Always ask:
👉 Should this agent have access right now?
👉 Is it doing only what it's supposed to do?
👉 Can I see and control everything it touches?

At times I have been challenged on whether this will slow down innovation. My answer is a definite "No". In fact, it's what lets you move faster, knowing your data and systems are protected at every step.

I write about #artificialintelligence | #technology | #startups | #mentoring | #leadership | #financialindependence

PS: All views are personal. Vignesh Kumar
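Several of the principles above (least privilege, continuous verification, short-lived credentials, audit logging) can be sketched as a single gateway that sits in front of an agent's tool calls. The scope names and the 300-second TTL are illustrative assumptions, not a particular product's API.

```python
import time

class AgentGateway:
    """Zero-trust choke point: every tool call is re-verified and logged."""

    def __init__(self, agent_id: str, scopes: list[str], ttl: int = 300):
        self.agent_id = agent_id
        self.scopes = set(scopes)          # least privilege: explicit grants only
        self.expires = time.time() + ttl   # short-lived credential
        self.audit_log = []                # everything the agent does is recorded

    def call(self, scope: str, action: str) -> str:
        # Never trust, always verify: checked on EVERY call, not once at login.
        allowed = scope in self.scopes and time.time() < self.expires
        self.audit_log.append((self.agent_id, scope, action, allowed))
        if not allowed:
            raise PermissionError(f"{self.agent_id} denied scope '{scope}'")
        return f"executed: {action}"

gw = AgentGateway("sales-agent", scopes=["sales:read"])
gw.call("sales:read", "fetch Q3 numbers")     # permitted
try:
    gw.call("sales:write", "delete record")   # blocked and still logged
except PermissionError as e:
    print(e)
```

Note that the denied call is logged before the exception is raised: audit coverage of failures is exactly what makes suspicious behavior visible.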
-
While AI SOC dominates headlines, security engineering teams are quietly grappling with a 40% annual surge in security data volume. That's why I've long stressed the growing importance of the data ETL/pipeline market, one of the most critical yet overlooked aspects of the SOC. Today, rather than just using AI SOC for incident response triage, we're seeing a new trend: AI is transforming how SOC engineers process, manage, and extract value from their data. A recent announcement I saw from Observo AI highlights this transformative trend.

For context, for non-SOC folks: traditional security data pipelines require specialized engineering expertise, deep knowledge of query languages such as Splunk's, and time-consuming manual effort. As a result, security teams often face delays in investigation and response, despite having access to large amounts of data.

Observo AI just launched Orion AI, one of the first case studies where AI is leveraged to address data pipeline issues. Along with its agentic AI-based platform, Orion AI functions as an AI-powered data engineer, allowing security and DevOps teams to ingest, route, and manage data pipelines from multiple sources, optimize workflows, and standardize, enrich, correlate, normalize, and query cloud-stored data, all through natural language.

Some case studies of how we're seeing AI leveraged in security engineering, and what I've seen with Orion AI:
1) Data Pipeline Automation: AI can enable teams to define end-to-end pipelines from multiple sources to multiple destinations through an LLM-based conversational interface.
2) AI-Powered Querying & Search: AI can allow security teams to search and interact with live and archival data using natural language, eliminating the need for complex and proprietary queries.
3) Pipeline Optimization & Cost Efficiency: Machine learning identifies inefficiencies in data processing and reduces storage costs in real time, while maintaining observability.
4) Interactive Pipeline Management: Provides real-time control over security and observability data pipelines through agentic AI.
5) Incident Response Acceleration: Streamlines access to security-relevant data, reducing investigation times by 40%+.

Why do I think security leaders and engineers should care? IMO, security teams shouldn't be blocked by data bottlenecks or a reliance on specialized engineers just to extract insights. AI is now able to shift the paradigm by making security and observability data more accessible, actionable, and cost-effective. The question now is: how should security teams integrate AI into their workflows to improve efficiency without compromising control?

***
PS: I'll be sharing much more about how AI is being leveraged in the SOC (not for triage, but within the data engineering pipeline) by the end of March. See the comments to subscribe if interested in this topic.
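The pipeline pattern described above (ingest, normalize, enrich, route) is generic enough to sketch. To be clear, this is not the Orion AI API; the field mapping, the asset database, and the routing rule are assumptions for illustration of why normalization and routing cut both investigation time and storage cost.

```python
def normalize(event: dict) -> dict:
    # Map vendor-specific field names onto a common schema so downstream
    # stages (and natural-language queries) see one shape of data.
    return {"ts": event.get("timestamp") or event.get("time"),
            "user": event.get("user") or event.get("src_user"),
            "action": event.get("action", "unknown")}

def enrich(event: dict, asset_db: dict) -> dict:
    # Add context at ingest time instead of at investigation time.
    event["criticality"] = asset_db.get(event["user"], "low")
    return event

def route(event: dict) -> str:
    # Cost control: only high-criticality events go to the expensive SIEM;
    # everything else lands in cheap, still-queryable cold storage.
    return "siem" if event["criticality"] == "high" else "cold_storage"

raw = {"time": "2024-03-01T10:00:00Z", "src_user": "admin", "action": "login"}
event = enrich(normalize(raw), {"admin": "high"})
print(route(event))   # siem
```

An LLM-based interface of the kind described would generate or configure stages like these from a natural-language request, rather than an engineer hand-writing them per source.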
-
Most security frameworks were built for a world where software does exactly what you tell it to do, every time. Agentic AI breaks that assumption. Agents use LLMs to carry out actions on their own, at machine speed, with real-world consequences. And because they're non-deterministic, the same request can produce different results each time. That's a fundamentally different operating model, and it raises questions our industry needs to answer well.

NIST's Center for AI Standards and Innovation recently issued a Request for Information asking for industry input on how we should secure these systems. We submitted a response based on our experience building and operating agentic AI services at AWS, and we published a blog summarizing the four security principles at the core of it. A few points I'd emphasize for anyone thinking about how to secure agents at their own organization:

1. Secure foundations are more important than ever. Every traditional attack technique, including denial-of-service, man-in-the-middle, vulnerability and configuration exploitation, supply chain, log tampering, etc., remains relevant in agentic contexts. AI-specific controls must be additions to foundational security, not replacements for it.

2. Don't rely on the agent to secure itself. Even if you tell an LLM to refuse certain requests, crafty prompt injection techniques can override those instructions. Security boundaries need to be enforced by infrastructure outside the agent that governs what it can access and do. And these controls must be deterministic.

3. Autonomy should be earned, never granted by default. Start by having humans make the final call on high-consequence operations. As you gather evidence that the agent performs reliably, expand its autonomy gradually. And be ready to pull it back when the data says you should.

4. Be thoughtful about human-in-the-loop oversight. If every action requires approval, reviewers get overwhelmed and start rubber-stamping. Focus human oversight on the decisions that genuinely carry high stakes.

We're all figuring this out in real time, and no single organization has all the answers. The more we share what we're learning, the faster the whole industry moves forward. For more details on how to apply these principles, check out the links below.

Full response to NIST: https://lnkd.in/enxE8R-V
Blog post: https://lnkd.in/eRg3uc26
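Principles 2 through 4 above share one mechanical consequence: the enforcement layer lives outside the model and behaves deterministically, no matter what text the LLM produces. A minimal sketch, with action names that are assumptions for illustration:

```python
# Deterministic policy enforced by infrastructure, not by the prompt.
ALLOWED_ACTIONS = {"read_ticket", "add_comment"}     # earned autonomy so far
HIGH_CONSEQUENCE = {"close_account", "delete_data"}  # humans make the call

def enforce(proposed_action: str, human_approved: bool = False) -> str:
    """Decide an agent's proposed action. The agent's own text is never
    the boundary; this function is, and it returns the same answer for
    the same inputs every time."""
    if proposed_action in ALLOWED_ACTIONS:
        return "execute"
    if proposed_action in HIGH_CONSEQUENCE and human_approved:
        return "execute"
    return "block"   # unknown or unapproved high-consequence actions

# Even a prompt-injected agent proposing a destructive action is blocked:
print(enforce("delete_data"))                       # block
print(enforce("delete_data", human_approved=True))  # execute
print(enforce("read_ticket"))                       # execute
```

Expanding `ALLOWED_ACTIONS` over time, as evidence of reliable behavior accumulates, is exactly the "autonomy is earned" posture; shrinking it back is the rollback the post recommends keeping ready.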
-
(Agentic AI Security contd..) Monitoring for Drift, Not Just Failure.

One thing I've stopped expecting from agentic AI systems is perfect predictability. They change behavior as contexts, tools, and data evolve. That means security monitoring has to evolve too. What's helped me reframe this:
- track usage trends and decision flows instead of outcomes alone
- baseline expected activity for each agent and its tools
- surface early anomalies like new tool chains or looping executions
- always maintain rapid containment and shutdown capabilities

Agentic AI won't be risk-free. But with the right monitoring and response mechanisms, it can be manageable, observable, and trustworthy.
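The baselining and anomaly points above can be sketched concretely: record the tool chains an agent is known to use, then flag any unseen sequence or a looping execution. The chain representation and the loop threshold of 3 are assumptions for illustration.

```python
from collections import Counter

class DriftMonitor:
    """Flags drift in an agent's behavior: new tool chains and loops."""

    def __init__(self, baseline_chains: list[tuple]):
        self.baseline = set(baseline_chains)   # expected activity per agent

    def check(self, chain: list[str], loop_threshold: int = 3) -> list[str]:
        findings = []
        if tuple(chain) not in self.baseline:
            findings.append("new_tool_chain")      # early drift signal
        counts = Counter(chain)
        if any(c >= loop_threshold for c in counts.values()):
            findings.append("looping_execution")   # agent stuck in a loop
        return findings   # empty list means behavior matches baseline

monitor = DriftMonitor([("search", "summarize"),
                        ("search", "fetch", "summarize")])
print(monitor.check(["search", "summarize"]))                    # []
print(monitor.check(["fetch", "fetch", "fetch", "exec_shell"]))  # both flags
```

A non-empty finding list is the trigger for the last bullet above: containment first, root-cause analysis second.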
-
Agentic AI Defenders: The Rise of Autonomous Cyber Response

For years, cybersecurity has been a race between human endurance and machine speed. Attackers have automated, accelerated, and scaled their operations, while defenders have been left buried in alerts, dashboards, and manual investigation steps. Even with advanced detection tools, the human bottleneck remains the slowest point in cyber defense. The problem isn't that we can't see the threats; it's that we can't reason through them fast enough.

But a new class of AI is changing that equation. Agentic AI systems, which can perceive, plan, and act independently, are emerging as digital teammates within the Security Operations Center. These aren't just chatbots or automation scripts. They are reasoning agents capable of understanding analyst intent, gathering evidence across domains, forming hypotheses, and autonomously executing containment actions when confidence is high. In short, they don't wait for instructions; they think ahead.

This shift marks the beginning of autonomous cyber response, where AI not only assists but decides. It's the evolution from static automation to adaptive defense, from data processing to contextual reasoning. And as these AI defenders grow more capable, they're poised to redefine what "speed" and "precision" mean in cybersecurity operations. Because soon, the most effective analyst in the SOC may not be human at all; it will be agentic.

#Cybersecurity #AI #AgenticAI #AIDefense #SOCAutomation #ThreatResponse #FutureOfCyber
-
🔒 How Amazon Uses Agentic AI to Detect Vulnerabilities at Global Scale

Security at scale is one of the hardest challenges in tech. At Amazon, we're tackling it with agentic AI: autonomous systems that can reason, plan, and act to find vulnerabilities before they become threats. Our latest approach combines:
✅ Multi-agent collaboration for comprehensive code analysis
✅ Automated reasoning to understand complex security patterns
✅ Global-scale deployment across millions of lines of code

The results? Faster detection, fewer false positives, and proactive security that keeps pace with our innovation velocity. This isn't just about automation; it's about augmenting our security teams with AI that thinks strategically about threats. The security basics still matter, but AI helps us apply them more effectively at unprecedented scale.

Key insight: agentic AI doesn't replace security expertise, it amplifies it. Our strongest security minds are now equipped with tools that can analyze, learn, and adapt alongside them.

Read the full technical deep-dive from Amazon Science: https://lnkd.in/gzqzXQ-p

#AWS #CyberSecurity #AI #Security #ResponsibleAI #ThreatDetection #CloudSecurity