Day 2 of #SecureWorldBoston While AI continued to be the word of the day, many of today’s sessions focused on the human side of cybersecurity. I was glad to see that emphasis. The fundamentals of security haven’t changed since antiquity - only the attack vectors have. Whether your adversary is advancing in a phalanx, a shield wall, mobile infantry, or through a keyboard with a script, the core principles remain the same: - Defense in depth (moat, walls, archers → firewall, IPS, EDR) - Human intuition and pattern recognition - spotting when something just feels off, whether on a battlefield or in a log file - Stoic principles applied to incident response: staying composed and executing your role under pressure, even as chaos unfolds AI dominated the morning discussions, including a session on how it can now replicate voice and video with just a few seconds of audio. That reality makes it clear: traditional phishing simulations and security awareness training will need to adapt and evolve. My career pivot from support to security is going to keep me very busy. A standout panel on resilience and incident response reinforced the importance of immutable backups, and just as importantly, ensuring teams actually know how to restore them under pressure. One CISO shared a simple but painful lesson: they had documented procedures for graceful server shutdowns… but when the time came, no one could find them. A reminder that documentation only matters if it’s accessible when you need it. The highlight of the conference was the final session. Mark Annati, CISO for the Commonwealth of Massachusetts, shared his personal journey adopting AI in both his professional and personal life. His experience felt familiar and grounding; much of what he shared mirrored my own approach. It’s easy to feel behind with the constant noise around AI - ads, hype cycles, and “must-learn” lists. His perspective helped cut through that. We’re all learning in real time. And like many of us, he sees AI not as a replacement, but as an amplifier of human capability. His caution was clear, though: it will soon be obvious who has embraced AI, and who hasn’t. He also spoke about phishing, and I was genuinely heartened to hear that he personally reaches out to users who click. That human touch still matters, even in an increasingly automated world. That wraps up #SecureWorld Boston 2026. I had the chance to network with peers, reflect on shared challenges across the industry, earn some CPEs, and even carve out time to prepare for my next certification step. See you in the field! #Cybersecurity #InfoSec #InformationSecurity #NeverStopLearning
Human Side of Cybersecurity at SecureWorldBoston
More Relevant Posts
-
AI Is Reshaping Cybersecurity — But Not the Way Most People Think For years, cybersecurity was a race between attackers and defenders, driven by tools, patches, and perimeter defenses. In 2026, that race has changed completely. It’s no longer just about who has better technology. It’s about who uses AI more intelligently. The Shift: From Technical Exploits to Intelligent Attacks Attackers are no longer relying heavily on traditional vulnerabilities. Instead, they’re leveraging AI to: Craft hyper-personalized phishing messages Automate reconnaissance at scale Generate deepfake audio and video for impersonation Bypass traditional detection systems with adaptive malware Frameworks like MITRE ATT&CK already document how adversaries are evolving toward behavior-based and human-focused tactics. The uncomfortable truth? Many organizations are still defending against yesterday’s threats. The Double-Edged Sword of AI in Security AI is not just a weapon for attackers, it’s also a powerful defense tool. Modern security teams are using AI to: Detect anomalies in real time Automate threat hunting Predict potential attack paths Reduce response time from hours to seconds But here’s the catch: AI without context can create noise, not security. The real value comes from combining AI with human expertise, analysts who can interpret, validate, and act on insights. Why This Matters to Recruiters and Organizations ? Cybersecurity talent is no longer defined by certifications alone. Today’s high-value professionals are those who can: Think like an attacker Understand human behavior in security Work with AI-driven tools, not just manual processes Bridge the gap between technical risk and business impact Organizations are shifting from tool operators to strategic security thinkers. My Perspective With a background spanning healthcare systems and cybersecurity training, I’ve seen how critical environments rely not just on technology but on resilience, awareness, and rapid response. Cybersecurity is no longer just about protecting systems. It’s about protecting people, data, and trust in an increasingly intelligent threat landscape. The biggest risk today isn’t that organizations lack tools. It’s that they may not be evolving as fast as the threats they face. So the real question is: Are we preparing for the future of cybersecurity… or still reacting to the past? #CyberSecurity #AI #ThreatIntelligence #InfoSec #DigitalSecurity #CyberDefense #TechCareers #ZeroTrust #SecurityAwareness
To view or add a comment, sign in
-
-
In a world where AI is accelerating everything — innovation, automation… and unfortunately, attacks — one thing remains irreplaceable: real conversations with real people. Over the past few weeks of meeting colleagues, partners, and clients face-to-face, one simple truth keeps getting reinforced: - AI is already reshaping application security - But many conversations are still catching up From AI led attacks to API abuse to AI-driven bots and business logic attacks, the application and API threat landscape is evolving faster than most organisations can adapt. I have experienced that the biggest breakthroughs happen in face-to-face discussions, during whiteboarding, during brainstorming sessions, and during honest exchange of challenges. I really wish that cybersecurity stakeholders meet up more often. Great catching up with so many brilliant minds recently — let’s keep pushing the conversation forward. #AppSec #AI #CyberSecurity #APIsecurity #Partnerships #Innovation
To view or add a comment, sign in
-
-
🚨 Breaking News: AI has officially taken over the RSAC 2026 Conference! But wait—hold your applause, folks. It turns out, the real stars of cybersecurity are still... humans. Shocking, right? In an era where we’re told AI can do everything but make your morning coffee, it’s refreshing to see that when it comes to safeguarding our digital realms, the human touch still reigns supreme. Here’s a little perspective: - History lesson: Remember when we thought antivirus software would solve all our problems? Fast forward, and here we are with data breaches still lurking around like that one uncle at family gatherings. - Today, AI is the shiny new toy. It’s great at spotting patterns and processing data faster than you can say “malware.” Yet, it still lacks that gut instinct that only seasoned professionals possess. - Prediction alert: As AI continues to evolve, the demand for skilled cybersecurity experts will only grow. After all, who do you trust more in a crisis? A chatbot or a human who’s been in the trenches? Let’s face it: AI can automate, analyze, and even predict. But when it comes to strategic decisions and understanding the nuances of human behavior in cybersecurity, it’s the humans who’ll always hold the key. 🔐 So, as we embrace our new AI overlords, let’s not forget the irreplaceable value of human insight and intuition. Cheers to the cybersecurity professionals keeping us safe! #Cybersecurity #AI #TechTrends #ainews #automatorsolutions #CyberSecurityAINews ----- Original Publish Date: 2026-04-07 06:05
To view or add a comment, sign in
-
The Reality of AI Attacks: Just a Quick Research Note! I've not posted here for multiple years, but as part of a case study for my PhD research in AI Security, the news of recent "Agentic" AI attacks caught my attention and I felt it was worth sharing. Recent studies show a massive shift in how cyber attacks work. AI-powered phishing now has a 54% success rate, compared to just 12% for traditional methods (Astra Security Phishing Benchmark 2026). When an email or a voice call is perfectly personalized by AI, the success rate multiplies instantly. As a Global Head of Cybersecurity and a PhD researcher, I see the below shifts in defense reshape: Speed: Looking at campaigns like CyberStrikeAI, which hit over 600 firewalls in 55+ countries at the same time, we see the power of a single autonomous agent. Human response or even our current automated tools might not move fast enough to stop agents that act this rapidly across global borders. Intent: In the past, our focus was built around the W's and H, matching patterns, IoCs, historical behavior, etc., But today, AI can change its pattern in fraction of seconds to hide and the focus now needs to expand to 'Goal Directed Monitoring" Checking the intent of AI behavior (such as Tool-Proxy Pivoting, where an agent chains legitimate API calls to achieve a malicious goal - just a simple one) is the new priority. Agentic Parity: To stop an AI attack, our defense also has to be run by AI. It's becoming obvious that we are moving into a phase where defensive autonomy must match adversarial speed, the era of the "Agentic SOC." I'm finding this quite rewarding to rediscover my interest through these case studies. I believe these studies are helping me design a more resilient defense strategy in my role as we navigate this new phase of the threat landscape. It is a great time to be working in cybersecurity!! #CyberSecurity #AI #PhDResearch #CISO #GlobalLeadership #AgenticAI #CyberStrikeAI #SecurityStrategy
To view or add a comment, sign in
-
#002 - AI Security – From Theory to Reality Following my post yesterday on the new reality of AI attacks, I came across another case study ‘Claude Mythos’ as part of my PhD research that perfectly proves the shifts in Speed and Intent I mentioned in my previous post #001 - The Reality of AI Attacks. This week, the industry is discussing findings from the new model, Claude Mythos (currently under controlled access via Project Glasswing). The data is a massive wake-up call for global security strategy, Speed (The End of "Old" Bugs): In just hours, this AI found a 27-year-old bug in OpenBSD and a 16-year-old flaw in common software (AISI-AI Security Institute). These are vulnerabilities that human researchers and traditional tests missed for decades. Intent (Autonomous Attack Chains): As I mentioned in my previous note regarding "Goal-Directed Monitoring," this model didn't just find one hole, it autonomously chained multiple flaws together to take control of a machine. It did not need a human to tell it the next step. Agentic Parity: While this power is currently in a "sandbox," it proves that Defensive Autonomy is no longer optional. When a machine can find a 27-year-old bug in an afternoon, a human-led patching cycle for about days and weeks is simply too slow to keep up. Truly excited to be diving back into these topics! This research is giving me great ideas to build a stronger and better enterprise defense as we navigate this new landscape. Again, happy to be in cybersecurity in this AI age, might be challenging but incredible time to be at the front lines! #CyberSecurity #AIResearch #CISO #ProjectGlasswing #ClaudeMythos #PhDLife #TechTrends2026 #AISecurityNote002 https://lnkd.in/gJTF3mXs
AI Security Strategist & Thought Partner | Governance, Risk & Compliance Leader | Driving Cloud Security | Automation‑Driven CDC/SOC | DFIR Excellence & Resilient Security Frameworks
The Reality of AI Attacks: Just a Quick Research Note! I've not posted here for multiple years, but as part of a case study for my PhD research in AI Security, the news of recent "Agentic" AI attacks caught my attention and I felt it was worth sharing. Recent studies show a massive shift in how cyber attacks work. AI-powered phishing now has a 54% success rate, compared to just 12% for traditional methods (Astra Security Phishing Benchmark 2026). When an email or a voice call is perfectly personalized by AI, the success rate multiplies instantly. As a Global Head of Cybersecurity and a PhD researcher, I see the below shifts in defense reshape: Speed: Looking at campaigns like CyberStrikeAI, which hit over 600 firewalls in 55+ countries at the same time, we see the power of a single autonomous agent. Human response or even our current automated tools might not move fast enough to stop agents that act this rapidly across global borders. Intent: In the past, our focus was built around the W's and H, matching patterns, IoCs, historical behavior, etc., But today, AI can change its pattern in fraction of seconds to hide and the focus now needs to expand to 'Goal Directed Monitoring" Checking the intent of AI behavior (such as Tool-Proxy Pivoting, where an agent chains legitimate API calls to achieve a malicious goal - just a simple one) is the new priority. Agentic Parity: To stop an AI attack, our defense also has to be run by AI. It's becoming obvious that we are moving into a phase where defensive autonomy must match adversarial speed, the era of the "Agentic SOC." I'm finding this quite rewarding to rediscover my interest through these case studies. I believe these studies are helping me design a more resilient defense strategy in my role as we navigate this new phase of the threat landscape. It is a great time to be working in cybersecurity!! #CyberSecurity #AI #PhDResearch #CISO #GlobalLeadership #AgenticAI #CyberStrikeAI #SecurityStrategy
To view or add a comment, sign in
-
#003 - AI Security - The Counter Move Following my previous posts on how AI-driven attacks have outpaced human defense, I wanted to share a quick update on industry's counter-move which took place at the end of last week. The partnership announcement from TrendMicro AI with Anthropic to embed Claude models directly into enterprise security platforms is exactly the "Agentic Parity" (mentioned in my post #001) which is now turning into reality. Finally the attacker’s weapon will be used against them. As a Head of Cybersecurity, I see this reshaping at not just the SOC, but board-level discussions: Proactive Hunting: Using frontier AI models to find internal zero-days with an "attacker's lens." From standard vulnerability scanning to measurable risk reduction. Agentic Workflows: Moving past simple alerts. We are reaching a point where AI can autonomously chain mitigations together at machine speed, a massive BC enablement. The Narrative: Conversation shift from Are we buying the right tools? to How fast is our autonomous resilience? Seeing my PhD research concepts in global production so quickly is highly rewarding. It provides roadmap to build a more resilient enterprise defense as we navigate this threat landscape. Looking forward to the week ahead! #CyberSecurity #AIResearch #CISO #TrendAI #SecurityStrategy #AgenticSOC #GlobalLeadership #AISecurityNote003 https://lnkd.in/gc2xRSGH
AI Security Strategist & Thought Partner | Governance, Risk & Compliance Leader | Driving Cloud Security | Automation‑Driven CDC/SOC | DFIR Excellence & Resilient Security Frameworks
The Reality of AI Attacks: Just a Quick Research Note! I've not posted here for multiple years, but as part of a case study for my PhD research in AI Security, the news of recent "Agentic" AI attacks caught my attention and I felt it was worth sharing. Recent studies show a massive shift in how cyber attacks work. AI-powered phishing now has a 54% success rate, compared to just 12% for traditional methods (Astra Security Phishing Benchmark 2026). When an email or a voice call is perfectly personalized by AI, the success rate multiplies instantly. As a Global Head of Cybersecurity and a PhD researcher, I see the below shifts in defense reshape: Speed: Looking at campaigns like CyberStrikeAI, which hit over 600 firewalls in 55+ countries at the same time, we see the power of a single autonomous agent. Human response or even our current automated tools might not move fast enough to stop agents that act this rapidly across global borders. Intent: In the past, our focus was built around the W's and H, matching patterns, IoCs, historical behavior, etc., But today, AI can change its pattern in fraction of seconds to hide and the focus now needs to expand to 'Goal Directed Monitoring" Checking the intent of AI behavior (such as Tool-Proxy Pivoting, where an agent chains legitimate API calls to achieve a malicious goal - just a simple one) is the new priority. Agentic Parity: To stop an AI attack, our defense also has to be run by AI. It's becoming obvious that we are moving into a phase where defensive autonomy must match adversarial speed, the era of the "Agentic SOC." I'm finding this quite rewarding to rediscover my interest through these case studies. I believe these studies are helping me design a more resilient defense strategy in my role as we navigate this new phase of the threat landscape. It is a great time to be working in cybersecurity!! #CyberSecurity #AI #PhDResearch #CISO #GlobalLeadership #AgenticAI #CyberStrikeAI #SecurityStrategy
To view or add a comment, sign in
-
#004 - AI Security: Agentic Parity - Shield or Blind Spot? Embedding frontier AI models into our enterprise platforms to achieve the "Agentic Parity" that I mentioned in #003, a critical question emerges for security leaders. When an AI becomes the core of SOC, it instantly becomes the adversary's primary target. Acknowledging risks like prompt injection or data poisoning isn't enough. Through my ongoing PhD research, it is clear we need a concrete architectural strategy to harden the AI layer itself. Here is how we must structure defense for the defender: LLM Firewalls (Semantic Guardrails): Traditional firewalls look at IP addresses and ports, LLM firewalls analyze intent. Before a defensive AI ingests a routine log file or alert, a semantic security layer must sanitize the input to strip out hidden instructions (Indirect Prompt Injections) before it reaches the core model. Context Isolation (Ephemeral Memory): To prevent adversaries from gradually poisoning the AI with subtle, normal-looking noise, our agents cannot rely on continuous, open-ended memory. We must architect "ephemeral" context windows—the AI evaluates a specific event, executes the mitigation and then instantly purges the data to prevent accumulated bias. Model Provenance (AI-BOM): We must treat AI models as our most critical supply chain components. By enforcing AI Bill of Materials (AI-BOMs) and cryptographically signing model weights, we ensure the agent hasn't been silently tampered with between updates. The transition to an Agentic SOC is necessary, but it demands a strict "Secure by Design" approach. Building a more resilient enterprise defense means ensuring our strongest shield doesn't quietly become our biggest blind spot. https://lnkd.in/g7--YQRr #CyberSecurity #AIResearch #CISO #SecurityStrategy #AgenticSOC #AIGovernance #GlobalLeadership #AISecurityNote004
AI Security Strategist & Thought Partner | Governance, Risk & Compliance Leader | Driving Cloud Security | Automation‑Driven CDC/SOC | DFIR Excellence & Resilient Security Frameworks
The Reality of AI Attacks: Just a Quick Research Note! I've not posted here for multiple years, but as part of a case study for my PhD research in AI Security, the news of recent "Agentic" AI attacks caught my attention and I felt it was worth sharing. Recent studies show a massive shift in how cyber attacks work. AI-powered phishing now has a 54% success rate, compared to just 12% for traditional methods (Astra Security Phishing Benchmark 2026). When an email or a voice call is perfectly personalized by AI, the success rate multiplies instantly. As a Global Head of Cybersecurity and a PhD researcher, I see the below shifts in defense reshape: Speed: Looking at campaigns like CyberStrikeAI, which hit over 600 firewalls in 55+ countries at the same time, we see the power of a single autonomous agent. Human response or even our current automated tools might not move fast enough to stop agents that act this rapidly across global borders. Intent: In the past, our focus was built around the W's and H, matching patterns, IoCs, historical behavior, etc., But today, AI can change its pattern in fraction of seconds to hide and the focus now needs to expand to 'Goal Directed Monitoring" Checking the intent of AI behavior (such as Tool-Proxy Pivoting, where an agent chains legitimate API calls to achieve a malicious goal - just a simple one) is the new priority. Agentic Parity: To stop an AI attack, our defense also has to be run by AI. It's becoming obvious that we are moving into a phase where defensive autonomy must match adversarial speed, the era of the "Agentic SOC." I'm finding this quite rewarding to rediscover my interest through these case studies. I believe these studies are helping me design a more resilient defense strategy in my role as we navigate this new phase of the threat landscape. It is a great time to be working in cybersecurity!! #CyberSecurity #AI #PhDResearch #CISO #GlobalLeadership #AgenticAI #CyberStrikeAI #SecurityStrategy
To view or add a comment, sign in
-
#005 - AI Security: Decoupling Reasoning from Knowledge Taking time this Sunday to process the architectural shifts we are seeing in enterprise defense. This past week gave us a perfect, real-world example of what is called as "Context Decay." On April 21, threat actor "MDGhost" posted a breach announcement targeting Reliance Jio on a deep web forum. Static AI models missed it entirely, Why? Because an LLM relying only on its base training weights is essentially looking in the rearview mirror. To achieve true "Agentic Parity," we must structurally decouple our defensive agent's reasoning capabilities from its knowledge base. Here is the technical blueprint for that shift: Dynamic Context Injection (RAG & MCP): The Jio incident proves that without a live pipeline to Cyber Threat Intelligence (CTI), models suffer from severe context decay. We must utilize Retrieval Augmented Generation (RAG) pipelines, integrated via frameworks like the Model Context Protocol (MCP). This allows the agent to ingest live deep-web signals directly into its ephemeral context, converting theoretical knowledge into immediate, actionable intel. Sandboxing Agentic Function Calling: The value of an LLM in the SOC is autonomous execution. However, when an agent can trigger APIs to isolate hosts, it creates a massive new attack vector. We must implement strict execution sandboxing and cryptographic verification for every API call, ensuring an adversary cannot hijack the agent's output layer. Governing Vector Memory: Persistent memory is the new attack vector. If agents store historical telemetry in a persistent vector database without strict oversight, they are highly vulnerable to slow-burn data poisoning. We must move from implicit memory models to explicit, sensitivity-scored retention policies. The control surface is now defined by what the agent is permitted to remember. The architectural narrative is evolving rapidly. We can no longer just keep asking if we have AI in the SOC. The real boardroom question is whether our AI architecture is operating on today's intelligence or yesterday's news. https://lnkd.in/gjtAnqF4 #CyberSecurity #AIResearch #CISO #SecurityArchitecture #AgenticSOC #ThreatIntelligence #ModelContextProtocol #AISecurityNote005 Heading into Monday with a clear focus on building these secure-by-design systems.
AI Security Strategist & Thought Partner | Governance, Risk & Compliance Leader | Driving Cloud Security | Automation‑Driven CDC/SOC | DFIR Excellence & Resilient Security Frameworks
The Reality of AI Attacks: Just a Quick Research Note! I've not posted here for multiple years, but as part of a case study for my PhD research in AI Security, the news of recent "Agentic" AI attacks caught my attention and I felt it was worth sharing. Recent studies show a massive shift in how cyber attacks work. AI-powered phishing now has a 54% success rate, compared to just 12% for traditional methods (Astra Security Phishing Benchmark 2026). When an email or a voice call is perfectly personalized by AI, the success rate multiplies instantly. As a Global Head of Cybersecurity and a PhD researcher, I see the below shifts in defense reshape: Speed: Looking at campaigns like CyberStrikeAI, which hit over 600 firewalls in 55+ countries at the same time, we see the power of a single autonomous agent. Human response or even our current automated tools might not move fast enough to stop agents that act this rapidly across global borders. Intent: In the past, our focus was built around the W's and H, matching patterns, IoCs, historical behavior, etc., But today, AI can change its pattern in fraction of seconds to hide and the focus now needs to expand to 'Goal Directed Monitoring" Checking the intent of AI behavior (such as Tool-Proxy Pivoting, where an agent chains legitimate API calls to achieve a malicious goal - just a simple one) is the new priority. Agentic Parity: To stop an AI attack, our defense also has to be run by AI. It's becoming obvious that we are moving into a phase where defensive autonomy must match adversarial speed, the era of the "Agentic SOC." I'm finding this quite rewarding to rediscover my interest through these case studies. I believe these studies are helping me design a more resilient defense strategy in my role as we navigate this new phase of the threat landscape. It is a great time to be working in cybersecurity!! #CyberSecurity #AI #PhDResearch #CISO #GlobalLeadership #AgenticAI #CyberStrikeAI #SecurityStrategy
To view or add a comment, sign in
-
I got a sales email this morning inviting me to a "can't miss" cybersecurity roundtable. Scheduled for April 8th. Today is April 23rd. The message itself? Polished. Professional. Clearly AI-assisted. But no one applied the most basic control: common sense. I'm all for AI I use it every day. But this is exactly where automation without oversight becomes noise. In cybersecurity, we talk constantly about controls, validation, and human oversight. Yet somehow those principles don't always make it into our own business workflows. AI isn't the problem. Unchecked AI is. If you're using AI for outreach or anything customer-facing one simple rule: trust, but verify… before you hit send. #CyberSecurity #AI #Leadership #CISO #RiskManagement #Automation
To view or add a comment, sign in
Explore related topics
- How Security Teams can Integrate AI
- AI Training for Cybersecurity Engineers
- Importance of Human Factors in AI Security
- How AI Will Shape Software Security
- How AI Transforms Security Practices
- How AI Will Transform Cyber Defense Strategies
- How to Build a Resilient Security Operations Center With AI
- The Future of AI Security Strategies
- How to Secure AI Infrastructure
- Essential Skills for AI in Security
Explore content categories
- Career
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Artificial Intelligence
- Employee Experience
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Hospitality & Tourism
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development