Security can’t be an afterthought - it must be built into the fabric of a product at every stage: design, development, deployment, and operation. I came across an interesting read in The Information on the risks from enterprise AI adoption. How do we do this at Glean? Our platform combines native security features with open data governance - providing up-to-date insights on data activity, identity, and permissions, making external security tools even more effective. Some other key steps and considerations: • Adopt modern security principles: Embrace zero trust models, apply the principle of least privilege, and shift-left by integrating security early. • Access controls: Implement strict authentication and adjust permissions dynamically to ensure users see only what they’re authorized to access. • Logging and audit trails: Maintain detailed, application-specific logs for user activity and security events to ensure compliance and visibility. • Customizable controls: Provide admins with tools to exclude specific data, documents, or sources from exposure to AI systems and other services. Security shouldn’t be a patchwork of bolted-on solutions. It needs to be embedded into every layer of a product, ensuring organizations remain compliant, resilient, and equipped to navigate evolving threats and regulatory demands.
Integrating Security Solutions into Current Technology Stack
Explore top LinkedIn content from expert professionals.
Summary
Integrating security solutions into a current technology stack means building safeguards directly into your organization's tools and systems, not just adding security as an afterthought. This approach ensures that protection is woven through every layer of technology, from design to daily operations, helping companies manage risks as they adopt new technologies like AI and cloud services.
- Consolidate tools: Choose security solutions that work together seamlessly so you can reduce complexity and avoid gaps between platforms.
- Assign clear ownership: Make sure every part of your security system has a dedicated person responsible, so nothing falls through the cracks.
- Map data and assets: Regularly update inventories and data flow diagrams to keep track of what needs protection and to improve your team's visibility.
-
-
🚀 My latest research "Cognitive Integration Process for Harmonising Emerging Risks" is now published in the Journal of AI, Robotics and Workplace Automation. 95% of Australian businesses are SMEs operating on ~$500 cybersecurity budgets. Yet they're being asked to securely integrate AI, quantum computing, and blockchain into their operations. How do you make sound security decisions about emerging technologies when you lack both technical expertise and enterprise-level resources? This is fundamentally a systems engineering challenge that requires first principles thinking. When I presented this research at the Programmable Software Developers Conference in Melbourne in March, I asked the room: "Heard of an AI security incident?" No hands up. "Would you know what an AI security incident looked like?" No hands. This illustrates the gap between AI hype and foundational security understanding - the first principles are missing. That's why I developed CIPHER (Cognitive Integration Process for Harmonising Emerging Risks) - a cognitive mental model that applies systems thinking to technology integration in resource-constrained environments. 🧠 Six cognitive stages: Contextualise, Identify, Prioritise, Harmonise, Evaluate, Refine 🔧 Systems engineering foundation: Built on cognitive science, game theory, and dynamical systems theory 🎯 Technology agnostic: Works across any emerging technology, any environment, any resource constraint CIPHER is a cybersecurity framework that gives smaller organisations the same strategic decision-making capabilities that large enterprises use, designed for their operational realities. It bridges the gap between cutting-edge security research and the practical constraints that define how most Australian businesses operate. The framework recognises that in resource-constrained environments, enterprise security models cannot be applied at scale. You need cognitive tools that help teams think systematically in complex integration challenges without requiring extensive technical depth or large security budgets. My research journey continues: I'm now deep into my UNSW Canberra Masters Research capstone, building on my 2023 work on LLMs in SME cybersecurity. The goal? Developing specialised security models and creating an agnostic, holistic measurement framework for LLMs in Australian SMEs - essentially taking the $500 problem from 2023 into the AI-driven reality of 2025. #CyberSecurity #SystemsEngineering #SME #Australia #AI #EmergingTech #ResourceConstrainedSecurity #CIPHER #FirstPrinciples
-
Security projects don’t fail at the firewall. They fail at the seams. We patch systems. We log events. We deploy “the stack.” Then someone props open a side door. No alert. No handoff. No response. That’s not a tech gap. That’s a choreography failure. Here’s where it breaks: CCTV and access control log to different clocks SIEM sees the signal, but no one owns the action Badge gets revoked, but backup credentials still work. Ops inherits the system, with no runbook, no timeline. Patch approved, but the firmware team “missed the cycle” Security isn’t just a list of tools. It’s the flow between them. So build for the flow: Map each integration: CCTV → VMS → SIEM → Response Assign end-to-end ownership; not just component leads. Timebox patching: detect to deploy, full stack Simulate the chain: test the path, not the part. PMs → track the seams. SOC leads → test the flow. CISOs →measure time to containment, not tools deployed. Dashboards don’t stop breaches. Handoffs do. 🧩 Share if you’ve seen the gap between “deployed” and “defended.” الثغرة في الفجوة، لا في الجدار الناري. The breach is in the gap, not the firewall.
-
If I were leading or advising a security program right now, I would not waste time searching for the "silver bullet" solution. There isn't one. No tool will fix weak fundamentals. No AI engine will replace disciplined execution. And no dashboard will save you from a bad process. Here's exactly what I would focus on instead👇 1️⃣ I would master the basics. Strong identity management, least privilege, and asset inventory may not be exciting, but they can significantly reduce the likelihood of breaches. Most incidents can be traced back to a misconfigured account, an unpatched server, or a forgotten endpoint. Basics win. 2️⃣ I would simplify the security stack. Too many organizations get lost in overlapping tools they don't utilize. Complexity isn't a sign of maturity. Every platform you add increases the attack surface and creates an admin console that is often left unmonitored. Consolidate, integrate, and cut out the noise. Better yet, find tools that collaborate, not necessarily a vendor ecosystem, but vendors that have chosen to work together to make the tools much more effective. 3️⃣ I would establish accountability, rather than just sending alerts. Security isn't about flashing lights — it's about people consistently doing the right thing. Develop tactics, techniques, and procedures; then train, test, and verify. Make it clear who owns what. Ownership reduces risk faster than automation. 4️⃣ Prioritize visibility. You can't defend what you can't see, and you can't patch what you don't know exists. Start with an accurate asset inventory and data flow map — that's your "common operational picture" in cybersecurity. 5️⃣ I would measure outcomes, not activities. Patching 1,000 servers doesn't matter if the one you missed gets exploited. Focus on metrics that show risk reduction — mean time to detect, mean time to respond, number of high-value assets without MFA. VPN without MFA. 6️⃣ I would start having risk-based discussions. The organization doesn't have an unlimited budget. Stop trying to protect everything equally. Start by protecting your highest-risk assets first, according to your organization's risk appetite and tolerance levels. The basics aren't just "old-school security." The basics are security. ✅ Tools enhance fundamentals. ✅ They do not replace them. Stop searching for the magic product. Start enforcing the basics with precision and discipline. That's how you build resilience. That's how you win. ✨ What's one "basic" your organization still struggles to execute consistently?
-
AI security is quickly becoming a real architecture problem, not just a model problem. As more companies deploy copilots, agents, and AI-driven automation, the security stack needs to evolve around how these systems actually operate. Prompts, models, APIs, agents, and automated actions introduce entirely new control points. A practical way to think about the emerging Enterprise AI Security Stack is in four layers. 1. Foundations Identity and Access Data Protection Infrastructure Integrity Start by extending Zero Trust to AI workloads. Every model interaction, API call, and agent action should be tied to a verified identity with clear authorization. 2. Input and Processing Prompt Injection Defense API Security Agent Permissioning Treat prompts as an attack surface. Implement input filtering, strong API authentication, and strict permissioning for agents that can call tools or systems. 3. Output and Actions Output Filtering Monitoring and Anomaly Detection Incident Response Do not just trust model outputs. Monitor behavior for anomalies, filter unsafe responses, and build playbooks for AI-related incidents. 4. Governance and Intelligence Compliance Mapping Encryption and Key Management Risk Intelligence Track where models are used, what data they access, and how they are governed. Encryption, key management, and audit trails become essential. A few practical steps organizations can start with now: 1. Inventory where AI models and agents are already running. 2. Require identity-based access for all model APIs. 3. Implement guardrails for prompts and outputs. 4. Monitor AI systems the same way you monitor production infrastructure. 5. Define incident response procedures for AI failures or misuse. AI security will increasingly look like identity architecture plus runtime monitoring. The organizations that get ahead are the ones designing this intentionally instead of reacting after deployment. How are teams structuring AI security right now?
-
𝗙𝗿𝗼𝗺 𝗖𝗜/𝗖𝗗 𝘁𝗼 𝗖𝗜/𝗖𝗗/𝗖𝗦: 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗻𝗴 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗪𝗶𝘁𝗵𝗼𝘂𝘁 𝗦𝗹𝗼𝘄𝗶𝗻𝗴 𝗗𝗼𝘄𝗻 In modern DevOps, we talk a lot about CI/CD. But what if the real challenge isn’t just shipping code faster, but doing so securely and without compromising speed? That’s where 𝗖𝗜/𝗖𝗗/𝗖𝗦 (Continuous Security) comes in. It's about embedding security directly into the pipeline, right from the start, and not as a bottleneck at the end. I recently integrated 𝗖𝗼𝗱𝗲𝗤𝗟 into some of my repositories (there's a link to one of them in the comments). Here’s what I experienced: • No added friction for developers or teams • Critical vulnerabilities detected before code was merged • Security feedback directly in the CI pipeline, no manual intervention required Also, some key takeaways for CI/CD/CS I learned: • Automate security scans as part of the build process. Security should be integrated like any other test. • Track security trends over time. Tools like CodeQL allow you to see long-term trends and recurring vulnerabilities. • Present actionable security results. Scan results should be clear and prioritized, so developers know exactly what to address without confusion. • Shift security left. The earlier security is integrated into the pipeline, the easier it is to catch vulnerabilities before they reach production. 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 isn't just about infrastructure or CI/CD pipelines. It’s about embedding secure-by-default practices directly into the developer workflow. By treating security as part of your CI/CD pipeline, you can significantly reduce risk without slowing down the pace of innovation. --- How are you integrating security into your pipelines? Do you use tools like CodeQL, Trivy, or Snyk? Let’s connect and discuss your strategies. #DevOps #DevSecOps #CICD #PlatformEngineering #SecurityAutomation #CodeQL #ShiftLeft #GitHubActions #ContinuousSecurity
-
𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐢𝐧𝐠 𝐂𝐍𝐀𝐏𝐏 𝐰𝐢𝐭𝐡 𝐄𝐱𝐢𝐬𝐭𝐢𝐧𝐠 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐓𝐨𝐨𝐥𝐬: 𝐀𝐯𝐨𝐢𝐝𝐢𝐧𝐠 𝐎𝐯𝐞𝐫𝐥𝐚𝐩 𝐖𝐡𝐢𝐥𝐞 𝐄𝐧𝐡𝐚𝐧𝐜𝐢𝐧𝐠 𝐂𝐨𝐯𝐞𝐫𝐚𝐠𝐞 Last month, our security team looked like a frustrated puzzle assembly crew. Multiple security tools, each claiming to protect our cloud-native applications, but with massive coverage gaps and redundant alerts that made our SOC team want to throw their monitors out the window. 🛡️🤯 𝐓𝐡𝐞 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞: 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐓𝐨𝐨𝐥 𝐂𝐡𝐚𝐨𝐬 - 7 different cloud security solutions - Constant alert fatigue - Unclear ownership of security responsibilities - Significant financial overhead - Potential blind spots in our cloud infrastructure 𝐎𝐮𝐫 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐜 𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡: 𝐂𝐍𝐀𝐏𝐏 𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐢𝐨𝐧 We didn't just add another tool—we strategically mapped our existing security ecosystem and identified precise integration points for our Cloud-Native Application Protection Platform (CNAPP). 𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐑𝐨𝐚𝐝𝐦𝐚𝐩: 1. 𝐂𝐨𝐦𝐩𝐫𝐞𝐡𝐞𝐧𝐬𝐢𝐯𝐞 𝐓𝐨𝐨𝐥 𝐈𝐧𝐯𝐞𝐧𝐭𝐨𝐫𝐲 - Documented every existing security solution - Mapped current capabilities and limitations - Identified potential integration points 2. 𝐂𝐍𝐀𝐏𝐏 𝐒𝐞𝐥𝐞𝐜𝐭𝐢𝐨𝐧 𝐂𝐫𝐢𝐭𝐞𝐫𝐢𝐚 - API-driven architecture - Extensive third-party integration support - Machine learning-powered correlation engine - Flexible deployment options 3. 𝐏𝐡𝐚𝐬𝐞𝐝 𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐢𝐨𝐧 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲 - Pilot testing in non-production environments - Gradual rollout across cloud workloads - Continuous tuning and optimization 𝐑𝐞𝐦𝐚𝐫𝐤𝐚𝐛𝐥𝐞 𝐑𝐞𝐬𝐮𝐥𝐭𝐬 📊 - 65% reduction in security alerts - 40% cost savings on security infrastructure - 92% improvement in threat detection accuracy - Unified visibility across multi-cloud environments - Streamlined compliance reporting 𝐊𝐞𝐲 𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐬 - Integration is more important than addition - Choose tools that communicate, not just protect - Continuous evaluation is crucial - User experience matters in security tools 𝐋𝐞𝐬𝐬𝐨𝐧𝐬 𝐋𝐞𝐚𝐫𝐧𝐞𝐝 - Start with clear documentation - Involve cross-functional teams - Prioritize interoperability - Never compromise on granular control Have you successfully integrated cloud security tools? What challenges did you face? Share your experiences below! 👇 #CloudSecurity #CNAPP #CyberSecurity #CloudNative #SecOps #TechInnovation
-
Rising adoption of #5G, #edgecomputing, and #IoT technologies across operational technology (OT) environments is driving organizations to rethink how they protect interconnected machines and critical processes, as cyber-physical security becomes increasingly intertwined. Such integration creates a two-pronged challenge, which includes how to keep operational safety in real-time and contend with policy timelines that are measured in years to secure legacy systems never designed with today’s threat landscape. Industrial Cyber consulted #industrialcybersecurity experts to explore how cyber defenders can reconcile the tension between maintaining real-time operational safety and the extended timelines needed to deliver legacy cyber-physical security. ⁂ Paul Shaver ⁂, global practice leader at Mandiant (part of Google Cloud)’s Industrial Control Systems/ Operational Technology Security Consulting practice, said that ensuring operational safety is a core component of secure and resilient #industrial processes. “That includes protecting modern and legacy systems alike. However, legacy systems come with some challenges, such as the inability to rip and replace or patch them in most cases.” Some technologies can significantly simplify this challenge, Agustín Valencia Gil Ortega, OT security business development lead for Spain and Portugal at Fortinet, said, before illustrating the point with two examples. “First, modern next-generation firewalls equipped with deep packet inspection for industrial protocols can apply intrusion prevention system (IPS) signatures based on OEM security advisories without disrupting ongoing operations. This approach, commonly known as virtual patching, allows these protections to be deployed directly on the firewall while machines remain fully operational.” John Cusimano, chief strategy officer (CSO) and vice president for GRC and training services at Armexa, said that safety and security must be co-engineered, and not treated as separate disciplines. “Legacy CPS often lack modern security controls, yet they perform safety-critical functions that cannot be interrupted. The solution lies in adopting a cyber-safety engineering approach that integrates safety and cybersecurity risk assessments.” He added that methodologies like Cyber PHA or Cyber HAZOP, based on ISA/IEC 62443-3-2, enable cross-functional teams to identify and mitigate cyber risks that could impact safety collaboratively. Marty Edwards, president and CEO of SiriusPPT, said he does not see much ‘tension’ in this issue, noting that defenders have plenty of options to secure systems, modern and legacy alike. “Proper planning, implementation, and especially testing of security-related hardware and software will help ensure a safe and secure environment with minimal downtime requirements.” #OTCyber #OTcybersecurity #CPSSecurity
-
🔐 AI Governance Is No Longer Optional — It Must Be Integrated Into Cybersecurity Training & GRC Now As AI systems become embedded across enterprise security, threat detection, identity workflows, and automation pipelines, the risk surface is expanding faster than traditional controls can keep up. Effective AI governance must now be treated as a first-class component of cybersecurity programs—embedded directly into training, operational security, and GRC frameworks. Here’s how forward-leaning security teams are doing it: 🔎 1. Establish an AI Governance Framework Use structured governance models that mirror established security frameworks: AI risk classification: Identify AI systems, data flows, decision impact, and safety-critical components. Model lifecycle controls: Apply versioning, approval gates, drift monitoring, and performance validation. Security & privacy baselines: Enforce threat modeling, data minimization, PII controls, and red-team evaluations against prompt injection and model exploitation. 🛡 2. Integrate AI Threat Modeling Into Training Extend existing secure engineering and AppSec training to include: AI/ML-specific threat scenarios: Model poisoning, adversarial inputs, jailbreaks, training-data leakage. Secure prompt engineering: Guardrails, context restriction, least-privilege prompts, and API-level access management. Model behavior validation: Teach staff how to evaluate hallucination risk, output integrity, and system response boundaries. Supply chain considerations: Validate datasets, model sources, vendor controls, and licensing compliance. 📘 3. Embed AI Governance Into GRC Processes Treat AI systems like any other technology subject to governance, but with enhanced oversight: Policy Mapping: Align AI use with ISO 42001, NIST AI RMF, and existing enterprise security policies. AI Risk Register Entries: Document model usage, data categories, risk ratings, and compensating controls. Continuous Monitoring: Measure model drift, decision error rates, anomalous outputs, and access patterns. Control Families: Integrate AI-specific controls into your existing GRC stack—access control, data classification, audit logging, third-party risk, and model deployment workflows. 🧩 4. Build AI Governance Into Incident Response AI incidents require new playbooks: Model-driven incident categories: Output manipulation, model degradation, training data exposure, unauthorized fine-tuning. Forensic Support: Log prompts, context injection attempts, and model inference metadata. Rollback Mechanisms: Maintain approved model versions, data lineage tracking, and automated reversion paths. #Cybersecurity #AIGovernance #GRC #CyberRiskManagement #AIsecurity #InformationSecurity #SecurityEngineering #NISTAI #ISO42001 #ThreatModeling #CyberTraining #CISO #RiskAndCompliance #AIMaturity
-
A Software Engineering Mindset for Cloud Security: In the past few years building Twingate with Lior Rozner, I've watched with both fascination and concern as organizations struggle to adapt their security postures to modern cloud infrastructure. The hard truth is this: the on-prem, perimeter-focused security approaches many still rely on are fighting yesterday's war. The center of gravity for cyberattacks has fundamentally shifted to cloud environments. This isn't just a technical observation. It's a paradigm shift that demands we completely rethink how we approach security. The cloud has completely dismantled the old perimeter-based security model. In cloud environments, resources are ephemeral, infrastructure is defined in code, and changes happen continuously. This new reality requires a software engineering mindset to be at the core of your security strategy. What does this engineering-driven approach look like in practice? 1) Code-Defined Security: Security controls must be expressible as code, versioned in repositories, and deployed through the same pipelines as the infrastructure they protect. Manual configurations and point-and-click security tools simply cannot scale to match cloud velocity. 2) Automation Over Gatekeeping: Security teams that operate as approval bottlenecks will inevitably be bypassed. Instead, automated guardrails that provide immediate feedback to developers within their existing workflows lead to both better security and faster delivery. 3) API-First Everything: Every security capability should be accessible programmatically. This enables security to become part of CI/CD pipelines rather than existing outside them. 4) Continuous Verification: Static, point-in-time security assessments must give way to continuous monitoring and real-time validation that matches the dynamic nature of cloud environments. Most notably, companies with the strongest cloud security postures aren't necessarily those with the largest security teams or budgets. They're the ones that have embraced this engineering mindset. They treat security as code, automate remediations, and enable developers to address vulnerabilities within their existing workflows. To make this work, your security stack must integrate seamlessly with Terraform and other infrastructure-as-code frameworks. Monitoring must connect directly to the observability stacks engineering teams already use. Policy enforcement must happen at build and deploy time, not after resources are already running. Etc. IMO, the future of security belongs to the organizations willing to embrace a software engineering mindset across their entire security program. This isn't just about new tools (though the right tools is a part of it). It's about a fundamentally different approach to protecting our most critical assets.
Explore categories
- Hospitality & Tourism
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Artificial Intelligence
- Employee Experience
- Healthcare
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Career
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development