If I were assessing a high-risk SaaS vendor, here are 8 things I would ask for:

𝟭. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗶𝘀 𝗞𝗲𝘆
First, I would understand what they do for my company. What data do they collect, what access do they have, what services do they provide? I would let that context steer how deep I dive.

𝟮. 𝗦𝗢𝗖 𝟮, 𝗜𝗦𝗢 𝟮𝟳𝟬𝟬𝟭, 𝗼𝗿 𝗘𝗾𝘂𝗶𝘃𝗮𝗹𝗲𝗻𝘁
I would ask for their third-party audits and read the reports to see whether they engaged a reputable firm and whether the scope, audit period, and controls apply to my usage. This spares me from asking for basics like copies of policies.

𝟯. 𝗣𝗲𝗻𝗲𝘁𝗿𝗮𝘁𝗶𝗼𝗻 𝗧𝗲𝘀𝘁
I would get a copy of their latest penetration test. I would look at the scope, when it was performed, and who performed it, and track down any findings. It is important to make sure the pentest covers the product and network that matter to you.

𝟰. 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗦𝗰𝗮𝗻𝘀
I would get a sample of 3 months of vulnerability scans, including the latest month's results, at both the network and application level. I would make sure they have the right coverage and that there are no red flags.

𝟱. 𝗩𝗲𝘁 𝗔𝗻𝘆𝗼𝗻𝗲 𝘄𝗶𝘁𝗵 𝗔𝗰𝗰𝗲𝘀𝘀 𝘁𝗼 𝗠𝘆 𝗦𝘆𝘀𝘁𝗲𝗺𝘀
I would want to make sure that anyone with access to my systems is appropriately vetted, likely via a background-screening and qualification requirement in the contract. If they are getting remote admin access to my network, I probably want to vet them myself or have my company be in on the screening.

𝟲. 𝗣𝗿𝗼𝗼𝗳 𝗼𝗳 𝗦𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆
If the company is mission-critical to my business, I may request evidence that the company is stable, up to and including audited financials, reserving rights to the source code if the company goes bankrupt, or equivalent. This is rare, but important when applicable. If it is serious enough, you may even ask to speak with executives and get commitments directly.

𝟳. 𝗖𝗼𝗺𝗽𝗮𝗻𝘆 𝗜𝗻𝘀𝘂𝗿𝗮𝗻𝗰𝗲
This is just housekeeping for most companies, but I want to make sure they are insured. I am looking for the typical General Liability, E&O, Cyber, etc. at acceptable limits.

𝟴. 𝗟𝗶𝘀𝘁 𝗼𝗳 𝗧𝗵𝗶𝗿𝗱 𝗣𝗮𝗿𝘁𝗶𝗲𝘀 𝗮𝗻𝗱 𝗦𝘂𝗯-𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗼𝗿𝘀
I may ask for a list of my vendor's critical third parties. I want to be sure they are using credible vendors wherever those vendors may impact me. I would pay close attention to technology providers, contractors, anyone who processes my data, etc.

---

Anything you would add to this list?
How to Assess Risks in Public Cloud Security
Summary
Assessing risks in public cloud security means identifying and evaluating potential vulnerabilities and threats that could impact your data and operations when using shared cloud services. This process helps organizations understand where their cloud assets might be exposed and guides them in making smarter security decisions.
- Review vendor audits: Request and read third-party audit reports, such as SOC 2, to ensure the scope and controls match your company’s usage and needs.
- Map your assets: Identify what data and applications are hosted in the cloud and classify their sensitivity, so you know which areas require stronger protection.
- Monitor continuously: Set up ongoing checks and track key security metrics, like detection times and vulnerability remediation, to quickly spot and address new risks.
Dear Business & IT Audit Leaders,

Cloud environments are not inherently secure. They are only as resilient as the questions we ask. As a cybersecurity audit leader, I don’t begin any cloud assessment without interrogating the architecture through 8 critical dimensions. These aren’t just technical checks, they’re strategic filters that reveal business risk, regulatory exposure, and operational blind spots. Whether you're migrating, auditing, or optimizing your cloud stack, these questions reveal the real posture of your environment. They cut through vendor promises and dashboards to expose what matters: risk, resilience, and regulatory readiness.

Here’s the framework I use to guide CISOs, CTOs, and audit teams:

📌 Business Purpose & Data Sensitivity
Every cloud asset must be mapped to its business function and data classification. If you don’t understand the value and risk of what’s hosted, you’re auditing in the dark.

📌 Cloud Service Model & Deployment Type
IaaS, PaaS, and SaaS, deployed as public, private, or hybrid: each combination shifts the shared responsibility model. Misidentifying it leads to control gaps and audit failures.

📌 Identity, Access & Privileged Account Management
IAM policies, MFA enforcement, and least privilege aren’t optional; they’re the backbone of cloud security. I assess not just design, but operational discipline.

📌 Encryption at Rest & In Transit
I validate cryptographic standards, key lifecycle management, and segregation of duties. Weak encryption is a silent breach waiting to happen.

📌 Network & Perimeter Defense
Firewalls, segmentation, and intrusion prevention must be tested for effectiveness, not just existence. I look for real-world resilience, not checkbox compliance.

📌 Vulnerability Management & Threat Detection
Scanning cadence, patch velocity, and incident response maturity determine whether threats are contained or compounded. I benchmark against threat intelligence and business risk.

📌 Business Continuity & Disaster Recovery Validation
RTO/RPO metrics are meaningless without tested recovery capabilities. I simulate failure scenarios to assess readiness under pressure.

📌 Regulatory Compliance & Governance Frameworks
From HIPAA to NIST to ISO 27001, I verify not just policy alignment but operational execution. Governance must be embedded, not just documented.

These 8 dimensions form the backbone of my cloud audit methodology. They help organizations move from reactive security to proactive resilience. If you're leading cloud transformation, audit readiness, or cybersecurity strategy, this is where your assessment should begin.

Let’s discuss: Which of these questions do you think is most overlooked in your organization?

#CloudSecurity #CyberAudit #ITAudit #AIaudit #RiskManagement #CloudSecurityRisk #CyVerge #CloudSecurityAudit #Cyberverge #Governance #CloudResilience #CloudGovernance
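The identity dimension lends itself to simple automation. A minimal sketch of an MFA-coverage check over an account inventory; the record fields (`name`, `mfa_enabled`, `privileged`) are illustrative, not any provider's actual export schema:

```python
def iam_findings(accounts):
    """Flag IAM audit findings from an account inventory.
    Each account is a dict with hypothetical fields:
    name (str), mfa_enabled (bool), privileged (bool)."""
    findings = []
    for acct in accounts:
        if acct["privileged"] and not acct["mfa_enabled"]:
            # privileged accounts without MFA are the worst-case gap
            findings.append(f"CRITICAL: privileged account '{acct['name']}' lacks MFA")
        elif not acct["mfa_enabled"]:
            findings.append(f"WARN: account '{acct['name']}' lacks MFA")
    coverage = sum(a["mfa_enabled"] for a in accounts) / len(accounts)
    return findings, coverage

accounts = [
    {"name": "alice", "mfa_enabled": True,  "privileged": True},
    {"name": "bob",   "mfa_enabled": False, "privileged": True},
    {"name": "carol", "mfa_enabled": False, "privileged": False},
]
findings, coverage = iam_findings(accounts)
print(findings)           # one CRITICAL (bob), one WARN (carol)
print(f"{coverage:.0%}")  # 33%
```

The point of the sketch is the split between design and operational discipline the post calls out: a policy can mandate MFA while the inventory still shows gaps.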
-
If you can’t prove impact, don’t expect to keep your budget. 🧙🏼♂️

I see it all the time as an Advisor, 2 extremes:
💥 50 dashboards of useless data, OR
😞 Zero visibility of metrics that matter

Making decisions on gut feeling. Struggling to translate cyber to business, unable to get budget. Fingers crossed, hoping nothing bad happens 🤞🏼

But the right data changes everything. So I put this together based on 26 years in the industry to help you watch what matters. Here’s what you should be tracking:👇🏼

1. Mean Time to Detect (MTTD): ↳ Average time to spot an incident; how quickly you can identify threats before damage.
2. Mean Time to Respond (MTTR): ↳ Average time to contain an incident; how effectively your team limits impact.
3. Incident Volume: ↳ Total incidents over a period; the level of threat activity and your team’s workload.
4. Phishing Click Rate: ↳ Percentage of employees who fall for simulated phishing, showing the org's human risk exposure.
5. Patch Compliance Rate: ↳ Percentage of systems patched on time; how well you’re closing common attack paths.
6. Vulnerability Remediation Time: ↳ Average time taken to fix vulnerabilities; how quickly you reduce exploitable weaknesses.
7. % of Critical Vulns Open Past SLA: ↳ High-risk vulnerabilities left unresolved past deadlines, revealing dangerous delays in protection.
8. Endpoint Detection Coverage: ↳ Endpoints with security agents deployed, showing where attackers may still have blind spots.
9. MFA Coverage: ↳ Percentage of accounts/apps protected by MFA, reflecting how well identity risks are controlled.
10. Backup Success & Test Rate: ↳ Percentage of backups completed & verified; readiness to recover = resilience.
11. Security Awareness Training Completion: ↳ Percentage of staff who finish training; the org's commitment to reducing human risk.
12. Third-Party Risk Assessment Coverage: ↳ Percentage of vendors assessed; how much supply chain risk you actually understand.
13. % of Incidents Escalated to External Notification: ↳ Incidents requiring disclosure; how often issues affect legal & reputation.
14. Dwell Time: ↳ Average time attackers stay undetected; how long adversaries have to move before you respond.
15. False Positive Rate: ↳ Percentage of alerts deemed false; how much noise is distracting your team.
16. % of Privileged Accounts Reviewed: ↳ Percentage of high-level accounts audited; control over insider and admin misuse risks.
17. Compliance Alignment Score: ↳ Percentage of required controls in place, indicating audit readiness & regulatory obligations.
18. % of Incidents with Root Cause Identified: ↳ Incidents where the true cause is found, preventing repeat attacks.

Get these in place and you'll sleep at night, and get that budget to improve. Which one do you find the most important?⤵️

🔄 Repost to help others improve cybersecurity
📲 Follow Wil Klusovsky for wisdom on cyber & tech business
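MTTD and MTTR (metrics 1 and 2) are just averages over incident timestamps, so they are easy to compute directly from an incident register. A minimal sketch, assuming each record carries `occurred`, `detected`, and `contained` datetimes (a hypothetical schema):

```python
from datetime import datetime

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

def mttd_mttr(incidents):
    """MTTD = mean(detected - occurred); MTTR = mean(contained - detected)."""
    mttd = mean_hours([i["detected"] - i["occurred"] for i in incidents])
    mttr = mean_hours([i["contained"] - i["detected"] for i in incidents])
    return mttd, mttr

incidents = [
    {"occurred": datetime(2025, 1, 1, 0, 0),
     "detected": datetime(2025, 1, 1, 4, 0),    # detected after 4h
     "contained": datetime(2025, 1, 1, 10, 0)}, # contained 6h later
    {"occurred": datetime(2025, 1, 2, 0, 0),
     "detected": datetime(2025, 1, 2, 2, 0),    # detected after 2h
     "contained": datetime(2025, 1, 2, 3, 0)},  # contained 1h later
]
mttd, mttr = mttd_mttr(incidents)
print(mttd, mttr)  # 3.0 3.5
```

Dwell time (metric 14) is the same `detected - occurred` delta averaged over confirmed intrusions rather than all incidents.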
-
💡 Stop Guessing: The Right Risk Assessment Drives Your Strategy

Choosing the right type of risk assessment is not a detail; it's a critical strategic decision. Too often, organizations use a one-size-fits-all approach and end up misallocating resources or missing key threats. The key difference often lies in the data.

Qualitative Risk Assessment uses expert judgment and descriptive, non-numeric scales (like High/Medium/Low) to rate severity and likelihood. This helps small teams prioritize quick fixes with a simple heat map.

For a data-driven approach, Quantitative Risk Assessment is essential. It uses numerical values (probabilities, percentages, frequencies) to evaluate risk and forecast potential losses or calculate the ROI on controls.

A middle ground is the Semi-Quantitative method, which assigns numeric scores (like 1-5 or 1-10) to impact and likelihood, offering more structure than a purely qualitative approach.

Risk isn't static. In evolving situations, a Dynamic Risk Assessment is an on-the-spot, real-time evaluation performed when risks shift rapidly or new ones emerge unexpectedly. A Continuous Risk Assessment, by contrast, is a proactive, ongoing process in which risks are constantly monitored and adjusted based on new information or threats.

Finally, for operational precision, you must choose between:
- Generic Risk Assessment: a general evaluation covering common hazards across similar tasks or environments. Use this for standardized operations.
- Site-Specific Risk Assessment: a focused evaluation of risks unique to a particular location, event, or project setup, considering the environment and layout.

Choosing based on your environment, data availability, and industry needs is the key to making stronger decisions.

#RiskManagement #CyberSecurity #BusinessStrategy #RiskAssessment #DecisionMaking #Security
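The semi-quantitative method described above is mechanical enough to sketch: score both factors on a small numeric scale, multiply, and map the product back to a qualitative band. The band thresholds here are assumptions for illustration, not a standard:

```python
def semi_quant_score(likelihood, impact):
    """Semi-quantitative risk scoring: likelihood and impact each on a
    1-5 scale, combined multiplicatively (1..25), then bucketed into a
    qualitative band. The 8/15 thresholds are illustrative choices."""
    score = likelihood * impact
    if score >= 15:
        band = "High"
    elif score >= 8:
        band = "Medium"
    else:
        band = "Low"
    return score, band

print(semi_quant_score(4, 5))  # (20, 'High')
print(semi_quant_score(3, 3))  # (9, 'Medium')
print(semi_quant_score(2, 3))  # (6, 'Low')
```

This is the structure behind the familiar 5x5 heat map: the numeric product gives an ordering for prioritization, while the band keeps the output communicable to non-technical stakeholders.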
-
I recently led a couple of cloud-incident workshops, got a lot of great questions, had wonderful exchanges, frankly learned a lot myself, and wanted to share a few takeaways:

• 𝗔𝘀𝘀𝘂𝗺𝗲 𝗯𝗿𝗲𝗮𝗰𝗵 - 𝘀𝗲𝗿𝗶𝗼𝘂𝘀𝗹𝘆: Treat "when, not if" as an operating principle and design for resilience.
• 𝗖𝗹𝗮𝗿𝗶𝗳𝘆 𝘀𝗵𝗮𝗿𝗲𝗱 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆: Most gaps aren’t exotic zero-days; they’re governance gray zones, handoffs, and multi-cloud inconsistencies.
• 𝗜𝗱𝗲𝗻𝘁𝗶𝘁𝘆 𝗶𝘀 𝘁𝗵𝗲 𝗰𝗼𝗻𝘁𝗿𝗼𝗹 𝗽𝗹𝗮𝗻𝗲: MFA everywhere (necessary, but not sufficient), least privilege by default, regular access reviews, strong secrets management, and a push to passwordless.
• 𝗠𝗮𝗸𝗲 𝗳𝗼𝗿𝗲𝗻𝘀𝗶𝗰𝘀 𝗰𝗹𝗼𝘂𝗱-𝗿𝗲𝗮𝗱𝘆: Extend log retention, preserve/analyze on copies, verify what your CSP actually provides, and rehearse with legal and IR together.
• 𝗗𝗲𝘁𝗲𝗰𝘁 𝗮𝗰𝗿𝗼𝘀𝘀 𝗽𝗿𝗼𝘃𝗶𝗱𝗲𝗿𝘀: Aggregate logs (AWS/Azure/GCP/Oracle), layer in behavior-based analytics/CDR, and keep a cloud-specific IR/DR runbook ready to execute.
• 𝗕𝗼𝗻𝘂𝘀 𝗿𝗲𝗮𝗹𝗶𝘁𝘆 𝗰𝗵𝗲𝗰𝗸: Host/VM escapes are rare, but possible. Don’t build your program around unicorns; prioritize immutable builds, hardening, and hygiene first.

If you’d like my cloud IR readiness checklist or the TM approach I’ve been using, drop a comment and we’ll share. Let’s raise the bar together.

#CloudSecurity #IncidentResponse #ThreatModeling #CISO #DevSecOps #DigitalForensics #MDR
EPAM Systems Eugene Dzihanau Chris Thatcher Adam Bishop Julie Hansberry, MBA Ken Gordon Sharon Nimirovski Aviv Srour
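The cross-provider detection point usually starts with normalizing events into one schema before any analytics run. A minimal sketch of that mapping step; the provider field names below are simplified stand-ins, not the providers' actual audit-log schemas:

```python
def normalize_event(provider, event):
    """Map a provider-specific event dict into a common schema.
    The input field names here are illustrative simplifications of
    each cloud's audit-log format, not the real schemas."""
    mappers = {
        "aws":   lambda e: {"user": e["userIdentity"],  "time": e["eventTime"],     "action": e["eventName"]},
        "azure": lambda e: {"user": e["caller"],        "time": e["timeGenerated"], "action": e["operationName"]},
        "gcp":   lambda e: {"user": e["principalEmail"], "time": e["timestamp"],    "action": e["methodName"]},
    }
    out = mappers[provider](event)
    out["provider"] = provider  # keep origin for triage and runbook routing
    return out

e = normalize_event("aws", {
    "userIdentity": "alice",
    "eventTime": "2025-05-01T12:00:00Z",
    "eventName": "ConsoleLogin",
})
print(e["user"], e["provider"])  # alice aws
```

Once events share a schema, the same behavior-based rules (impossible travel, privilege spikes, mass deletions) can run over all providers instead of being rewritten per cloud.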
-
Here's what 𝗠𝗼𝗱𝗲𝗿𝗻 𝗥𝗶𝘀𝗸 𝗮𝗻𝗱 𝗘𝘅𝗽𝗼𝘀𝘂𝗿𝗲 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 looks like in 2025, based on practitioner interviews, vendor briefings, deep evaluation of emerging as well as established players, and countless hours of research.

Report link: https://lnkd.in/gUS-z327

Vulnerability management isn’t what it was in the 2000s. The days of telling people to scan their assets for vulnerabilities, counting remediated CVEs, and relying on CVSS scores are behind us. This report highlights the key challenges practitioners voiced, a deep dive into the innovative ways vendors are evolving under the risk and exposure management category using our DDPER (Deployment, Data Collection, Prioritization, Exposure, Remediation) framework, a practical 5-step guide for practitioners, and our prediction.

1️⃣ 𝗘𝘅𝗽𝗼𝘀𝘂𝗿𝗲 𝗜𝘀 𝗕𝗲𝗶𝗻𝗴 𝗥𝗲𝗱𝗲𝗳𝗶𝗻𝗲𝗱
Modern platforms move beyond traditional configuration reads to define exposure. We see solutions using innovative ways to not just define but validate exposure: true network reachability analysis, detection of compensating controls in place, ingestion of unstructured data, and even assessment of social chatter to estimate exploitation probability beyond the KEV and EPSS databases.

2️⃣ 𝗖𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗖𝗼𝗻𝘃𝗲𝗿𝗴𝗲𝗻𝗰𝗲 𝗜𝘀 𝗔𝗰𝗰𝗲𝗹𝗲𝗿𝗮𝘁𝗶𝗻𝗴
Acronyms like VM, RBVM, ASM, CAASM, ASPM, BAS, CTEM, and CNAPP are no longer independent. The future lies in all of these platforms delivering dynamic scoring and context-driven risk and exposure management.

3️⃣ 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗼𝗿 𝘃𝘀. 𝗣𝘂𝗿𝗲-𝗣𝗹𝗮𝘆 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺𝘀
We’re seeing two clear market paths emerge:
- 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗼𝗿 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺𝘀: unify vulnerability data from external scanners into a normalized risk view; ideal for organizations with diverse vulnerability tooling already in place.
- 𝗣𝘂𝗿𝗲 𝗦𝗰𝗮𝗻𝗻𝗶𝗻𝗴 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺𝘀: conduct continuous native scanning across cloud, infrastructure, identity, and data (such as CNAPP platforms); ideal for organizations looking for single-solution coverage.

4️⃣ 𝗥𝗲𝗺𝗲𝗱𝗶𝗮𝘁𝗶𝗼𝗻 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀 𝗮𝗿𝗲 𝗾𝘂𝗶𝗰𝗸𝗹𝘆 𝗴𝗮𝗶𝗻𝗶𝗻𝗴 𝗽𝗿𝗲𝗰𝗲𝗱𝗲𝗻𝗰𝗲
Leading platforms now bridge security and IT with bi-directional ticketing, in-depth recommendations, SLA tracking, and fix validation, turning findings into measurable risk reduction.

5️⃣ 𝗧𝗵𝗲 𝗣𝗿𝗮𝗰𝘁𝗶𝘁𝗶𝗼𝗻𝗲𝗿’𝘀 𝗣𝗹𝗮𝘆𝗯𝗼𝗼𝗸
Selecting the right platform now requires a structured approach, one that maps business needs, operational maturity, and desired automation outcomes to the right vendor model. The 5-step guide gives organizations a quick way to evaluate how to approach the market.

Top vendors evaluated in depth: Astelia, Axonius, Cogent Security, Orca Security, Seemplicity, Tonic Security, XM Cyber, Nagomi Security, Zafran Security
-
🔍 From CVEs to Exposure Intelligence: A Technical Model for Risk-Based Vulnerability Management

The traditional CVSS-based approach is no match for today’s attack surfaces. A modern exposure management strategy must integrate telemetry, threat intel, and control-plane signals to defend against adversaries who chain misconfigs, stale privileges, and unpatched services. Here’s a breakdown of key InfoSec risks and technically grounded remediations:

🔴 Risk #1: CVE overload with no context-aware prioritization
🟢 Remediation:
- Implement exploitability filters using threat intelligence feeds (e.g., Exploit-DB, CISA KEV, Mandiant TI).
- Use EPSS (Exploit Prediction Scoring System) and MITRE ATT&CK mapping for attacker-centric triage.
- Weight vulns by asset criticality using tagging (e.g., public-facing, prod, regulated).

🔴 Risk #2: Fragmented visibility across hybrid/cloud environments
🟢 Remediation:
- Aggregate telemetry from EDR (e.g., osquery, Sysmon), CSPM tools, and IAM logs.
- Build an exposure graph to visualize relationships between identities, misconfigs, and data stores.
- Continuously scan for unknown/rogue assets across on-prem and cloud.

🔴 Risk #3: Configuration drift and unmonitored assets
🟢 Remediation:
- Use IaC drift detection (e.g., driftctl, AWS Config) to catch unintended changes.
- Enforce compliance-as-code using CIS/NIST baselines with automated remediation pipelines.
- Align infrastructure with source-of-truth inventories (CMDB, IaC repos).

🔴 Risk #4: Disconnected workflows between security and IT/DevOps
🟢 Remediation:
- Shift security left using tools like Trivy, Checkov, or GitHub Actions in CI/CD.
- Pipe exposure insights directly into ITSM platforms (e.g., Jira, ServiceNow).
- Use policy-as-code (OPA, Rego) to enforce guardrails without manual approvals.

🔴 Risk #5: Alert noise with no correlation to real risk
🟢 Remediation:
- Enrich findings with identity posture (e.g., dormant admin accounts), open ports, and data classification.
- Use attack path analysis to correlate and score multi-step exposures.
- Prioritize remediation based on blast radius and business impact, not just vuln count.

📌 Exposure management isn’t about more alerts; it’s about graph-driven visibility, risk-aligned prioritization, and automation-first remediation. This isn’t just a shift in tooling, it’s a shift in mindset. The future of InfoSec lies in exposure-centric, not alert-centric, defense.

📖 Learn more: 👉 https://lnkd.in/gPJtATGu

#InfoSec #CyberSecurity #ExposureManagement #SecurityEngineering #ThreatModeling #CloudSecurity #AttackSurfaceReduction #RiskBasedSecurity #DevSecOps #SecurityArchitecture #BlueTeamOps #MITREATTACK
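The Risk #1 remediation, combining an EPSS-style exploit probability with asset-criticality tags and KEV membership, can be sketched as a small composite-scoring function. The weights and multipliers below are illustrative assumptions, not a published formula:

```python
def exposure_priority(findings):
    """Rank findings by a composite of exploit probability (EPSS-style,
    0-1), asset criticality weight (1-5 from tagging), internet
    exposure, and known-exploited status. Weights are illustrative."""
    def score(f):
        s = f["epss"] * f["criticality"]  # base: likelihood x asset value
        if f["internet_facing"]:
            s *= 2.0                      # assumed multiplier for reachable assets
        if f["known_exploited"]:
            s += 1.0                      # CISA KEV entries float to the top
        return s
    return sorted(findings, key=score, reverse=True)

findings = [
    {"cve": "CVE-A", "epss": 0.90, "criticality": 1, "internet_facing": False, "known_exploited": False},
    {"cve": "CVE-B", "epss": 0.20, "criticality": 5, "internet_facing": True,  "known_exploited": True},
    {"cve": "CVE-C", "epss": 0.05, "criticality": 2, "internet_facing": False, "known_exploited": False},
]
print([f["cve"] for f in exposure_priority(findings)])  # ['CVE-B', 'CVE-A', 'CVE-C']
```

Note how CVE-B outranks CVE-A despite a far lower exploit probability: context (critical, internet-facing, actively exploited) dominates the raw score, which is exactly the shift away from CVSS-only triage the post argues for.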
-
Reading A Practitioner’s Guide to Post-Quantum Cryptography from the Cloud Security Alliance made me pause. It highlights something many organizations still underestimate: modern cryptography was not designed for a future with cryptographically relevant quantum computers (CRQCs). The threat is not theoretical, either. The risk comes from Store Now, Decrypt Later attacks, where encrypted data can be harvested today and broken once quantum capabilities mature. Time, not just technology, becomes the critical risk factor.

Key highlights from the guide:
• Shor’s algorithm threatens most public-key cryptography in use today, including RSA, Diffie-Hellman, and elliptic-curve algorithms, while Grover’s algorithm weakens symmetric ciphers
• CRQCs may emerge by the early 2030s, putting long-term-value data at risk even if systems are secure today
• Data confidentiality and integrity are both impacted by Store Now, Decrypt Later attacks
• NIST published post-quantum cryptography standards in 2024 (FIPS-203, FIPS-204, FIPS-205), but enterprise adoption will take time and investment
• Risk assessment must begin by identifying which data assets still hold value at “Q-Day,” not by blanket cryptographic replacement

Who should take note:
• Security leaders responsible for long-term data protection strategies
• Architects managing encryption for data at rest, data in transit, and non-repudiation
• Compliance and governance teams evaluating regulatory and sector-specific quantum readiness requirements
• Engineering teams responsible for cryptographic libraries, TLS, VPNs, KMS, and certificate management

Why this matters: Unlike most cyber threats, quantum risk is driven by time. Data intercepted today may be compromised years later. If enterprises wait until CRQCs arrive, it will already be too late for data with long-term value. At the same time, mitigation is costly, complex, and not yet fully supported by mainstream products.

The path forward: The guide emphasizes starting with disciplined risk assessment, identifying vulnerable cryptographic functions, and mapping technology components before committing to mitigation. Enterprises should periodically reassess risk, track technology maturity, and align mitigation efforts with CSA Cloud Controls Matrix guidance rather than rushing into premature or unnecessary changes.
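The "identify vulnerable cryptographic functions" step amounts to triaging a cryptographic inventory by algorithm family. A minimal sketch, assuming the inventory is a list of (component, algorithm) pairs, which is a hypothetical format, not anything the guide prescribes:

```python
# Public-key algorithms broken by Shor's algorithm on a CRQC
QUANTUM_VULNERABLE = {"RSA", "DH", "ECDH", "ECDSA"}
# NIST 2024 post-quantum standards referenced in the guide
PQC_STANDARDS = {"ML-KEM": "FIPS-203", "ML-DSA": "FIPS-204", "SLH-DSA": "FIPS-205"}

def triage(inventory):
    """Split a cryptographic inventory into components needing migration
    versus those already on quantum-resistant algorithms."""
    at_risk, safe = [], []
    for component, alg in inventory:
        (at_risk if alg in QUANTUM_VULNERABLE else safe).append((component, alg))
    return at_risk, safe

inventory = [
    ("vpn-gateway", "RSA"),
    ("api-tls", "ECDH"),
    ("backup-signing", "ML-DSA"),
]
at_risk, safe = triage(inventory)
print([c for c, _ in at_risk])  # ['vpn-gateway', 'api-tls']
```

In practice the at-risk list would then be cross-referenced against data retention value at Q-Day, so migration effort goes to long-lived data first rather than to a blanket replacement.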