Triage at Scale: Severity × Exploitability × Exposure

The math of modern AppSec broke years ago, and most programs are still pretending it didn't. Some 48,185 CVEs were published in 2025 (up from 40,009 in 2024 and 29,066 in 2023), yet only about 6% of all published CVEs are ever exploited in the wild, and Cyentia/Kenna research has consistently shown organizations realistically patch only 10–15% of open vulnerabilities each month. Combine those two facts and the implication is unavoidable: choosing the right 10% to fix is the entire job. Severity-driven triage cannot make that choice: more than 47% of CVEs are rated High or Critical by NVD, and Kodem audit data from 2025 found 88% of "Critical"-labeled CVEs and 57% of "Highs" overstated relative to real-world exploitability. The shift practitioners need is operational.

This playbook describes how to actually run a triage program built on three multiplicative axes: Severity × Exploitability × Exposure, with the executive framing required to defend it upstairs.

The Triage Problem Is Now an Arithmetic Problem

The headline numbers are now extreme enough that anyone defending a CVSS-only program is implicitly accepting an unsolvable workload. NIST itself reports CVE submissions grew 263% between 2020 and 2025, with Q1 2026 submissions running roughly one-third higher than Q1 2025. Even after enriching nearly 42,000 CVEs in 2025 (45% more than any prior year) the NVD could not keep pace. On April 15, 2026, NIST formally moved to risk-based enrichment: only three categories now receive prioritized analysis: (1) CVEs in CISA's KEV catalog (target SLA: one business day), (2) CVEs for software used within the federal government, and (3) CVEs for EO 14028 critical software. Everything else is now categorized as "Not Scheduled." All backlogged CVEs published before March 1, 2026 were swept into that same bucket. NIST also stopped routinely issuing its own CVSS score when the originating CNA has already provided one - meaning the long-standing assumption that NVD provides an authoritative, independent severity rescore is no longer reliably true. The CVE program itself survived 2025's MITRE funding cliff only via an 11th-hour CISA contract extension and the launch of the independent CVE Foundation on April 16, 2025. The infrastructure that AppSec leaders treated as background reality for 25 years is now a managed-decline service - and any triage pipeline that depends on NVD enrichment or NVD severity scores as its primary signal is already broken.

Inside organizations the picture is no better. Veracode's 2025 State of Software Security analyzed 1.3M applications and 126.4M raw findings; the average flaw fix time has grown from 171 days in 2020 to 252 days in 2025 - a 47% increase. Half of organizations now carry critical security debt (high-severity flaws open more than a year), and 70% of that critical debt sits in third-party code. Black Duck OSSRA 2025 found 86% of audited codebases contain at least one vulnerable open-source component, with the average codebase containing 911 OSS components and the number of OSS files per app tripling from ~5,300 in 2020 to over 16,000 in 2024. Sonatype's 10th annual State of the Software Supply Chain documented 6.6 trillion OSS download requests in 2024 and a 156% YoY increase in malicious packages, while 80% of application dependencies remain un-upgraded for over a year despite safer alternatives existing 95% of the time.

The downstream human cost is well-documented and increasingly unsustainable. Cross-survey data shows SOC teams now process roughly 2,992 alerts per day, with 63% going unaddressed and 42% ignored entirely; 71–84% of analysts report burnout and 70%+ are considering leaving. Tenable's contributed data in Verizon's 2025 DBIR found 60% of breaches involved vulnerabilities for which patches had existed more than a month. The Suffolk County 2022 ransomware incident ($25M cleanup) was traced to an alert flood that analysts had redirected into an ignored Slack channel. Severity-driven triage is, at scale, just an alert-fatigue generator with CVSS branding.


Why Severity Is Necessary But Insufficient

Carnegie Mellon SEI has been delivering this critique in writing since 2018. The seminal "Towards Improving CVSS" paper was blunt: "CVSS takes ordinal data and constructs a novel regression formula, via unspecified methods… we have been given no evidence that the formula is empirically or theoretically justified… CVSS is inadequate." The same paper documented that more than half of CVSS survey respondents could not score a flaw within four points of consensus, and Allodi/Banescu (2018) found only 57% accuracy in CVSS scoring among trained practitioners. NVD's historical policy explicitly stated that when published vulnerability details are insufficient, the score defaults to 10.0 - meaning information vacuum is treated as worst case. Red Hat publicly disputes NVD scores routinely because "NVD focuses on the flaw as a worst-case in the broadest sense, regardless of compilation options or the operating system." With NIST's April 2026 move to stop rescoring CNA-supplied CVSS values, those CNA scores - and their inherent vendor self-interest - now flow through to your scanners untouched. The "second opinion" your pipeline used to assume is gone for most CVEs.

The distributional consequences are exactly what you would expect. Of 2024 CVEs, 12.7% were Critical and 34.9% High (47.6% combined); 2022's combined figure was 60.1%. CVSS-only prioritization has been measured by Cyentia/Kenna at roughly 5% efficiency - i.e., for every 20 vulnerabilities you remediate using a CVSS≥8.8 strategy, only one is actually exploitable. A peer-reviewed 2023 paper (Jacobs et al., WEIS 2023) put it bluntly: CVSS-based prioritization performs "no better than choosing at random." More damning: Recorded Future's analysis of high-risk CVE lists found the highest average attacker risk scores belong to medium- and low-rated CVSS CVEs (85.98 and 85.29), not Highs (75.74) - sophisticated adversaries actively select lower-CVSS vulnerabilities precisely because defenders deprioritize them.

CVSS v4.0, released November 1, 2023, partially addresses these issues with new Attack Requirements, granular Vulnerable/Subsequent System scope, and, crucially, the new Automatable and Value Density metrics adapted directly from SSVC. Adoption has been weak: as of late 2025, Microsoft, Cisco, Oracle, and Red Hat advisories still ship CVSS v3.1 only. Treat v4.0 as forward progress on the input data, not a substitute for risk-based triage. Severity remains a useful technical-impact ceiling - it tells you what the vulnerability could do under ideal attacker conditions. It does not tell you whether anyone will exploit it, or whether your specific system is exposed.


Exploitability: The EPSS, KEV, and SSVC Stack

The exploitability axis is where the modern stack has matured most dramatically since 2020. Three signals matter, and they answer different questions.

EPSS (Exploit Prediction Scoring System) answers "how likely is exploitation in the next 30 days?" The current model, EPSS v4 (released March 17, 2025), ingests exploitation activity for ~12,000 vulnerabilities per month, malware/EDR telemetry, public exploit code presence, Shodan scanning data, HackerOne Hacktivity, and CWE categorization, producing a daily-updated 0–1 probability. Performance against the CVSS≥7 baseline is striking: per Empirical Security/FIRST, holding coverage constant, EPSS v4 needs roughly 6% of the effort that CVSS≥7 needs (an 8× efficiency gain). A 2024 Cyentia/FIRST study confirmed that EPSS ≥0.6 yields ~80% efficiency at 60% coverage, and EPSS ≥0.1 yields ~50% efficiency at 80% coverage. EPSS now scores 328,362 CVEs with ~410 new ones added daily. FIRST officially declines to publish a single threshold; 0.10 (top ~12% of CVEs by percentile) is the most-cited practitioner cut-off, with 0.6 used for higher-confidence triage.
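
As a concrete starting point, here is a minimal sketch (Python) that pulls daily EPSS scores from FIRST's public API and flags anything above the 0.10 practitioner cut-off discussed above. The endpoint is FIRST's published one; the threshold and the example CVE IDs are illustrative choices, not official guidance.

```python
import requests

EPSS_API = "https://api.first.org/data/v1/epss"  # public FIRST endpoint
THRESHOLD = 0.10  # most-cited practitioner cut-off; tune to your risk appetite

def epss_scores(cve_ids: list[str]) -> dict[str, float]:
    """Return {cve_id: probability of exploitation in the next 30 days}."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cve_ids)}, timeout=30)
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

if __name__ == "__main__":
    scores = epss_scores(["CVE-2021-44228", "CVE-2023-4863"])  # illustrative
    for cve, p in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{cve}: {p:.3f} -> {'triage now' if p >= THRESHOLD else 'defer'}")
```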

CISA KEV (Known Exploited Vulnerabilities) answers a different question: "is exploitation already happening?" As of April 2026 the catalog contains 1,583 entries - roughly 0.5% of all known CVEs. KEV grew by 245 entries in 2025 versus 185 in 2024 (a 30% acceleration), and CISA uses SSVC internally to make inclusion decisions. Inclusion requires (1) an assigned CVE, (2) reliable evidence of in-the-wild exploitation, and (3) clear remediation guidance. KEV's authority makes it the de facto cross-industry SLA driver - and its centrality just became structural: it is now the only category of CVE NIST commits to enriching within one business day. But practitioners must understand its coverage gap: VulnCheck's parallel catalog tracked 717 newly exploited CVEs in 2024 versus CISA's 170, and provided earlier evidence than CISA in 71.4% of overlapping cases with a mean lead time of 41.6 days. KEV is necessary, not sufficient. Augment it with VulnCheck's KEV, GreyNoise observed exploitation, and your own threat-intel feeds.
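
Checking findings against KEV is equally mechanizable. A minimal sketch, using CISA's published JSON feed (the findings list is illustrative):

```python
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")  # CISA's published feed

def kev_cve_ids() -> set[str]:
    """Download the KEV catalog and return the set of listed CVE IDs."""
    resp = requests.get(KEV_URL, timeout=60)
    resp.raise_for_status()
    return {v["cveID"] for v in resp.json()["vulnerabilities"]}

findings = ["CVE-2021-44228", "CVE-2024-12345"]  # illustrative scanner output
kev = kev_cve_ids()
for cve in findings:
    status = ("in KEV: exploitation confirmed, SLA clock starts" if cve in kev
              else "not in KEV: check EPSS, VulnCheck KEV, GreyNoise")
    print(f"{cve} -> {status}")
```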

SSVC (Stakeholder-Specific Vulnerability Categorization) is the structural alternative for organizations exhausted by score arbitrage. Developed by SEI/CERT in 2019 and adopted by CISA in 2020, SSVC produces a decision rather than a score: Track, Track*, Attend, or Act. The CISA Coordinator tree walks five decision points - Exploitation status, Automatable, Technical Impact, Mission Prevalence, Public Well-Being Impact. Because Exploitation is the first node, only Active exploitation can lead to Act in the default model. Patrick Garrity's analysis found only ~0.06% of CVEs reach Act at default settings, and a Bitsight analysis of CISA Vulnrichment data showed 0.52% Act, 21.9% Attend, the remainder Track/Track*. Operationally, SSVC is most valuable as the output policy layer sitting on top of EPSS and KEV inputs - Nucleus Security has shown that as few as 16 rules can implement a working SSVC tree.
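
To make "a decision rather than a score" concrete, here is a simplified sketch of an Exploitation-first SSVC tree. It illustrates the pattern - note that only active exploitation can reach Act - but the branch logic below is a compressed illustration, not CISA's full published coordinator tree.

```python
from enum import Enum

class Decision(Enum):
    TRACK = "Track"
    TRACK_STAR = "Track*"
    ATTEND = "Attend"
    ACT = "Act"

def ssvc_decide(exploitation: str, automatable: bool,
                technical_impact: str, mission_prevalence: str) -> Decision:
    """exploitation: none|poc|active; technical_impact: partial|total;
    mission_prevalence: minimal|support|essential."""
    if exploitation == "active":
        # Only active exploitation can lead to Act in the default model.
        if (automatable or technical_impact == "total"
                or mission_prevalence == "essential"):
            return Decision.ACT
        return Decision.ATTEND
    if exploitation == "poc":
        if automatable and mission_prevalence == "essential":
            return Decision.ATTEND
        return Decision.TRACK_STAR if technical_impact == "total" else Decision.TRACK
    return Decision.TRACK

print(ssvc_decide("active", True, "total", "essential").value)  # Act
print(ssvc_decide("poc", False, "partial", "minimal").value)    # Track
```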

The empirical bottom line on exploitability: in 2024 only 768 CVEs (~2% of those published) were publicly reported as exploited in the wild, and even among "exploited" CVEs fewer than 5% reach more than 1 in 10 organizations. Mandiant's M-Trends data has tracked time-to-exploit collapsing from 63 days (2018–19) to 32 days (2021–22) to 5 days (2023), and by Verizon's 2025 DBIR the median time to mass exploitation for new critical edge-device CVEs is effectively zero days. Rapid7's 2026 reporting shows median time from CVE publication to KEV inclusion dropped from 8.5 days to 5 days. Exploits have been the #1 initial intrusion vector in Mandiant data for six consecutive years (32% in 2025), while DBIR 2025 shows vulnerability exploitation rose 34% YoY to 20% of breaches and edge-device exploitation grew 8× from 3% to 22% of exploit actions.


Exposure: The Dimension That Finally Separates Noise From Risk

If exploitability tells you what attackers want to hit, exposure tells you whether they can hit you. This is where the most defensible noise reduction lives, and where most programs leave the largest amount of value on the table. Four sub-dimensions matter.

Reachability analysis is the highest-leverage AppSec lever of the last five years and has remarkably consistent cross-vendor data. Endor Labs' 2024 State of Dependency Management Report, covering seven languages, found that fewer than 9.5% of vulnerabilities are reachable at function level, and that combining reachability with EPSS yields ~98% noise reduction. Snyk's data on Java Maven projects estimates roughly 3% of vulnerabilities are reachable. Semgrep's foundational study of 1,614 Dependabot alerts across 1,100 OSS projects found only 1.9% reachable, and their dataflow reachability across 10 languages claims up to 98% false-positive reduction. Backslash Security reports that ~89% of "noisy unused" packages can be discarded via reachability analysis. The convergent figure across Endor, Snyk, Semgrep, Backslash, and Coana is unmistakable: reachable vulnerabilities are typically 1.9–10% of total SCA findings. Endor's 2022 research separately established that 95% of vulnerabilities live in transitive dependencies, not directly imported ones - meaning package-level reachability alone is insufficient and you need at least class-level (preferably function-level or dataflow) precision.
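
The operational consequence is a two-stage filter that is trivial to express in a pipeline. A minimal sketch, assuming your SCA tool emits a function-level reachable flag and you have joined in EPSS scores (field names are illustrative):

```python
THRESHOLD = 0.10  # EPSS cut-off; a policy choice, not a standard

findings = [  # illustrative joined SCA + EPSS output
    {"cve": "CVE-2024-1111", "reachable": True,  "epss": 0.42},
    {"cve": "CVE-2024-2222", "reachable": False, "epss": 0.91},  # unreachable: noise
    {"cve": "CVE-2024-3333", "reachable": True,  "epss": 0.01},  # unlikely: defer
]

actionable = [f for f in findings if f["reachable"] and f["epss"] >= THRESHOLD]
print(f"{len(actionable)} of {len(findings)} findings survive the funnel")
```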

Runtime context is the second-order signal that confirms what static reachability suggests. Datadog's State of DevSecOps 2025 - the most empirically credible cross-customer dataset, drawn from tens of thousands of applications - reported the single most quotable statistic of the year: after runtime context was applied, only 18% of vulnerabilities with a critical CVSS score remained critical. Average high-and-critical vulnerabilities per service drop from 12.2 to 7.5 with runtime adjustment. Datadog's 2024 report found that out of tens of millions of automated scanner attack attempts, only 0.0065% successfully triggered a vulnerability - a near-perfect proof point that severity in isolation is meaningless against real attacker telemetry. Oligo's eBPF-based research found only ~50% of files loaded at runtime are actually executed; ARMO claims 90%+ reduction in Kubernetes CVE work via runtime reachability; Sweet Security customers report focusing on roughly 8 fixes per cluster per month. The cross-vendor convergence is again clear: runtime context filtering yields 80–95% noise reduction, broadly overlapping the static reachability signal rather than additive to it.

Internet exposure is the third filter and the one most easily measured. ESG research finds 69% of organizations have experienced at least one cyberattack starting from an unknown or unmanaged internet-facing asset, and Trend Micro's 2025 data shows over 70% of cybersecurity incidents originated from unknown or unmanaged assets. Palo Alto's Unit 42 2024 Attack Surface Threat Report identified software vulnerabilities as the #1 initial-access vector, with 23% of exposures involving critical IT/security infrastructure. Wiz's 2025 Cloud Data Security Snapshot found 35% of cloud environments have compute assets that both expose sensitive data AND are vulnerable to high/critical threats, and 12% have publicly exposed containers with high-severity vulnerabilities that have known exploits - Wiz's "toxic combinations" model collapses tens of thousands of vulnerabilities to tens of critical attack paths.

Asset criticality and data sensitivity form the fourth filter, and the one most often skipped. Treat the pair as a multiplier of business impact: ARMO's industry research found only 47% of organizations re-evaluate vulnerabilities based on the criticality of IT assets, and one-third take more than a month to even locate where newly disclosed CVEs reside. The pragmatic move is a tiered classification - Tier 0 (regulated/revenue-bearing/customer-data), Tier 1 (customer-facing supporting), Tier 2 (internal critical), Tier 3 (everything else) - fed from the CMDB or service catalog and tagged into the vulnerability platform. Snyk's "Business Criticality" project attribute is one productized version; FAIR's distinction between asset criticality (denial impact) and asset sensitivity (disclosure impact) is the conceptual backbone.
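
A minimal sketch of that tiering as code, fed from CMDB-style tags. The tag names and multipliers are illustrative defaults, not an industry standard:

```python
TIER_MULTIPLIER = {0: 2.0, 1: 1.5, 2: 1.0, 3: 0.5}  # business-impact weights

def asset_tier(tags: set[str]) -> int:
    """Map CMDB/service-catalog tags to the four-tier model above."""
    if tags & {"regulated", "revenue-bearing", "customer-data"}:
        return 0  # Tier 0: regulated / revenue-bearing / customer-data
    if "customer-facing" in tags:
        return 1  # Tier 1: customer-facing supporting
    if "internal-critical" in tags:
        return 2  # Tier 2: internal critical
    return 3      # Tier 3: everything else

tier = asset_tier({"customer-data", "prod"})
print(tier, TIER_MULTIPLIER[tier])  # 0 2.0
```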


Operationalizing the Formula: From Signal to Ticket

The triage formula is multiplicative on purpose: Risk = Severity × Exploitability × Exposure × Asset Criticality, where any factor approaching zero collapses the score. This is the right behavior. A theoretical CVSS 10 with no public exploit, in a non-reachable transitive dependency, on an internal dev system, is not a fire - and your developers know it even if your scanner doesn't.
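
A minimal sketch of that multiplicative behavior, with each factor normalized to [0, 1] so any near-zero factor collapses the score. The normalizations and weights below are illustrative, not a published standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float           # 0-10 technical-impact ceiling
    epss: float           # 0-1 probability of exploitation within 30 days
    in_kev: bool          # confirmed in-the-wild exploitation
    reachable: bool       # function-level reachability
    internet_facing: bool
    asset_tier: int       # 0 (most critical) .. 3

TIER_WEIGHT = {0: 1.0, 1: 0.75, 2: 0.5, 3: 0.25}

def risk(f: Finding) -> float:
    severity = f.cvss / 10.0
    exploitability = 1.0 if f.in_kev else f.epss   # KEV makes exploitation a fact
    exposure = (0.6 if f.reachable else 0.05) + (0.4 if f.internet_facing else 0.0)
    return severity * exploitability * exposure * TIER_WEIGHT[f.asset_tier]

paper_ten = Finding("CVE-2025-0001", 10.0, 0.01, False, False, False, 3)
live_fire = Finding("CVE-2025-0002", 7.5, 0.40, True, True, True, 0)
print(f"{risk(paper_ten):.4f} vs {risk(live_fire):.4f}")  # ~0.0001 vs 0.7500
```

The CVSS 10 with no exploit and no reachability scores three orders of magnitude below the internet-facing KEV entry on a Tier 0 asset - which is exactly the behavior the funnel needs.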

A useful concrete funnel - adapted from a Picus Security mid-size financial-services case study - looks like this: 15,000 open issues → 9,400 critical/high by CVSS → 6,700 after EPSS + asset criticality → 1,300 after exposure validation → 300 after compensating controls applied, a 95%+ reduction landing on a queue a real team can actually clear. Phoenix Security's published 4D risk model, Microsoft Defender Vulnerability Management's recent integration of EPSS plus internet-facing tagging plus critical-asset designation, AWS Inspector's correlation of CVSS with network reachability, and Tenable's VPR all implement variations of the same multiplicative pattern. The value is not in the specific weights - it is in the discipline of refusing to triage on any single axis.

Translate the score into risk-based SLA tiers that bypass severity-only contracts. A defensible modern policy looks like:

- Critical + KEV + internet-exposed = 24–72 hours
- Critical, or EPSS ≥ 0.10 and reachable = 7 days
- High and reachable = 14–30 days
- Standard High = 30 days
- Medium = 60 days
- Low or not reachable = 90 days or formally accepted

CISA BOD 22-01 sets 14 days for KEV in federal civilian agencies (FedRAMP 20x is proposing 3 days for credibly exploitable internet-reachable findings). PCI-DSS 4.0.1 requires 30 days for critical/high. None of those compliance regimes prevent you from running a tighter, risk-tiered SLA - they set a floor, not a ceiling.
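
As code, a minimal sketch of that policy, assuming a finding dict with the fields used in the earlier sketches; the thresholds mirror the tiers above and the exact numbers are policy choices:

```python
def sla_days(f: dict) -> int:
    """Map a finding to a remediation SLA in days (risk-tiered policy above)."""
    if f["in_kev"] and f["cvss"] >= 9.0 and f["internet_facing"]:
        return 3   # 24-72 hours: treat as incident-adjacent
    if f["cvss"] >= 9.0 or (f["epss"] >= 0.10 and f["reachable"]):
        return 7
    if f["cvss"] >= 7.0 and f["reachable"]:
        return 14  # 14-30 days; pick the tight end for Tier 0/1 assets
    if f["cvss"] >= 7.0:
        return 30
    if f["cvss"] >= 4.0:
        return 60
    return 90      # Low or not reachable: 90 days, or route to formal acceptance

print(sla_days({"in_kev": True, "cvss": 9.8, "internet_facing": True,
                "epss": 0.9, "reachable": True}))  # 3
```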

Three operational disciplines determine whether the model survives contact with engineering reality. First, automate the routing, not just the scoring: Tenable's 2025 acquisition of Vulcan Cyber, the ASPM consolidation around Apiiro/ArmorCode/Cycode/Phoenix Security, and Wiz's Dazz acquisition all reflect the same insight - score-without-route is theater. Bidirectional Jira/ServiceNow/GitLab Issues integration with auto-tagging, ownership inferred from CODEOWNERS, and grouped tickets (one ticket per fixable upgrade, not one per CVE) are table stakes. Second, invest in paved roads, not exhortation: Jason Chan's Netflix model - "a collection of well-supported, optional solutions for common problems" - and Google's "Security Signals" project (CSP rolled out in monitoring mode first via reverse proxy across all services) consistently outperform mandate-driven programs because optionality builds adoption. BSIMM16's data on security champions programs (1:6 ratio in small orgs, 1:17 in large) is the cultural complement. Third, measure the right things: Cobalt's 2026 State of Pentesting found a 42-point gap between C-suite belief and practitioner reality on SLA compliance (57% vs. 15%), with top-performer half-life on high-risk findings at 10 days and bottom-decile at 249 days - a 25× spread that exists almost entirely because of measurement and feedback discipline.
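
On the routing point, the "one ticket per fixable upgrade" grouping is worth showing concretely. A minimal sketch, with illustrative scanner fields, that collapses per-CVE noise into per-upgrade tickets:

```python
from collections import defaultdict

findings = [  # illustrative scanner output
    {"cve": "CVE-2024-1111", "service": "payments", "package": "jackson-databind", "fix": "2.17.1"},
    {"cve": "CVE-2024-2222", "service": "payments", "package": "jackson-databind", "fix": "2.17.1"},
    {"cve": "CVE-2024-3333", "service": "web", "package": "lodash", "fix": "4.17.21"},
]

tickets = defaultdict(list)  # one ticket per (service, package, fix_version)
for f in findings:
    tickets[(f["service"], f["package"], f["fix"])].append(f["cve"])

for (service, pkg, fix), cves in tickets.items():
    print(f"[{service}] upgrade {pkg} -> {fix}: closes {len(cves)} CVEs ({', '.join(cves)})")
```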


Tooling: The Consolidation Is Real But Not Finished

The market has restructured rapidly around the framework above. The acquisitions speak louder than the analyst reports: Wiz acquired Dazz in November 2024 (~$450M), then Google acquired Wiz in March 2025 (~$32B, closed March 2026); Tenable acquired Vulcan Cyber in February 2025 (~$150M); Cisco's Kenna Security reached end-of-life in late 2024, driving large customer migrations; Armis acquired Silk Security; Cisco acquired Splunk ($28B). Gartner's 2025 Hype Cycle for Application Security positioned ASPM and Reachability Analysis as transformational, predicting that by end of 2026 at least 40% of organizations will default to their AST vendors for AI-based autoremediation of vulnerable code, and by 2027 30% of security exposures will stem from vibe coding practices. Gartner's 2022 CTEM forecast - that organizations prioritizing security investments via continuous threat exposure management will be 3× less likely to suffer a breach by 2026 - remains directionally supported by Forrester TEI work showing 90%+ reduction in severe-breach likelihood, though it has not been independently empirically validated. Treat the tooling as an enabler, not the strategy. The strategy is the formula and the operating model around it.


Executive and Board Framing: The Language of Capital

The framing shift that most determines whether this program survives a budget cycle is the one most security leaders skip. Boards do not buy CVSS distributions. They buy risk reduction in dollars, regulatory defensibility, and customer trust, in that order. The SEC's July 2023 cybersecurity disclosure rules require Form 8-K Item 1.05 disclosure within four business days of a materiality determination - meaning your board now has fiduciary exposure to whether your triage program can credibly distinguish material from non-material. NIS2 (in force October 2024) imposes 24-hour incident reporting and C-level executive accountability with penalties up to €10M or 2% of global turnover. DORA (applicable from January 2025) mandates threat-led penetration testing and oversight of critical third-party ICT providers. Risk-based triage is now a regulatory primitive, not a security preference.

The board narrative writes itself when you have the data. Lead with quantified exposure reduction tied to business outcomes, not vulnerability counts. The CrowdStrike Falcon Exposure customer Hereford reported to their board: "In less than a year… we reduced critical vulnerabilities by 98% in our DMZ, 92% across our entire server board, and 86% on all workstations." That sentence works because each number maps to a defensible asset tier with documented exposure context. Pair that with FAIR-style quantification (Loss Event Frequency × Loss Magnitude expressed as Value at Risk), Monte Carlo-driven loss exceedance ranges, and a small set of sustained-trend metrics: MTTR by risk tier, SLA compliance by tier, KEV exposure window, % of production-reachable critical findings, and engineering hours recovered through noise reduction. Strobes' published case data shows triage automation reclaiming ~600 hours of manual work per quarter - enough to recover the platform license inside one quarter. The IBM 2024 Cost of a Data Breach figure ($4.88M global average; $9.36M U.S.; $9.77M healthcare) is the standard anchor, but the more powerful number is your organization's quantified expected annual loss before and after the program. Cyber-insurance carriers now operationalize this directly: SAFE Security's Mosaic Insurance partnership offers 15% premium discounts for "average" breach-likelihood organizations and 30% for "best-in-class" based on FAIR-quantified risk.
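
For the Monte Carlo loss-exceedance piece, a minimal FAIR-style sketch: model annual loss-event counts as Poisson and per-event magnitude as lognormal, then read percentiles off the simulated distribution. Every parameter below is an illustrative placeholder, not a calibrated estimate:

```python
import numpy as np

rng = np.random.default_rng(42)
TRIALS = 100_000
LEF = 0.8          # Loss Event Frequency: expected loss events per year
MAG_MEDIAN = 2e6   # median single-event loss magnitude, USD
MAG_SIGMA = 1.0    # lognormal shape parameter: heavy right tail

annual_loss = np.zeros(TRIALS)
events = rng.poisson(LEF, TRIALS)
for i, n in enumerate(events):
    if n:  # sum n lognormal event losses for this simulated year
        annual_loss[i] = rng.lognormal(np.log(MAG_MEDIAN), MAG_SIGMA, n).sum()

for p in (0.50, 0.90, 0.99):  # points on the loss-exceedance curve
    print(f"P{int(p * 100)} annual loss: ${np.quantile(annual_loss, p):,.0f}")
print(f"Expected annual loss: ${annual_loss.mean():,.0f}")
```

Run the same simulation with pre- and post-program parameters and the delta is the "expected annual loss retired" number the board actually wants.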

The two things to never present to a board are CVE counts and CVSS distributions. Both are inputs. Neither is a measure of your program. If your board reporting still leads with "we have 47,000 critical vulnerabilities," you are reporting the size of your scanner's input queue, not your business risk posture.


Surviving the Post-Mythos Reality

On April 7, 2026, Anthropic publicly announced Claude Mythos Preview and Project Glasswing - then promptly restricted access to 40 companies after the model demonstrated it could autonomously find and exploit zero-day vulnerabilities in every major operating system and web browser. In seven weeks of testing, Mythos discovered over 2,000 previously unknown vulnerabilities, including a 27-year-old bug in OpenBSD and sophisticated exploit chains that required no human guidance. More troubling: a Discord group gained unauthorized access to Mythos within hours of announcement, and the same capabilities that find bugs for defenders work equally well for attackers.

The math is stark. Mythos autonomously wrote exploits that expert penetration testers said would have taken weeks to develop. It turned CVE identifiers and git commit hashes into functional N-day exploits "much faster, cheaper, and without intervention" than any human researcher. Most critically, Mythos represents only the beginning - as Anthropic's red team noted, "we see no reason to think that Mythos Preview is where language models' cybersecurity capabilities will plateau." Other frontier labs are developing similar capabilities, and the proliferation is inevitable.

This is where risk-based triage transforms from operational efficiency to organizational survival. Here's why the Severity × Exploitability × Exposure framework becomes your defense against AI-assisted attackers:

Exploit development just became commoditized. What historically required deep expertise and weeks of work now happens autonomously in hours. Every published CVE is potentially a weaponized threat within days, not months. CVSS-only triage assumes human-paced exploitation timelines that no longer exist. Organizations still prioritizing by severity alone are optimizing for a threat landscape that ended in April 2026.

Volume will overwhelm human-driven triage. If Mythos found 2,000+ zero-days in seven weeks, imagine the disclosure volume when every major software project gets similar scrutiny from multiple AI systems. Your existing backlog of "critical" and "high" findings will be joined by an exponential increase in newly discovered vulnerabilities. The only sustainable path is multiplicative filtering that collapses 15,000 findings to 300 actionable ones - exactly what the Severity × Exploitability × Exposure model delivers.

AI attackers will target your false negatives. The Recorded Future data showing sophisticated adversaries selecting medium/low CVSS vulnerabilities becomes prescient in a world where AI can systematically analyze your entire attack surface. Mythos-class models don't just find bugs - they understand which ones defenders will deprioritize. Risk-based triage forces you to consider reachability, exposure, and asset criticality for every finding, closing the blind spots that severity-only approaches leave wide open.

Speed becomes the only sustainable advantage. Anthropic's guidance is clear: "software users and administrators will need to drive down the time-to-deploy for security updates," and the traditional model in which "out-of-band releases are reserved for in-the-wild exploits" may no longer be viable. When AI can turn any disclosed vulnerability into a working exploit faster than your patch cycle, the defensive advantage belongs to the side that can identify, prioritize, and remediate the truly dangerous findings before they reach AI-assisted attackers. This requires automated routing, risk-based SLAs, and the operational discipline to fix the high-confidence exposures while formally accepting the noise.

Noise reduction becomes a strategic capability. In a world where every vulnerability could theoretically be exploited rapidly, the organization that can correctly identify the 5% that are actually reachable, exposed, and business-critical gains decisive advantage. The other 95% still need to be tracked, but they don't need to consume engineering cycles in the first 72 hours. The Severity × Exploitability × Exposure approach is the difference between "drinking from the fire hose" and "taking targeted action."

The practitioners who survive the post-Mythos landscape won't be the ones with the most advanced AI tools - they'll be the ones who built risk-based triage systems resilient enough to handle AI-scale vulnerability discovery and fast enough to outpace AI-assisted exploitation. This isn't a future problem. Mythos exists, unauthorized groups have accessed it, and other labs are racing to develop competing capabilities. The question isn't whether your organization will face AI-assisted attackers. The question is whether your triage program can defend against them.


Conclusion: Triage Is Now a Business Capability

The honest read of the 2024–2026 data is that the vulnerability ecosystem itself has changed shape underneath us. NVD has formally retreated to a federal-priority service as of April 15, 2026, with KEV inclusion now the only guarantee of prompt enrichment and NVD second-opinion CVSS scores no longer routinely produced. The CVE program survived 2025 only via an emergency contract option and a parallel non-profit foundation. Attackers are exploiting in five days while organizations patch edge devices in 32 days median, and AI-assisted development is generating insecure code in 45% of cases (Veracode 2025) while introducing entirely new risk classes like prompt injection (HackerOne reported 540% YoY growth). In that environment, severity-only triage is no longer just inefficient - it is a governance failure.

The Severity × Exploitability × Exposure framework is not a new invention; it is the synthesis of fifteen years of Cyentia/Kenna research, EPSS/SSVC/KEV instrumentation, and reachability tooling that has finally matured enough to be operationalized at scale.

The practitioners who get this right in 2026 are not the ones running the most scanners. They are the ones running the fewest decisions - replacing thousands of severity-driven escalations with a small number of high-confidence, business-contextualized actions, defended in front of the board with quantified risk reduction rather than vulnerability inventory. The math of vulnerability management used to be a counting problem. It is now a prioritization problem, and prioritization is, in the end, an act of executive judgment. Build the formula, automate the routing, invest in the paved roads, and report the right metrics - and the conversation about your program changes from "why are we still doing this manually" to "how much risk did we just retire this quarter." That is the conversation worth having.
