Alert fatigue undermines SOC effectiveness by overwhelming analysts with noise. To reduce false positives and optimize detection coverage, implement a structured, metric-driven tuning cycle:

1. Unique Analytic Identification
- Ensure every detection rule carries a globally unique identifier.
- Embed this ID and the analyst's final disposition (True Positive / False Positive) in each alert record.

2. Weekly Accuracy Reporting (a minimal reporting sketch in Python follows this section)
- Retrieve all resolved alerts on a weekly cadence.
- Group records by alert ID to determine total firings per analytic.
- Within each group, calculate the ratio and count of true versus false positives.
- Produce comparative charts (e.g., stacked bars) to highlight high-volume and low-accuracy alerts.

3. Impact-Driven Prioritization
- High Volume + Low Accuracy. Example: Alert C fires 125 times but yields only 20 true positives (84% FP rate). Action: Refine detection logic, introduce additional context enrichment (threat intelligence feeds, user-/asset-based whitelisting), or consider rule deactivation if not business-critical.
- High Volume + High Accuracy. Example: Alert A fires 200 times at a 90% true-positive rate. Action: Investigate upstream preventive controls (network segmentation, endpoint hardening) to reduce true detections at the source.
- Low Volume + High Accuracy. Example: Alert D fires 10 times with 100% accuracy. Action: Validate that tuning has not inadvertently introduced false negatives; maintain the existing configuration.

4. Supplementary Metrics for Continuous Improvement
- Mean Time to Triage (MTTT): Monitor triage latency to identify process bottlenecks.
- False Negative Identification: Correlate incident post-mortems with missing alerts to uncover blind spots.
- Automation Potential: Leverage enrichment playbooks and SOAR workflows to auto-close low-risk false positives or accelerate context gathering.

5. Institutionalizing the Tuning Lifecycle
- Weekly SOC Briefings: Present alert-accuracy dashboards and tuning progress to stakeholders.
- Quarterly Reviews: Reassess critical use cases, adjust thresholds based on evolving threat patterns, and validate rule efficacy against recent adversary behaviors.
- Tuning Standard Operating Procedure: Maintain a living document that captures best-practice tuning techniques (e.g., threshold calibration, enrichment integration, correlation rule templates).

By embracing this structured tuning methodology, SOCs can systematically reduce false-positive noise, accelerate genuine incident identification, and allocate analyst capacity toward proactive threat hunting rather than reactive noise management.
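As a concrete illustration of step 2, here is a minimal weekly-report sketch, assuming resolved alerts can be exported with an alert ID and an analyst disposition (both column names, `alert_id` and `disposition`, are hypothetical):

```python
import pandas as pd

# Hypothetical export of last week's resolved alerts.
alerts = pd.DataFrame({
    "alert_id":    ["A", "A", "C", "C", "C", "D"],
    "disposition": ["TP", "TP", "FP", "FP", "TP", "TP"],
})

# Per-analytic firing counts and true-/false-positive rates.
report = (
    alerts.groupby("alert_id")["disposition"]
          .agg(total="count", true_positives=lambda s: (s == "TP").sum())
          .assign(tp_rate=lambda df: df.true_positives / df.total,
                  fp_rate=lambda df: 1 - df.true_positives / df.total)
          .sort_values(["total", "tp_rate"], ascending=[False, True])
)
print(report)  # high-volume, low-accuracy analytics float to the top
```

Sorting by volume descending and accuracy ascending surfaces the step-3 tuning candidates first; the resulting table feeds directly into the stacked-bar charts.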
How to Reduce False Positives in Scanning
Explore top LinkedIn content from expert professionals.
Summary
Reducing false positives in scanning means improving the accuracy of detection tools so that harmless items are not incorrectly flagged as threats or issues. This process helps limit wasted time and resources, whether scanning for cyber threats, cancer, or drug candidates.
- Fine-tune detection: Adjust scanning rules or algorithms based on real-world results and known patterns to minimize harmless items being incorrectly flagged.
- Integrate reliable data: Use additional context, like threat intelligence or healthy tissue data, to help scanners differentiate between genuine threats and benign findings.
- Automate review process: Set up systems that automatically suppress or resolve alerts known to be false positives, allowing experts to focus on real issues.
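As a toy illustration of the third point, a suppression pass might auto-resolve alerts whose rule and context match a pattern analysts have already confirmed benign (the rule names and fields here are invented):

```python
# Hypothetical auto-suppression: close alerts matching known-benign patterns.
KNOWN_BENIGN = {("scheduled_task", "backup-svc"), ("powershell", "sccm-agent")}

def triage(alert: dict) -> str:
    key = (alert["rule"], alert["process_owner"])
    return "auto-closed" if key in KNOWN_BENIGN else "queued for analyst"

print(triage({"rule": "powershell", "process_owner": "sccm-agent"}))  # auto-closed
```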
Today, Radiology published our latest study on breast cancer. This work, led by Felipe Oviedo Perhavec from Microsoft's AI for Good Lab and Savannah Partridge (UW/Fred Hutch) in collaboration with researchers from Fred Hutch, University of Washington, University of Kaiserslautern-Landau, and the Technical University of Berlin, explores how AI can improve the accuracy and trustworthiness of breast cancer screening. We focused on a key challenge: MRI is an incredibly sensitive screening tool, especially for high-risk women—but it generates far too many false positives, leading to anxiety, unnecessary procedures, and higher costs.

Our model, FCDD, takes a different approach. Rather than trying to learn what cancer looks like, it learns what normal looks like and flags what doesn't. In a dataset of over 9,700 breast MRI exams—including real-world screening scenarios—our model:
- Doubled the positive predictive value vs. traditional models
- Reduced false positives by 25%
- Matched radiologists' annotations with 92% accuracy
- Generalized well across multiple institutions without retraining

What's more, the model produces visual heatmaps that help radiologists see and understand why something was flagged—supporting trust, transparency, and adoption. We've made the code and methodology open to the research community. You can read the full paper in Radiology: https://lnkd.in/gc82kXPN

AI won't replace radiologists—but it can sharpen their tools, reduce false alarms, and help save lives.
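The learn-normal, flag-deviation idea at the heart of this approach can be illustrated with a deliberately simple one-class baseline. This is a toy stand-in, not the paper's FCDD model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Train on "normal" examples only; random feature vectors stand in for
# features extracted from healthy exams.
normal_train = rng.normal(0.0, 1.0, size=(500, 16))
center = normal_train.mean(axis=0)

def score(x):
    # Anomaly score: distance from the learned center of "normal".
    return np.linalg.norm(x - center, axis=-1)

# Threshold chosen so roughly 1% of normal training data would be flagged.
threshold = np.quantile(score(normal_train), 0.99)

new_exam = rng.normal(3.0, 1.0, size=16)  # synthetic outlier
print(score(new_exam) > threshold)        # True -> flag for human review
```

The same structure scales up: the real model replaces the centroid with a learned deep representation, and the per-pixel version of the score becomes the explanatory heatmap.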
-
Adding a short molecular-dynamics (MD) step after docking in virtual drug screening can cut wet-lab costs by >50%. These savings matter especially for startups and small biotechs needing to stretch their runway, yet few teams are using the technique.
🔸 A <5 ns "shake-out" MD run plus MM/PBSA rescoring can more than double the confirmed hit rate by removing docking false positives (Graves 2008; Brooijmans 2010).
🔸 Wet-lab costs scale almost linearly with the number of compounds tested (~$800/compound). Twice the hit rate means half the compounds and half the spending.
🔸 A few GPU minutes per ligand cost pennies but can save hundreds or thousands of dollars in assays.
Back-of-the-envelope example (1M-compound screen):
• Docking only → 10% hit rate (100 hits / 1,000 tested) ≈ $800k
• Docking + MD → 20% hit rate (100 hits / 500 tested) ≈ $400k
Feel free to reach out if you are planning a screening campaign. Happy to chat. SimAtomic
#MolecularDynamicsSimulation #HitIdentification #VirtualDrugScreening #Biotech
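The back-of-the-envelope arithmetic works out as follows, using the post's assumed hit rates and ~$800-per-compound figure:

```python
COST_PER_COMPOUND = 800   # post's assumed wet-lab cost per compound tested ($)
HITS_NEEDED = 100         # confirmed hits the campaign must deliver

def campaign_cost(hit_rate: float) -> int:
    """Cost of testing enough compounds to reach HITS_NEEDED at a given hit rate."""
    compounds_tested = round(HITS_NEEDED / hit_rate)
    return compounds_tested * COST_PER_COMPOUND

print(campaign_cost(0.10))  # docking only:  1,000 compounds -> 800000
print(campaign_cost(0.20))  # docking + MD:    500 compounds -> 400000
```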
-
In the fast-paced world of cybersecurity, alert storms can overwhelm Security Operations Centres (SOCs), causing analyst fatigue and increasing the risk of critical threats slipping through unnoticed. Managing these storms effectively is crucial to maintaining operational stability and protecting sensitive data.

5 WAYS TO AVOID ALERT STORMS IN A SECURITY OPERATIONS CENTRE (SOC)

1. UNIFY THREAT MONITORING
Fragmented security tools generate isolated alerts, leading to duplicate notifications and poor threat correlation. By unifying threat monitoring across systems, you can:
• Centralise all alerts from firewalls, SIEMs, EDR and other tools in a single platform.
• Streamline threat visibility to identify patterns across multiple attack vectors.
• Reduce manual effort and improve incident prioritisation.
Example: Use a well-integrated SIEM solution to ingest and correlate logs from multiple sources, reducing noise from disparate systems.

2. FINE-TUNE DETECTION RULES
Default detection rules often generate excessive false positives. Analysts can avoid unnecessary alerts by fine-tuning detection mechanisms to:
• Set specific thresholds based on the environment and use case.
• Reduce false positives by excluding benign behaviour patterns.
• Update rules regularly to reflect evolving threats.
Tip: Regularly review and customise detection rules in your SIEM or EDR tool based on your organisation's risk profile.

3. GROUP ALERTS INTELLIGENTLY (see the grouping sketch after this post)
Alert storms often occur when multiple alerts are triggered for a single incident. Intelligent grouping helps analysts focus on the bigger picture by:
• Aggregating alerts related to the same event or threat.
• Using correlation rules to identify connections between logs and alerts.
• Reducing the number of tickets created for similar incidents.
Example: Implement alert deduplication and correlation logic in your SOC tools to group login attempts from the same source IP into a single incident.

4. PRACTISE GOOD ALERT HYGIENE
Poorly managed alerts can clog the system, overwhelming analysts. Practising alert hygiene ensures that:
• Old, irrelevant or low-priority alerts are reviewed and resolved promptly.
• Alerts with no actionable outcomes are tuned or suppressed.
• Historical alert data is archived but accessible for compliance and review.
Tip: Conduct regular alert reviews to identify noisy rules and disable alerts that do not add value.

5. AUTOMATE REPETITIVE TASKS
Manual alert triaging during a storm is time-consuming and error-prone. Automation can help SOC teams handle large volumes efficiently by:
• Automating triage processes for known low-risk events.
• Using SOAR tools to investigate and respond to alerts without human intervention.
• Deploying playbooks for common incidents to reduce response time.
Example: Configure your SOAR tool to automatically resolve low-risk phishing alerts by blocking the sender and tagging the email for further review.
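A minimal sketch of the grouping logic from point 3, collapsing repeated alerts from one source IP into a single incident; the field names and the 10-minute gap threshold are assumptions:

```python
from datetime import datetime, timedelta
from itertools import groupby

WINDOW = timedelta(minutes=10)

def group_alerts(alerts):
    """Collapse alerts sharing a source IP into incidents; a gap longer than
    WINDOW between consecutive alerts starts a new incident."""
    incidents = []
    alerts = sorted(alerts, key=lambda a: (a["src_ip"], a["time"]))
    for src_ip, batch in groupby(alerts, key=lambda a: a["src_ip"]):
        current = []
        for alert in batch:
            if current and alert["time"] - current[-1]["time"] > WINDOW:
                incidents.append({"src_ip": src_ip, "alerts": current})
                current = []
            current.append(alert)
        incidents.append({"src_ip": src_ip, "alerts": current})
    return incidents

now = datetime(2024, 1, 1, 12, 0)
raw = [{"src_ip": "10.0.0.5", "time": now + timedelta(seconds=30 * i)} for i in range(8)]
print(len(group_alerts(raw)))  # 8 raw alerts -> 1 incident
```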
-
All of the variants are getting in on MRD testing, even the phased ones! Minimal Residual Disease (MRD) describes the amount of cancer left in the body after treatment. Measuring MRD over time allows us to know when a cancer might be coming back. It can also help us make decisions about what treatments to use, or tell us if a therapy is working.

But determining residual disease in patients has historically been pretty challenging. That's changed now that the price of high-throughput sequencing has come down drastically, and it's now cheap enough to use routinely to help us find early clues about cancer's recurrence. We can do this because normal cells and cancers dump some of their DNA into the bloodstream as they grow. These DNA fragments can be sampled in a blood draw, and the "cell-free" or "circulating tumor" DNA can be sequenced to determine if the fragments are regular old normal DNA or DNA that came from a cancer.

This is usually done by looking for single nucleotide variants (SNVs) and comparing those variants to a person's healthy tissue. Fragments found to have differences could indicate that a cancer is present. Or they could just be random errors that get introduced during the process of manipulating a sample. These errors, unfortunately, can sometimes end up producing a false-positive result. But wouldn't it be nice if we had a better way to determine which fragments of DNA came from tumors, and which were just the product of weird errors made by the enzymes that we use to process these samples?

A group of researchers recently wondered if they could use phased variants to both reduce the false-positive rate and track tumor DNA within the bloodstream. Phased variants are just variants that occur on the same DNA fragment. And because cancers mutate frequently, they often have more phased variants in their genomes than are found in healthy tissue. With this in mind, these researchers developed Phased Variant Enrichment and Detection Sequencing (PhasED-Seq) and validated its use as an MRD test in diffuse large B-cell lymphoma (DLBCL). Here, the test starts by sequencing tumor and healthy DNA to develop a tumor-specific phased variant (PV) list. That list is then used for subsequent monitoring of plasma samples to see if those same PVs reappear in the bloodstream!

The figure below shows the basics behind the method, which involves the capture of DNA fragments from regions known to be frequently mutated in DLBCL, followed by short-read high-throughput sequencing to detect PVs. Informative molecules are those that span two phased variants; uninformative ones are those that span the region but contain only one or none of the variants. In their analytical validation, they show that their test using PVs had a very low false-positive rate (0.24%), a limit of detection of 0.7 parts in 1,000,000, and a precision of more than 96%.

Boehm N, et al. 2025. DOI: 10.1101/2024.08.09.24311742
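A toy illustration of the informative/uninformative distinction; the coordinates and variant positions are invented, and the real pipeline operates on aligned sequencing reads rather than bare intervals:

```python
# Tumor-derived phased-variant (PV) positions within one captured region (toy values).
PV_POSITIONS = [1012, 1047]

def classify(read_start: int, read_end: int) -> str:
    """A fragment is 'informative' only if it spans both phased-variant
    positions; otherwise it carries no phasing evidence."""
    covered = sum(read_start <= pos < read_end for pos in PV_POSITIONS)
    return "informative" if covered >= 2 else "uninformative"

print(classify(1000, 1100))  # spans both PVs -> informative
print(classify(1000, 1030))  # spans only one -> uninformative
```

Requiring two co-occurring variants on one molecule is what drives the false-positive rate down: a single enzymatic error can mimic one SNV, but it is far less likely to reproduce two specific variants on the same fragment.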
-
Most alerts we are seeing at Alpha Level are repetitive garbage. They come from some regular IT process that causes an alert to fire. The natural reaction is to "tune" the detector that is picking them up. Specifically, tuning means introducing an exception in the detection logic to exclude or suppress alerts that fit a pattern of false positives. The problem is that among all those false positives might just lie the bad guy. So how does one pick the true positives out among all the noise without introducing a static exception that causes blindness for the SOC?

Our solution is to use models: time-series models that learn the normal patterns that regular system processes produce and dynamically identify the alerts that don't fit those patterns. In this way, we don't need static exceptions; we can identify the false positives on the fly, letting the true positives through to the analysts.

In a recent study, our customer removed exceptions from several of their detectors. The result: 3,300 additional alerts over a month. But Alpha Level called all but 22 of them benign anyway. The punchline: among the 22 remaining, 7 were investigation-worthy. These would never have been seen if the detectors still had their exceptions. This is the balance you can achieve with dynamic alert analysis rather than static rules: low false positives without the risk introduced by static exceptions.
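Alpha Level's models are not public, so the following is only a generic sketch of the dynamic-baselining idea: score each hour's alert volume against a trailing baseline and let only outliers through, instead of statically suppressing the rule:

```python
import numpy as np

def flag_anomalies(hourly_counts, window=24, z=3.0):
    """Flag hours whose alert volume deviates from the trailing baseline."""
    counts = np.asarray(hourly_counts, dtype=float)
    flags = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        flags.append((counts[i] - mu) / sigma > z)
    return flags

steady = [10, 12, 11, 9] * 12             # a noisy-but-regular IT process
print(any(flag_anomalies(steady)))        # False: the pattern fits the baseline
print(flag_anomalies(steady + [60])[-1])  # True: the outlier reaches the analysts
```

A rolling z-score is the crudest possible stand-in for a real time-series model, but it shows the trade the post describes: no static exception, yet the regular process stays quiet and the deviation gets through.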
-
When I started working in the SOC, I used to treat correlation rules like checklists:
- If X and Y happen → alert
- Add another condition → done

But over time, I realized that correlation rules are more like storytelling than switch logic. The best rules don't just trigger—they explain who is doing what, where, when, and why it might be dangerous.

Example (see the sketch after this post): instead of just detecting a single PowerShell execution, build a rule that looks for:
- a PowerShell run,
- followed by a network connection to a rare domain,
- followed by a credential access or lateral movement attempt.

That's not just a rule — it's a narrative of compromise.

What I've learned building and tuning rules:
- Good data beats complex logic. Know your log sources inside-out.
- Frequency matters. A single event may be benign, but repeated behavior shows intent.
- False positives are feedback. Every noisy alert is a lesson.
- Use MITRE mapping. Know what tactics/techniques you're targeting.
- Work with your team. Rules aren't written in stone—iterate and improve them.

One of my proudest moments was tuning a rule that originally had 85% false positives into one that caught a real lateral movement incident two weeks later. Not by adding more logic, but by simplifying the focus: intent > events.

As SOC analysts, we don't just monitor systems. We connect dots, reduce noise, and give context. A great correlation rule doesn't just say "look here"—it says "this is why it matters."

#CyberSecurity #SOCAnalyst #SIEM #CorrelationRules #ThreatDetection #LogAnalysis #MITREATTACK #BlueTeam #DailyPost #DetectionEngineering
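A minimal sketch of that narrative-style rule as host-scoped sequence matching; the event schema, stage names, and 30-minute window are all assumptions:

```python
from datetime import datetime, timedelta

SEQUENCE = ["powershell_exec", "rare_domain_conn", "credential_access"]
WINDOW = timedelta(minutes=30)

def matches_narrative(events):
    """True if one host's events contain the full SEQUENCE in order, with every
    stage inside WINDOW of the first; gives up if the window is exceeded
    (a production rule would restart the sequence instead)."""
    stage, start = 0, None
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] == SEQUENCE[stage]:
            start = start or e["time"]
            if e["time"] - start > WINDOW:
                return False
            stage += 1
            if stage == len(SEQUENCE):
                return True
    return False

t0 = datetime(2024, 1, 1, 9, 0)
events = [
    {"type": "powershell_exec",   "time": t0},
    {"type": "rare_domain_conn",  "time": t0 + timedelta(minutes=2)},
    {"type": "credential_access", "time": t0 + timedelta(minutes=5)},
]
print(matches_narrative(events))  # True -> one high-context alert, not three noisy ones
```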
-
🗺 We've come to a time and place in #productsecurity where uncovering more #vulnerabilities is no longer what any of us need. Every tool that we integrate into our #cyber program brings a new, but somehow different, deluge of vulnerabilities that we need to review, prioritize, and fix. These vulnerabilities cause churn in the #security and #development teams. What's worse is that the users of the tools eventually lose confidence in them when the majority of what they find are false positives.

🛠 And this is true regardless of the tool being used: SAST, DAST, IAST, SCA, etc. For instance, #SCA tools (which scan for packages that are brought in from third parties, often open source) allow teams to locate packages that are running with vulnerable code. However, this is often a ham-fisted method that simply flags packages that have been identified as having an associated CVE.

🤯 That doesn't mean that your code is actually vulnerable, though. Your team may toil along to resolve the vulnerable package by upgrading to the latest version, often requiring other upgrades or changes to make the new package compatible with your application. What's worse, your team may take the time to research the vulnerability only to determine that the package is never even called externally. All that work for nothing! SCA scanners like the one from Backslash Security (part of their AppSec offering) can help reduce the number of OSS findings by marking a package as reachable only when there is an application flow originating from the application code and the knowledge that the transitive packages are used by the application.

🎁 This makes a significant difference in the findings presented to the teams. ⚙ And it can help reduce the churn! The sample application that I used with Backslash showed a total of 147 OSS vulnerabilities, but only 49 of them were reachable. And of those 49, only 20 were critical/high, and only 1 of those was reachable with a high EPSS/KEV value. This doesn't mean that the other 146 OSS vulnerabilities can safely be ignored, but it does provide a more intelligent way of managing vulnerabilities.

More vulnerabilities getting thrown onto the backlog of existing vulnerabilities is an outdated approach to managing the security of applications. We need smarter, more intelligent methods of identifying what is actually impactful to our risk bottom line. Reachability is not a panacea, but it certainly can help provide intelligence around where our focus in vulnerability management should be.

Shahar Man Czesia Glik
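Reachability boils down to a graph question: does any call path lead from application code to the vulnerable function? A toy version with an invented call graph (real tools derive this from parsed source and dependency metadata):

```python
from collections import deque

# Toy call graph: app code -> direct deps -> transitive deps (names invented).
CALL_GRAPH = {
    "app.main":        ["pkg_a.parse", "pkg_b.render"],
    "pkg_a.parse":     ["pkg_c.decode"],     # transitive, actually used
    "pkg_b.render":    [],
    "pkg_d.legacy_fn": ["pkg_e.unsafe"],     # present in lockfile, never called
}

def reachable(entry: str, target: str) -> bool:
    """Breadth-first search from an application entry point to a vulnerable symbol."""
    seen, queue = set(), deque([entry])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            queue.extend(CALL_GRAPH.get(node, []))
    return False

print(reachable("app.main", "pkg_c.decode"))  # True  -> prioritize this CVE
print(reachable("app.main", "pkg_e.unsafe"))  # False -> flagged by naive SCA, not reachable
```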