Dear SOC Heroes,

To detect and respond to any attack correctly, you must first perform threat modeling for your business to understand the relevant attacks and identify their attack surface and impact, then map each attack to the incident response framework your organization follows. A well-structured approach will enable you to manage and mitigate the impact of any attack. For example, let's map a data exfiltration attack to the NIST incident response framework.

1. Preparation
- Establish Baselines: Understand normal data flows and behaviors within your network.
- Implement Monitoring Tools: Deploy and configure SIEM, DLP, and IDS/IPS.
- Develop Incident Response Plans: Have clear procedures and roles defined for responding to data exfiltration incidents.

2. Detection
- Monitor Network Traffic: Look for unusual data transfer volumes, particularly to external IP addresses.
- Analyze Logs: Check logs from firewalls, proxies, and network devices for anomalies.
- Utilize Behavioral Analytics: Use tools to detect deviations from normal user and system behavior.
- Build SIEM Use Cases: Configure alerts for potential exfiltration activities, such as large data transfers or access to sensitive files.

3. Identification
- Correlate Events: Use the SIEM to correlate alerts and logs from different sources to identify patterns.
- Validate Alerts: Confirm that alerts are not false positives by cross-referencing with known baselines and activities.
- Identify Data Sources: Determine which data was accessed and potentially exfiltrated.

4. Containment
- Isolate Affected Systems: Disconnect compromised systems from the network to prevent further data loss.
- Block Malicious Traffic: Implement firewall rules to block data exfiltration channels.
- Reset Credentials: Change passwords and revoke access for compromised accounts.

5. Eradication
- Remove Malware: Conduct a thorough scan and clean-up of affected systems to remove any malicious software.
- Patch Vulnerabilities: Apply patches and updates to fix exploited vulnerabilities.
- Secure Configurations: Ensure systems and network configurations follow security best practices.

6. Recovery
- Restore Systems: Rebuild or restore systems from clean backups.
- Monitor for Recurrence: Closely watch the affected systems for signs of recurring issues.
- Communicate: Inform stakeholders and, where required by law and policy, affected individuals.

7. Post-Incident Analysis
- Conduct a Root Cause Analysis: Determine and document how the exfiltration occurred and why it wasn't detected earlier.
- Review and Improve: Update security policies, incident response plans, and monitoring tools based on lessons learned.

You must exercise this procedure with your SOC team to make sure it is well understood, effective, and will actually be followed once you face this type of attack.

#SOC #IR #NIST_IR #Data_Exfiltration #Cybersecurity
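As a hedged illustration of the Detection-phase SIEM use case above (large transfers to external addresses), here is a minimal Python sketch; the log file name, CSV schema, and the 500 MB daily threshold are illustrative assumptions, not part of the NIST framework or any specific SIEM product.

```python
import csv
import ipaddress
from collections import defaultdict

# Hypothetical flattened firewall/proxy export: src_ip, dst_ip, bytes_out
LOG_FILE = "outbound_traffic.csv"          # assumed file name and schema
BASELINE_BYTES_PER_HOST = 500 * 1024**2    # example threshold: 500 MB per day

def is_external(ip: str) -> bool:
    """Treat anything outside private (RFC 1918) space as external."""
    return not ipaddress.ip_address(ip).is_private

def flag_exfiltration_candidates(path: str) -> dict:
    totals = defaultdict(int)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if is_external(row["dst_ip"]):
                totals[row["src_ip"]] += int(row["bytes_out"])
    # Hosts exceeding the baseline become candidates for the Identification phase.
    return {src: b for src, b in totals.items() if b > BASELINE_BYTES_PER_HOST}

if __name__ == "__main__":
    for src, sent in sorted(flag_exfiltration_candidates(LOG_FILE).items(),
                            key=lambda kv: kv[1], reverse=True):
        print(f"ALERT: {src} sent {sent / 1024**2:.1f} MB to external addresses")
```

In practice the threshold would come from the baselines established during Preparation and be tuned per host or business unit.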
How to Detect Anomalies in Network Traffic
Explore top LinkedIn content from expert professionals.
Summary
Detecting anomalies in network traffic means identifying unusual or suspicious behavior that could signal security threats, system faults, or policy violations. This involves knowing what “normal” activity looks like on your network, monitoring for anything out of the ordinary, and investigating the causes to protect your systems and data.
- Establish normal patterns: Begin by learning typical network behaviors and creating baselines so you can spot anything that doesn’t fit.
- Monitor and analyze logs: Regularly check network logs, firewall entries, and alerts for signs of unexpected activity, such as unknown connections or unusual data transfers.
- Investigate anomalies quickly: When you notice something strange, dig deeper using available tools and collaborate with your team to understand and respond to potential threats.
-
3 Investigation Levels Every SOC Analyst Should Master

Not all alerts are the same, and neither are your investigations. Whether you're handling a phishing case or a malware alert, knowing where to look is just as important as knowing what to look for. Here are the 3 core investigation types in security operations, explained with a phishing example:

🖥️ 1. Host-Level Investigation
Focuses on the device itself.
✅ Did the user download a file?
✅ Are there any suspicious processes?
✅ Any new or hidden executables running?
Use tools like EDR, PowerShell logs, and AV alerts.

👤 2. User/Account-Level Investigation
Focuses on the user's credentials and activity.
✅ Was the account compromised?
✅ Suspicious logins from other countries?
✅ MFA failures or mailbox forwarding rules?
Check login history, identity logs, email activity, and AD changes.

🌐 3. Network-Level Investigation
Focuses on network behavior and traffic.
✅ Did the system try to connect to a known malicious IP or domain?
✅ Are there abnormal outbound connections?
✅ Firewall or proxy logs raising alerts?
Use DNS logs, firewall rules, and proxy traffic.

Phishing example, step by step:
📩 Email arrives → Check whether the user received it and how many users received the same email (User Level)
🔗 Link clicked and file downloaded → Look at endpoint behavior (Host Level)
🌍 Connection made to a suspicious domain → Inspect traffic (Network Level)

Mastering this layered approach will help you respond faster and investigate deeper.
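To make the layered approach a bit more tangible, here is a small, hypothetical Python sketch that gathers host-, user-, and network-level evidence for one identity; every field name and event shape below is an assumption for illustration, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    host_hits: list = field(default_factory=list)      # EDR / process events
    user_hits: list = field(default_factory=list)      # identity / mailbox events
    network_hits: list = field(default_factory=list)   # DNS / proxy events

def triage_phishing(user: str, edr_events, identity_events, proxy_events) -> Finding:
    """Collect evidence for one user across the three investigation levels."""
    f = Finding()
    for e in edr_events:
        if e["user"] == user and e["action"] in {"file_download", "process_create"}:
            f.host_hits.append(e)                      # Host level
    for e in identity_events:
        if e["user"] == user and (e.get("mfa_failed") or e.get("foreign_login")):
            f.user_hits.append(e)                      # User/account level
    for e in proxy_events:
        if e["user"] == user and e.get("domain_reputation") == "malicious":
            f.network_hits.append(e)                   # Network level
    return f
```

The point of the sketch is simply that the same alert should be viewed through all three lenses before closing it.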
-
Every day, I spend a big chunk of time reviewing firewall logs. At first, I saw it as repetitive: just scanning IPs, ports, protocols. But over time, I realized something powerful: firewall logs are a goldmine for understanding your network and your attackers.

Here are a few key patterns I look for and what I've learned along the way:

1. Repeated Port Scanning
Seeing consistent hits on ports like 3389, 22, or 23? That's usually a clear sign someone is looking for an exposed RDP, SSH, or Telnet service. I flag these quickly and correlate them with IDS/IPS alerts.

2. Geo Anomalies
Why is traffic coming from countries where we don't operate? I always verify the geolocation of source IPs. Unexpected regions = investigate deeper.

3. Internal Traffic That Shouldn't Exist
Sometimes I catch internal devices talking to each other in strange ways, like a printer querying an SQL server. That's either a misconfiguration or a potential compromise.

4. Unusual Time Patterns
Midnight scans or access attempts outside business hours? That often highlights compromised credentials or scheduled attacker activity.

5. Allowed But Suspicious
Not everything blocked is bad, and not everything allowed is safe. I've learned to pay more attention to allowed traffic that seems out of place.

My Personal Tips for Firewall Log Analysis
- Use color-coded dashboards to speed up your triage.
- Correlate IPs with threat intelligence feeds (AbuseIPDB, AlienVault, etc.).
- Set thresholds for anomalies (e.g., >100 hits/min on a single port); a sketch of this kind of check follows below.
- Document the patterns you notice; they'll help you write better correlation rules later.

Final Thought
Reading logs isn't exciting at first. But once you understand the story behind each entry, it becomes one of your sharpest tools in defense.

What's one thing YOU always look for in firewall logs? Let's share knowledge and grow together.

#CyberSecurity #SOCAnalyst #FirewallLogs #NetworkSecurity #BlueTeam #DetectionEngineering #SIEM #Wazuh #LogAnalysis #InfoSecCareers #LearningByDoing
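As a rough example of the threshold tip above, here is a minimal Python sketch that counts denied hits against commonly scanned ports per source and minute; the CSV export format, column names, and the 100 hits/minute threshold are assumptions to adapt to your own firewall's log schema.

```python
import csv
from collections import Counter
from datetime import datetime

# Assumed CSV export of firewall events: timestamp (ISO 8601), src_ip, dst_port, action
SCAN_PORTS = {22, 23, 3389}        # SSH, Telnet, RDP
HITS_PER_MINUTE = 100              # example threshold from the post, tune per environment

def find_port_scanners(path: str):
    per_minute = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["action"] != "deny" or int(row["dst_port"]) not in SCAN_PORTS:
                continue
            minute = datetime.fromisoformat(row["timestamp"]).strftime("%Y-%m-%d %H:%M")
            per_minute[(row["src_ip"], int(row["dst_port"]), minute)] += 1
    # Return (source, port, minute) tuples that exceeded the threshold.
    return [key for key, hits in per_minute.items() if hits > HITS_PER_MINUTE]

if __name__ == "__main__":
    for src_ip, port, minute in find_port_scanners("firewall_denies.csv"):
        print(f"Possible scan: {src_ip} -> port {port} during {minute}")
```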
-
Question of the Day: What's the difference between Clause 9.1 (Monitoring, Measurement, Analysis and Evaluation) and Annex A.8.16 (Monitoring Activities)?

Under ISO 27001:2022, Clause 9.1, Monitoring, Measurement, Analysis and Evaluation, is about monitoring the effectiveness of the ISMS. The standard states that you need to determine what should be monitored and measured, the methods to be used (ensuring comparable and reproducible results), when monitoring and measuring will be performed and by whom, and when the results will be analyzed and evaluated and by whom. You must be able to present documentation of the results.

What does "effectiveness" of the ISMS mean? At this point in the process, you've already identified your risks and defined your objectives. Consider the following: Are your policies and procedures understood and complied with? Is your awareness and training program effective? How will you know when your objectives have been achieved, and will they change over time (continual improvement)? Are you tracking your risks and the effectiveness of your risk treatment plans?

Where Clause 9.1's purpose is to evaluate the effectiveness of the ISMS, Annex A.8.16's purpose is technical: to detect anomalous behavior and potential information security incidents. That is to say, behavior that is out of the ordinary, which means you should begin by establishing baselines for what is normal in your organization. You need to understand how and where people work, what the normal tools and applications in use are, how administrative access is managed, what typical bandwidth utilization looks like during peak and off-peak hours, etc. Consider inbound/outbound network traffic, system and application usage, and access to systems, servers, network equipment, and critical applications.

When identifying anomalous behavior, consider any unplanned termination of processes or applications, activity that could be associated with malware or traffic from known malicious IP addresses or network domains, denial of service and buffer overflows, keystroke logging, process injection, unauthorized access (actual or attempted), unauthorized scanning of applications, systems or networks, etc. Monitor for changes to administrative access and network configurations. Utilize logs from security tools such as IDS, IPS, web filters, firewalls and data leakage prevention tools, as well as event logs relating to system and network activities.

In short, Clause 9.1 asks whether we are measuring and evaluating that the ISMS is working as intended, and Annex A.8.16 asks whether we are technically monitoring to detect security issues in real time. They complement each other and ensure monitoring activities are aligned with your ISMS goals and risks.

#ISO27001 #EmagineIT
-
🌟 Day 26 of My 90-Day AI Learning Journey 🌟

Anomaly Detection: The Art of Spotting the Unexpected

Ever wondered how credit card companies instantly flag suspicious transactions, or how QA teams detect faulty products before they reach customers? That's where Anomaly Detection steps in. It is about learning what "normal" behavior looks like in your data, and then flagging anything that strays too far from it. Once we define "normal," the model measures how far each new observation deviates from this pattern. If that deviation crosses a certain threshold, it's labeled an anomaly (outlier). Whether it's a sudden spike in transactions hinting at credit card fraud, or a drop in user engagement revealing a hidden system bug, detecting anomalies early means saving time, money, and reputation.

Technically, it's about modeling "normality" and measuring deviation (a small sketch of the first two families follows below):

1. Statistical Methods: Used for simple, structured data and real-time monitoring. These use basic math to detect outliers.
• Z-Score → Measures how many standard deviations a value is from the mean. Example: if most transactions are around $100 but one is $10,000, it will have a very high Z-score.
• IQR (Interquartile Range) → Finds points that fall far outside the middle 50% of the data.
• Gaussian Models → Assume data follows a normal distribution and flag points that have a very low probability of occurring.

2. Unsupervised Machine Learning: Useful for high-dimensional or unlabeled data (like sensor readings or web traffic). When data is too complex for simple math, we use algorithms that learn "normal patterns" automatically.
• Isolation Forest → Randomly isolates data points; anomalies are easier to isolate because they behave differently.
• One-Class SVM → Creates a boundary around normal data; anything outside is an anomaly.
• DBSCAN → Groups data into clusters; points that don't fit any cluster are outliers.

3. Deep Learning: Best for dynamic systems like IoT, fraud detection, predictive maintenance, or QA testing. For time-dependent or highly non-linear data, deep learning shines:
• Autoencoders → Neural networks that learn to reconstruct normal data. If reconstruction error is high, it's an anomaly.
• LSTMs (Long Short-Term Memory) → Great for time series, like stock prices or server logs, because they understand temporal dependencies.
• GANs (Generative Adversarial Networks) → Learn to generate "normal" data; anomalies are detected when something looks too different from what the model can generate.

#DataScience #MachineLearning #AI #AnomalyDetection #FraudDetection #PredictiveAnalytics #QualityAssurance #OpenToWork
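For a quick feel of the first two families, here is a small Python sketch (using NumPy and scikit-learn) that flags the same synthetic outliers with a Z-score rule and an Isolation Forest; the data, contamination rate, and thresholds are made up purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=100, scale=15, size=(500, 1))    # typical transaction amounts
outliers = np.array([[950.0], [1200.0], [5.0]])           # injected anomalies
amounts = np.vstack([normal, outliers])

# 1. Statistical method: Z-score against the sample mean/std
z = (amounts - amounts.mean()) / amounts.std()
z_flags = np.abs(z) > 3                                    # common rule of thumb

# 2. Unsupervised ML: Isolation Forest learns what "normal" looks like
iso = IsolationForest(contamination=0.01, random_state=42)
iso_flags = iso.fit_predict(amounts) == -1                 # -1 means anomaly

print("Z-score anomalies:        ", amounts[z_flags.ravel()].ravel())
print("IsolationForest anomalies:", amounts[iso_flags].ravel())
```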
-
STOP CHASING GHOSTS: Why Your Fraud Team is Missing the Kingpins 🕵️♀️

Your current fraud tools are looking at transactions. Fraudsters are looking at networks. Losses don't just happen randomly; they're engineered through connected entities like mule accounts, collusive merchants, and shared devices. The critical flaw in traditional detection? It can't tell you which entity matters most.

The Game Changer: Graph Centrality Measures
We've been using graph analytics to identify the most influential nodes in a network, turning reactive monitoring into proactive defense. This isn't just about finding anomalies; it's about finding the linchpins.

How it works (and what your rules engine misses):
* PageRank for Influence: Just like Google ranks web pages by influence, we use PageRank fraud detection to score risk. An account connected to 3 confirmed fraud merchants is exponentially more dangerous than one connected to 50 low-risk ones. PageRank finds the hidden kingpins.
* Betweenness Centrality for Bridges: This metric exposes the accounts that serve as essential bridges between otherwise separate fraud rings (the classic mule hub). Disrupt the bridge, and you collapse two networks at once.
* Degree Centrality for Hidden Connectors: Surfaces a single device or IP address logging into dozens of synthetic identities, revealing the common infrastructure bad actors are secretly recycling.

The result for banks like JP Morgan Chase and Nubank? They achieved multi-million dollar annual savings, significantly boosted fraud model recall, and drastically reduced false positives, giving their analysts precision, speed, and an explainable audit trail for regulators.

The takeaway: Fraud isn't random; it's networked. You need to see beyond the transaction and uncover the influence behind it.

Want to shift your fraud defense from reactive to proactive? Read our latest blog to dive into the mechanics of PageRank, Betweenness, and Degree Centrality and see how TigerGraph delivers these insights at enterprise scale.

🔗 Read the full breakdown here: https://lnkd.in/diBeRXc2

#FraudDetection #GraphAnalytics #FinancialCrime #AML #BankingTechnology #GraphCentrality #TigerGraph #FinTech
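As a hedged sketch of these three measures (not TigerGraph's implementation), the following Python snippet builds a toy entity graph with networkx and prints the top-scoring node for PageRank, betweenness, and degree centrality; in a real deployment you would typically personalize PageRank with confirmed-fraud labels and run it at far larger scale.

```python
import networkx as nx

# Toy transaction/identity graph; node names and edges are illustrative only.
G = nx.Graph()
G.add_edges_from([
    ("acct_A", "merchant_fraud_1"), ("acct_A", "merchant_fraud_2"),
    ("acct_A", "merchant_fraud_3"),                        # tied to confirmed fraud
    ("acct_B", "mule_hub"), ("acct_C", "mule_hub"),         # hub bridging two rings
    ("mule_hub", "acct_D"), ("mule_hub", "acct_E"),
    ("device_X", "synthetic_1"), ("device_X", "synthetic_2"),
    ("device_X", "synthetic_3"),                            # shared device / infrastructure
])

pagerank = nx.pagerank(G)                      # influence within the network
betweenness = nx.betweenness_centrality(G)     # bridges between otherwise separate rings
degree = nx.degree_centrality(G)               # raw connectedness (shared devices/IPs)

for name, scores in [("PageRank", pagerank), ("Betweenness", betweenness), ("Degree", degree)]:
    top = max(scores, key=scores.get)
    print(f"{name:<12} top node: {top} ({scores[top]:.3f})")
```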
-
Scenario-Based Question 1 (asked in EY, Mastercard, Jio, Persistent, etc.):
Suppose you are given firewall logs, and the condition is that you cannot use any tools or sources except these firewall logs. How will you detect an attack?

1. Analyze Traffic Patterns:
- Examine high volumes of traffic being denied from specific sources, multiple sources, or from the same/similar or nearby countries. This traffic might be blocked due to hardening policies such as whitelisting only necessary legitimate traffic.
- Identify which ports/services this traffic is targeting to understand the potential threat.

2. Check for Unusual Traffic Spikes:
- Sudden and significant increases in traffic can be a red flag. Compare current traffic patterns with historical data to spot anomalies that may indicate an attack.

3. Inspect Packet Characteristics / Analyze Connection States:
- Look for malformed patterns in the packets that could signify malicious activity.
- SYN floods: large numbers of half-open connections (see the sketch after this post).
- Legitimate traffic: normal request-response behavior, meaning traffic should complete the handshake and not remain in SYN-SENT or SYN-RECEIVED states.

4. Identify Repeated Failed Login Attempts:
- Multiple failed login attempts from the same IP address could indicate a brute-force attack. Monitoring these attempts can help you identify and block the source.

5. Monitor for Unusual Port Activity:
- Traffic on uncommon ports or ports not typically used by your organization could indicate an attempt to exploit vulnerabilities. Keep an eye on these activities to prevent unauthorized access.

6. Check for IP Spoofing:
- Look for patterns where the source IP address changes frequently, which might indicate an attempt to bypass security measures. Identifying and blocking these IPs can enhance your security.

7. Review for Data Exfiltration Attempts:
- Large outbound data transfers, especially to unfamiliar IP addresses, could indicate data exfiltration. Monitoring and controlling these transfers can help protect sensitive information.
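Building on point 3 above, here is a minimal Python sketch that counts half-open connections per destination from a hypothetical connection-state export; the file name, column names, and threshold are assumptions, since real firewalls expose connection state in vendor-specific formats.

```python
import csv
from collections import Counter

# Assumed connection-state export: src_ip, dst_ip, dst_port, state
HALF_OPEN_STATES = {"SYN_SENT", "SYN_RECEIVED"}
HALF_OPEN_LIMIT = 200          # illustrative threshold per destination

def detect_syn_flood(path: str):
    half_open = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["state"] in HALF_OPEN_STATES:
                half_open[row["dst_ip"]] += 1   # connections that never complete the handshake
    return {dst: n for dst, n in half_open.items() if n > HALF_OPEN_LIMIT}

if __name__ == "__main__":
    for dst, n in detect_syn_flood("fw_conn_states.csv").items():
        print(f"Possible SYN flood against {dst}: {n} half-open connections")
```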
-
Real-time DNS Monitoring: Metrics and Logs!

A free DNS observability dashboard for the Unbound resolver for real-time DNS behavior monitoring, using metrics (Prometheus) and logs (Loki) visualized in Grafana.
• DNS query volume and trends
• Cache hits vs misses (performance & efficiency)
• Response errors (NXDOMAIN, SERVFAIL)
• Query latency and spikes
• Client and domain activity patterns
• Detailed DNS logs for investigation

This turns DNS from a blind spot into a visible, measurable security signal: https://lnkd.in/gpMcbni3

SOC / Blue Team: Spot DNS anomalies, spikes, and abuse early | Detect possible C2, tunneling, or misconfigured clients
Threat Hunting: Baseline "normal" DNS behavior | Pivot from metrics → logs for deeper analysis
DFIR: Reconstruct DNS activity during incidents | Correlate suspicious domains with timelines
Network / Infra Team: Monitor DNS performance and reliability | Troubleshoot latency, cache issues, and failures
Security Architecture / Management: Visibility into a critical control plane (DNS) | Evidence for risk discussions and improvements

I don't know what you heard about me! I practice, test, learn in public, and share what actually works ... daily and free!
Threat hunting 101 series on YouTube: https://lnkd.in/g7D4pTVk
Daily Cyber Drops on Medium: https://lnkd.in/gsekf7kB
Nothing Cyber. Keep hunting.

#cybersecurity #threathunting #threatdetection #opensource #tips #career #blueteam #soc #socanalyst #skillsdevelopment #careergrowth #IR #dataanalysis #incidentresponse
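If you only have flat resolver logs rather than the Grafana stack described above, a rough Python sketch like the one below can still surface one useful signal: clients with an abnormally high NXDOMAIN ratio, a common hint of DGA or tunneling activity. The CSV schema and thresholds are assumptions, not Unbound's native log format.

```python
import csv
from collections import defaultdict

# Assumed flat export of resolver logs: client_ip, qname, rcode
NXDOMAIN_RATIO_LIMIT = 0.5     # illustrative: >50% failed lookups hints at DGA/tunneling
MIN_QUERIES = 100              # ignore clients with too little traffic to judge

def noisy_dns_clients(path: str):
    totals, nx_errors = defaultdict(int), defaultdict(int)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row["client_ip"]] += 1
            if row["rcode"] == "NXDOMAIN":
                nx_errors[row["client_ip"]] += 1
    return {c: nx_errors[c] / totals[c]
            for c in totals
            if totals[c] >= MIN_QUERIES and nx_errors[c] / totals[c] > NXDOMAIN_RATIO_LIMIT}

if __name__ == "__main__":
    for client, ratio in noisy_dns_clients("unbound_queries.csv").items():
        print(f"Client {client}: {ratio:.0%} NXDOMAIN responses, investigate for DGA/tunneling")
```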
-
How does EDR work?

Advanced Detection Techniques
EDR employs behavioural analysis and machine learning to detect anomalies. Instead of relying on known signatures, it monitors processes, file changes, and network activity for deviations from normal patterns. For example, if ransomware starts encrypting files rapidly, EDR flags this abnormal behavior even if the malware is new.

Threat Intelligence Integration
Threat intelligence enhances detection by cross-referencing global attack data. CrowdStrike's EDR, for instance, matches activity sequences to known adversary tactics (TTPs) using behavioral analytics.

Indicators of Compromise (IOC) Scans
EDR tools like Kaspersky use IOC files to search for compromise indicators, such as suspicious registry entries or file hashes. These files follow the OpenIOC standard and are scanned in critical areas (e.g., downloads folder, system registries). Results are retained for 30 days to support forensic investigations.

Root Cause Analysis (RCA)
After an incident, EDR enables RCA by mapping attack timelines and identifying primary causes (e.g., phishing emails, unpatched software). Tools like Check Point's RCA methods use "Five Whys" or fishbone diagrams to trace events back to their origin, helping prevent recurrence.

Attack Visualization
EDR platforms provide visual attack chain diagrams, showing how threats infiltrate and spread. Analysts can drill into each stage (e.g., initial access, lateral movement) to understand TTPs and prioritize remediation. Carbon Black EDR, for instance, visualizes command-line activity and process trees for clarity.

Enriched Alert Data
Alerts include contextual details like threat actor attribution, affected endpoints, and MITRE ATT&CK framework mappings. CrowdStrike enriches alerts with adversary intelligence, reducing false positives and accelerating triage.

Automated Responses
Upon detection, EDR can:
- Isolate infected endpoints from the network.
- Terminate malicious processes.
- Roll back encrypted files (e.g., ransomware remediation).
- Block suspicious IP addresses.
For example, automated containment stops ransomware from spreading before human intervention.

Multiple Response Options
Security teams choose between:
- Automated actions (e.g., quarantining files).
- Manual responses (e.g., patching vulnerabilities).
- Hybrid approaches (e.g., AI-driven suggestions with analyst approval).

Quick Investigation and Response
EDR's real-time data logging and 90-day searchable history let analysts query events in seconds. Microsoft's EDR, for instance, provides exhaustive activity records to trace breaches swiftly.

Image Source: Cyber Edition

#ciso #cybersecurity #technology #learning
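To illustrate the IOC-scan idea in the simplest possible terms (this is not how any commercial EDR agent is implemented, and the hash list, directory, and helper names are hypothetical), here is a small Python sketch that hashes files in a directory and compares them against a list of known-bad SHA-256 values.

```python
import hashlib
from pathlib import Path

# Placeholder IOC list; replace with real indicators from your threat intel feed.
KNOWN_BAD_SHA256 = {"0" * 64}
SCAN_DIR = Path.home() / "Downloads"       # one of the "critical areas" mentioned above

def sweep_for_iocs(directory: Path):
    """Hash every file under the directory and return matches against the IOC set."""
    hits = []
    for path in directory.rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in KNOWN_BAD_SHA256:
            hits.append((path, digest))
    return hits

if __name__ == "__main__":
    for path, digest in sweep_for_iocs(SCAN_DIR):
        print(f"IOC match: {path} ({digest})")
```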
-
Start with #SIEM Rules:

➡ Security Event Logs (Event IDs):
♦ 4624 (Successful Logon): Look for suspicious successful logins, especially from unknown users.
♦ 4625 (Failed Logon): Check for repeated failed login attempts (may indicate brute-force attacks or credential theft).
♦ 4688 (Process Creation): Analyze new process creation to find ransomware and its source.
♦ 7045 (Service Creation): Investigate new services, as ransomware may use them.
♦ 5145 (File Share Access): Watch for unusual file share access, a common target for ransomware.

➡ Firewall and Network Logs:
♦ Analyze firewall logs for unusual outgoing traffic or connections to known malicious IP addresses.
♦ Check for firewall modifications, as attackers tend to add/delete rules to bypass the host firewall.
♦ Check for incoming and outgoing network connections from the infected device to other devices in the network (indicative of lateral movement).

➡ Antivirus and EDR Logs:
♦ Check for alerts or notifications from antivirus or EDR solutions.
♦ Investigate any quarantined files or suspicious activities within roughly ±30 minutes of the alert.

➡ Script Execution:
♦ Ransomware often uses scripting languages like PowerShell for execution. Monitor Event IDs related to PowerShell or script execution, such as Event ID 400 and Event ID 4104.

➡ DNS Logs:
♦ Investigate DNS logs for unusual domain resolutions or requests.

➡ File Modification and Access Logs:
♦ Look for patterns of file modifications and access to encrypted files.
♦ Monitor for changes in file extensions and timestamps.
♦ Inspect connected external drives for encryption.

➡ Event ID 1102 (Security Log Cleared):
♦ Check for unauthorized clearing of the security log, as attackers may attempt to cover their tracks.

➡ User and Account Activity Logs:
♦ Review user account activity logs for unusual or unauthorized user actions, such as account creation, modification, or elevation of privileges.

➡ Backup Logs:
♦ Analyze backup system logs to determine whether backups were tampered with or deleted by the ransomware.
♦ Check for shadow copy deletion activity.

➡ Remote Desktop Protocol (RDP) Logs:
♦ If RDP is enabled, check RDP logs for unauthorized access.

➡ Active Directory Logs and Lateral Movement Checks:
♦ Investigate Active Directory logs for any changes to user accounts, group memberships, or organizational units that may be related to the attack.
♦ Check for lateral movement attacks such as Pass-the-Hash, Pass-the-Ticket, and Golden/Silver/Sapphire/Diamond Ticket attacks.

➡ Disabled Security Tools:
♦ Ransomware often disables or interferes with antivirus and security software.

Then continue with the use cases in the included document; a small triage sketch over these Event IDs follows below.
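As a first triage pass, you can simply group the highlighted Event IDs per host and flag hosts that show several of them together; the Python sketch below assumes a flat CSV export of Windows Security events with hypothetical column names, not a specific SIEM query language.

```python
import csv
from collections import defaultdict

# Assumed flat export of Windows Security events: timestamp, host, event_id, detail
RANSOMWARE_EVENT_IDS = {"4625", "4688", "7045", "5145", "1102", "4104"}

def group_suspect_events(path: str):
    by_host = defaultdict(list)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["event_id"] in RANSOMWARE_EVENT_IDS:
                by_host[row["host"]].append((row["timestamp"], row["event_id"], row["detail"]))
    # A host showing three or more distinct suspect IDs deserves a closer look.
    return {h: evts for h, evts in by_host.items()
            if len({eid for _, eid, _ in evts}) >= 3}

if __name__ == "__main__":
    for host, events in group_suspect_events("security_events.csv").items():
        ids = {e[1] for e in events}
        print(f"{host}: {len(events)} suspect events across {len(ids)} distinct event IDs")
```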