We are observing widespread, sophisticated fileless malware campaigns targeting companies in the African finance and telecommunications sectors. The campaign typically begins with a phishing email sent to departments such as Sales and Procurement, often disguised as a Request for Quotation (RFQ). The email carries an attachment, commonly a PowerShell (.ps1) dropper crafted to appear legitimate.

In one notable case, the dropper, once executed, downloaded what appeared to be a random image file onto the user's system. At first glance the image seemed harmless, but its unusually large file size raised suspicion. Further analysis revealed that the file contained a malicious DLL hidden using steganography. The dropper extracted this hidden payload and executed it in memory, and it also created a scheduled task via Windows Task Scheduler to ensure persistence across reboots. The DLL ran via in-memory .NET assemblies and PowerShell one-liners, avoiding detection by traditional antivirus solutions.

Once active, the payload could accept commands from a remote C2 server, launch processes, and exfiltrate sensitive system information. The malware was observed collecting public and private IP addresses, geolocation data, a list of scheduled tasks, and basic system metadata (useful for lateral movement or persistence). These behaviours are consistent with advanced fileless operations, where attackers minimise their on-disk footprint and rely on living-off-the-land binaries (LOLBins) to evade detection.

Indicators of compromise (IoCs) showed that the email sender, domain, and IPs had previously been reported in malicious activity, including spoofing, credential harvesting, spam, and phishing. This suggests the threat actors are leveraging an established, actively maintained infrastructure.
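The write-up doesn't disclose exactly how the DLL was embedded, but one cheap hunting heuristic for oversized image carriers is checking for data appended after the image's end-of-file marker. A minimal sketch for PNG files, covering only the simple append case (true LSB steganography would not be caught by this):

```python
def bytes_after_png_iend(data: bytes) -> int:
    """Return the number of bytes appended after the PNG IEND chunk.

    A well-formed PNG ends with the 12-byte IEND chunk (4-byte length,
    b"IEND", 4-byte CRC); a carrier that simply appends a payload leaves
    extra bytes trailing after it.
    """
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("not a PNG")
    marker = data.rfind(b"IEND")
    if marker == -1:
        raise ValueError("no IEND chunk")
    end_of_iend = marker + 4 + 4  # b"IEND" plus its CRC
    return len(data) - end_of_iend
```

Any nonzero result on an image attachment is a reasonable trigger for deeper analysis, especially when paired with the "huge file size" signal described above.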
Recommendations for Security Teams
- Train employees to recognise phishing tactics such as urgency-driven language, unexpected RFQs, and suspicious attachments, and encourage reporting to IT/security teams.
- Configure filtering policies to block or sandbox compressed file types (e.g., .zip, .rar, .tgz) and scripts (.ps1, .js, .vbs) from untrusted senders.
- Enforce DMARC, SPF, and DKIM for email to curb spoofing and spam.
- Deploy advanced EDR solutions with behavioural detection to catch in-memory execution, PowerShell abuse, and steganographic payloads.
- Monitor for suspicious persistence mechanisms (e.g., unexpected scheduled tasks).
- Regularly apply security patches to operating systems, browsers, and office applications.
- Restrict execution of unsigned PowerShell scripts via Constrained Language Mode or AppLocker/Defender Application Control.
- Monitor outbound connections to detect C2 traffic patterns.
- Hunt for anomalously large image files and unusual PowerShell activity in logs.

#SOC #ThreatIntelligence #DigitalForensics #Malware #FilelessMalware #Threat
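The scheduled-task recommendation can be partially automated by screening task command lines. A minimal sketch using a small watchlist of PowerShell-abuse indicators (the watchlist entries are illustrative, not exhaustive, and should be tuned per environment):

```python
import re

# Substrings commonly seen in malicious PowerShell task actions
# (an illustrative watchlist, not an exhaustive one).
SUSPICIOUS = [
    r"-enc(odedcommand)?\b",
    r"\biex\b",
    r"invoke-expression",
    r"downloadstring",
    r"frombase64string",
    r"-windowstyle\s+hidden",
]
PATTERN = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)


def is_suspicious_task_action(command_line: str) -> bool:
    """Flag a scheduled-task action whose command line matches the watchlist."""
    return bool(PATTERN.search(command_line))
```

Feeding exported Task Scheduler actions through a check like this surfaces the kind of unexpected persistence described in the campaign above.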
How to Recognize Evolving Malware Techniques
Summary
Evolving malware techniques refer to the ways in which malicious software adapts to new defenses, using sophisticated tactics like fileless attacks, AI-driven code generation, and decentralized infrastructures to evade detection and persist within systems. Recognizing these techniques is vital for organizations, as traditional methods of spotting malware often fall short against modern threats that disguise themselves and change behavior during attacks.
- Monitor behavioral changes: Watch for unusual activity such as unexpected file access, abnormal network traffic, or processes running in memory, since adaptive malware often hides in normal operations.
- Analyze runtime activity: Shift focus from static signatures to real-time observation of code execution and API usage, especially as AI-generated malware and fileless attacks leave few traces on disk.
- Strengthen credential security: Regularly review and secure stored credentials in browsers, and investigate any non-browser processes accessing sensitive data, as attackers frequently target these for lateral movement.
-
AI is being weaponized — and attackers are proving it. This week, researchers at @SentinelLabs exposed MalTerminal, the first publicly documented malware to autonomously generate ransomware and reverse-shell payloads using GPT-4. Unlike traditional malware, MalTerminal doesn't ship with pre-written payloads: it abuses the GPT-4 API at runtime to build malicious code on demand. No static signatures. No known-bad patterns. That's the turning point.

Why MalTerminal Matters:
1. Dynamic code generation → no malicious code exists until runtime
2. Autonomous evasion → signature-based detection rendered obsolete
3. Exposed API keys → security analysis uncovered large numbers embedded in samples, showing how attackers are experimenting with LLM access at scale
4. Dual use → offense and defense both automated by LLMs

While there's no evidence MalTerminal has been deployed in the wild, it's a proof of concept that shows how quickly these techniques can spread. Underground forums are already buzzing, and it follows other experiments like PromptLock, confirming that AI-powered tradecraft is diversifying fast.

This is what abstract risk looks like in practice:
- Cybercriminals no longer need coding skills
- Every attack can mutate in real time
- Global ransomware losses are estimated in the tens of billions annually (FBI IC3, 2024) — and AI-driven automation could multiply that curve

What Leaders Must Do Now:
1. Shift detection from static signatures to behavioral + memory analysis
2. Monitor LLM activity: track anomalous API calls, prompts, and tokens
3. Tabletop + AI defense: simulate AI-powered malware intrusions and use LLMs to respond at machine speed
4. Board-level briefings: treat AI threats as strategic resilience issues, not just IT problems

Closing Thought: AI isn't inherently the enemy. But if organizations don't adapt, attackers will make sure it feels that way.
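On the exposed-API-keys finding: hunting for embedded LLM keys is itself a practical triage trick for surfacing samples like these. A rough sketch assuming OpenAI-style `sk-` key prefixes (the exact key format varies over time, so the pattern is a deliberately loose heuristic):

```python
import re

# OpenAI-style secret keys commonly start with "sk-"; the precise format
# varies, so this pattern is a loose, illustrative heuristic.
KEY_PATTERN = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")


def find_embedded_api_keys(blob: bytes) -> list[bytes]:
    """Return candidate LLM API key strings found in a binary sample."""
    return KEY_PATTERN.findall(blob)
```

Run over a corpus of suspicious binaries, hits like these both identify the family and hand defenders a key to revoke.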
At PRIMSEC, we work with boards and executives to understand these new risks and design resilience strategies tailored for the AI era. How is your organization preparing for the era of AI-powered malware?
-
Immutable Malware: Blockchain-Based Attacks Introduce a New Cybersecurity Paradigm

A new form of malware is redefining cyber threats by leveraging blockchain technology to create attacks that are nearly impossible to eliminate. This evolution signals a shift from traditional malware toward persistent, decentralized threat architectures.

The attack begins deceptively, often through fake job offers targeting developers, where victims are encouraged to run seemingly harmless code. Once executed, the code initiates a complex attack chain that interacts with multiple blockchain networks, including TRON, Aptos, and Binance Smart Chain. Instead of hosting malicious payloads on centralized servers, the malware uses blockchain transactions as a permanent, publicly accessible storage layer, embedding instructions and pointers that guide the attack. Because blockchain data cannot be easily altered or removed, the malicious infrastructure becomes effectively permanent.

This design introduces a significant challenge for cybersecurity defense. Traditional mitigation strategies rely on identifying and shutting down command-and-control servers, but here the infrastructure is decentralized and immutable. Early reports indicate that hundreds of thousands of credentials across numerous organizations may already be compromised, with experts warning that the scale and persistence of this campaign could rival or exceed past global cyberattacks.

This development matters because it represents a fundamental shift in how cyber threats are constructed and sustained. By exploiting the permanence and resilience of blockchain systems, attackers are creating a new class of malware that is resistant to conventional countermeasures. For organizations, this underscores the urgent need to rethink security architectures, emphasizing prevention, behavioral detection, and zero-trust principles in a landscape where threats can no longer be easily dismantled once deployed.
I share daily insights with tens of thousands of followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation. Keith King https://lnkd.in/gHPvUttw
-
🎯 APT groups don't need 0-days when your browser stores credentials in SQLite... Kaspersky's latest report on HoneyMyte (aka Mustang Panda, Earth Preta, Bronze President) uncovers a complex toolkit that should concern anyone defending enterprise endpoints. This Chinese-origin APT group has been active since at least 2017, and its latest campaigns show continued evolution in credential-harvesting techniques.

The operation uses a trio of browser stealers targeting Chromium browsers (Chrome, Edge, Brave and Opera). What makes this interesting from a detection standpoint isn't the technique itself (browser credential theft is hardly novel) but the implementation details, which give us solid detection opportunities. The stealers target two critical files: "Login Data" (an SQLite database containing encrypted credentials) and "Local State" (which holds the encryption key). The attack chain creates temporary artifacts like "chromeTmp" and "Local State-journal" during the extraction process; these artifacts are essentially the fingerprints left behind when malware copies and processes browser databases.

Here's the detection logic worth noting: a legitimate browser process accessing its own credential store is normal. Chrome.exe reading Login Data? Expected behavior. A random executable doing the same? That's your detection signal. The KQL rule published by one of our contributors, Kaung Khant Ko, implements this beautifully. It monitors for two conditions: creation of those telltale temp artifacts (very low FP rate), OR non-browser processes accessing sensitive credential files in browser user-data directories. The explicit allowlisting of legitimate browser executables reduces false positives while maintaining visibility on anything suspicious touching those paths. Detection rule at detections.ai: https://lnkd.in/gKmD_yvG

From a SOC or analyst perspective, the identity of the process accessing the browser's credential store is what determines suspiciousness.
A Python script, PowerShell host, or unknown binary accessing "Login Data" should immediately raise questions. You're not just alerting on file access; you're correlating the accessing process against expected behavior. For blue-teamers implementing similar logic: consider extending the browser-process allowlist for your environment, since Electron apps and browser-automation tools might trigger false positives. Also worth monitoring: processes decrypting the DPAPI-protected keys, the natural next step after stealing these files. The broader lesson is that APT groups continue to prioritize credential access for lateral movement, and browser-stored credentials remain low-hanging fruit in most enterprises. Research by Securelist: https://lnkd.in/gteSaD4n Rules and threat intel details at detections.ai: https://lnkd.in/g_2t9e8S #KQL #APT #Malware #DetectionEngineering
-
THREAT CAMPAIGN: GODRAT, NEW RAT TARGETING FINANCIAL INSTITUTIONS IN ASIA AND THE MIDDLE EAST ℹ️ In September 2024, researchers detected a new remote access trojan they named GodRAT, targeting financial institutions, particularly trading and brokerage firms, via malicious .scr (screensaver) files masquerading as financial documents delivered through Skype. Attack campaigns have also employed .pif files with similar camouflages. As of the most recent public report, the malware remained active, with the latest detection on August 12, 2025. 📍 TECHNICAL EVOLUTION AND METHODOLOGY ■ GodRAT is rooted in the legacy Gh0st RAT codebase, showing strong ties to the previously identified AwesomePuppet backdoor from 2023, suggesting that GodRAT is likely an evolved successor, and possibly connected to the Winnti (APT41) group. ■ Attackers hide shellcode via steganography, embedding it into image files, to download GodRAT from a C2 server. 📍INFECTION MECHANICS AND EXECUTION FLOW ■ A multi-stage chain initiates with a shellcode injector: it XOR‑decodes embedded shellcode, maps it into memory, and searches for configuration data marked by “godinfo,” which reveals details like the C2 address and command. ■ The first stage fetches and launches a second-stage payload that includes a UPX‑packed GodRAT DLL (internally named ONLINE.dll). ■ This DLL either injects itself into curl.exe or cmd.exe using a -Puppet flag or proceeds directly based on provided command-line arguments. 📍 RAT CAPABILITIES AND PLUGIN SYSTEM ■ Once active, GodRAT connects to its C2 server, harvesting system information, including OS details, host and user info, antivirus presence, and malware process metadata, compressing the data, XOR‑encoding it, and transmitting it upstream. 
It processes incoming commands, which allow it to: ◽ Inject plugin DLLs into memory (e.g., a FileManager plugin); ◽ Terminate itself; ◽ Download and launch new files; ◽ Open specified URLs (via Internet Explorer); and ◽ Write data to configuration files (like updating config.ini). 📍 THE FILEMANAGER PLUGIN & ASYNCRAT DEPLOYMENT ■ The FileManager plugin (named FILE.dll) collects local drive information and desktop paths, and can reflectively inject shellcode into processes. ■ This in turn initiates AsyncRAT, which patches Windows defenses like AMSI and ETW to disable detection and allow execution of its components. ■ AsyncRAT’s builder permits attackers to masquerade executables (e.g., svchost.exe, cmd.exe, curl.exe) and choose among output formats like .exe, .scr, or .pif, a further evolution rooted in Gh0st RAT code. Reference: 🔗 https://lnkd.in/dwdkFfH5 #threathunting #threatdetection #threatanalysis #threatintelligence #cyberthreatintelligence #cyberintelligence #cybersecurity #cyberprotection #cyberdefense
-
Over the next two weeks, I'm breaking down OpenAI's latest threat disruption report. I would love to hear from folks in cyber on whether you're seeing traces of this activity.

Part 1/7: Russian-Speaking Malware Development

Russian-speaking actors recently attempted to use ChatGPT for malware development, specifically for credential theft and remote-access capabilities. The report describes a range of requests, from advanced queries on Windows API hooking, DPAPI/AES-GCM cookie handling, and PE manipulation, down to simpler asks like mass password-generation scripts, Telegram bot uploaders, and scripted job applications. But instead of asking for exploit code (which the model refused, of course), they requested building blocks: shellcode conversion, in-memory loaders, obfuscation patterns. Just like developers, they're using AI to offload boring commodity tasks. Why spend time on boilerplate when you can focus on novel evasion techniques?

Because they asked for components, every fulfilled request was technically legitimate in isolation. "Convert this EXE to shellcode" is not necessarily malicious. For OpenAI, quickly understanding context and intent was everything in stopping these adversaries.

What this means for the defensive side: time-to-variant is compressing from weeks to days. When you see a new RAT variant deployed 48 hours after the previous one was detected, that velocity itself becomes the signal. Defenders will need new capabilities like proactive infrastructure hunting; there won't be enough time to wait to detect each new sample.

For those in threat intel or malware analysis: are you seeing this acceleration? What signals are you using to identify AI-assisted development? Next time: DPRK-linked operations and compartmentalization tactics.
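The "velocity as signal" idea can be made concrete by tracking the gaps between first-seen timestamps of new variants in a family. The three-day threshold below is an illustrative tuning knob, not a published figure:

```python
from datetime import datetime
from statistics import median


def variant_velocity_alert(first_seen: list[datetime],
                           threshold_days: float = 3.0) -> bool:
    """Flag a malware family whose median gap between new variants drops
    below `threshold_days` (the threshold is an assumed tuning value)."""
    if len(first_seen) < 3:
        return False  # too few samples to call it a trend
    times = sorted(first_seen)
    gaps = [(b - a).total_seconds() / 86400 for a, b in zip(times, times[1:])]
    return median(gaps) < threshold_days
```

A family that used to ship a variant every few weeks and suddenly ships one every two days trips the alert even before any individual sample is classified, which is exactly the shift toward proactive hunting described above.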
-
How are AI-driven malware variants evading traditional detection methods?

AI-driven malware variants are evading traditional detection through several sophisticated techniques:

1. Polymorphism and mutation: These strains use AI to constantly change aspects of their code, file structure, and behavior—sometimes every few seconds—making them extremely difficult for signature-based antivirus programs to identify. Polymorphic malware, which mutates its hash and code structure automatically, is now present in more than 70% of major breaches and over 76% of phishing attacks. AI allows these mutations to happen rapidly and unpredictably, outpacing static detection engines.

2. Adversarial examples: Attackers introduce subtle modifications into malware and use adversarial machine learning to fool detection models. By tuning payloads with adversarial examples, they cause classifiers to misidentify malicious files as benign. Memetic algorithms and generative adversarial networks (GANs) are now used to optimize these evasion tactics, achieving success rates of up to 98% against advanced AI detectors like MalConv, and notable evasion rates even against leading commercial antivirus products.

3. Prompt injection and AI model manipulation: Some advanced malware now embeds natural-language prompts in its code, attempting to trick AI-driven security tools into misclassifying it as harmless. This is a relatively new evasion method: instead of altering code structure alone, attackers manipulate the instructions given to the large language models used for malware analysis, aiming for the AI to falsely declare "NO MALWARE DETECTED." Such attacks exploit the contextual vulnerabilities of modern AI models, especially as these models become more central to automated threat detection.

4. Real-time learning from failed attempts: New AI-powered strains can learn from failed attacks or detections, tweaking future attack vectors for better success.
This self-improving loop allows malware to incrementally bypass increasingly complex defensive measures. Traditional signature-based antivirus, static heuristics, and even some behavioral-analysis tools are being outpaced by these adaptive, AI-driven threats. The future of defense will likely depend on deploying similarly advanced AI models that can keep up with these evolving tactics and spot anomalies that legacy tools miss. #malware #adversary #detection
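The signature blind spot that polymorphic mutation exploits can be shown in miniature: the same logic wrapped with different junk bytes yields an entirely new hash every time, so a hash blocklist never sees the same indicator twice:

```python
import hashlib


def sha256(data: bytes) -> str:
    """Hex digest of a sample, i.e. the classic blocklist indicator."""
    return hashlib.sha256(data).hexdigest()


# Behaviourally identical "payload" logic wrapped with different junk bytes
# (fixed bytes here for reproducibility; real polymorphic engines randomize).
payload = b"BEHAVIOURALLY-IDENTICAL-LOGIC"
variant_a = payload + b"\x01junk"
variant_b = payload + b"\x02junk"
```

One flipped byte is enough to defeat a hash match, which is why the defensive weight has to shift to the behavior the variants share rather than the bytes they don't.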
-
AI is no longer just a productivity tool for threat actors — it has entered the malware supply chain. The Google Threat Intelligence Group just released a critical update on the weaponization of AI, and the findings mark a significant shift in the landscape. We are moving beyond attackers simply using LLMs to write better phishing emails or debug scripts; we are now entering the era of "just-in-time" AI malware. According to the report, threat actors are deploying malware that queries LLMs mid-execution to dynamically generate commands or obfuscate code.

Key findings:
- PROMPTSTEAL: a data miner with no hard-coded commands. Instead, it asks an LLM (via public APIs) to generate the specific commands needed to steal data or map a network on the fly.
- PROMPTFLUX: a dropper that uses AI to rewrite its own VBScript code in real time to evade static signature detection.
- Social Engineering 2.0: attackers are successfully using "pretexting" (posing as researchers or students) to bypass AI safety guardrails.

This evolution toward autonomous, adaptive malware means our defense strategies must evolve faster than ever. Static signatures won't catch code that rewrites itself. Source: https://lnkd.in/dZZ38c5N
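Just-in-time malware has one operational weakness: it must reach an LLM API mid-execution. A minimal egress-watchlist sketch follows; the host list names real public endpoints, while the process allowlist is a per-environment assumption that defenders would baseline themselves:

```python
# Public LLM API endpoints worth baselining (non-exhaustive).
LLM_API_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}

# Processes expected to talk to LLM APIs in this environment
# (assumed example values; build this from your own baseline).
EXPECTED_PROCESSES = {"chatclient.exe", "vscode.exe"}


def flag_llm_egress(process: str, dest_host: str) -> bool:
    """Flag outbound traffic to a known LLM API from an unexpected process."""
    return (dest_host.lower() in LLM_API_HOSTS
            and process.lower() not in EXPECTED_PROCESSES)
```

A VBScript host or an unsigned binary calling an LLM endpoint is exactly the anomaly that PROMPTSTEAL-style tradecraft creates, and unlike the generated code itself, that network behavior is hard to mutate away.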