I almost shipped malware because of GitHub Copilot. Here's how.

Not clickbait. This actually happened to a dev on my team.

Copilot suggested `fast-crypto-utils`. Sounded legit. He ran npm install. Didn't check.

Turns out, that package doesn't exist in any real library. But it did exist on npm. Uploaded 3 days ago. 11 downloads. All from people who made the same mistake.

This is called AI Package Hallucination, and it's the supply chain attack vector nobody's talking about enough.

Here's the playbook attackers are running right now:
→ Feed AI tools prompts until they hallucinate plausible-sounding package names
→ Register those names on PyPI / npm before you do
→ Sit back and wait for developers to blindly install

We've already seen this in the wild: the LiteLLM compromise, the ForceMemo campaign, dozens of silent incidents that never made the news.

3 rules I now live by:

1. Google every package you've never heard of. Low download count + created recently = immediate red flag. Walk away.
2. Commit your lock files. package-lock.json, poetry.lock: these aren't optional. They're your paper trail.
3. Run npm audit / pip-audit like it's brushing your teeth. Daily. Not when something breaks.

AI makes us 10x faster. It also makes us 10x more careless.

One hallucinated package name + one blind install = your company's next breach.

Verify. Lock. Audit. Repeat.

#SoftwareEngineering #CyberSecurity #OpenSource #AI #WebDevelopment #Python #NodeJS
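Rule 1 is even scriptable. A minimal sketch for the PyPI side, assuming only PyPI's public JSON API at `pypi.org/pypi/<name>/json`; the 30-day threshold, the helper names, and the red-flag logic are mine, not an established standard:

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

def earliest_upload(releases):
    """Timestamp of the very first file ever uploaded for a package."""
    times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    return min(times) if times else None

def looks_suspicious(releases, max_age_days=30, now=None):
    """Red-flag a package whose first release is very recent (or that has no files)."""
    now = now or datetime.now(timezone.utc)
    first = earliest_upload(releases)
    return first is None or (now - first) < timedelta(days=max_age_days)

def fetch_releases(package):
    """Pull the release history from PyPI's public JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["releases"]
```

npm has an equivalent registry endpoint (`https://registry.npmjs.org/<name>`) whose `time.created` field supports the same check. Package age alone is a heuristic, not proof, but "first upload this month" on a name an AI just suggested is exactly the pattern described above.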
Aniket Verma’s Post
More Relevant Posts
"Code Leak Sparks GitHub Malware Frenzy: How a 59.8 MB Source Map Became a Cybercriminal Goldmine" + Video

Introduction: A routine npm package update by AI company Anthropic in late March 2026 accidentally included a 59.8 MB JavaScript source map file containing internal source code. Within 24 hours, threat actors weaponized the leak, flooding GitHub with fake repositories that distributed credential-stealing malware disguised as the leaked AI software. This incident demonstrates how a single organizational packaging error can cascade into a large-scale social engineering campaign that exploits developer trust in open-source ecosystems.
The LiteLLM supply chain attack is a good reminder that your threat surface isn't just your code. It's everything your code depends on.

One compromised package. 97 million monthly downloads. SSH keys, cloud credentials, API tokens, CI/CD secrets all potentially exposed.

And the scary part? It was only caught because the malware had a bug that caused crashes. If the attacker had written cleaner code, it would still be running quietly in production pipelines right now.

What makes this worse is the transitive dependency problem. You didn't even have to install LiteLLM directly. Something like dspy pulls it in automatically, and now you're affected without even realizing it.

What's even more interesting about this one is how the attack actually started. The threat actor didn't hack LiteLLM directly. They first compromised Trivy, the security scanner LiteLLM was using in its own CI/CD pipeline. That gave them the PyPI publishing token. One trusted tool used in a build process became the entry point for the whole thing.

I think this is also a good moment to ask how many packages the average project actually needs. Some developers are starting to write simple utilities themselves instead of pulling in a dependency for every small thing. I get that it slows things down, but maybe that tradeoff is worth revisiting.

Full breakdown here: https://lnkd.in/eba43hdK

#CyberSecurity #SupplyChainAttack #DevSecOps #Python #PyPI #CICDSecurity
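The transitive dependency problem is easy to see for yourself. A sketch using only the standard library's `importlib.metadata` to walk what a distribution actually declares, recursively (the helper names and the simple requirement parsing are mine; a real resolver also handles extras and environment markers properly):

```python
import re
from importlib.metadata import PackageNotFoundError, requires

def dep_name(requirement):
    """Bare distribution name from a requirement string,
    e.g. "requests (>=2.0) ; extra == 'socks'" -> "requests"."""
    return re.split(r"[\s;\[(<>=!~]", requirement.strip(), maxsplit=1)[0]

def transitive_deps(dist, seen=None):
    """Recursively collect the declared dependencies of an installed distribution."""
    seen = seen if seen is not None else set()
    try:
        reqs = requires(dist) or []
    except PackageNotFoundError:
        return seen  # declared but not installed here; nothing more to walk
    for r in reqs:
        name = dep_name(r)
        if name and name.lower() not in seen:
            seen.add(name.lower())
            transitive_deps(name, seen)
    return seen
```

Run `transitive_deps("dspy")` (or any top-level package you use) and count the names you have never audited. That set is your real threat surface.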
AI agents that execute commands, browse the web, and coordinate with other agents are everywhere. But how do you know they're safe?

Season 4 of GitHub's Secure Code Game lets you find out by hacking one yourself. Free, hands-on, and you can get started in under 2 minutes!

Learn more in our latest blog. https://lnkd.in/gacyENSm
🚨 Did you know even trusted platforms like PyPI aren’t immune to hidden dangers? A stealthy backdoor, cleverly disguised as a debugging tool, was just discovered—putting countless projects at risk. If you use Python packages, your code could be vulnerable without you knowing. Learn how this threat operated, what makes it unique, and—most importantly—how to protect your work moving forward. Is your development pipeline truly secure? Let’s find out together. 👇 https://lnkd.in/dzX4j9bc #Cybersecurity #PyPI #InfoSec 🛡️🔍
A trending GitHub repo. (~100K stars)
A private key is sitting openly in the code.
Nobody noticed. 😶

We scanned it with Relia today. 41 issues. 6 critical.

The ones that shocked us most -
🔴 Private key exposed in source code
🔴 Anyone could read any file on the server (path traversal)
🔴 Hardcoded passwords in 10+ files
🔴 Access control is completely bypassable
🔴 A bug that crashes the entire pricing system silently

This is not a hobby project. This is something people are actively forking and deploying. Right now. In production.

The scariest part? The developer probably has no idea. You write the code. You ship it. You move on. Nobody tells you what's broken until it's too late.

That's the gap Relia fills. Paste your repo. Get your full report in minutes. Know before someone else finds it for you.

👇 Full report of this scan in the first comment. See every issue we found - open, detailed, free to read.

#GitHub #OpenSource #CodeSecurity #Relia #BuildInPublic #DevTools #CyberSecurity #IndieHackers #Ai #HermesAgent #PublicRepo #Vibecon #Vibecoding
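Findings like "private key in source" are the easiest ones to catch yourself before anyone else does. A minimal sketch of a secret scan (the patterns and names are mine and deliberately tiny; production scanners such as gitleaks or trufflehog ship hundreds of rules plus entropy checks):

```python
import re

# A few high-signal patterns; far from exhaustive.
PATTERNS = {
    "private key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "AWS access key id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded password": re.compile(r"(?i)password\s*=\s*['\"][^'\"]{4,}['\"]"),
}

def scan_text(text):
    """Return the names of every secret pattern that matches the given source text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

Loop this over every file in a repo before pushing and the "nobody noticed" scenario above gets a lot less likely, with or without a commercial tool.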
A trending GitHub repo. (~100K stars)
A private key is sitting openly in the code.
Here's the full Relia scan report 📄 https://lnkd.in/d4zKQZYE
GitHub published something quietly important today. They launched a new season of their Secure Code Game, specifically built around hacking AI agents.

The scenario: a deliberately vulnerable AI coding assistant with MCP server access, bash execution, and autonomous web browsing.

The reason they built it? These numbers:
• 48% of cybersecurity professionals now believe agentic AI will be the #1 attack vector by end of 2026
• 83% of organizations plan to deploy agentic AI
• Only 29% feel ready to do it securely

Let that last gap sink in. 83% deploying. 29% ready.

GitHub's response was to teach developers how to think like an attacker when working with AI agents. That's valuable. But it's also reactive: training individuals to spot risks that should be caught automatically before code ever reaches a repo.

The OWASP Top 10 for Agentic Applications now includes goal hijacking, tool misuse, identity abuse, and memory poisoning as critical threats. These aren't theoretical. CVEs are already being published for AI coding agent vulnerabilities. This is the world your developers are shipping into, every day, at accelerating speed.

Opsera's AppSec Agents don't wait for a developer to catch these issues. They run automated security scanning, compliance checks, and architecture validation inline, inside Cursor, Claude Code, Copilot, and Windsurf, before any code reaches your repository.

Don't train your way out of a tooling problem. Build the guardrails in.

🔗 GitHub article in comments.

#AppSec #AISecurity #AgenticAI #DevSecOps #Opsera #OWASP #SecureByDesign #AIAgents
For my final year project, I really didn't want to just write a theoretical paper. I wanted to see how attacks actually happen in real time.

So my project partner and I decided to build a custom, AI-driven deception network from scratch. The idea was simple but the execution was tough: instead of just trying to block attackers, we wanted to trap them, study them, and adapt to their movements.

We set up isolated lab environments and deployed Cowrie and Dionaea honeypots using Docker to safely capture what the attackers were trying to do.

The coolest part? Figuring out the log pipeline. Routing all that raw interaction data through Filebeat into Logstash, then getting it indexed in Elasticsearch and visualized in Kibana, was a massive learning curve, but totally worth it.

We're now gearing up to simulate a full, stealthy APT attack using Kali Linux against our Ubuntu Server setup to see how the system holds up.

Has anyone else built out an ELK stack for their home lab? Would love to hear how you optimized your log parsing!

#cybersecurity #ELKStack #HomeLab #SOCAnalyst #ThreatDetection #StudentProject #ThreatHunting
Muhammad Faheem SNSKIES Tauseef Ahmed NADEEM IQBAL Dr. Jan Badshah
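Cowrie writes its interaction log as JSON lines, which makes the pipeline stage easy to prototype before Filebeat and Logstash enter the picture. A sketch (the field names `eventid` and `src_ip` follow Cowrie's JSON log format; the summarizing function is mine):

```python
import json
from collections import Counter

def summarize_cowrie(lines):
    """Count events per eventid and per source IP from Cowrie JSON log lines."""
    events, sources = Counter(), Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or corrupt lines
        events[rec.get("eventid", "unknown")] += 1
        if "src_ip" in rec:
            sources[rec["src_ip"]] += 1
    return events, sources
```

Pointing this at `cowrie.json` gives a quick sanity check that the honeypot is capturing what you think it is, before debugging the Logstash grok/JSON filters against the same data.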
I built a honeypot. And watched real attackers walk right into it. 🪤

Introducing SNAPTRAP: a full-stack cybersecurity trap system that lures, detects, and visualizes live attacks in real time.

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀:
→ Lures attackers with exposed honeypot services
→ Detects brute force, port scans & more
→ Streams live attack data via WebSockets
→ Scores every threat and logs it to PostgreSQL
→ Visualizes everything on a real-time React dashboard

𝗧𝗲𝗰𝗵 𝘀𝘁𝗮𝗰𝗸: Flask · PostgreSQL · Node.js · React · Docker · GitHub Actions · ngrok

𝗗𝗲𝘃𝗢𝗽𝘀 𝗵𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁: 3 parallel CI jobs → automated tests → Docker build → push to Hub. Every. Single. Time. No manual deployments. No untested code reaching Docker Hub. Ever.

Building this taught me more about real-world attack patterns than any course ever could. No green tests, no deploy. 🚦

#CyberSecurity #Honeypot #DevOps #Docker #GitHubActions #React #Flask #BuildInPublic #OpenToWork
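The brute-force detection piece of a system like this can be sketched as a sliding window of failed logins per source IP (the class name, threshold, and window values here are illustrative, not SNAPTRAP's actual implementation):

```python
from collections import defaultdict, deque

class BruteForceDetector:
    """Flag an IP once it reaches `threshold` failed logins within `window` seconds."""

    def __init__(self, threshold=5, window=60.0):
        self.threshold = threshold
        self.window = window
        self._failures = defaultdict(deque)  # ip -> timestamps of recent failures

    def record_failure(self, ip, ts):
        """Register a failed login at time `ts`; return True if `ip` is now flagged."""
        q = self._failures[ip]
        q.append(ts)
        while q and ts - q[0] > self.window:
            q.popleft()  # drop attempts that aged out of the window
        return len(q) >= self.threshold
```

Feeding each honeypot auth-failure event through `record_failure` is enough to drive a threat score or a live dashboard alert; the deque keeps memory bounded per IP.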
One of the better uses of AI in software security is not "replace the security researcher." It is "change what the researcher spends time on."

GitHub Security Lab's Taskflow Agent is a good example. They described using taskflows to scan repositories for issues like auth bypasses, IDORs, and information disclosure, and reported more than 80 vulnerabilities, with around 20 already disclosed at the time of writing.

What I find useful is the operating pattern behind it. The win is not just better detection. The win is shifting human effort away from repetitive hunting and toward:
• validating exploitability
• understanding impact
• writing good reports
• coordinating remediation

That is the pattern I trust more broadly in AI engineering. Not full autonomy. Not hand-wavy copilots. A well-designed workflow where the model expands the search space and the human applies judgment where it matters most. 🔍 That tends to be where real value shows up.

🔗 https://lnkd.in/gwWgq4zj

#AISecurity #OpenSourceAI #GitHub #AppSec #AIEngineering #DeveloperTools