The Claude Code source code has leaked.

The irony is almost too perfect: while publishing an npm package, someone at Anthropic appears to have made a very expensive mistake. Alongside the obfuscated cli.js, the public package also included a full cli.js.map file, which absolutely was not supposed to be there. That means one simple thing: anyone who installed or downloaded the package could reconstruct the original source from the sourcemap without much effort.

After that, the internet did what the internet always does: the code spread across repositories almost instantly, and several well-known infosec communities confirmed this was not a fake and not just a thin wrapper around an API, but a genuinely sophisticated CLI platform.

Repository: https://lnkd.in/dNptTPk6

The scale is impressive too: 1,906 TypeScript files and roughly 500,000 lines of code. Some of the more interesting details:

• there are hints of unreleased features like deep planning, persistent memory, and even "sleep"
• you can inspect how Anthropic seems to have implemented multi-agent orchestration, for example in coordinator/coordinatorMode.ts
• the system prompts are also exposed, including constants/prompts.ts

For anyone building agentic tooling, this is a rare chance to look under the hood of a very serious product and study the actual engineering, not the marketing layer.

Happy open-source day, I guess.
Anthropic's Leaked Code Reveals Sophisticated CLI Platform
Anthropic's Claude Code source code just leaked, and the dev community is dissecting it.

Chaofan Shou (@Fried_rice) spotted the 60MB "cli.js.map" file in the @anthropic-ai/claude-code npm package, which linked to a public Cloudflare R2 bucket containing a "src.zip" of the full unobfuscated codebase. He posted the direct download link on X, alerting the community. https://lnkd.in/dvnqbwwP

A clean-room Python rewrite (Claw-Code) went live on GitHub within hours, reportedly the fastest repo in history to hit 50K stars (2 hours). Now at 82.2K stars and 81.2K forks, with a Rust rewrite in progress. https://lnkd.in/dPHejfnk

Here's what the internals reveal about building great agentic systems:

1. CLAUDE.md is your most underutilized lever. It is injected on every single turn. You get up to 40,000 characters to encode your architecture, standards, and conventions. Most people barely touch it; that's a mistake.

2. Parallelism is a first-class citizen. Three sub-agent execution models: fork (inherits parent context), teammate (file mailbox), and worktree (isolated git branch). Single-agent workflows are explicitly suboptimal.

3. Permissions are meant to be configured, not clicked through. Seeing "allow this action?" is a configuration failure. Use settings.json to pre-approve commands. Auto mode uses an LLM classifier. --dangerously-skip-permissions is deprecated.

4. Context compaction is an art. Five strategies: micro-compact, context collapse, session memory, full compact, PTL truncation. Use /compact proactively; the default window is 200K tokens, with opt-in 1M.

5. Hooks automate what you repeat manually. Pre-tool, post-tool, and session start/end hooks. Auto-update docs on every commit without prompting.

6. Sessions are persistent; stop starting fresh. Long sessions accumulate structured memory: task specs, file lists, workflow state, errors, learnings. Resume, don't restart.

7. 66 built-in tools, partitioned by safety. Read-only tools run in parallel; mutating tools run serially. Multiple sub-agents can fan out across your codebase simultaneously.

8. Interrupt early, interrupt often. Streaming means stopping mid-task is cheap. Don't let sunk-cost bias keep a wrong-direction agent running.

Claw-Code was itself built in a single night using oh-my-codex (OmX), an agentic harness studying another agentic harness. These architectural ideas will rapidly diffuse into open-source harnesses. That's the beauty of open source.

#ClaudeCode #AgenticAI #LLM #AIEngineering #OpenSource #DeveloperTools #MachineLearning
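Point 7, read-only tools fanning out in parallel while mutating tools run serially, can be sketched in a few lines. This is a generic illustration of the partitioning idea, not Anthropic's implementation; the tool names, sets, and handler signatures are invented for the example:

```python
import concurrent.futures

# Hypothetical tool partition for illustration; the leaked code's actual
# identifiers and tool lists are not reproduced here.
READ_ONLY = {"read_file", "grep", "list_dir"}   # safe to fan out
MUTATING = {"write_file", "bash"}               # must run one at a time

def run_tools(calls, handlers):
    """Run read-only calls in parallel, then mutating calls serially."""
    reads = [c for c in calls if c[0] in READ_ONLY]
    writes = [c for c in calls if c[0] in MUTATING]
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(handlers[n], arg): (n, arg) for n, arg in reads}
        for fut in concurrent.futures.as_completed(futures):
            results[futures[fut]] = fut.result()
    for n, arg in writes:  # serial: mutations cannot race each other
        results[(n, arg)] = handlers[n](arg)
    return results
```

The payoff of this split is that a batch of file reads across a large repo completes in roughly the time of the slowest read, while writes keep a total order.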
Claude Code CLI source code leaked to the world through a GitHub repo: 512,000 lines of code for competitors to reverse engineer. Or just copy and paste.

The entire source code for Anthropic's Claude Code command line interface application has been leaked and disseminated, apparently due to a serious (ya think?) internal error. The leak provides a detailed blueprint for how Claude Code works.

When Anthropic published version 2.1.88 of the Claude Code npm package, it was quickly discovered that the package included a source map file, which could be used to access the entirety of Claude Code's source: around 2,000 TypeScript files and something like 512,000 lines of code. Security researchers provided the exact link (this was SOOO nice of this guy to do) to the archive containing the files. Someone put the codebase in a public GitHub repository, and that repository has been forked tens of thousands of times.

Was this:
- engineering gone bad?
- poor or non-existent engineering standards?
- no pull request / honest peer review?
- an example of AI use for a script with nobody reviewing the script?
- bad #leadership or a bad #leader?
- a planned public relations thing, like KitKat taking advantage of the candy bar heist in Italy?
- just a plain old oops?
- another AI case study in the making?

I guess we will see in the days and weeks to come.

john
https://lnkd.in/gyB3npVB

#cybernews #ai #engineeringgonebad

Daylon
Shipped ShipIt-agent v1.0.0

An open-source Python agent runtime for building powerful, production-style agents with a clean API.

What's in it:
- multiple LLM support: AWS Bedrock / OpenAI / Anthropic / Gemini / Groq / Together / Ollama via adapters
- prebuilt tools for web search, open URL, workspace files, code execution, memory, planning, verification, artifacts, AskUser, and human review
- MCP support for remote tool discovery and tool execution
- connector-style tools for Gmail, Google Calendar, Google Drive, Slack, Linear, Jira, Notion, Confluence, and custom APIs
- session history, memory stores, trace stores, and structured streaming packets
- notebook test flows for no-tools, multi-tools, MCP, connectors, AskUser, HILT, streaming, and reasoning

Built so you can do things like:
- create an agent with llm, tools, mcps, prompt, history
- stream live runtime events and tool packets
- plug the agent into chat products or internal workflows
- inspect reasoning with visible planning / decomposition / synthesis / decision tools

GitHub: https://lnkd.in/dpUiYqzF

#python #ai #llm #agents #mcp #opensource #bedrock #toolcalling
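Supporting many LLM providers "via adapters" is classically done with the adapter pattern: the agent codes against one interface and each vendor SDK hides behind its own implementation. A minimal generic sketch of that idea follows; the class and method names are invented for illustration and are not ShipIt-agent's actual API:

```python
from abc import ABC, abstractmethod

class LLMAdapter(ABC):
    """One interface the agent sees, regardless of vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoAdapter(LLMAdapter):
    """Stand-in backend; a real adapter would call Bedrock, OpenAI, etc."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class Agent:
    """The agent depends only on the adapter, never on a vendor SDK."""
    def __init__(self, llm: LLMAdapter):
        self.llm = llm

    def run(self, task: str) -> str:
        return self.llm.complete(task)
```

Swapping providers then means passing a different adapter instance, with no change to agent logic.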
I deleted a feature. ~500 unused symbols in the codebase. Now I needed to find which ones were new - and mine.

A/B rollout done, losing branch goes. Easy - until you're deleting an entire core feature. You can't just remove the Fragment and the ViewModel - you'll immediately get a ton of unused entities.

Think about JetBrains' Inspect Code - and yes, it's the solution. But not when your symbols are buried among 500+ other unused ones. And when you realize you can't delete them: some are actually required for future use, some belong to a feature that's being merged in parts, some are legacy, yes, but that's a mass cleanup - a separate task. That's the reality of a large codebase with parallel development.

Git diff, grep, detekt, Lint - none of them catch transitive orphans across all visibility levels. I needed full-project analysis.

Wait, what if we just diff the Inspect Code results? But how? There should be an export option... And it turns out there is. So the plan: run Inspect Code, export XML. Remove the feature code. Run Inspect Code again, export to a different XML. Diff the two - the new unused symbols are the ones your changes created.

For the diff I wrote a simple Python script: parse both XMLs, extract symbol name + file path, build two sets, compare. One pass almost always isn't enough, because of transitive dependencies. So I run the whole cycle again after cleanup. And again. Until the diff comes back empty. A full inspection takes 15-30 seconds on my M4 Pro, so it's not a bottleneck.

Result: 12 modules, ~60 files, ~100 symbols cleaned up in half a day - without the risk of missing a dependency chain. Build green, tests passing after every cycle. PR approved.

What about you - do you diff static analysis reports, or is there a better way I'm missing?
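The diff script described above can be sketched roughly like this. It assumes an Inspect Code XML export where each problem entry carries a file path and a description; the exact element names vary by IDE version, so treat this as a template rather than a drop-in tool:

```python
import xml.etree.ElementTree as ET

def unused_symbols(xml_path):
    """Collect (file, description) pairs from one Inspect Code export.

    Assumes <problem> elements with <file> and <description> children;
    adjust the tag names to match your IDE's actual export format.
    """
    root = ET.parse(xml_path).getroot()
    out = set()
    for problem in root.iter("problem"):
        f = problem.findtext("file", default="")
        desc = problem.findtext("description", default="")
        out.add((f, desc))
    return out

def new_orphans(before_xml, after_xml):
    """Symbols flagged only after the feature removal: the new orphans."""
    return unused_symbols(after_xml) - unused_symbols(before_xml)
```

Run it once per cleanup cycle; when `new_orphans` comes back empty, the transitive chain is fully removed.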
Anthropic recently leaked 500,000 lines of Claude Code's source code.

Not from a hack. Not from a rogue employee. From a missing `.npmignore` entry. A single file. One line. That's it.

The release package for Claude Code CLI v2.1.88 shipped with a 58MB source map file attached. A security researcher found it, extracted the zip path baked right into it, ran curl and unzip, and had the full source on their machine in minutes. Within hours, a GitHub repo hit 100K stars. Forks everywhere. The genie was out.

Now here's what bothers me more than the leak itself. This should have been caught at four different stages:
→ The `.npmignore` check
→ The CI/CD build step
→ The artifact scan
→ The SBOM check

Anthropic called it human error. Maybe. But how does human error slip through that many layers simultaneously?

The charitable read: their pipeline was rushed. The less charitable read: some of those "layers" were AI-automated and nobody was actually watching.

Either way, the lesson for anyone running CI/CD is uncomfortable: automation doesn't protect you if nobody owns the checklist.

No proprietary models were exposed. But the CLI code thousands of developers use daily? Permanently out there now.

Human mistake, pipeline failure, or something else?

#DevOps #DevSecOps #CloudSecurity #Anthropic #SoftwareEngineering
https://lnkd.in/ei22f4w7
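The "artifact scan" stage can be enforced mechanically instead of by checklist. A minimal sketch of such a publish gate: before `npm publish`, fail the build if any source map would ship. This version scans a staged package directory; a production version would instead parse the file list from `npm pack --dry-run` so it sees exactly what npm will include:

```python
import pathlib

def find_shipped_maps(package_dir):
    """Return relative paths of all .map files under the staged package."""
    root = pathlib.Path(package_dir)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.map"))

def assert_no_maps(package_dir):
    """Abort the release if any source map is about to be published."""
    maps = find_shipped_maps(package_dir)
    if maps:
        raise SystemExit(f"refusing to publish, source maps found: {maps}")
```

Wiring this into CI as a required step means a missing `.npmignore` entry becomes a failed build, not a leak.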
Claude Code's source didn't leak. It was already public for years.

Anthropic's AI coding tool had a source map accidentally published to npm this week. VentureBeat, Fortune, Gizmodo all covered it as a major breach. A clean-room Rust rewrite hit 110K GitHub stars in a day - a world record.

But here's what the coverage missed: the entire CLI - 13MB of JavaScript - was already sitting on npm in plaintext since launch. You could open it in your browser at any point. The source map just added developer comments on top of code that was never protected.

We analyzed it at AfterPack. We parsed the file in 1.47 seconds and pulled out 148,000 string literals: system prompts, tool descriptions, env vars, telemetry events, even a DataDog API key. Then we pointed Claude at its own source and asked it to explain the code. It worked extremely well.

The real question isn't about Anthropic specifically. It's that every JavaScript application ships code to production that AI can now read as easily as you read formatted code. Minification shortens variable names for smaller bundles - it was never designed to hide anything.

We also scanned GitHub.com and claude.ai with our Security Scanner. Found email addresses and internal URLs in production JavaScript. Same class of exposure, zero headlines.

Full analysis with technique comparison and scanner results: https://lnkd.in/dEw_dCBc
Check what your site exposes: npx afterpack audit https://your-site.com
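Pulling string literals out of a minified bundle is the easy half of this kind of scan. A toy version of the idea: a real scanner uses a proper JavaScript tokenizer, while this regex only handles single- and double-quoted strings with backslash escapes and will miss template literals and other edge cases:

```python
import re

# Matches '...' or "..." with backslash escapes. Deliberately simplistic:
# template literals, regex literals, and comments are not handled.
STRING_RE = re.compile(r"'(?:\\.|[^'\\])*'|\"(?:\\.|[^\"\\])*\"")

def extract_strings(js_source):
    """Return string literal contents, in source order, quotes stripped."""
    return [m.group(0)[1:-1] for m in STRING_RE.finditer(js_source)]
```

Even this crude pass is enough to surface hardcoded keys, internal URLs, and prompt text sitting in production JavaScript.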
Claude Code's source code is all over the news as a "major leak" this week. But the code was already public on npm the entire time. The source map added developer comments and project structure, but the actual CLI, with all the system prompts and API keys? Already there in plaintext!

What actually surprised me: we scanned GitHub.com and claude.ai with AfterPack's Security Scanner, a web-app analysis tool I've built, and found the same class of exposure. Email addresses, internal URLs, env var names - all in production JavaScript.

If a $60B company ships their most sensitive CLI with nothing beyond default bundler minification, it's worth checking what your production JS looks like too.

https://lnkd.in/dWQqF7su or npx afterpack audit https://your-site.com
On March 31, 2026, Anthropic accidentally published the full source code of Claude Code to npm. Not a hack. Not a breach. A developer left a debugging artifact in the release package, and 512,000 lines of TypeScript became public knowledge within hours. By the time Anthropic pulled the package, 41,500 GitHub forks had been created.

What was inside:
- An "undercover mode" that instructs the AI to hide its origins when contributing to public repositories. The system prompt: "Do not blow your cover." There is NO force-off.
- KAIROS: an unreleased always-on background agent that runs nightly memory consolidation, subscribes to GitHub webhooks, and maintains context 24/7 while you sleep.
- DRM baked into the binary at the transport level: the technical reason Anthropic could force OpenCode to remove Claude authentication.
- Model regression data: internal codename Capybara v8 has a 29-30% false-claims rate, up from 16.7% in v4.
- 250,000 wasted API calls per day, fixed by three lines of code.

The real damage is not the code. Code can be refactored. What cannot be un-leaked: Anthropic's strategic direction, their open model weaknesses, and the roadmap competitors now have a clear view of.

Full breakdown on the blog. https://lnkd.in/e26mY2Qn

#AI #Anthropic #ClaudeCode #AISafety #AIStrategy #SourceCode #TechNews #MachineLearning #AILeaks #OpenSource
Hello Team,

After diving into the leaked source code of Claude Code, I wanted to share my insights, which might help.

Claude Code's Source Code Was Accidentally Leaked

On March 31st, Anthropic accidentally shipped a cli.js.map debug file in their public npm package, exposing approximately 500,000 lines of TypeScript source code across 1,884 files. Security researcher Chaofan Shou flagged it on X, where it went viral with 28M+ views. The leak is verified real and has been covered by Ars Technica, Cybernews, and others.

Unreleased Features Found in the Code

- BUDDY: Every user gets a unique virtual AI pet tied to their account ID, complete with actual personality stats. This is reportedly slated for May 2026.
- KAIROS: A persistent assistant that "dreams" overnight, organizing your memories and context across different sessions.
- ULTRAPLAN: Designed for complex tasks, this allows Claude to spin up a cloud instance to plan for up to 30 minutes before executing any code.
- Coordinator Mode: Claude acts as a manager, breaking tasks down into subtasks and running parallel worker agents.
- Daemon Mode: Run Claude sessions in the background like Docker containers (using commands like claude ps, claude logs, and claude attach).

Wild Internal Details

- "YOLO" Permissions: The auto-permission function is literally named classifyYoloAction(), equipped with risk levels of LOW, MEDIUM, and HIGH.
- "Undercover Mode": This automatically strips all AI involvement from commits when Anthropic employees contribute to public repos. The internal prompt explicitly dictates: "Do not blow your cover."
- Future Models: Model versions opus-4-7 and sonnet-4-8 are already heavily referenced in the codebase.
- Hidden Commands: There are 26 hidden slash commands omitted from the standard --help menu, including /ultraplan, /dream, and /good-claude.

The Community Already Rebuilt It

Developers immediately started a clean-room rewrite in Rust. The claw-code repo on GitHub hit 50,000 stars in just 2 hours and now sits at over 146,000 stars, reportedly making it the fastest repository in GitHub history to reach that milestone.

Full Analysis: ccleaks.com
Open-Source Rewrite: https://lnkd.in/dWm7wfCd

Note: Some April 1st headlines circulating around this (such as the "OpenClaude" rebrand) are fake. However, everything listed above has been pulled directly from verified source code analysis.
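The idea behind a classifyYoloAction()-style auto-permission function, tiering actions by risk before deciding whether to run them without asking, is easy to sketch. This is a generic toy version: the command lists and policy below are invented for illustration and do not reproduce the leaked function's actual logic:

```python
# Invented allow/deny lists for the sketch; a real classifier would use
# far richer signals (arguments, paths, network targets, LLM judgment).
LOW = {"ls", "cat", "git status", "git diff"}
HIGH_PREFIXES = ("rm ", "sudo ", "curl ", "git push --force")

def classify_action(command: str) -> str:
    """Tier a shell command as LOW, MEDIUM, or HIGH risk."""
    cmd = command.strip()
    if cmd in LOW:
        return "LOW"
    if cmd.startswith(HIGH_PREFIXES):
        return "HIGH"
    return "MEDIUM"  # unknown commands get a human look

def auto_approve(command: str) -> bool:
    """Only LOW-risk actions run without a permission prompt."""
    return classify_action(command) == "LOW"
```

The interesting design question such a function answers is the default: unknown commands land in MEDIUM and still require review, so new attack surface is opt-in rather than opt-out.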
Spent the last few days building something I've been wanting to exist for a while. Code reviews are one of those things that everyone agrees matter but nobody has enough time to do properly. Security issues slip through, architectural problems get ignored, and by the time someone catches them it's already in production. So I built CodeSentinel — an AI-powered code review tool that analyzes your code or PR diff and gives you a full breakdown in under 10 seconds. Here's what it catches: → SQL injection and hardcoded secrets → Broken authentication and IDOR vulnerabilities → Missing error handling and DRY violations → Architectural patterns that won't scale You paste your code or PR diff, pick the language, and it scores your codebase across security, architecture, and code quality — each out of 100 — with the exact affected lines and concrete fixes. Works across JavaScript, TypeScript, Python, Go, Java, Rust, C++, PHP, Ruby, and SQL. I also built a GitHub Actions bot that triggers automatically on every pull request and posts the full AI review as a comment — so your team gets feedback without changing any workflow. Built with React, Vite, and the Anthropic Claude API. Full source code is on GitHub 👇 https://lnkd.in/erE9Tq4h Happy to answer any questions about how it works or what I learned building it. #productmanagement #buildinpublic #AI #cybersecurity #developertools #javascript #react #opensource
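A review tool like this ultimately reduces a list of model-reported findings to per-category scores out of 100. A hypothetical sketch of that aggregation step (the severity weights and category names are invented; this is not CodeSentinel's actual code):

```python
# Invented severity weights: each finding docks points from the score
# of its category, floored at zero.
WEIGHTS = {"low": 2, "medium": 5, "high": 15, "critical": 30}

def score_findings(findings):
    """Map [{'category': ..., 'severity': ...}, ...] to 0-100 scores."""
    scores = {"security": 100, "architecture": 100, "quality": 100}
    for f in findings:
        penalty = WEIGHTS[f["severity"]]
        scores[f["category"]] = max(0, scores[f["category"]] - penalty)
    return scores
```

Keeping the scoring deterministic and outside the LLM call makes the numbers reproducible run to run, even when the model's prose varies.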