So, Claude Code leaked its source files. A few hours ago Claude Code "by human mistake" pushed the source maps of its TypeScript code base to the npm registry, and now we can all learn more about the "Machines of Loving Grace", or at least about the client wrapper for those AGI "in 6 months only plumbers will survive" things.

What's inside? It looks like a wrapper app with layered instructions and modes that proxies and enriches the user's prompts to the LLM and back. It has multiple layers, different modes, internal tools, and other things, including anti-distillation stubs. Yes, this is a simplification on my part.

Fun stuff from the code base:
- 90 TODOs. I love `// TODO: figure out why` and `// TODO: Clean this up`.
- 152 eslint-disable comments, including 37 react-hooks/exhaustive-deps.
- 29 deprecated labels.
- Exposed full names of the Anthropic Safeguards team. Damn, this is so secure!
- A "Code review skill" that is 3 agents running in parallel (to limit the context window?) with checklist instructions in .md files.
- A "Local review skill" that starts with "You are an expert code reviewer. Follow these steps:".
- A "Security review skill" that starts with "You are a senior security engineer conducting a focused security review".
- An "Agent creation skill" that starts with "You are an elite AI agent architect specializing in crafting high-performance agent configurations.". Elite? Why not add "Supa-dupa-mega-VIP" to make the skill even better?

If "coding is solved" for Boris Cherny, then why do they have deprecated labels, eslint-disable comments, and TODOs while being on the verge of an AGI breakthrough? If they can use custom code or even assembly, why do they use React, Axios, Bun, Crypto, Electron.js, Lodash, and Chalk? And finally, I like their negative-sentiment-analysis "AGI engine" that is a single regex with red-flag words.
To me all of that looks like a nice medium-sized application developed by a team of enthusiastic people, with its own trade-offs, common popular dependencies, and standard architectural patterns. Yes, judging by the version number Anthropic does a lot of automated code generation, but Claude Code shows the same signs as any modern app, with its pros and cons. This is not God-like sentient machine code. This is average industry code. Tech bros, stop anthropomorphizing your forward text generators; they are tools, not gods. AI "skills" are just sets of instructions. Jenkins has had agents for years, but nobody sold them as sentient singularity entities. Cut the mystification and build a better world for humans instead of twisting meanings.
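The "AGI engine" jab above refers to sentiment detection done with a single regex over red-flag words, no model call at all. A minimal sketch of that pattern; the word list and function name here are my own, purely illustrative, not Anthropic's actual code:

```typescript
// Illustrative sketch of regex-only "sentiment analysis": one pattern
// of red-flag words, no model call. Word list is a made-up example.
const FRUSTRATION_RE = /\b(wtf|ffs|damn|ugh|stupid|broken)\b/i;

function looksFrustrated(message: string): boolean {
  return FRUSTRATION_RE.test(message);
}

console.log(looksFrustrated("ugh, this is broken again")); // true
console.log(looksFrustrated("please refactor this module")); // false
```

The point stands either way: a word-boundary regex is a lookup table, not sentiment understanding.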
Anthropic's Claude Code Leaked Source Files Revealed
More Relevant Posts
The .map File That Exposed Everything: Why Source Map Standards Matter More Than Ever

TL;DR: A forgotten `.map` file just leaked 512,000 lines of Claude Code's source to the public. Meanwhile, the web dev community just finished ECMA-426, the first official source map standard. The timing couldn't be more urgent.

What Happened

On March 31, 2026, Anthropic accidentally published Claude Code v2.1.88 on npm with a 59.8 MB source map file embedded. Anyone who pulled that package between 00:21 and 03:29 UTC got direct access to 512,000 lines of unobfuscated TypeScript across ~1,900 files: a complete blueprint of proprietary AI infrastructure.

The root cause? One missing line in .npmignore. Someone on the release team failed to add *.map to .npmignore or to configure the files field in package.json to exclude build artifacts. Claude Code is built on Bun, which generates source maps by default. The resulting cli.js.map contained sourcesContent arrays with every original TypeScript file: readable, commented, complete. Extraction was trivial: npm pack, untar, parse JSON. Within hours, the code was mirrored globally and rewritten in Python (claw-code hit 50,000 GitHub stars in 2 hours, the fastest-growing repo in GitHub history).

The Irony

Days earlier, Bloomberg announced that source maps had become an official ECMA-426 standard after a decade of fragmented, informal coordination between browsers, bundlers, and devtools. The standard exists partly to prevent exactly this kind of chaos. But even with a decade of industry collaboration and billions in resources across Google, Mozilla, and Bloomberg, a single .npmignore oversight exposed over half a million lines of code.

Worse? Anthropic had specifically built Undercover Mode, a system prompt forcing Claude to hide its AI nature and strip attribution when contributing to external repos, to prevent internal information leakage. Then it shipped the entire codebase by accident.

What This Tells Us

1. Tooling is a security layer. Source maps aren't just nice-to-haves for debugging; they're now a critical attack surface. A standard means we can build guardrails into bundlers.
2. Formalization saves lives. For years there was no official spec, just a Google Doc. Companies shipped their own variants with no consistency. ECMA-426 formalizes the format and opens the door for validation tools that catch these mistakes at build time.
3. We're still learning. Even "simple" tools like source maps can cascade into supply chain vulnerabilities. Standardization is the first step to adding proper safeguards.

The lesson? In an age of supply chain attacks and AI code exposure, even "debugging artifacts" are security decisions.

Read more on source maps: [Source Maps: Shipping Features Through Standards](https://lnkd.in/gMwNwvvd)
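The "npm pack, untar, parse JSON" extraction described above works because a v3 source map's sourcesContent array embeds every original file verbatim. A toy sketch of that last step, using a tiny made-up map object rather than the real cli.js.map:

```typescript
// Toy illustration of why shipping a .map file equals shipping source:
// the sourcesContent array carries each original file verbatim. The
// object below is a tiny stand-in for the real 59.8 MB cli.js.map.
const mapJson = JSON.stringify({
  version: 3,
  sources: ["src/index.ts"],
  sourcesContent: [
    "// original TypeScript, comments included\nexport const secret = 'hello';\n",
  ],
  mappings: "AAAA",
});

// "Extraction": parse the JSON and dump each embedded source file.
const map = JSON.parse(mapJson);
for (let i = 0; i < map.sources.length; i++) {
  console.log(`--- ${map.sources[i]} ---`);
  console.log(map.sourcesContent[i]);
}
```

No decompilation, no de-minification: the originals are simply stored as strings inside the map.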
Claude Code's source code leaked through a .map file in the npm package. I read through 512,000 lines of TypeScript. Here's what I found.

90 compile-time feature flags. Each one gates a feature that Anthropic is building but hasn't shipped yet. Dead code elimination strips them from public builds, so you'd never know they exist. Here are the most interesting ones:

- ANTIDISTILLATIONCC: injects fake tools into API requests to poison training data if someone tries to distill Claude's outputs. An active defense against model theft, built right into the CLI.
- COORDINATOR_MODE: a multi-agent orchestrator where the main model only plans and delegates. It can't write code directly; it spawns worker agents and synthesizes their results. A full agent-swarm architecture.
- TREESITTERBASH: replaces regex-based shell command validation with tree-sitter WASM parsing. The old regex approach had documented bypasses (eval, quoting, IFS tricks). Currently running in shadow mode, comparing results against the legacy path.
- KAIROS: a proactive assistant mode. The agent acts without waiting for your input. Monitors deploys, watches PRs, sends push notifications. Has its own "dream" mode for background memory consolidation.
- WEBBROWSERTOOL: a built-in browser tool, not just web fetch. Registers a WebBrowserPanel inside the terminal UI.
- BUDDY: an ASCII virtual pet. 18 species, 7 hats, RPG stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK). 1% chance of shiny. Deterministic per user ID. Timed to April 1, 2026.
- VOICE_MODE: push-to-talk voice input via WebSocket STT streaming to Claude's speech-to-text API.
- ULTRATHINK: extended thinking mode beyond standard chain-of-thought. Gated behind both a build flag and a remote kill switch.
- DAEMON: a long-running supervisor process. Combined with BRIDGE_MODE, it becomes a remote-control server for IDE integrations.
- NATIVECLIENTATTESTATION: adds a client attestation placeholder to billing headers so the runtime can prove it's a legitimate Claude Code binary.
- PERFETTO_TRACING: Chrome-style performance tracing. Writes Perfetto JSON traces for profiling the entire CLI.

Some other flags worth noting:
- FORK_SUBAGENT: implicit sub-agent forking with cache-aware tool matching
- AGENT_TRIGGERS: cron-style scheduled agent runs
- WORKFLOW_SCRIPTS: registered workflow automation tool
- TERMINAL_PANEL: terminal capture tool with meta+j toggle
- ULTRAPLAN: remote plan refinement with choice dialogs
- TEAMMEM: team memory sync across developers

90 flags total. Most are stripped from what you download from npm. The source is out there now. It's a fascinating look at how a modern AI coding tool is architected: swarm agents, tree-sitter security, anti-distillation defenses, and a virtual pet. What surprised you most?

#ClaudeCode #AI #SoftwareEngineering #Anthropic #AIArchitecture
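The dead-code-elimination mechanism described above can be sketched as follows. Bundlers such as Bun or esbuild let you inline a build-time define (e.g. --define FEATURE_BUDDY=false); the gated branch then becomes `if (false) { ... }` and is stripped from the output. The flag and tool names below are illustrative stand-ins, not Anthropic's real identifiers:

```typescript
// Stand-in for a value a bundler would inline via a build-time define.
// In a public build this inlines to false and the branch is removed
// entirely by dead code elimination, leaving no trace of the feature.
const FEATURE_BUDDY: boolean = false;

function registerTools(): string[] {
  const tools = ["read", "edit", "bash"];
  if (FEATURE_BUDDY) {
    // Unreleased feature: this whole branch disappears under DCE.
    tools.push("buddy-pet");
  }
  return tools;
}

console.log(registerTools()); // ["read", "edit", "bash"]
```

That is why reading the shipped bundle tells you nothing: only the source map revealed the branches that compile away.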
Claude Code's source didn't leak. It was already public for years.

Anthropic's AI coding tool had a source map accidentally published to npm this week. VentureBeat, Fortune, and Gizmodo all covered it as a major breach. A clean-room Rust rewrite hit 110K GitHub stars in a day, a world record.

But here's what the coverage missed: the entire CLI, 13 MB of JavaScript, had been sitting on npm in plaintext since launch. You could open it in your browser at any point. The source map just added developer comments on top of code that was never protected.

We analyzed it at AfterPack. Parsed the file in 1.47 seconds and pulled out 148,000 string literals: system prompts, tool descriptions, env vars, telemetry events, even a DataDog API key. Then we pointed Claude at its own source and asked it to explain the code. It worked extremely well.

The real question isn't about Anthropic specifically. It's that every JavaScript application ships code to production that AI can now read as easily as you read formatted code. Minification shortens variable names for smaller bundles; it was never designed to hide anything.

We also scanned GitHub.com and claude.ai with our Security Scanner. Found email addresses and internal URLs in production JavaScript. Same class of exposure, zero headlines.

Full analysis with technique comparison and scanner results: https://lnkd.in/dEw_dCBc
Check what your site exposes: npx afterpack audit https://your-site.com
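The string-literal extraction described above needs little more than a pass over the bundle, because minifiers rename identifiers but leave string literals intact. A deliberately simplified sketch (toy one-line "bundle", naive regex with no escape handling; this is not AfterPack's actual scanner):

```typescript
// Minification renames variables but keeps string literals verbatim,
// so prompts, telemetry names, and keys stay readable. Toy bundle and
// key value below are made up for illustration.
const bundle =
  'var a=function(){return"You are an expert code reviewer."};var b="dd_api_key_123";';

// Pull out every double-quoted string literal (simplified: no escapes).
const literals = [...bundle.matchAll(/"([^"]*)"/g)].map((m) => m[1]);
console.log(literals); // ["You are an expert code reviewer.", "dd_api_key_123"]
```

Scale that up to a 13 MB bundle and you get the 148,000 literals mentioned in the post.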
Claude Code's source code is all over the news as a "major leak" this week. But the code was already public on npm the entire time. The source map added developer comments and project structure, but the actual CLI with all the system prompts and API keys? Already there in plaintext!

What actually surprised me: we scanned GitHub.com and claude.ai with AfterPack's Security Scanner, a web-app analysis tool I built, and found the same class of exposure. Email addresses, internal URLs, env var names, all in production JavaScript.

If a $60B company ships their most sensitive CLI with nothing beyond default bundler minification, it's worth checking what your production JS looks like too.

https://lnkd.in/dWQqF7su or npx afterpack audit https://your-site.com
Anthropic's Claude Code source code just leaked, and the dev community is dissecting it

Chaofan Shou (@Fried_rice) spotted the 60 MB "cli.js.map" file in the @anthropic-ai/claude-code npm package, which linked to a public Cloudflare R2 bucket containing a "src.zip" of the full unobfuscated codebase. He posted the direct download link on X, alerting the community. https://lnkd.in/dvnqbwwP

A clean-room Python rewrite (Claw-Code) went live on GitHub within hours, the fastest repo in history to hit 50K stars (2 hours). Now at 82.2K stars and 81.2K forks, with a Rust rewrite in progress. https://lnkd.in/dPHejfnk

Here's what the internals reveal about building great agentic systems:

1. CLAUDE.md is your most underutilized lever. It is injected on every single turn. You get up to 40,000 characters to encode your architecture, standards, and conventions. Most people barely touch it; that's a mistake.
2. Parallelism is a first-class citizen. Three sub-agent execution models: fork (inherits parent context), teammate (file mailbox), and worktree (isolated git branch). Single-agent workflows are explicitly suboptimal.
3. Permissions are meant to be configured, not clicked through. Seeing "allow this action?" is a configuration failure. Use settings.json to pre-approve commands. Auto mode uses an LLM classifier. --dangerously-skip-permissions is deprecated.
4. Context compaction is an art. Five strategies: micro-compact, context collapse, session memory, full compact, PTL truncation. Use /compact proactively; the default is 200K tokens, with opt-in 1M.
5. Hooks automate what you repeat manually. Pre-tool, post-tool, and session start/end hooks. Auto-update docs on every commit without prompting.
6. Sessions are persistent; stop starting fresh. Long sessions accumulate structured memory: task specs, file lists, workflow state, errors, learnings. Resume, don't restart.
7. 66 built-in tools, partitioned by safety. Read-only tools run in parallel; mutating tools run serially. Multiple sub-agents can fan out across your codebase simultaneously.
8. Interrupt early, interrupt often. Streaming means stopping mid-task is cheap. Don't let sunk-cost bias keep a wrong-direction agent running.

Claw-Code was itself built in a single night using oh-my-codex (OmX), an agentic harness studying another agentic harness. These architectural ideas will rapidly diffuse into open-source harnesses. That's the beauty of open source.

#ClaudeCode #AgenticAI #LLM #AIEngineering #OpenSource #DeveloperTools #MachineLearning
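The settings.json advice in point 3 above refers to Claude Code's documented permission rules. One illustrative shape of such a file, based on the documented allow/deny rule format (the specific rules here are examples I chose, not recommendations from the leaked source):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Read(~/projects/**)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

Pre-approving the commands you run every day turns "allow this action?" prompts from a reflex click into a real signal.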
512,000 lines of Claude Code source code. Public. March 31, 2026.

Anthropic forgot to add *.map to their .npmignore file. A single packaging mistake spilled everything onto npm and into 41,500 GitHub forks. An Anthropic engineer called it what it was: "an honest mistake" caused by manual steps in the deploy process. They're fixing it with more automation. No one was fired. That's the right response.

But the source code itself is the real story.

THE REAL STORY
Anthropic already concluded that single-agent AI coding loops are not enough. They built the fix. And they haven't shipped it to you yet.

EMPLOYEE-ONLY RELIABILITY STACK
🎯 Advisor model: a second, stronger model critiques the first agent's work before and after implementation. Employee-only.
🎯 Verification agent: a separate adversarial system designed to "break" implementations, not confirm them. Output: PASS, FAIL, or PARTIAL. Not shipped externally.
🎯 A 29-30% internal false-claims rate, documented in the source. They know Claude Code reports success on broken code at this rate.

This isn't a company hiding a problem. It's a company testing the solution internally before rolling it out. The frustration is understandable: if you're paying for Claude Code today, you want those safety nets too. But the architecture is genuinely impressive.

THE CONCURRENT THREAT
Same day: the axios npm package was compromised. A maintainer account was hijacked, and malicious versions deployed Remote Access Trojans. Claude Code depends on axios. Anyone who ran npm install during that window may have a RAT on their machine. This is the part that should worry you most.

WHAT ELSE THE SOURCE REVEALS
⏳ A cloud agent platform hiding behind a CLI: remote sessions, scheduled agents, 30-minute autonomous exploration, SSH execution
⏳ Enterprise governance already built: org-level policy restrictions, managed-settings security dialogs
⏳ KAIROS: always-on daemon mode. Watches your environment. Acts without prompts. Includes "auto-dream" memory consolidation
⏳ Undercover Mode: conceals AI involvement in open-source contributions. Raises questions under the EU AI Act (Aug 2026)
⏳ An official plugin marketplace: an app store for AI agent capabilities with Anthropic at the center

IF YOU USE CLAUDE CODE
🔴 Audit npm lockfiles for axios 1.14.1 or 0.30.4
🔴 Build your own verification loops. Do not trust "Done!" without evidence
🔴 Understand where shared team memory and cloud agents send your data
🔴 Review your AI attribution policies; Undercover Mode hides AI involvement by default

The full analysis breaks down all eight major findings, what the source does and does not support, and what it means for enterprise compliance teams. Read it here 👉 https://lnkd.in/gJCWjT38

What interests you most: the multi-agent reliability architecture, the supply chain risk, or the autonomous execution model?

#AI #Cybersecurity #AgentArchitecture #Compliance
Anthropic pushed a software update at 4 AM with a debugging file "accidentally" bundled inside: 512,000 lines of proprietary source code. When an npm package surfaced that looks a lot like the guts of Claude Code, I did what I have done since my DOS days: I pulled the repo and read it like a thriller. It is messy, occasionally hilarious, and very revealing. If authentic, Anthropic just handed the industry a playbook for agentic coding assistants.

Highlights from my read:
1. Anti-distillation is a real thing. They inject fake tool definitions into API traffic so anyone scraping to train a rival gets poisoned data. Corporate honeypots for model theft.
2. Undercover Mode is built to keep Claude from outing itself on open-source repos. Commits read human. There is no force-off switch. Somewhere a security team just breathed into a paper bag.
3. Client attestation appears on every API call with a computed hash. Translation: prove you are a real install, not a wrapper or a scraper.
4. User frustration detection is a hardcoded regex chain for swear words. No model call. If anyone from Anthropic is reading this, you forgot "jfc".

Unreleased ideas that jumped out:
1. A Dream system that runs a background consolidation cycle with a three-gate trigger and four phases named orient, gather, consolidate, and prune. Purely reflective. Read-only. The prompt literally says "you are performing a dream".
2. Coordinator Mode turns Claude Code into a true multi-agent system. Research in parallel. A coordinator synthesizes. Implementation executes. Verification tests. The prompt bans lazy delegation: do not say "based on your findings"; read the actual findings.
3. BUDDY is a full Tamagotchi for developers. Species, rarity tiers, shiny variants, stats like chaos, snark, and debugging. Maybe an April Fools easter egg. Maybe an onboarding dopamine loop.

Does this matter? People say client code is not a moat. For raw inference, sure. But blueprints matter. Orchestration patterns, prompts, trust plumbing, and the roadmap are gold. Any competent team can point an agent at this repo and get the same tour in twenty minutes. I did.

The bigger signal: the center of gravity is shifting from chat to ambient. Always-on observers. Memory consolidation. Verified clients. Poisoned data for would-be scrapers. Multi-agent choreography as the default, not a research blog post.

So the leading labs now face a tough ethical fork: claim they did not look, or quietly learn from the most useful codebase that just landed in their lap. We all know how this movie ends. The real question is for the rest of us. Do we lean into ambient agents, client attestation, and background memory, or pretend this was a one-off oops and go back to chatbot theater? I know where I am placing my bet.
math-codegen, Remote Code Execution (RCE) via String Literal Injection, GHSA-p6x5-p4xf-cc4r (Critical)

The vulnerability resides in how `math-codegen` processes string literals. When an application passes user-controlled input to `cg.parse()`, the library does not sanitize or escape the string content. Instead, it injects that content verbatim into the body of a dynamically generated JavaScript function using `new Function(...)`. This turns any unsanitized string literal into executable code. An attacker can craft a malicious expression containing system commands (e.g., …)
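The vulnerable pattern described above is generic to any library that splices input into a dynamically built function. A self-contained sketch of the class of bug (illustrative; this is not math-codegen's actual code, and the payload here only sets a marker rather than running system commands):

```typescript
// Generic sketch of string-literal injection via new Function():
// attacker-controlled input lands verbatim inside generated source,
// so any "expression" can break out and execute arbitrary code.
function naiveCompile(expr: string): () => unknown {
  // No sanitization or escaping of `expr`.
  return new Function(`return (${expr});`) as () => unknown;
}

// Benign use: compiles a harmless arithmetic expression.
console.log(naiveCompile("1 + 2")()); // 3

// Injection: the "expression" is an IIFE with side effects, standing in
// for a real payload (e.g. child_process calls in Node).
const payload = '(() => { (globalThis)["pwned"] = true; return "injected"; })()';
console.log(naiveCompile(payload)()); // "injected"
console.log((globalThis as any)["pwned"]); // true
```

The fix is never to interpolate raw input into generated code: parse the expression into an AST and evaluate it against a whitelist of operations instead.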
Anthropic Forgot One Line. We Got 512,000.

One missing entry in a config file. That's it. No sophisticated attack. No insider threat. Someone at Anthropic forgot to add *.map to .npmignore, and on March 31, 2026, that omission handed the world the entire Claude Code codebase. 512,000 lines of TypeScript. 1,900 files. 44 hidden feature flags. A stealth commit system. An autonomous background agent. Internal model codenames with regression data attached. All of it. Public. On npm.

What Happened

When Anthropic published version 2.1.88 of @anthropic-ai/claude-code, it accidentally included cli.js.map, a 59.8 MB source map sitting in a publicly accessible S3 bucket. A source map is the key that translates minified production output back to readable TypeScript. It's a debugging artifact meant to stay internal.

The root cause: Bun, the JavaScript runtime Anthropic builds on, had a known open bug where source maps were generated even when disabled in config. Their own toolchain bit them.

A researcher named Chaofan Shou spotted it first and posted on X. Within minutes the code was mirrored to GitHub. Within hours the repo had 75,000 stars, reportedly the fastest-growing repository in GitHub history.

What Was Inside

Engineers described Claude Code as built less like a chatbot wrapper and more like a small operating system. 40+ internal tools, each with its own permission gates. Background memory processes. A controller agent delegating to swarms of subagents through Coordinator Mode.

The 44 hidden feature flags were the real story: compiled production code sitting behind switches that compile to false in the public build. Twenty of those features haven't shipped yet. One was "Undercover Mode", a 90-line file called undercover.ts, designed to strip all Anthropic internals from commit messages when contributing to external repos. No attribution. No mention of Claude Code itself.

Boris Cherny, Anthropic's head of Claude Code: "Plain developer error. 100% of my contributions to Claude Code were written by Claude Code."

The irony landed immediately: Anthropic built a system to prevent internal information leaking through code contributions, then leaked the entire source through a file they forgot to exclude from npm.

The Competitive Hit

Claude Code's ARR had crossed $2.5 billion as of early 2026. The leak handed every competitor (Cursor, Windsurf, Copilot) a literal engineering blueprint for how Anthropic solved multi-agent orchestration, context entropy, and memory management at scale. You can't unsee a blueprint.

Next: KAIROS, the autonomous background agent that runs while you sleep.

#ClaudeCode #Anthropic #AIEngineering #GenerativeAI #OpenSource #AITooling
Unfortunately, these big AI labs love hyping up AGI, a side effect of how much importance people place on a company's valuation.