🔥 The Claude Code source code just leaked. And what's inside changes everything.

59.8 MB. 512,000 lines of TypeScript. 1,900 files. All of it accidentally shipped inside an npm package update this morning, and the internet moved fast: one GitHub repo hit 50,000 stars in under two hours.

Here's what the AI community found inside, and why it matters for every builder:

1️⃣ KAIROS: the autonomous agent Anthropic never announced
Claude Code has a fully built "daemon mode." It runs in the background while you're idle, consolidating memory, merging observations, removing contradictions, and compressing context. When you return, your agent is smarter than when you left. This isn't vaporware; it's compiled code behind a feature flag.

2️⃣ Anti-distillation: fake tools to poison competitors
There's a flag called ANTI_DISTILLATION_CC. When enabled, Anthropic injects fake tool definitions into API requests, specifically to corrupt training data if someone records Claude's outputs to train a rival model. This is competitive AI warfare written directly into production code.

3️⃣ 44 hidden feature flags, 20 unshipped
The roadmap Anthropic never published is now public: persistent background agents, remote control from your phone, cross-session memory that studies its own mistakes.

4️⃣ Buddy: a literal Tamagotchi
I'm not joking. There's a full companion-pet system with species rarity, shiny variants, and a soul description written by Claude on first hatch, all gated behind a BUDDY compile flag. Someone at Anthropic is having the time of their life.

5️⃣ This is Anthropic's second leak in a week
Days earlier, Fortune reported that 3,000 internal files were publicly accessible, including a draft blog post about an unreleased model codenamed "Capybara." The Claude Code leak confirmed it.

The internet already has it. Go grab it and go absolutely bonkers with it 👇
🔗 https://lnkd.in/gjJCyZ-V
#AI #ClaudeCode #AgenticAI #ProductManagement #ArtificialIntelligence #Anthropic
Anthropic's Claude Code Leaked: AI Warfare and More
You've probably heard by now that Anthropic's entire codebase was recently leaked by accident. The company says it happened due to "human error." Plenty of misinformation is circulating, but here's what you should know:

1. Repositories sharing the leaked source code were taken down almost immediately through DMCA requests. But someone rewrote the entire codebase in Python from scratch just in time, so it can't be taken down. It's still live on GitHub as of right now.

2. A bizarre trend taking over social networks: random people claiming they were the Anthropic employee who mistakenly leaked the code. This seems to be some kind of trolling attempt that has turned into a trend. Take any such claims with a grain of salt.

3. The leak exposed the company's upcoming LLM update, Claude Mythos, said to be a significant improvement over Opus. A bunch of less interesting unreleased features also surfaced in the source code.

4. It's still unclear what ultimately caused the source map file to leak, but many have theorized it was a vibe-coding issue in Claude Code that the uploader didn't think to examine more closely.

This is what happens when an enterprise-scale organization forgoes basic security best practices. Amodei has often been heralded as the champion of a steadier, more human approach to AI, but as many of us have known all along, the reality is not as flattering.
Anthropic accidentally leaked 512,000 lines of Claude Code's source code. What happened next is wild.

4 AM. Anthropic pushes a routine update to npm. Inside the package: their entire codebase, a 60 MB debug file accidentally bundled in.

23 minutes later, a researcher spots it. Downloads it. Posts it on X. Within 6 hours: 3 million views. By morning: forked 41,000+ times across GitHub. Anthropic started sending DMCA takedowns. Too late. Someone mirrored it to a decentralized platform with one message: "Will never be taken down."

Then a Korean developer named Sigrid Jin woke up at 4 AM and did something crazy. Instead of copying the leaked code, he rewrote the entire thing from scratch, in Python, before sunrise. Called it claw-code. It became the fastest repo ever to cross 50K GitHub stars. And because it's a clean-room rewrite (new code, same ideas), Anthropic can't touch it legally.

Two things worth learning here:
→ Never ship .map, .env, or debug files in production releases. This was a simple deployment mistake, not a hack.
→ The real moat isn't always the code. It's the workflow, the design, the orchestration. That's why an overnight rewrite was even possible.

The most interesting part? This leak showed that top AI coding agents are really just well-designed workflows, not secret model magic. That's a huge insight for anyone building with AI.

Did you follow this story? Drop your thoughts below.
What the Architecture Reveals 🔍

512,000 lines of leaked Claude Code told us something important: the most powerful AI coding agent in the world isn't built on magic. It's built on surprisingly minimal architecture. Here's what the claw-code analysis revealed about how production agentic systems actually work:

One agent loop. 40+ discrete tools. No rigid workflows, no hardcoded task sequences. The harness creates the conditions for reasoning; the model does the work.

Subagent spawning on context overflow. When a task risks filling the primary context window, Claude Code spawns independent agent instances with their own context and scope, so exploratory work doesn't contaminate the main thread. This is how you build agents that can actually run for hours without losing coherence.

Permission-gated tools, where the deny list always wins. Every tool (bash, file reads, web fetch, git ops) is individually permission-gated. Compound bash commands are evaluated sub-command by sub-command; if any part gets denied, the whole chain is blocked. This is the right design for anything executing real shell commands.

44 hidden feature flags. The most strategically sensitive part of the leak: features Anthropic has built but hasn't shipped. Competitors now have a product roadmap they weren't supposed to see.

This architecture validates everything we've built at SELARIX. One founder. A cabinet of specialized agents. Tool permissions scoped by role. Context managed by design. The blueprint was always sound. Now everyone can see it.

🔗 claw-code.codes
🔗 https://lnkd.in/dJaA59Gt
#AIArchitecture #AgenticAI #ClaudeCode #ClawCode #MultiAgent #SELARIX #OpenSource
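The "deny list always wins" gating pattern for compound shell commands can be sketched in a few lines. This is a minimal illustration of the idea, not Anthropic's actual implementation; the deny/allow patterns and the operator-splitting rules here are assumptions made up for the example:

```python
import re

# Hypothetical policy lists; the real patterns are not public.
DENY = ["rm -rf", "curl", "sudo"]
ALLOW = ["git", "ls", "cat", "grep"]

def split_compound(command: str) -> list[str]:
    """Split a shell command on common chaining operators (&&, ||, ;, |)."""
    parts = re.split(r"\s*(?:&&|\|\||;|\|)\s*", command)
    return [p for p in parts if p]

def is_allowed(command: str) -> bool:
    """Evaluate each sub-command independently; deny wins over allow."""
    for sub in split_compound(command):
        if any(sub.startswith(d) for d in DENY):
            return False  # any denied sub-command blocks the whole chain
        if not any(sub.startswith(a) for a in ALLOW):
            return False  # unknown commands fall through to denial
    return True
```

So `is_allowed("git status")` passes, while `is_allowed("git status && rm -rf /tmp/x")` blocks the entire chain because one sub-command is denied. The design choice worth copying is the asymmetry: an allow match on one sub-command can never rescue a deny match on another.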
The internet 1, closed-source AI 0.

At 4 a.m., Anthropic accidentally exposed 512,000 lines of its proprietary source code in a public update. A researcher spotted the leak within minutes, and the post quickly reached 23 million views. The company fired off DMCA takedowns in response.

Enter Sigrid Jin, the world's heaviest Claude user at 25 billion tokens consumed per year according to the WSJ. He wakes up, rewrites the entire codebase in clean Python, and uploads it to GitHub as "claw-code." Because it was a creative rewrite rather than a direct copy, the DMCA couldn't touch it. The repo hit 49,000 stars and 56,000 forks in record time, becoming GitHub's fastest-growing ever, and was soon mirrored on a decentralized platform where it can never be deleted.

The company built to "align AI and save humanity" just watched its entire proprietary playbook go public and couldn't stop it. What does that say about the illusion of security in closed systems, when one slip can expose everything and control disappears the moment code meets the open internet?

https://lnkd.in/gnsW2CQi
🚨 Wait, what? The Claude Code "Open-Sourced" Masterclass

The company known for its "AI Safety First" stance accidentally leaked the entire source code for Claude Code. As a developer, my heart goes out to the team at Anthropic. We've all had that sinking feeling after a deployment error, but rarely on a stage this large.

ℹ️ What happened?
It wasn't a hack. It was a classic packaging mistake: version 2.1.88 shipped with a 57 MB source map file. Within minutes, security researchers and the dev community had reconstructed over 500,000 lines of readable TypeScript.

‼️ The "secrets" inside the code:
Now that the black box is open, we're seeing exactly how a top-tier AI agent is built. It's less alien tech and more brilliant orchestration:
➡️ The Prompt Sandwich: Claude Code uses an 11-step process to turn your input into an output, held together by massive system prompts and guardrails.
➡️ Anti-distillation "poison pills": the code contains fake tools meant to confuse competitors who try to train their own models on Claude's data.
➡️ Undercover Mode: a feature designed to hide AI signatures in commit messages, making the code look like a human wrote it.
➡️ Future roadmap: the leak revealed unreleased features like KAIROS (background agents), a digital companion called Buddy, and references to Opus 4.7.

🚀 The lesson for all of us:
Your IP is only as secure as your build pipeline. Anthropic recently acquired Bun.js, and while the exact cause is debated, this serves as a massive reminder to double-check what is being bundled into your production releases.

It's a fascinating look under the hood of agentic AI, even if it wasn't meant to be public.

Check out the rewrite here: https://lnkd.in/gYkUvwRa
How do you feel about seeing this "prompt spaghetti"? ❓

#AI #WebDev #Anthropic #Claude #SoftwareEngineering #CyberSecurity #CodingLife
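The "check what gets bundled" lesson is easy to act on with npm's opt-in whitelist. A `files` field in `package.json` limits the published tarball to the paths you list, so stray `.map` or debug artifacts never ship. This is a generic sketch with made-up package and path names, not Anthropic's actual configuration:

```json
{
  "name": "my-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "bin/"
  ]
}
```

Running `npm pack --dry-run` before publishing prints exactly which files would land in the tarball, which is the cheapest possible audit of a release.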
🚨 500,000+ Lines of Claude Code Just Leaked. Not a Hack, But a Mistake

This week, something unusual happened in AI. https://lnkd.in/g-ihMkjD

Anthropic accidentally exposed over 512,000 lines of Claude Code. Let that sink in. No breach. No attacker. Just a build/packaging error.

⚠️ What actually happened?
A debug source map file (.map) was mistakenly included in a public release. That file allowed developers to reconstruct:
- Full TypeScript source code
- Internal architecture
- Feature flags and experimental systems

Within hours:
- The code spread across GitHub
- Developers started analyzing it
- Mirrors appeared globally

👉 Code mirror: https://lnkd.in/g-ihMkjD
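Reconstruction from a leaked source map is straightforward because the Source Map v3 format can embed the original sources verbatim in its `sourcesContent` array. A minimal sketch of the recovery step (the file path is hypothetical, and real bundles may omit `sourcesContent`, in which case only file names are recoverable):

```python
import json

def extract_sources(map_path: str) -> dict[str, str]:
    """Recover original source files embedded in a Source Map v3 file."""
    with open(map_path) as f:
        source_map = json.load(f)
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    # Pair each source path with its embedded content, skipping null entries.
    return {path: text for path, text in zip(sources, contents) if text is not None}
```

Given a bundle's `.map` file, iterating over the returned dict and writing each entry to disk rebuilds the original file tree, which is exactly why shipping maps for proprietary code is so dangerous.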
Most tools for working with code do one of two things:
- help you search
- help you generate

But neither really helps you understand an unfamiliar codebase. So I built something for that.

You give it a repository → ask a question → it answers based on the actual code. Not just "here are some files", but:
- how functions connect
- what depends on what
- where things might be breaking

Under the hood it's simple: AST parsing → function-level chunks → embeddings → call graph → constrained reasoning.

What made it interesting wasn't the pieces, but getting them to work together:
- balancing retrieval vs. context expansion
- keeping answers grounded (not generic)
- moving expensive work to indexing time

It's still very much a v1. It works best on local Python repos, and there are a lot of open problems left.

If you've worked on code search, developer tooling, or LLM systems, I'd genuinely value your thoughts.

Full breakdown here: 🔗 https://lnkd.in/eUGDgzzG
#SoftwareEngineering #AI #LLM #MachineLearning #DeveloperTools
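The first two stages of that pipeline (AST parsing → function-level chunks, plus naive call-graph edges) can be sketched for Python source with the stdlib `ast` module. This is a toy version under simplifying assumptions: it only catches plain-name calls and ignores methods, imports, and cross-module resolution, which the post's tool would need to handle:

```python
import ast

def index_functions(source: str):
    """Split a module into function-level chunks and record naive call edges."""
    tree = ast.parse(source)
    chunks, edges = {}, []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Chunk = the exact source text of this function definition.
            chunks[node.name] = ast.get_source_segment(source, node)
            # Edge = (caller, callee) for every plain-name call in the body.
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    edges.append((node.name, call.func.id))
    return chunks, edges
```

Each chunk is then a natural unit for embedding, and the edge list seeds the call graph that constrains retrieval, which is the "expensive work at indexing time" the post describes.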
Developers are finding new ways to tame the complexity of LLM and agent workflows. At the heart of this effort is hieuchaydi/RepoBrain, a local-first codebase memory engine for AI coding assistants.

RepoBrain indexes repositories, retrieves grounded evidence, traces logic flows, and ranks the safest files to inspect or edit before code generation. That matters because teams are trying to make agent behavior more reliable, not just more powerful.

What sets RepoBrain apart is that it provides actionable insights without requiring a hosted backend or an API key. Its capabilities include:
- local index + evidence-backed retrieval
- route/service/job flow hints for faster codebase orientation
- ranked edit targets with confidence scores and warnings
- built with Python

The momentum behind RepoBrain looks earned: the project is easy to place inside a real workflow, not just admire from a distance. It lands in high-interest areas like agents, AI agents, and LLMs, and recent commits make it feel active rather than abandoned. The project still feels early, which gives it some discovery momentum.

Repo: https://lnkd.in/ggAjSMGY
#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Python #RepoBrain #Agent #AiAgents
🤯 Something wild just happened in the AI dev world.

A package was pushed to npm… and it accidentally exposed how Claude-like coding agents actually work under the hood. Not just small snippets:
👉 Full client-side logic
👉 How agents plan, execute, and iterate
👉 How code generation + feedback loops work

Naturally, developers moved fast. Some downloaded it. Some mirrored it. And one developer even rebuilt the entire thing in Python, making it harder to take down. Now there's an open repo that behaves very similarly to Claude Code.
🔗 https://lnkd.in/gawyWMpY

But here's what I find most interesting: this shows how fast ideas spread in tech today. Even if something gets taken down, if developers see value, it gets replicated, improved, and redistributed.

The bigger takeaway? We're entering a phase where:
⚡ AI agents are no longer black boxes
🧠 Their workflows are becoming understandable
🛠️ And increasingly… reproducible

Which means the barrier to building powerful developer tools is dropping fast.

This isn't just about one leak. It's about how quickly the ecosystem learns, adapts, and rebuilds.

Curious: do you think this kind of openness accelerates innovation, or creates more risk? 👇

#AI #OpenSource #Developers #SoftwareEngineering #TechTrends
Developers are constantly looking for ways to streamline LLM and agent workflows, where teams often struggle to balance reliability and power. This is where ComposioHQ/awesome-codex-skills comes in: a curated list of practical Codex skills for automating workflows across the Codex CLI and API.

At its core, the repository is a collection of Python-based skills that can be dropped into existing workflows to make agent behavior more reliable, not just more powerful.

Key highlights:
- bernstein: a multi-agent orchestrator with a Codex CLI adapter that runs parallel Codex agents in isolated git worktrees with quality gates
- an introductory "What Are Codex Skills?" section explaining how these skills work
- built with Python

The traction makes sense: a repository sitting at #3 with around 637 new stars in the current trending window is usually solving a problem people can feel immediately. With its focus on making fast-moving AI workflows easier to steer and reuse in real projects, it's no wonder ComposioHQ/awesome-codex-skills is getting attention.

Repo: https://lnkd.in/eTmpF-UT
#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Python #AwesomeCodexSkills #Awesome #AwesomeLists