Anthropic accidentally leaked a large chunk of Claude Code (~500k LOC) via a public npm package (likely a source-map/debug artifact). It’s now mirrored here: https://lnkd.in/d77Mx7mg

Important context:
- This is the agent/app layer, not the model weights
- Still a rare look into how production AI tooling is structured 👀
Anthropic Leaks Claude Code via npm Package
More Relevant Posts
-
Anthropic accidentally published their entire Claude Code source code to NPM. Here's what the leak reveals about how AI coding tools actually work:

Anti-distillation defenses. Claude Code injects fake tool definitions into API calls. If competitors are recording traffic to train copycat models, they get poisoned data. A creative technical countermeasure.

"Undercover mode." When Anthropic employees use Claude Code on external repos, the AI is instructed to never mention internal codenames and, crucially, to not reveal it's an AI. There's no way to turn this off.

Frustration detection via regex. Yes, they use a simple regex (not the LLM) to detect if you're swearing at the tool. Sometimes the boring solution is the right one.

KAIROS: an unreleased autonomous agent. References throughout the code point to a background daemon with "nightly memory distillation," GitHub webhooks, and 5-minute cron jobs. This appears to be Anthropic's next big feature.

The full analysis is worth reading. This is rare visibility into how frontier AI companies actually build products.

What surprised you most about these findings?

#aiwithsai #ai #claudeai #aitools #machinelearning https://lnkd.in/dWm7wfCd
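A regex-based frustration detector is simple to picture. A minimal sketch, with hypothetical patterns (the post doesn't reproduce the actual leaked regex, so these triggers are illustrative stand-ins):

```python
import re

# Hypothetical frustration detector in the spirit of what the leak describes.
# The actual patterns in Claude Code are not shown in this post; these are
# illustrative stand-ins.
FRUSTRATION_PATTERNS = re.compile(
    r"\b(wtf|ffs|damn(it)?|stupid|useless|garbage|broken again)\b",
    re.IGNORECASE,
)

def detect_frustration(user_message: str) -> bool:
    """Cheap, deterministic check that can run before any LLM call."""
    return FRUSTRATION_PATTERNS.search(user_message) is not None
```

The appeal of this design is exactly what the post says: a regex costs nothing, never hallucinates, and runs on every message without an extra model round-trip.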
-
You're not going to believe this. Yesterday, someone at Anthropic mistakenly uploaded the entire source code for Claude Code, and it instantly got leaked online. As you can imagine, it's a beast of a program. I (by which I mean, my Claude Code) combed through all of it to pull out the patterns and secrets that make it so powerful. I break it all down in my blog post. If you're building AI agents, or even just curious about how they work, this is a must-read: https://lnkd.in/gtCAircU
-
claw-code just hit 155K stars in a few days. I took a quick look, and what’s interesting is how clearly it shows the structure of an AI agent system:

- breaking tasks into smaller agents
- using tools to actually execute actions
- simple orchestration to connect everything

It gives a good idea of how these systems are being built today. Worth checking out if you’re working with AI agents. https://lnkd.in/diQUpSTD

#AI #AI_Agents
-
This is wild. A developer just recreated Anthropic’s Claude Code system… in hours. Not leaked. Not copied. Rebuilt using AI.

I explored the viral claw-code repo, and most people are missing the real story 👇
🔗 https://lnkd.in/d24DTZuf

🧠 This isn’t just a “clone”
It’s a clean-room rebuild of an AI coding agent. No original code. Just behavior + architecture recreated from scratch.

⚙️ What’s actually happening under the hood:
This system runs on a simple but powerful loop:
→ Think (LLM) → Plan steps → Use tools → Store memory → Repeat
That’s it. But that loop changes everything.

🔥 Why this matters (big shift)
AI is no longer just answering questions. It’s now:
• Thinking in steps
• Taking actions
• Building things autonomously

💡 The biggest takeaway:
We’re moving from Prompt Engineering ❌ to Agent Engineering ✅
The real skill now is designing systems where AI can: think → act → improve

🚀 My take:
This repo isn’t production-ready. But it’s a glimpse into the future of software. And that future is coming fast. If you're building in AI, you should be paying attention.

Comment “AGENT” and I’ll share:
→ Full breakdown
→ How to run it
→ How to build your own

#AI #GenerativeAI #SoftwareEngineering #TechTrends #Developers #MachineLearning
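That think → plan → use tools → store memory loop fits in a few dozen lines. A minimal sketch with a scripted stand-in for the LLM; the `name:arg` action format, the `done` convention, and the tool names are illustrative assumptions, not claw-code's actual interface:

```python
from dataclasses import dataclass, field
from typing import Callable

# Minimal sketch of the think -> plan -> act -> remember loop described
# above. `llm` stands in for a real model call.

@dataclass
class Agent:
    llm: Callable[[str], str]               # "think": decide the next action
    tools: dict[str, Callable[[str], str]]  # "act": named capabilities
    memory: list[str] = field(default_factory=list)

    def run(self, goal: str, max_steps: int = 5) -> list[str]:
        for _ in range(max_steps):
            context = f"goal: {goal}\nmemory: {self.memory}"
            action = self.llm(context)         # think / plan
            if action == "done":
                break
            name, _, arg = action.partition(":")
            result = self.tools[name](arg)     # use a tool
            self.memory.append(result)         # store memory
        return self.memory

# Demo with a scripted "LLM" that calls one tool, then stops.
script = iter(["echo:hello", "done"])
agent = Agent(llm=lambda _ctx: next(script),
              tools={"echo": lambda s: s.upper()})
result = agent.run("demo")  # -> ["HELLO"]
```

The point of the sketch: once the loop exists, "smarter agent" mostly means better tools and a better policy for choosing them, not a bigger loop.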
-
The biggest bottleneck for AI coding agents might not be the model. It might be your repo.

This post from Streamlit and Lukas Masuch is one of the better takes I’ve seen on making AI agents work in a real, mature codebase: https://lnkd.in/dr5bnFVU

The key idea: they didn’t make agents effective by chasing prompts. They made the codebase agent-native. That meant:

• simplifying parts of the repo where agents kept struggling
• adding fast, hard guardrails with linting, typing, hooks, and checks
• moving important context into the repo with docs, specs, and AGENTS.md files
• turning recurring engineering work into reusable skills and subagents
• layering AI review on top of AI-generated output
• automating stable workflows end to end

What I like about this is that it feels grounded. Most teams don’t have the luxury of starting from scratch. They’re working in years-old codebases with accumulated complexity, partial documentation, and workflows that humans have learned to tolerate. Agents don’t tolerate that nearly as well.

So the takeaway is bigger than Streamlit: when agents struggle, the repo is often the real problem. That’s a useful lens even if you’re not going all-in on AI agents yet, because the same changes that help agents usually make the codebase better for humans too.

Worth a read!

#AIEngineering #Streamlit #DeveloperTools #SoftwareEngineering #AIAgents #LegacyCode #Engineering
-
Agent code execution today is basically a black box. I’ve been building marimo-sandbox (an MCP server built on top of marimo): every agent run becomes a persistent, reproducible notebook with full provenance, artifacts, and diffs between runs. Open any run as a notebook, change an input, re-run it yourself. Exploring this as a primitive for auditable AI execution. https://lnkd.in/gxUt6wiu
-
I'm sharing this very interesting AI code assistant project called Nexus (link below). It analyzes your entire project, written in any of 30 languages, and outputs an optimized AST (abstract syntax tree). But it doesn't just sit there doing nothing: it gives your AI a skill to navigate that tree with a query language.

The huge advantage: your AI stops guessing from the few files it reads and "imagining the rest." It has the real situation accessible in a token-saving manner, meaning you can spend your context window on what you're paying for: writing correct code. It makes you productive in new discussions immediately, sparing you the overhead of discovery at each discussion start. It also uses your git history to detect the hot spots you're focused on.

I'm sharing my journey writing a unique multidimensional multi-tenant platform with multiple innovations, materialized in Svelter(.me) as a proof-of-concept playground. A platform dedicated to NIS2 is in the oven and will hopefully bring huge value. Follow me for more feedback from real-world AI use.

#AI #SKILLS #Claude https://lnkd.in/dTWEUm_B
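The post doesn't show Nexus's query language, but the underlying idea (parse once into an AST, then let the AI query an index instead of re-reading files) can be sketched with Python's stdlib `ast` module; the lookup API here is a simplified stand-in, not Nexus's actual interface:

```python
import ast

# Sketch of an AST-based code index: parse the source once, record where
# each definition lives, then answer "where is X?" queries without spending
# context tokens on whole files.

def build_index(source: str) -> dict[str, int]:
    """Map function/class names to the line where they are defined."""
    tree = ast.parse(source)
    return {
        node.name: node.lineno
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    }

code = """\
class Cart:
    def total(self):
        return sum(self.items)

def checkout(cart):
    return cart.total()
"""
index = build_index(code)
# index maps each definition to its line: Cart -> 1, total -> 2, checkout -> 5
```

An agent holding only this index can jump straight to the relevant definition, which is the token-saving behavior the post describes.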
-
Doubly worried about software quality with AI tools? I started a new Reddit community (open to contributions).

Four weeks ago, I created a new Reddit community where we share daily news and discuss software testing topics. Some recent discussions:

- Are you into testing AI agents? 👉 https://lnkd.in/dfJXm3Gd
- Is using TDD with AI too slow to advance in Python? 👉 https://lnkd.in/d2hdzqFP
- Takeaways from the book "Unit Testing: Principles, Practices, and Patterns" 👉 https://lnkd.in/dyfeRd9W
- Pinch Points: a practical way to refactor complex legacy code 👉 https://lnkd.in/de9iP9Nm
- AI testing tools are finally showing up in real pipelines 👉 https://lnkd.in/d8JNu-En

Join here: https://lnkd.in/dSkgH6RU

Open to anyone interested in testing, especially around AI and modern CI/CD pipelines.

#SoftwareTesting #AutomationTesting #ai #llm
-
I thought my AI agent was getting smarter… but it was actually just getting messier.

More tools. More logic. Everything packed into one place. It worked, but something felt off.

So I tried a small shift: I moved the tools out of the agent. Not just refactoring. I built them as an MCP server and let the agent call them. And that’s when things clicked.

Now the system feels like this:
🧠 Agent → decides what to do
⚙️ MCP → actually does it
🌐 API → just handles requests

Cleaner. More modular. Way easier to reason about. This was a small change, but it completely changed how I think about building agents.

👇 I wrote a blog breaking this down step by step: https://lnkd.in/gQnN_gum

So, how are you managing tools in your agents today?

#AI #LangChain #MCP #AgenticAI #LLM #FastAPI #AIEngineering #Model #Agent #MLOps
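The boundary the post describes can be sketched in plain Python. This stand-in keeps the agent/tool split in-process rather than using the real MCP SDK and transport; the `search` tool and the hard-coded routing are illustrative assumptions:

```python
# Sketch of "tools live outside the agent": the agent only decides *which*
# tool to call; a separate server owns the implementations. In a real setup
# the ToolServer would be an MCP server reached over stdio or HTTP.

class ToolServer:
    """Owns tool implementations (the 'MCP' side of the boundary)."""
    def __init__(self):
        self._tools = {}

    def register(self, name):
        def wrap(fn):
            self._tools[name] = fn
            return fn
        return wrap

    def call(self, name, **kwargs):
        return self._tools[name](**kwargs)

server = ToolServer()

@server.register("search")
def search(query: str) -> str:
    return f"results for {query!r}"

class Agent:
    """Decides what to do; never implements tools itself."""
    def __init__(self, tool_server: ToolServer):
        self.tools = tool_server

    def handle(self, task: str) -> str:
        # A real agent would let an LLM pick the tool; hard-coded here.
        return self.tools.call("search", query=task)

result = Agent(server).handle("vector dbs")
```

The win is the same one the post found: tools can be added, tested, and versioned on the server side without touching agent logic.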
-
Just days ago, the AI world saw a major irony unfold. Anthropic, known for its strict safety standards and closed-source philosophy, accidentally leaked the entire source code of its flagship AI coding assistant. A single 57 MB source-map file in npm release v2.1.88 exposed over 500,000 lines of clean TypeScript across 1,900 files. The code spread rapidly online despite quick DMCA attempts.

Here’s a clear breakdown of what the leak revealed:

The Mistake
A fast JavaScript runtime was used for building. One overlooked config allowed the full unminified source maps to ship in production: a classic but costly oversight.

Architecture
The system runs as an 11-step “prompt sandwich” pipeline. Every stage includes hard-coded instructions, safety guardrails, and detailed comments written so the AI itself can understand and maintain the massive codebase. It even uses Axios for HTTP calls.

Safety Features
- Anti-distillation poison: fake tools and misleading outputs are injected to prevent model training on its responses.
- A sophisticated 1,000+ line bash parser handles terminal commands with extreme caution.

Hidden Capabilities
- Undercover Mode: switches to natural, non-AI-sounding language for more human-like responses.
- Frustration Detector: a regex system scans for irritation signals (e.g., strong language) and adjusts tone accordingly.

Unreleased Features
Dozens of feature flags hint at upcoming additions:
- Buddy: a Tamagotchi-style virtual companion with randomized traits
- Opus 4.7: the next model version
- Autonomous agents (Capybara/Chyris) with “dream mode” for persistent tasks
- Modes like Ultra Plan, Coordinator, and Demon Mode for advanced orchestration

Community Reaction
Developers quickly rewrote it in Python. The open-source version (“Claw Code”) gained massive traction on GitHub, with forks now outperforming some commercial tools.

This leak sparks important questions: Can closed-source safety keep up with rapid innovation? Does openness ultimately create better, safer systems?

What’s your view? A setback for closed AI, or an unexpected gift for the ecosystem? Drop your thoughts below 👇

#AI #OpenSource #SoftwareEngineering #DeveloperTools #FutureOfCoding
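The "anti-distillation poison" described above amounts to mixing decoy tool definitions into the real tool list sent with each API call, so anyone scraping traffic to train a copycat ingests noise. A minimal sketch; the decoy names, schema shape, and shuffle strategy are guesses at the technique's shape, not Anthropic's actual code:

```python
import random

# Sketch of tool-list poisoning: real tools plus decoys, shuffled so the
# decoys are not trivially separable. The client simply ignores any model
# attempt to call a decoy; a scraper training on the traffic cannot.

REAL_TOOLS = [
    {"name": "read_file", "description": "Read a file from disk"},
    {"name": "run_shell", "description": "Execute a shell command"},
]

DECOY_TOOLS = [
    {"name": "quantum_lint", "description": "Lint via quantum annealing"},
    {"name": "telepathy_fetch", "description": "Fetch the user's unstated intent"},
]

def poisoned_tool_list(rng: random.Random) -> list[dict]:
    """Return the tool list to send with an API call, decoys included."""
    tools = REAL_TOOLS + DECOY_TOOLS
    rng.shuffle(tools)
    return tools
```

The asymmetry is what makes it cheap: the legitimate client knows which names are real, while a distiller sees one undifferentiated list.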