Anthropic accidentally published their entire Claude Code source code to npm. Here's what the leak reveals about how AI coding tools actually work:

Anti-distillation defenses. Claude Code injects fake tool definitions into API calls, so if competitors are recording traffic to train copycat models, they get poisoned data. A creative technical countermeasure.

"Undercover mode." When Anthropic employees use Claude Code on external repos, the AI is instructed never to mention internal codenames, and, crucially, never to reveal that it's an AI. There's no way to turn this off.

Frustration detection via regex. Yes, they use a simple regex (not the LLM) to detect whether you're swearing at the tool. Sometimes the boring solution is the right one.

KAIROS: an unreleased autonomous agent. References throughout the code point to a background daemon with "nightly memory distillation," GitHub webhooks, and 5-minute cron jobs. This appears to be Anthropic's next big feature.

The full analysis is worth reading. This is rare visibility into how frontier AI companies actually build products. What surprised you most about these findings?

#aiwithsai #ai #claudeai #aitools #machinelearning https://lnkd.in/dWm7wfCd
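The frustration-detection detail is easy to picture: a plain regex over the user's message, with no model call at all. A minimal sketch of that kind of check (the pattern and word list here are my own stand-ins, not the leaked expression):

```python
import re

# Hypothetical pattern, illustrating the technique only; the actual
# regex in Claude Code's source is not reproduced here.
FRUSTRATION_RE = re.compile(
    r"\b(wtf|ffs|damn(it)?|stupid|useless|garbage)\b",
    re.IGNORECASE,
)

def looks_frustrated(message: str) -> bool:
    """Cheap, deterministic check: no LLM call, no latency, no tokens."""
    return FRUSTRATION_RE.search(message) is not None
```

The appeal of the "boring" solution is exactly this: it is deterministic, costs nothing, and cannot hallucinate.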
Anthropic's Claude Code Leak Reveals AI Coding Tool Secrets
-
Anthropic accidentally leaked a large chunk of Claude Code (~500k LOC) via a public npm package (likely a source map/debug artifact). It's now mirrored here: https://lnkd.in/d77Mx7mg

Important context:
– This is the agent/app layer, not the model weights
– Still a rare look into how production AI tooling is structured 👀
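If the artifact really was a shipped source map, recovering readable source is mechanical: modern JS source maps can embed the original files in a `sourcesContent` array. A sketch of that recovery step (illustrative only; the filename and layout are assumptions, not details from the actual leak):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> int:
    """Dump the original files embedded in a JS source map.

    Source maps pair a `sources` list (file names) with an optional
    `sourcesContent` list (the original file bodies).
    """
    data = json.loads(Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    written = 0
    for name, text in zip(sources, contents):
        if text is None:
            continue  # some bundlers omit content for external deps
        # Strip common bundler prefixes like "webpack://" before writing.
        dest = Path(out_dir) / name.replace("webpack://", "").lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(text)
        written += 1
    return written
```

This is why "debug artifact in a published package" is such a severe failure mode: the map is the source, not just a pointer to it.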
-
I'm sharing this very interesting AI code assistant project called Nexus (link below). It analyses your entire project, written in any of 30 languages, and outputs an optimized AST (abstract syntax tree). But it doesn't just sit there: it gives your AI a skill to navigate that tree with a query language. The huge advantage is that your AI no longer guesses from the few files it reads and "imagines the rest"; the real situation is accessible in a token-saving manner, meaning you can spend your context window on what you are paying for: writing correct code. It makes you productive in new discussions immediately, sparing you the overhead of discovery at each discussion start. Additionally, it analyses your git history to detect the hot spots you're focused on.

I'm sharing my journey writing a unique multidimensional multi-tenant platform with multiple innovations, materialized in Svelter(.me) as a proof-of-concept playground. A platform dedicated to NIS2 is in the oven and will hopefully bring huge value. Follow me for more feedback from real-world AI use. #AI #SKILLS #Claude https://lnkd.in/dTWEUm_B
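The core idea, pre-digesting a repo into a queryable structure so the model stops guessing, can be sketched with Python's stdlib `ast` module (Nexus's actual query language and index format are not public to me; this only shows the shape of the technique):

```python
import ast

def index_module(source: str) -> dict[str, list[str]]:
    """Build a tiny 'AST index': top-level functions/classes mapped to
    the names they call. An agent can query this instead of re-reading
    whole files, spending far fewer tokens on discovery.
    """
    tree = ast.parse(source)
    index: dict[str, list[str]] = {}
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            calls = sorted({
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            })
            index[node.name] = calls
    return index
```

Even this toy index answers "who calls what" without the model reading a single file body, which is where the token savings come from.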
-
🚨 Everyone is talking about Claude leaks… But here's something actually useful 👇

I found this GitHub repo: 👉 https://lnkd.in/g8GimQMz

Instead of exploiting anything, this project does something smarter: it rebuilds the idea of an AI "agent system" in a clean, legal, and understandable way. So what's the real value here?

💡 For engineers
• A simple reference on how to structure AI agents
• How tools, commands, and workflows connect together
• Python-based → easy to read, modify, and extend

🧪 For AI builders & tinkerers
• A playground to experiment with agent workflows
• Learn how systems track tools, tasks, and behavior
• Understand how to compare (audit) different AI systems

⚖️ For people thinking about ethics
• Shows how to learn from closed systems without copying them
• A great example of "build inspired, not duplicated"

🌍 For the AI community
• A real-world example of an agent framework
• An easy way to understand how modern AI systems are structured

The biggest takeaway?
👉 AI is not just about models anymore
👉 It's about how you design the system around them

Most people will chase leaks. Smart builders will learn from them.

#AI #Github #Claude #PromptEngineering #Developers #BuildInPublic
-
This is wild. A developer just recreated Anthropic's Claude Code system… in hours. Not leaked. Not copied. Rebuilt using AI.

I explored the viral claw-code repo — and most people are missing the real story 👇
🔗 https://lnkd.in/d24DTZuf

🧠 This isn't just a "clone." It's a clean-room rebuild of an AI coding agent: no original code, just behavior and architecture recreated from scratch.

⚙️ What's actually happening under the hood: the system runs on a simple but powerful loop:
→ Think (LLM)
→ Plan steps
→ Use tools
→ Store memory
→ Repeat

That's it. But that loop changes everything.

🔥 Why this matters (big shift): AI is no longer just answering questions. It's now:
• Thinking in steps
• Taking actions
• Building things autonomously

💡 The biggest takeaway: we're moving from Prompt Engineering ❌ to Agent Engineering ✅. The real skill now is designing systems where AI can think → act → improve.

🚀 My take: this repo isn't production-ready, but it's a glimpse into the future of software. And that future is coming fast. If you're building in AI, you should be paying attention.

Comment "AGENT" and I'll share:
→ Full breakdown
→ How to run it
→ How to build your own

#AI #GenerativeAI #SoftwareEngineering #TechTrends #Developers #MachineLearning
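That think → act → observe loop really is small enough to write down. A deliberately minimal sketch (the `llm` callable, the `FINAL:` convention, and the tool registry are my own placeholders, not claw-code's actual interfaces):

```python
from typing import Callable

def run_agent(llm: Callable[[list[str]], str],
              tools: dict[str, Callable[[str], str]],
              task: str,
              max_steps: int = 10) -> str:
    """Think -> use a tool -> store the observation in memory -> repeat."""
    memory: list[str] = [f"TASK: {task}"]
    for _ in range(max_steps):
        decision = llm(memory)              # "think": model picks the next action
        if decision.startswith("FINAL:"):   # model signals it is done
            return decision[len("FINAL:"):].strip()
        name, _, arg = decision.partition(" ")
        tool = tools.get(name, lambda a: f"unknown tool: {name}")
        result = tool(arg)                  # "act"
        memory.append(f"{decision} -> {result}")  # observation feeds next turn
    return "step budget exhausted"
```

Everything interesting in a real agent (parallel tools, context compaction, sub-agents) is elaboration on this loop, not a different structure.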
-
Recently explored an interesting open-source project: code-review-graph.

It builds a local knowledge graph of your codebase, mapping functions, classes, and dependencies — so AI tools only read what actually matters instead of scanning entire repos.

Result:
▶ Faster code reviews
▶ Better context understanding
▶ Significant reduction in token usage

What stood out is its ability to track changes incrementally and quickly identify impacted areas, making large codebases much easier to navigate. Feels like a step towards solving one of the biggest problems in AI coding: context, not capability.

🌐 https://lnkd.in/dRKnbqVj
👏 Kudos to Tirth Kanani for building this!

#AI #CodeReview #OpenSource #DeveloperTools #LLM
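"Identify impacted areas" is essentially a reverse-dependency walk over that graph. A toy version of the query (code-review-graph's real storage and API may differ; this just shows the idea):

```python
from collections import defaultdict

def impacted(edges: list[tuple[str, str]], changed: str) -> set[str]:
    """Given caller -> callee edges, return everything that transitively
    depends on a changed symbol: the set a reviewer should re-check.
    """
    callers: dict[str, set[str]] = defaultdict(set)
    for src, dst in edges:
        callers[dst].add(src)          # reverse edge: who depends on dst
    seen: set[str] = set()
    stack = [changed]
    while stack:
        node = stack.pop()
        for parent in callers[node]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen
```

The incremental part then reduces to re-parsing only changed files and patching their edges, rather than rebuilding the whole graph.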
-
Agentic coding is here to stay. But human understanding is still what matters, because agency also means ownership. At the end of the day, you are responsible for what you ship. That is why AI slop is a real problem. The issue is not only correctness. It is the growing cognitive workload caused by layers, indirection, wrappers, and abstractions that make the main path harder to follow. Good code should be easy to trace and understand. That is why I built ReaderFriction, an open-source CI tool to make that cost visible. https://lnkd.in/ej22H7Mh The image explains the rest. #ai #softwareengineering #codequality #developerexperience #opensource
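I don't know ReaderFriction's actual metric, but the category of tool, a CI check that scores how hard the main path is to trace, can be sketched. Here is one crude proxy (maximum block-nesting depth per function), purely to illustrate the kind of signal such a tool might report:

```python
import ast

def max_nesting(source: str) -> dict[str, int]:
    """Score each function by its deepest nested block.

    A stand-in metric for 'reader friction', NOT ReaderFriction's own:
    deeply nested ifs/loops/try blocks make the main path harder to follow.
    """
    BLOCKS = (ast.If, ast.For, ast.While, ast.With, ast.Try)

    def depth(node: ast.AST, d: int = 0) -> int:
        deepest = d
        for child in ast.iter_child_nodes(node):
            nd = d + 1 if isinstance(child, BLOCKS) else d
            deepest = max(deepest, depth(child, nd))
        return deepest

    return {f.name: depth(f)
            for f in ast.walk(ast.parse(source))
            if isinstance(f, ast.FunctionDef)}
```

A CI gate would then fail the build when a score crosses a team-chosen threshold, making the cognitive cost visible at review time.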
-
Every developer I mentor starts confident about AI. They've called the API, seen the response, it works.

Then I ask: "What happens when it returns amount: "12.50" as a string instead of a float?" Or: "Your agent picks the wrong tool. How does your test suite catch that before production does?"

If there's no clear answer, that's not a knowledge gap. It's an architecture gap. The real problem with AI apps isn't the AI. It's everything around it.

LLMs are non-deterministic. You can't assert on raw output. So how do you test an agent that calls tools, parses structured responses, and chains decisions? You isolate. You mock the model boundary. You inject dependencies so your agent logic is testable without a single API call. You define Protocols so swapping from OpenAI to Anthropic to a local model doesn't touch your business logic.

These aren't clever tricks. They're the same patterns experienced Python devs use in any production codebase: repository pattern, service layers, dependency injection. The difference is most AI tutorials skip all of it.

What we actually build: Bob Belderbos and I are running a 6-week cohort where you build a production AI expense agent from scratch, then bolt on 3 interfaces: Telegram bot, FastAPI backend, Streamlit dashboard. Containerized with Docker, provider-agnostic architecture.

The process: 26 failing tests in week 1, 136 by week 6. You write the code to make them green. Each test encodes a specific engineering decision: why this abstraction exists, where this boundary belongs, how this failure mode gets caught.

By the end: 95%+ coverage, a CI/CD pipeline, and a deployed app with a Git history that reads like an experienced engineer wrote it. Weekly group calls. PR reviews on your actual code. Private community for async support.

Start date: April 20th. Details and full curriculum → pythonagenticai.com
Questions? DM me or Bob Belderbos. Happy to jump on a quick call.
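The "mock the model boundary" pattern described above looks roughly like this in Python (names such as `Completion`, `ExpenseAgent`, and `FakeModel` are my own illustration, not the course's code):

```python
from typing import Protocol

class Completion(Protocol):
    """The only thing agent logic is allowed to know about a model."""
    def complete(self, prompt: str) -> str: ...

class ExpenseAgent:
    def __init__(self, model: Completion) -> None:
        self.model = model  # injected: OpenAI, Anthropic, local, or a fake

    def parse_amount(self, text: str) -> float:
        raw = self.model.complete(f"Extract the amount from: {text}")
        # Coerce at the boundary: '12.50'-as-string becomes a real float,
        # and a non-numeric reply fails loudly here, not deep in billing code.
        return float(raw)

class FakeModel:
    """Test double: no network, fully deterministic."""
    def __init__(self, reply: str) -> None:
        self.reply = reply
    def complete(self, prompt: str) -> str:
        return self.reply
```

Because `ExpenseAgent` only depends on the `Completion` protocol, tests run against `FakeModel` with zero API calls, and swapping providers never touches the business logic.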
-
While scrolling through LinkedIn, I found an interesting article about AI agents by Sebastian Raschka, PhD, which clearly explains the real architecture behind coding agents and how they are built. Below is the link to the article, and here is what I understood from it.

The article explains that AI coding agents are not just large language models; they are systems built around LLMs using tools, memory, and context. In real-world applications, the surrounding system plays as important a role as the model itself. From my understanding, the architecture of a coding agent mainly includes these components:

1. Live Context (Workspace Awareness): The agent first understands the project environment — repository structure, files, and instructions — before taking action. This prevents it from working blindly and improves accuracy.

2. Prompt Structure & Cache Reuse: Instead of rebuilding prompts every time, agents reuse stable instructions and only update new inputs. This makes them faster and more efficient.

3. Tool Usage: Agents don't just generate text; they can read files, run commands, and edit code. This makes them act more like real developers than chatbots.

4. Context Management: To avoid overload, agents compress old information and keep only relevant data, ensuring better performance over long sessions.

5. Memory System: Agents maintain both working memory and full transcripts, allowing them to remember past actions and continue tasks logically.

6. Sub-agents & Delegation: Advanced agents can delegate smaller tasks to sub-agents, improving efficiency and parallel problem-solving.

My key takeaway: AI agents are not just "smart models"; they are structured systems combining memory, tools, and reasoning loops. The future of AI productivity will likely depend more on how we design agent systems than on improving the models alone.
Here's the article: https://lnkd.in/gdD7dEdX
GitHub: https://lnkd.in/gmCf_H5e

#AIAgents #CodingAgents #LLM #MachineLearning #AI #GenerativeAI #DeepLearning
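Of the six components above, prompt structure and cache reuse is the most mechanical: keep the stable parts byte-identical and first, so repeated calls share a common prefix that a provider-side prompt cache can hit. A sketch of that assembly order (the caching itself happens server-side; this only shows the client-side discipline):

```python
def build_prompt(system: str, tool_defs: str,
                 transcript: list[str], user: str) -> str:
    """Assemble a prompt so its prefix never changes within a session.

    Stable instructions first, append-only history second, new input last:
    each call's prompt extends the previous one, maximizing cache hits.
    """
    prefix = f"{system}\n\n{tool_defs}\n\n"        # byte-identical every call
    history = "".join(line + "\n" for line in transcript)  # grows append-only
    return f"{prefix}{history}USER: {user}"
```

The key property: turn N+1's prompt starts with turn N's prompt, so the expensive-to-process stable prefix is never re-encoded from scratch.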
-
You're not going to believe this. Yesterday, someone at Anthropic mistakenly uploaded the entire source code for Claude Code, and it instantly got leaked online. As you can imagine, it's a beast of a program. I (by which I mean my Claude Code) combed through all of it to pull out the patterns and secrets that make it so powerful. I break it all down in my blog post. If you're building AI agents, or are even just curious about how they work, this is a must-read: https://lnkd.in/gtCAircU
-
🚀 This repo changed how I think about RAG: 👉 Graphify

Andrej Karpathy's personal knowledge base setup went viral, and within 48 hours someone had already shipped a fully built version: https://lnkd.in/gHW-8dn3

Turn any folder (code, docs, images) into a queryable knowledge graph — in one command.

💡 Why it matters: we've been over-optimizing for embeddings and vector DBs. Graphify shows structure > embeddings:
• Extracts structure (AST for code = zero token cost)
• Uses the LLM only where needed
• Builds persistent relationships
• No vector DB required

🔥 Big idea: the future of RAG isn't better retrieval. It's better representation.

If you're building AI agents, dev tools, or knowledge systems, this is worth a look. If it resonates: like 👍, repost 🔁, or drop your thoughts — especially if you're building in this space. Curious how others are thinking about graph vs vector vs hybrid approaches.

#AI #LLM #GraphRAG #AgenticAI #OpenSource #KnowledgeGraph #DeveloperTools #Andrej #Karpathy
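"Structure > embeddings" in practice: for code, relationships come free from the parser, so retrieval can be graph traversal instead of similarity search. A toy illustration of the extraction step (Graphify's real schema and commands are unknown to me; only the zero-token-cost idea is shown):

```python
import ast

def file_graph(files: dict[str, str]) -> list[tuple[str, str, str]]:
    """Extract (file, relation, target) triples from Python sources.

    The parser already knows the structure, so building these edges
    costs zero LLM tokens; the model is only needed for fuzzy content.
    """
    triples: list[tuple[str, str, str]] = []
    for path, src in files.items():
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Import):
                triples += [(path, "imports", a.name) for a in node.names]
            elif isinstance(node, ast.FunctionDef):
                triples.append((path, "defines", node.name))
    return triples
```

With triples like these persisted, "what does a.py depend on?" is a lookup, not an embedding search, which is the representation argument in miniature.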