Claw Code: The "Leaked" Claude Code Rebirth The Gist: Claw Code is an explosive open-source project (March/April 2026) that serves as a high-performance, clean-room rewrite of the architecture behind Anthropic’s Claude Code. It recently became the fastest repository in history to surpass 100,000 stars on GitHub, fueled by the viral "leak" of Claude’s internal agentic workflows. The Highlights: - Clean-Room Logic: After the Claude Code source was allegedly leaked, the Ultraworkers collective claims to have reverse-engineered the core "harness" without using proprietary code. It’s now written primarily in Rust for speed and memory safety. - Autonomous Coordination: Unlike simple chat-based coding assistants, Claw Code uses a three-part system (OmX, OmO, and clawhip) to allow multiple agents to coordinate in parallel. One agent plans, another executes, and a third reviews—all without human "babysitting." - "Clawable" Philosophy: The project focuses on "machine-first" automation. It removes human-centric barriers like fragile terminal prompts and opaque session states, allowing the AI to recover from errors and run test loops entirely on its own. - Discord as an IDE: The project promotes a "set and forget" workflow where a human can give a directive via Discord, and the "claws" (agents) handle the labor, pushing the final code to GitHub only once all tests pass. - The Controversy: The repo’s meteoric rise has sparked fierce debate. Critics on Reddit and GitHub have called the star-count "botted" and raised legal concerns, while fans see it as the "democratization" of elite AI engineering tools. The Bottom Line: Claw Code is more than a tool; it's a demonstration of the "Agentic Era." It proves that when you give AI a structured environment to plan and self-correct, the role of the human shifts from "typing code" to "directing the mission." 👉 Repository: https://lnkd.in/d6TWc4H6
More Relevant Posts
I've been building an open source CLI that runs AI coding agents for you. It breaks work into tasks, runs them in parallel across repos, then spawns a second model to review the first one's output. Shipped v0.2.5 today.

The bit worth mentioning: the planner now detects what tooling your project has (subagents, MCP servers, instruction files) and bakes delegation hints into generated tasks. The agent figures out on its own to route security diffs to your auditor or UI checks to Playwright. No configuration needed.

Works with Claude Code and GitHub Copilot. MIT licensed. https://lnkd.in/eaKu4yRm

#opensource #aicoding #devtools #claudecode #githubcopilot #cli #aiagents
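A rough idea of what "bakes delegation hints into generated tasks" could look like in practice, as a hypothetical Python sketch: check for common tooling markers, then attach hints to each planned task. The marker paths and hint strings are assumptions for illustration; the CLI's actual detection logic is not shown in the post.

```python
# Hypothetical sketch: detect project tooling and bake delegation hints into
# planned tasks. Marker paths and hint text are illustrative assumptions, not
# the CLI's actual implementation.
from pathlib import Path

TOOLING_MARKERS = {
    ".claude/agents": "subagents available: delegate specialist work (e.g. security audits) to them",
    ".mcp.json": "MCP servers configured: prefer MCP tools over raw shell commands",
    "CLAUDE.md": "instruction file present: follow its project conventions",
    "playwright.config.ts": "Playwright installed: route UI checks through browser tests",
}

def detect_tooling(repo: Path) -> list[str]:
    """Return one delegation hint per tooling marker found in the repo."""
    return [hint for marker, hint in TOOLING_MARKERS.items() if (repo / marker).exists()]

def plan_tasks(repo: Path, goals: list[str]) -> list[dict]:
    """Generate tasks with the detected hints baked into each task."""
    hints = detect_tooling(repo)
    return [{"goal": goal, "delegation_hints": hints} for goal in goals]

if __name__ == "__main__":
    for task in plan_tasks(Path("."), ["fix auth bug", "add dashboard test"]):
        print(task)
```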
3 features in one GitHub plugin fixed my vibe-coding. It's called Axme Code. Installed it last week. Here's what changed:

Before, I was:
• Re-explaining my stack on session 47 (again)
• Watching the agent stub half the code and call it "done"
• One typo away from a Friday-killing git push --force to main

The 3 features that flipped it:

1. Persistent memory across sessions
A knowledge base for your repo: stack, decisions, patterns, handoff from the last session. Every session starts with full context. No more pasting 200 lines into CLAUDE.md and hoping.

2. Architectural decisions with enforcement levels
Save decisions ("deploy via CI only", "no sync HTTP in async handlers") as required or advisory. The agent reads them as rules, not suggestions. Tests must pass before it reports "done."

3. Pre-execution safety hooks
Hooks intercept dangerous commands before they run — git push --force, rm -rf, writes to .env. Hard enforcement, not a prompt. 100% block rate, not 80%.

Vibe-coding fails because the system around the model has no memory and no guardrails.

PS: GitHub link in comments.

#AI #VibeCoding #ClaudeCode #DeveloperTools
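The pre-execution hook in point 3 is the most concrete of the three, so here is a minimal Python sketch of that kind of guard. The blocked patterns and function names are illustrative assumptions, not Axme Code's actual hook implementation.

```python
# Hypothetical pre-execution guard: refuse obviously destructive shell
# commands before an agent runs them. Patterns are illustrative, not Axme
# Code's actual rule set.
import re
import shlex
import subprocess

BLOCKED_PATTERNS = [
    r"git\s+push\b.*--force",   # force pushes
    r"\brm\s+-rf\b",            # recursive deletes
    r">\s*\.env\b",             # writes to .env
]

def is_blocked(command: str) -> bool:
    """True if the command matches any hard-blocked pattern."""
    return any(re.search(pattern, command) for pattern in BLOCKED_PATTERNS)

def guarded_run(command: str) -> int:
    """Run a shell command only if it passes the safety check."""
    if is_blocked(command):
        print(f"BLOCKED: {command}")
        return 1  # hard enforcement: refuse rather than warn
    return subprocess.run(shlex.split(command)).returncode

if __name__ == "__main__":
    guarded_run("git push origin main --force")  # prints BLOCKED, never executes
```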
If you're running Claude Code Pro Max and wondering why your quota vanished in 90 minutes, you might have a background session burning through it right now.

A GitHub issue posted April 12 caught my eye. A Pro Max user burned through their entire quota window in 90 minutes of moderate use. They dug into their session logs and found something troubling: cache_read tokens, which Anthropic advertises cost 1/10th the rate of regular input tokens, appear to be counting at full rate against the quota limit.

Here's the core issue. Leaving a computer for over an hour then continuing a stale session means a full cache miss. Each API call then sends 100k to 960k tokens at full rate. With 200+ calls per hour in normal tool-heavy usage, quota vanishes fast.

But the bigger problem: background sessions. Sessions left open in other terminal tabs continue making API calls for compacts, retrospectives, and hook processing. All of it hits the same shared quota bucket. You don't see it until the error messages start.

Boris from the Claude Code team showed up in the thread, confirmed the issue, and shipped a same-day workaround. Set `CLAUDE_CODE_AUTO_COMPACT_WINDOW=400000` to reduce the context window from 1M to 400k tokens, which dramatically cuts cache miss costs. Consider closing background sessions when you're not using them.

The pricing model says cache_read tokens cost 1/10th of regular tokens. If they're counting at full rate against quota limits, that's not a billing error. It's a structural mismatch between how the product is sold and how it's actually metered. Power users built workflows around the pricing model. If the model doesn't match reality, they deserve a straight answer, not a GitHub workaround.

#ClaudeCode #Anthropic #AI #DeveloperTools #Coding #Programming #AIBenchmarks #TechNews #HN #HackerNews #SoftwareEngineering #LLM #Pricing #OpenSource
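A quick back-of-the-envelope calculation shows why the cache-miss scenario drains quota so fast. The figures below are the examples quoted in the issue (200 calls per hour, 100k to 960k tokens per call), not measured billing data, and the mid-range token count is my assumption.

```python
# Back-of-the-envelope math with the figures quoted above; example numbers
# from the issue, not measured billing data.
calls_per_hour = 200        # tool-heavy usage, per the issue
tokens_per_call = 500_000   # assumed mid-range of the 100k-960k span cited
cache_read_discount = 0.1   # advertised rate for cache_read tokens

full_rate = calls_per_hour * tokens_per_call
if_discounted = full_rate * cache_read_discount

print(f"tokens counted per hour at full rate:        {full_rate:,}")
print(f"same traffic if cache_read counts at 1/10th: {if_discounted:,.0f}")
# 100,000,000 vs 10,000,000 - a 10x difference in how fast the quota drains
```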
130 million tokens saved. 15,720 commands. 88.9% efficiency. One free tool.

I keep seeing engineers posting about how fast they're burning through tokens running agentic coding workflows. And almost every time, they're diagnosing the wrong problem. It's not the AI reasoning that eats your budget. It's shell output.

Every git diff, grep, ls, cargo test, and docker ps your agent runs comes back as thousands of tokens of verbose noise. Full file trees. Complete test suites. Git logs with metadata nobody needs. All of it hitting your context window and your wallet.

RTK (Rust Token Killer) is a free, open source CLI proxy that compresses that output before your LLM ever sees it. Same information. A fraction of the tokens. Real user data from actual session tracking shows 60-90% savings on common dev commands. One user tracked 15,720 commands and saved 130 million tokens at an 88.9% efficiency rate.

Single Rust binary. Zero dependencies. Less than 10ms overhead. Works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Codex, Windsurf, and Cline. 19.5k GitHub stars.

For teams running agentic workflows at scale, this is a real cost lever. For individuals on usage-based plans, it stretches your budget significantly. For someone like me who burns through three to four Claude Max accounts in a week, this is a must-have.

Link in the comments.

#AIEngineering #DeveloperTools #AgenticCoding #ClaudeCode
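The underlying technique, shrinking verbose command output before the model sees it, is easy to sketch. Below is a hypothetical Python illustration of the idea; RTK itself is a Rust binary with per-command compactors, so treat this as a toy stand-in rather than its implementation.

```python
# Toy illustration of output compaction: run a dev command, keep the head and
# tail of its output, drop the noisy middle. RTK's real per-command compactors
# in Rust are far smarter; this only shows the principle.
import subprocess

def compact(text: str, head: int = 20, tail: int = 10) -> str:
    """Keep the first and last lines of verbose output, summarize the rest."""
    lines = text.splitlines()
    if len(lines) <= head + tail:
        return text
    omitted = len(lines) - head - tail
    return "\n".join(lines[:head] + [f"... [{omitted} lines omitted] ..."] + lines[-tail:])

def run_for_agent(command: list[str]) -> str:
    """Run a command and return a token-frugal version of its output."""
    result = subprocess.run(command, capture_output=True, text=True)
    return compact(result.stdout + result.stderr)

if __name__ == "__main__":
    print(run_for_agent(["git", "log", "--stat"]))
```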
🚨 Breaking: Fortune reports Anthropic’s Claude Code leaked via a packaging error this week. The vibe coding stack just got a trust stress test.

What changed: the bottleneck moved—from writing code to verifying it.

Why it matters now: speed is solved. Trust isn’t. If you ship with Lovable, Replit, Cursor, or GitHub Copilot, your edge is no longer keystrokes per minute—it’s proof. Expect enterprise buyers to demand SBOMs, signed artifacts
Built an AI-powered GitHub bot that automatically reviews pull requests and posts inline code comments within 60 seconds of opening a PR.

🚀 The problem it solves
Senior engineers spend a lot of time on first-pass reviews catching mechanical issues like:
• Missing error handling
• Security vulnerabilities
• Poor naming
This bot handles that automatically — so human reviewers can focus on architecture and logic.

⚙️ How it works
• Developer opens a PR on GitHub
• GitHub triggers a webhook instantly
• Bot reads only the changed lines (diff)
• Sends them to an LLM for analysis
• Posts structured inline comments directly on the PR

🛠️ What I built
• Secure webhook receiver with HMAC-SHA256 validation (prevents abuse & API drain).
• Custom diff parser to map file lines → GitHub diff positions (critical for review API).
• LangChain LCEL pipeline:
  → Prompt template → LLM → Pydantic output parser (structured comments)
• Redis-powered optimizations:
  → Content-based caching (no re-review for unchanged files)
  → Reduced response time from 3.8s → ~0s on repeat runs
  → Deduplication to prevent duplicate comments
• Direct GitHub API integration (bypassed abstraction issues when needed)

💻 Tech stack
• Python, FastAPI — backend & webhook handling
• LangChain — LLM orchestration (LCEL + structured output)
• Groq — Llama 3.3 70B model
• PyGitHub + requests — GitHub integration
• Redis — caching & deduplication

🔗 GitHub: https://lnkd.in/gha_ewFn
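The HMAC-SHA256 validation step is worth spelling out, since it is what keeps a bot like this (and its LLM budget) from being driven by forged webhooks. Below is a minimal FastAPI sketch of the standard GitHub signature check; the route path, secret source, and handler body are illustrative assumptions, not the author's code.

```python
# Minimal sketch of GitHub webhook signature validation in FastAPI. GitHub
# signs the raw request body with your webhook secret and sends
# "sha256=<hexdigest>" in the X-Hub-Signature-256 header; the receiver
# recomputes the HMAC and compares in constant time.
import hashlib
import hmac
import os

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = os.environ.get("GITHUB_WEBHOOK_SECRET", "change-me")  # illustrative config source

def signature_is_valid(body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = "sha256=" + hmac.new(WEBHOOK_SECRET.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

@app.post("/webhook")
async def webhook(request: Request, x_hub_signature_256: str = Header(default="")):
    body = await request.body()
    if not signature_is_valid(body, x_hub_signature_256):
        raise HTTPException(status_code=401, detail="invalid signature")
    payload = await request.json()
    # ... only now hand the PR diff to the LLM review pipeline ...
    return {"ok": True}
```

Using hmac.compare_digest rather than a plain equality check avoids leaking information about the expected signature through timing differences.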
The repo at the top of GitHub trending today isn't a model, an SDK, or a framework. It's mattpocock/skills — a folder Matt Pocock pulled out of his personal .claude directory and shipped with one install command. It picked up 7,000+ stars in 24 hours.

The thesis is uncomfortable: as agents accelerate development, the boring engineering disciplines matter more, not less. Codebases compound complexity faster than ever. Skills are the lever that keeps AI output aligned with how a senior engineer would actually build.

Six skills carry most of the weight:
/grill-with-docs — forces shared domain language before any code is written.
/tdd — red-green-refactor as an enforced loop, not a vibe.
/diagnose — structured debugging instead of "try this and see."
/improve-codebase-architecture — periodic architectural sweeps to fight decay.
/grill-me — interrogates decision trees before they fork.
/caveman — ~75% token reduction for long agent runs.

Install: npx skills@latest add mattpocock/skills, then run /setup-matt-pocock-skills to wire it to your repo.
Once again, what a week! 🤩

My PR to add riscv64gc to wasm-tools release artifacts landed in v1.246.0 on March 31st. The Bytecode Alliance now ships native RISC-V binaries as part of their regular release. It's a small change in the diff, but it means anyone using wasm-tools on RISC-V can grab a prebuilt tarball instead of compiling from source. That matters for CI pipelines. wasm-pkg-tools followed the same week: PR merged, riscv64 binaries will ship with the next release. The WASM toolchain on RISC-V is filling in fast.

53 releases went out across 22 forks this week. A few worth pointing out:

- Docker components updated across the board. BuildKit v0.29.0 and Buildx v0.33.0 are now available for riscv64 via ghcr.io. The container agent hit v1.41.0. Installing Docker on a RISC-V board and getting multi-platform builds working is no longer a weekend project.
- Mistral AI Vibe v2.7.2, the CLI coding assistant from Mistral AI, now has a riscv64 binary. This build includes keyboard shortcuts for word-wise cursor movement and MCP server fixes from upstream. If you're trying to run AI developer tooling natively on RISC-V, this one works.
- On the Python side, grpcio 1.80.0 hit a stable release for riscv64 this week. gRPC unlocks a lot of ML inference pipelines that depend on it. The riscv64-python-wheels index also added jiter 0.13.0, llama-cpp-python 0.3.16, sentencepiece 0.2.2, and blake3 1.0.8, all built on native RISE runners.

One gap still open: tokenizers from Hugging Face has the upstream PR merged (BayLibre's work), but no release with riscv64 wheels yet. Waiting on that one.

Where are you seeing the biggest missing-package friction on RISC-V?

#RISCV #RISCVEverywhere #OpenSource #Docker #Python #WebAssembly #devEco
GitHub just built a package manager for AI agent behavior.

`gh skill` shipped April 16. One command to install portable sets of instructions, scripts, and resources that teach AI agents how to work - and they work the same on Copilot, Claude Code, Cursor, Codex, and Gemini CLI. That cross-platform part is new.

● Cross-agent portability - a skill you install works on any supported agent runtime, no per-tool adaptation needed; uses the open Agent Skills specification
● Version control built in - pin to a git tag or SHA, and `gh skill update` compares actual file content hashes, not version labels, so you only update when something real changed
● Security note - skills are community-sourced and unverified; GitHub explicitly warns about prompt injection risks and recommends `gh skill preview` before any install from unknown sources

This is what a proper package ecosystem for agent behavior looks like early on. Installing shared agent knowledge the way you install npm packages is going to compound fast.

What kind of agent skill would you want on a public registry first?

#GitHubCopilot #AIAgents #DeveloperTools #AgentSkills
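The content-hash update check is a nice detail and easy to illustrate. Here is a hypothetical Python sketch of comparing an installed skill directory against a freshly fetched copy by hashing file contents; it shows the general technique, not GitHub's implementation.

```python
# Hypothetical sketch of content-hash comparison for skill updates: hash every
# file in the installed copy and the candidate copy, and report an update only
# when actual content differs. Not GitHub's implementation.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to the SHA-256 of its contents."""
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

def needs_update(installed: Path, candidate: Path) -> bool:
    """True only if file contents differ, regardless of version labels."""
    return tree_hashes(installed) != tree_hashes(candidate)

if __name__ == "__main__":
    print(needs_update(Path("installed-skill"), Path("fetched-skill")))
```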
In 2026, tokens are the new developer currency. Stop spending yours on "Orientation Tax."

I built a VS Code extension that cuts GitHub Copilot's token usage by 40–95%. It's now live on the Marketplace.

Here's the problem: Every time Copilot needs to understand your code, it reads entire files — thousands of tokens just to figure out where a function lives. That's like reading an entire book to find one chapter title.

TokenSlayer fixes this by giving Copilot compact structural skeletons instead of raw files:
→ Before: 1,200 lines of raw code = 5,000 tokens
→ After: 8-line structural skeleton = 200 tokens

How it works:
⚡ Single LSP call extracts the full symbol tree
🧠 Language-specific compactors strip bodies, keep signatures
📦 Content-hash LRU cache — instant on repeat access
🔧 Registers as a Language Model Tool that Copilot calls autonomously

The dashboard alone took a wild turn:
📊 Real-time donut chart + coverage ring
🏆 Top savers leaderboard with medals
📈 Session timeline sparkline
🛡️ Automatic secrets detection — blocks API keys, tokens, private keys from ever reaching the LLM

Languages: TypeScript, JavaScript, Python, Go, Java, Rust

The design decision that made it fast: I intentionally cut Call Graph Extractor, Type Hierarchy Extractor, and Query Matcher. This keeps overhead at exactly 1 API call per file instead of 60+. Sometimes the best feature is the one you don't build.

🔗 Install from VS Code Marketplace: https://lnkd.in/e8iY364q
📂 GitHub: https://lnkd.in/eZu_H9_B
📄 MIT Licensed | Built with TypeScript

If you're spending money on AI coding tokens, this might save you a lot.

#OpenSource #VSCode #GitHubCopilot #AI #DeveloperTools #TypeScript #LLM #TokenOptimization #CodingTools #SoftwareEngineering
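The "strip bodies, keep signatures" step is where the token savings come from, and the principle is easy to demonstrate. Below is a small Python sketch using the standard ast module; TokenSlayer itself is written in TypeScript and works through VS Code's LSP and language-specific compactors, so this only illustrates the compaction idea.

```python
# Illustration of the "strip bodies, keep signatures" principle using Python's
# ast module. TokenSlayer works through VS Code's LSP with language-specific
# compactors; this sketch only demonstrates the idea.
import ast

def skeleton(source: str) -> str:
    """Return top-level class and function signatures with all bodies dropped."""
    out = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            out.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            out.append(f"class {node.name}:")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    args = ", ".join(a.arg for a in item.args.args)
                    out.append(f"    def {item.name}({args}): ...")
    return "\n".join(out)

if __name__ == "__main__":
    with open(__file__) as f:
        print(skeleton(f.read()))  # a handful of signature lines instead of the whole file
```

The model still sees where everything lives; it just stops paying for the bodies it does not need yet.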