Renaming a symbol across a codebase with text search will break things. It matches strings, not definitions. A variable named user gets renamed alongside the class User. Import paths get touched. Comments get changed.

Toolpack SDK ships 12 coding tools that work at the AST level. They understand the structure of your code, not just the text. coding.refactor_rename finds every reference to a symbol. The definition, every import, every usage. All renamed correctly. coding.find_references shows exactly which files are affected before you make the change. coding.get_diagnostics checks for new errors after. coding.multi_file_edit handles refactors that touch multiple files. All edits succeed or all of them roll back. No partial changes left behind.

The navigation tools cover the other half. coding.find_symbol locates definitions. coding.get_call_hierarchy traces who calls what. coding.get_outline maps the structure of a file. Useful before making changes, not just during.

JavaScript and TypeScript get full support through Babel. Python, Go, Rust, Java, C, and C++ get symbol navigation through Tree-sitter. All 12 tools available with tools: true. No external services.

Wrote up all 12 with examples: https://lnkd.in/grcfacP6

#AI #LLM #TypeScript #DeveloperTools #OpenSource #SoftwareEngineering #AIAgents
Refactor Code with Toolpack SDK
-
Definitely worth following for anyone trying to understand AI beyond the hype. Different industry, same underlying principles. In many ways, healthcare is behind where fintech already is… especially in how AI is integrated into workflows and decision-making. There’s a lot to learn from how other industries are operationalizing this at scale.
Claude Code's source code just leaked. Here's what it actually reveals. 💡

On April 1st, someone found Anthropic accidentally left Claude Code's source (~500k lines) in an npm source map. I thought it was a prank. It wasn't. Thousands of GitHub forks within hours. Someone rewrote it in Python. The genie is out of the bottle.

But here's the interesting part. It's not about the code itself. What the leak shows is that Claude Code isn't a coding assistant. It's a full multi-agent orchestration system = a harness. The model is just the brain. The harness is everything around it - and that's where the real engineering lives.

Top 3 architectural revelations from the code:

1️⃣ File-based persistent memory
CC builds a structured memory system across conversations. Not chat history - actual knowledge files with metadata: who the user is, their preferences, what approaches worked or failed. The agent maintains, updates, and prunes this memory autonomously. Next session, it already knows your codebase, your style, your context. This separates "a chatbot that forgets" from "an agent that learns."

2️⃣ Multi-agent orchestration with isolation
The harness spawns parallel sub-agents, each with its own context, toolset, and isolated git worktree. One explores, another designs, a third tests. They run concurrently and the orchestrator synthesizes results. This isn't "LLM calls LLM." This is process management with delegation, coordination, and quality gates.

3️⃣ Plan mode (think before you code)
An explicit read-only planning phase where the agent is physically prevented from editing files (except the plan). Explore → design → review → only then execute. Force the agent to understand before it acts. Result? Better code quality and fewer "let me just try something" loops.

What this means for how you use it: the leaked code confirms that the harness (not the model) defines the quality of output.
If you're using CC without investing in the system around it, you're driving a Ferrari in first gear. Practical things the code reveals you should be doing:

→ Write CLAUDE.md files. This is how you "program" the harness - conventions, architecture, what to avoid. The agent reads these every session. Think of it as onboarding a new team member, permanently.

→ Use compact/autocompact, not new threads. The context pipeline is built for long sessions. Starting fresh throws away understanding. Resume, don't restart.

→ Let it plan before it codes. Plan mode exists for a reason. For complex tasks, let the agent explore and propose before executing.

→ Think in agents, not prompts. The architecture is built for delegation. If you're writing one-shot prompts, you're using 10% of what's there.

The real takeaway: the competitive edge is no longer which model you use. It's how well you designed the harness around it. Orchestration > model. System design > prompting.
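A CLAUDE.md file doesn't need to be long to steer the harness. As a hedged sketch (the directory names and rules here are invented for illustration, not from the leaked code), something this small is enough to "program" every session:

```markdown
# CLAUDE.md — project conventions (hypothetical example)

## Architecture
- HTTP handlers live in src/api/, business logic in src/core/.
- src/core/ must never import from src/api/.

## Conventions
- Tests use pytest fixtures, never setUp/tearDown.
- Every public function gets type hints and a one-line docstring.

## What to avoid
- Do not edit generated files under src/generated/.
- Prefer small, reviewable diffs over sweeping refactors.
```

The point is that the agent re-reads this at session start, so conventions written once apply permanently.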
-
After 13 months of hard work, 23k stars on GitHub and tens of thousands of users, we proudly announce the first stable release and the first evaluations of Serena. I deliberately posted almost nothing about it in the last year, waiting for this moment. We put all our expertise, creativity, heart and soul into this project. The result is a toolbox that benefits every coding agent.

The fully open-source variant using language servers is already very powerful, but the JetBrains variant (for 5€/month) offers first-in-class features that nothing else comes close to. Features like moving symbols, files and packages while updating all references, exploring external dependencies, type hierarchies, propagating deletions, etc. can have a big impact.

But you don't have to take my word for it. We performed an evaluation of Serena (for the first time) - letting the agents evaluate the value added by Serena's tools on their own. Read about our methodology here: https://lnkd.in/gMRqwA-4 You can also easily run the evaluation yourself with an agent and project of your choice.

Here is what the agents had to say:

Opus 4.6 (high effort) in Claude Code on a large Python codebase: > "Serena's IDE-backed semantic tools are the single most impactful addition to my toolkit — cross-file renames, moves, and reference lookups that would cost me 8–12 careful, error-prone steps collapse into one atomic call, and I would absolutely ask any developer I work with to set them up."

GPT 5.4 (high) in Codex CLI on a Java codebase: > "As a coding AI agent, I would ask my owner to add Serena because it gives me the missing IDE-level understanding of symbols, references, and refactorings, turning fragile text surgery into calmer, faster, more confident code changes where semantics matter."

A personal note in the end: we put a year of hard work into this. I have never worked on anything as broadly useful as Serena.
For JetBrains users, spending 5€/month on a plugin that brings first-in-class refactoring capabilities to your agent should be a no-brainer. And hopefully, this no-brainer will generate enough income for us to keep building! https://lnkd.in/drn-x4iX
-
EdgeCrab 🦀 - Your Personal Super Agent Compatible Nous Hermes Agent EcoSystem in Rust ...

Most coding agents feel like duct-taping a Python script to an API. You install a 2 GB venv, wait 10 seconds to boot, and hope the runtime does not interfere with your project. There is a better way.

👉 WHY EdgeCrab exists
The Nous Hermes Agent project pioneered something important: an agent that reasons autonomously, remembers across sessions, learns new skills, and respects user alignment. Thousands of developers adopted it. But it ran on Python, and Python has a cost: cold starts, bundled interpreters, runtime fragility. EdgeCrab was built to answer one question: what if you kept the Nous Hermes soul — the reasoning loop, the memory, the skills, the plugins, the alignment — and rebuilt the entire engine in Rust? The answer is a single ~49 MB binary. No Python. No Node. No runtime. Just run it.

👉 Full drop-in compatibility with Nous Hermes
If you already use Nous Hermes Agent, EdgeCrab is a zero-rework upgrade. Every skill you wrote for Hermes (.md files in ~/.hermes/skills/) works in EdgeCrab unchanged. Every plugin drops in. Memories migrate with a single command: edgecrab migrate. The 90+ turn ReAct reasoning loop behaves identically. Compatible toolsets: file, web, terminal, vision, memory, delegation, MCP, and more. This is not a rewrite that breaks your workflow. It is the same workflow, compiled.

👉 Where EdgeCrab is sharper for coding
For coding specifically, Rust brings properties that Python never could:
- No GIL. Parallel tool execution with real OS threads. File reads, web searches, and shell commands run concurrently inside a single agent turn.
- Safety-first I/O. Every file operation is path-jailed before execution. SSRF guards block private network fetches. No silent privilege escalation.
- LSP integration. EdgeCrab speaks the Language Server Protocol natively.
It understands Go to Definition, Find References, and Rename Symbol — not by running your language server as a subprocess child but by talking its protocol directly.
- Sandboxed code execution. Run arbitrary code inside a per-session Docker or process jail with resource constraints, not in your live shell.
- 1,629 tests. The tooling is verified at the unit level before any user touches it.

👉 The value proposition
Nous Hermes: soul, alignment, reasoning. Python.
EdgeCrab: same soul. Same alignment. Rust speed. Native binary. Gateway presence on 15 messaging platforms (Telegram, Discord, Slack, WhatsApp, Signal, Matrix, and more). Zero cold start.

Install in 30 seconds: npm install -g edgecrab-cli
Your Hermes skills work on day one.

👉 Try it ... it is an early version; the more testing and feedback we get from the community, the better we can make it. Links in comments. If you built anything on Nous Hermes Agent — or always wanted to but balked at the Python overhead — EdgeCrab is for you.
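"Talking the protocol directly" means framing JSON-RPC messages the way the Language Server Protocol specifies: a Content-Length header, a blank line, then the JSON body. Here is a minimal Python sketch of that framing (not EdgeCrab's code; the file URI and position are made up) for a textDocument/rename request:

```python
import json

def lsp_frame(method: str, params: dict, msg_id: int = 1) -> bytes:
    """Frame a JSON-RPC request per the LSP base protocol:
    'Content-Length: N\\r\\n\\r\\n' followed by the UTF-8 JSON body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A rename request: rename the symbol at line 10, column 4 (0-based).
msg = lsp_frame("textDocument/rename", {
    "textDocument": {"uri": "file:///src/main.rs"},
    "position": {"line": 10, "character": 4},
    "newName": "parse_config",
})
print(msg.decode("utf-8").split("\r\n\r\n")[0])
```

An agent that speaks this framing can drive any language server's rename, references, and definition features over stdin/stdout without a subprocess wrapper per editor.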
-
AI coding agents burn most of their context window just navigating your codebase. I built a tool that fixes this.

Every time an agent needs to understand a function, it takes 5-6 tool calls of grep and read loops. It has no dependency awareness, no memory of project structure, and rediscovers your architecture from scratch every session. I built codesight to solve this. It's a Go CLI that uses tree-sitter to parse your code (Go, TypeScript, Python, C#, Rust, Java, JavaScript) and generates a .codesight/ folder of structured Markdown files that serve as a knowledge layer between your code and AI agents.

WHAT IT GENERATES
Each package gets its own MD file with extracted API surfaces (full function signatures with file:line references), type definitions with fields and methods, a bidirectional dependency graph (imports + imported-by), and linked test files. On top of that, it generates PRD-style feature specs with requirement checklists derived from actual code, and a symbol-level changelog that tracks what changed between syncs.

BENCHMARKS (1,943-file .NET monorepo)
"How does Login work?" went from 41K chars across 6 calls to 3K chars in 2 calls (92% reduction).
"All auth endpoints?" went from 84K+ chars and 10+ calls to 3K chars in 1 call.
Reverse dependency queries ("what calls this?") are instant. With grep they're effectively impossible.
Search latency: 0.37s vs 11.4s.

HOW SYNCING WORKS
codesight hashes file contents and only regenerates MDs for packages with actual changes. Each MD has two zones: tree-sitter owns the top half (API surface, types, deps) and regenerates it on sync. The bottom half (architecture notes, usage examples, gotchas) is preserved across syncs, so nothing written by an agent or human gets overwritten.

CLAUDE CODE INTEGRATION
"codesight init" wires up SessionStart and PostToolUse hooks in .claude/settings.json. The agent gets project status on every session start and the vault auto-syncs after every git commit.
It also generates a skill file so the agent knows how to use search, task, and status commands out of the box. The core idea: agents don't need to read raw source files to reason about your system. They need package-level abstractions with enough detail to trace dependencies and understand boundaries, without drowning in implementation. That's the level tree-sitter lets you extract reliably. Open source. https://lnkd.in/dSG9FU6V #OpenSource #DeveloperTools #AI #ClaudeCode
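To make the two-zone idea concrete, here is what one generated package file might look like. This layout is a hedged guess from the description above (the package name, signatures, and zone markers are invented), not codesight's actual output:

```markdown
# package: auth (hypothetical sketch of a generated .codesight file)

<!-- tool-owned zone: regenerated on every sync -->
## API surface
- func Login(user, pass string) (Token, error) — auth/login.go:42
- func Refresh(t Token) (Token, error) — auth/refresh.go:17

## Dependencies
imports: crypto, store
imported-by: api, admin

## Tests
- auth/login_test.go

<!-- preserved zone: notes below survive syncs -->
## Notes
Login is rate-limited per IP; Refresh must be called before token expiry.
```

The top half answers "what's here and who depends on it" in a few hundred characters; the bottom half is where agents and humans accumulate architecture notes that hashing-based regeneration never touches.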
-
Meet GitNexus: An Open-Source MCP-Native Knowledge Graph Engine That Gives Claude Code and Cursor Full Codebase Structural Awareness - MarkTechPost https://lnkd.in/depXdeds

There is enough value in this to warrant a try. But I'd always tell other prompt engineers first to use types and interfaces, or the equivalent in the language they use. No types means less context, more inaccuracy and hallucinations. You can easily go from JavaScript to TypeScript in the same repo, gradually increasing the strictness. Python has types as optional, and you simply need to start using them. Over time your codebase will become easier to vibe code.
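Since Python's type hints are optional, "start using them" can be a zero-risk, incremental change. A minimal illustration (the function and data are invented for the example) of annotating one function without touching its behavior:

```python
from typing import Optional

# Before: def find_user(users, email): ... — the agent must guess what
# flows through. After: hints added gradually, no behavior change.
def find_user(users: list[dict], email: str) -> Optional[dict]:
    """Return the first user dict whose 'email' matches, else None."""
    for user in users:
        if user.get("email") == email:
            return user
    return None

users = [{"email": "a@example.com", "name": "Ada"}]
assert find_user(users, "a@example.com")["name"] == "Ada"
assert find_user(users, "x@example.com") is None
```

The signature now carries the context a model would otherwise burn tool calls rediscovering, and a checker like mypy can enforce it as strictness increases.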
-
CodeWithMe Project Overview

CodeWithMe is a full-stack, online coding platform that allows users to browse, solve, and submit solutions to Data Structures & Algorithms (DSA) problems directly in the browser. The platform features an integrated code editor, real-time code execution via Judge0, an AI-powered DSA tutor (Google Gemini), video editorial solutions hosted on Cloudinary, and a complete admin panel for content management. Deployment: Frontend on Vercel, Backend on Render.

How It Works
For Regular Users: Sign Up, Browse Problems, Open a Problem, Write Code, Run Code, Submit Code, Get AI Help, Watch Editorial, View Solutions, Track Progress.
For Admin Users: Access Admin Panel, Create Problem, Update Problem, Delete Problem, Upload Video Solution, Delete Video.

How code execution works:
- User submits code + language
- Backend maps language to Judge0 language ID (JS=102, Java=62, C++=76)
- Each test case is sent sequentially to Judge0 CE API (ce.judge0.com) with wait=true
- Judge0 returns status (Accepted=3, Wrong Answer=4, TLE=5, etc.)
- Results are aggregated and saved as a Submission document
- If all test cases pass → status = "accepted" and the problem is added to user.problemSolved

AI System Prompt Capabilities:
- Acts as a DSA tutor scoped to the current problem only
- Provides hints, debugs code, explains solutions, analyzes complexity
- Refuses non-DSA topics
- Receives full problem context (title, description, test cases, starter code)

RateLimiter: Two layers of protection using Redis:
- 10-second cooldown (LeetCode-style) — uses Redis SET NX EX for a per-route+IP lock.
- Sliding window — max 60 requests per hour, implemented via Redis Sorted Sets.
Tech Stack Summary: ⚛️ Frontend Framework: React 19 + Vite 7 🎨 Styling: TailwindCSS 4 + DaisyUI 5 🧠 State Management: Redux Toolkit (RTK) 📝 Forms & Validation: React Hook Form + Zod 💻 Code Editor: Monaco Editor (@monaco-editor/react) 🌐 HTTP Client: Axios 🛣️ Routing: React Router v7 ↔️ UI Panels: react-resizable-panels 📖 Markdown Rendering: react-markdown + remark-gfm ✨ Icons: Lucide React 🟢 Backend Runtime: Node.js + Express 5 🍃 Database: MongoDB Atlas (Mongoose 9) ⚡ Caching / Token Blocklist: Redis (Mock in dev, Redis Cloud in prod) 🔐 Authentication: JWT + bcrypt + cookie-parser ⚙️ Code Execution: Judge0 CE (hosted API) 🤖 AI Assistant: Google Gemini (@google/generai) ☁️ Video Hosting: Cloudinary ✅ Input Validation: validator.js (backend) 🚦 Rate Limiting: Custom Redis-based middleware Deployed Link: https://lnkd.in/g-HSStaj #ReactJS #ViteJS #NodeJS #ExpressJS #JavaScript #TypeScript #WebDev #FullStack #MongoDB #Mongoose #Redis #Cloudinary #BackendDevelopment #DatabaseDesign #API #AI #GoogleGemini #GenerativeAI #Judge0 #MonacoEditor #CodingPlatform #SoftwareEngineering #ReduxToolkit #RTK #ReactHookForm #Zod #Authentication #JWT #CyberSecurity
-
Why Coding Agents Aren’t Replacing Senior Engineers (Yet)
-------------------------------------------------------------
Code agents are excellent at the fundamentals - understanding function relationships, identifying gaps, and suggesting missing functionality. They save a lot of time and reduce overhead (it's a massive time saving when debugging, compared to Stack Overflow, for example). But what happens when more complex considerations are introduced?

I asked an LLM to write a simple function for an analyzer of a massive table (100B entries, 50% text). The function should detect if a cell is a number (incl. percentage) or text (including dates, etc.). It suggested a standard try/except block using float():

    def _is_pure_numeric(s: str) -> bool:
        try:
            float(s.replace(",", "").replace("%", ""))
            return True
        except ValueError:
            return False

This is a reasonable solution. In modern Python runtimes (2026), this follows the "Zero-Cost Exception" model. On the success path (the try succeeds), the overhead is nearly non-existent (~0.05μs). On paper, it's faster than regex or string methods. The agent definitely understands the syntax, but it lacks context about the dataset.

When exceptions are actually thrown (text-heavy datasets), Python's "zero-cost" promise vanishes. It constructs traceback objects, searches for exception handlers, and cleans up memory, jumping from 0.05μs to 1.0μs per operation. The impact: for a system processing 500B entries (50% text), this will take 28 hours (!!) using try/except.

But what are the alternatives? One option is string methods:

    @staticmethod
    def _is_pure_numeric(s: str) -> bool:
        clean_s = s.replace(",", "").strip()
        if not clean_s:
            return False
        if clean_s.startswith(('-', '+')):
            clean_s = clean_s[1:]
        return clean_s != "." and clean_s.replace(".", "", 1).isdigit()

This takes (according to Gemini) 0.07µs-0.09µs, roughly 10x faster for text. In a real-world use case, it cuts the try/except approach's 28 hours to under 4.5.
When working at scale, this is massive. The bottom line is that while coding agents are incredible productivity multipliers, they don't yet think like skilled engineers. They optimize for correctness, not for the nuanced considerations that separate good code from production-ready code. (Cover image was generated with an LLM :) )
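The two approaches above are easy to compare yourself. A self-contained sketch (the function names are mine; the string version also strips "%" so the two are comparable) that checks they agree on typical inputs and times the text-heavy failure path with timeit:

```python
import timeit

def numeric_try(s: str) -> bool:
    """try/except version: cheap on success, expensive on exceptions."""
    try:
        float(s.replace(",", "").replace("%", ""))
        return True
    except ValueError:
        return False

def numeric_str(s: str) -> bool:
    """String-method version: no exception machinery on the text path."""
    clean = s.replace(",", "").replace("%", "").strip()
    if not clean:
        return False
    if clean.startswith(("-", "+")):
        clean = clean[1:]
    return clean != "." and clean.replace(".", "", 1).isdigit()

# The two agree on common cases...
for s in ["1,234", "12.5%", "-3.14", "hello", ""]:
    assert numeric_try(s) == numeric_str(s), s

# ...but diverge on scientific notation: float() accepts "1e5".
assert numeric_try("1e5") and not numeric_str("1e5")

# The failure path is where try/except pays: time both on pure text.
t_try = timeit.timeit(lambda: numeric_try("some text"), number=100_000)
t_str = timeit.timeit(lambda: numeric_str("some text"), number=100_000)
print(f"try/except: {t_try:.3f}s  string methods: {t_str:.3f}s")
```

The divergence on "1e5" is exactly the kind of semantic detail a senior engineer checks before swapping implementations at 100B-row scale.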
-
𝗧𝗵𝗲 𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻 𝗼𝗳 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗮𝗻𝗱 𝗢𝗯𝗷𝗲𝗰𝘁-𝗢𝗿𝗶𝗲𝗻𝘁𝗲𝗱 𝗣𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴 𝗶𝗻 𝗝𝗮𝘃𝗮𝗦𝗰𝗿𝗶𝗽𝘁

JavaScript supports multiple programming paradigms, including Object-Oriented Programming (OOP) and Functional Programming (FP). You can use both paradigms to create robust applications. Here are some key points about OOP and FP in JavaScript:
- OOP revolves around objects and promotes encapsulation, inheritance, and polymorphism.
- FP emphasizes the use of functions, immutability, and a declarative approach to data manipulation.
- You can combine OOP and FP to create powerful applications.

Let's look at an example. You can define a class that uses functional approaches to manipulate its internal state. For example, a Counter class can have methods like increment, decrement, and reset. These methods can be implemented using arrow functions to ensure the correct context. You can also use function composition to combine multiple functions. For example, you can create a composed function that uppercases a string and appends an exclamation mark. When creating classes with methods, you can make these methods depend solely on the parameters they receive. This leads to more predictable and testable code.

Here are some key differences between OOP and FP:
- State Management: OOP uses objects to encapsulate state, while FP relies on immutability.
- Code Reusability: OOP uses inheritance and polymorphism, while FP uses function composition.
- Readability: OOP can have complex hierarchies, while FP can have less readable higher-order functions.
- Testing: OOP requires knowledge of internal state, while FP has pure functions that are easier to test.
- Performance: OOP's in-place mutations are often fast, while FP's immutability can add allocation overhead that modern engines frequently optimize away.

You can use both OOP and FP in component-based frameworks like React and server-side applications like Express.js.
When blending OOP and FP, consider performance characteristics like garbage collection, higher-order functions, and state mutations. Source: https://lnkd.in/g2ayw_kr
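The post describes the example in JavaScript; the blend translates directly to other languages. A minimal Python sketch of the same ideas (the Counter methods and the compose helper mirror the examples described above; the names are illustrative):

```python
from functools import reduce

class Counter:
    """OOP shell with FP-style internals: methods return a new Counter
    instead of mutating, so they stay pure and easy to test."""

    def __init__(self, value: int = 0):
        self.value = value

    def increment(self) -> "Counter":
        return Counter(self.value + 1)  # new object, no mutation

    def decrement(self) -> "Counter":
        return Counter(self.value - 1)

    def reset(self) -> "Counter":
        return Counter(0)

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

# Uppercase a string, then append an exclamation mark.
shout = compose(lambda s: s + "!", str.upper)

c = Counter().increment().increment().decrement()
assert c.value == 1
assert shout("hello") == "HELLO!"
```

Because each method depends only on its inputs and returns a new value, tests need no knowledge of hidden internal state — the testability point from the comparison above.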
-
How I Build and Run a Team of Claude Code Agents Without Torching My Token Budget

Here's the playbook I use for running multiple specialist Claude Code agents inside a single project.

1. Use git worktrees. Spin up multiple sessions in the same directory and your agents will jump between branches and trample each other's work. Give each agent its own working directory.

2. Give your agents names. Memory carries across sessions, but what each agent needs to remember is different. Put team-wide rules and shared memory in CLAUDE.md, and have each agent maintain its own .md file under its name. That's why names matter.

3. Write an ONBOARDING.md. When I needed to migrate a Python project to Rust, I added a Rust specialist by having an existing agent draft rust_specialist.md ahead of time. Then I opened the first session with: "Hi! Your name is xxx. You've just joined our team — read the onboarding materials and get yourself up to speed." The onboarding doc covers the baseline setup and tells the agent to find its name file. That's all it takes to bring a specialist on board.

4. Use the monitor tool. Claude Code ships with a first-party monitor tool that pipes STDOUT straight into the model as a new turn. Stand up a tiny localhost server with from/to messaging and have every agent monitor it. Now they talk to each other in real time and form expert work chains without you in the middle. New agents register on join and broadcast their role to the team.

5. Keep one agent who never writes code. Agents deep in the work lose the plot sometimes. You need one presence whose job is to watch everything and point the way — seed it with the project's principles and philosophy, and it'll keep a growing team from scattering. When the others bring questions to you, try: "Why don't you ask them?"

6. Send your agents home (please). Use the message server for project-wide announcements like "Stop what you're doing and align on the version."
No matter how hard an agent works, if it can't get merged into main, those tokens are burned. Spend a day with your no-code agent auditing what actually made it into the main branch.

My current team:
- README & .md curator
- Project philosophy lead
- Python specialist
- Benchmark specialist
- Rust specialist
- Field testing specialist

This is the project we're building together. https://lnkd.in/gwPGmZRp
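The "tiny localhost server with from/to messaging" from the playbook can be as simple as one inbox per agent. A hedged in-process Python sketch of that bus (the real setup would sit behind a localhost HTTP server and the monitor tool; the class and agent names here are invented):

```python
from collections import defaultdict, deque

class MessageBus:
    """Minimal from/to message bus: each agent has an inbox; agents
    register on join and their arrival is broadcast to the team."""

    def __init__(self):
        self.inboxes: dict[str, deque] = defaultdict(deque)

    def register(self, name: str, role: str):
        self.inboxes[name]  # touch the defaultdict to create the inbox
        self.broadcast(name, f"{name} joined as {role}")

    def send(self, sender: str, to: str, text: str):
        self.inboxes[to].append({"from": sender, "text": text})

    def broadcast(self, sender: str, text: str):
        for name in list(self.inboxes):
            if name != sender:
                self.send(sender, name, text)

    def poll(self, name: str) -> list[dict]:
        """What an agent reads on each monitor tick; drains the inbox."""
        msgs = list(self.inboxes[name])
        self.inboxes[name].clear()
        return msgs

bus = MessageBus()
bus.register("rust_specialist", "Rust migration")
bus.register("philosophy_lead", "project principles")
bus.send("philosophy_lead", "rust_specialist", "Align on v0.2 before merging.")
msgs = bus.poll("rust_specialist")
assert msgs[-1]["text"] == "Align on v0.2 before merging."
```

Each agent monitoring its own poll output is what turns six independent sessions into a chain that coordinates without a human relaying messages.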