I just open-sourced bottrace, a debugging tool I built because I got tired of watching AI agents guess when they should be tracing.

I've been a developer for a long time, and debuggers have always been my go-to. Honestly, I don't understand why so many modern developers skip them. The ability to freeze execution, walk the call stack, and examine every variable in scope: that's how you actually understand what code is doing.

When I started working heavily with AI agents, I noticed they were doing the equivalent of adding print statements everywhere. And print statements cost tokens, a lot of them. Worse, on larger codebases, I kept seeing agents find the first thing that looked like the target and start working there, when the real code was something similar but in a completely different file.

The call stack tells you the truth. It shows you exactly what called what, and where. But agents weren't doing that. They couldn't: debuggers are interactive, and agents can't use a UI.

So I built bottrace. It gives agents the same runtime visibility a debugger gives me, but through structured CLI output. Now when my agents hit a confusing codebase, the rule is simple: when in doubt, trace it first.

bottrace run app.py --call-counts shows what's actually getting called and where.
bottrace run app.py --calls --max-depth 3 maps the real execution path.

No guessing. No grepping for symbols and hoping you found the right one. It's been especially valuable on large codebases where there's enough complexity that it takes hundreds of tokens just to get oriented. A single trace cuts through all of that.

Zero dependencies. Python 3.10+. Works anywhere: SSH, CI, containers.

pip install bottrace
https://lnkd.in/g_tym8BW
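(bottrace's internals aren't shown in the post, but the kind of call-count data it describes can be sketched with Python's standard sys.settrace hook. A minimal illustration only, not bottrace's actual implementation; the function names are invented.)

```python
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    # Count every function call, keyed by (filename, function name).
    if event == "call":
        code = frame.f_code
        call_counts[(code.co_filename, code.co_name)] += 1
    return tracer

def helper():
    return 14

def main():
    total = 0
    for _ in range(3):
        total += helper()
    return total

sys.settrace(tracer)   # install the hook before running the target code
main()
sys.settrace(None)     # always remove the hook afterwards

# Report what was actually called, and how often.
for (filename, func), n in sorted(call_counts.items()):
    print(f"{func} ({filename}): {n} call(s)")
```

This is the "what's actually getting called and where" view a --call-counts style trace gives you, without any print statements in the target code.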
-
My own code share for this week, and it's really cool: OpenClaw is only as good as the tools you build around it, like anything you use AI for. I built a Python-based OpenClaw CLI for coding and pretty much anything that can be done from the command line, similar to any CLI coding tool, except that it works with your Gateway and agents. Session management, command approval, voice input, the whole shebang, and you can have it too. Repo: https://lnkd.in/dRVmXy5U #OpenClaw #AI #Python
-
With low-quality tests, you're paying tokens to fix them while getting little to none of the benefit tests should provide.

Let's be real. Most of the Python tests out there are a waste of time. They exist to make the manager happy, to pass the compliance review, or to exercise dominance. I'm talking about tests that:
- break due to unrelated changes,
- make you restart the CI/CD pipeline and hope they pass on the next run,
- take forever to run,
- pass while production is broken.

Back in the day, you complained about having to work with such tests. Nowadays, we're paying LLM tokens while Claude Code fixes them over and over. Pure waste of time and money.

In my latest article, I describe 7 qualities of highly valuable tests that every developer should know. Qualities of tests that help you ship faster with AI without losing confidence or turning your status page into a traffic light 🚦

Don't forget to subscribe so you don't miss the next tip 🔔
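(To make "break due to unrelated changes" concrete, here is a small hypothetical sketch, not taken from the article; all names are invented. A test pinned to an incidental detail breaks on harmless refactors, while one asserting the observable contract survives them.)

```python
from unittest import mock

def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a discount rate."""
    return round(price * (1 - rate), 2)

def checkout(price: float, rate: float, log=print) -> float:
    total = apply_discount(price, rate)
    log(f"total={total}")
    return total

# Brittle: pins the exact log message, an incidental detail.
# Rewording the message breaks this test even though behavior is unchanged.
def test_checkout_brittle():
    log = mock.Mock()
    checkout(100.0, 0.2, log=log)
    log.assert_called_once_with("total=80.0")

# Valuable: asserts only the observable contract, the returned total.
def test_checkout_behavior():
    assert checkout(100.0, 0.2, log=lambda _: None) == 80.0

test_checkout_brittle()
test_checkout_behavior()
```

The second test only fails when the discount logic actually changes, which is exactly when you want a failure.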
-
How Inheritance Simplifies Regression in SystemVerilog

Yesterday we covered regression: running multiple tests on a simulator with minimal or no developer intervention, using scripts written in Python, Makefile, or shell.

One important thing we often miss is the core concept that gives a testbench environment the capability to run different tests on a single verification environment.

Assume we don't have any regression script yet but want to reuse the same env for different tests, say three test cases: directed, random, and stress. The usual practice is to create a base test class containing everything common to all test cases: creating the environment object, the interface used by all test cases, running the core task of the env, and generating the report. We then extend the base class to create a new test case and model it to generate the specific stimulus required for verification. For example, the random test extends the base class and adds constraints to generate the random stimulus it needs.

This is where we use inheritance in OOP: the derived class gets all the features of the base class, and adds only the modifications needed for test-specific stimulus. The advantage is that we do not need to recreate a test class with all the same features plus the test-specific ones. Another advantage is that we can create a common template that calls the main task of the base test class in the testbench top and reuse it for any number of derived test classes. Without inheritance, each new test class needs its own handle, and we must manually call its main task to execute the test on the env.

With inheritance, we run the main task of the base class in the testbench top; to run another test class, we simply point the base class handle at the newly derived test class, and the new test starts running on the environment.

Inheritance gives us two unique capabilities. First, a single class holds the common properties accessible to all derived classes. Second, to run any derived class on the environment, we just assign the derived class to the base class handle, and the derived class runs with minimal updates.

This is the starting point of regression. The regression script simply chooses which derived class to assign to the base class handle, executes that derived class on the environment, collects the report, then switches the handle and runs the next test case.

Learn everything about SV & UVM here: https://lnkd.in/dFvsAM_n
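(The base-handle pattern described above isn't specific to SystemVerilog. Here is a minimal Python analogy of the same idea, with all class and method names invented for illustration: one "testbench top" loop runs any derived test through a base-class reference.)

```python
import random

class BaseTest:
    """Common features shared by all tests: env setup, core run task, report."""
    name = "base"

    def build_env(self) -> None:
        self.env = {"interface": "bus_if"}  # stand-in for the verification env

    def stimulus(self) -> list[int]:
        return []  # derived classes override this with test-specific stimulus

    def run(self) -> str:
        self.build_env()
        data = self.stimulus()
        return f"{self.name}: drove {len(data)} transactions"  # the "report"

class DirectedTest(BaseTest):
    name = "directed"
    def stimulus(self) -> list[int]:
        return [1, 2, 3]  # fixed, hand-written stimulus

class RandomTest(BaseTest):
    name = "random"
    def stimulus(self) -> list[int]:
        return [random.randint(0, 255) for _ in range(5)]  # random stand-in

# "Testbench top": one base-class handle, reassigned per test.
# This reassignment loop is the seed of a regression script.
def regression(tests: list[BaseTest]) -> list[str]:
    reports = []
    for test in tests:  # the base handle points at each derived test in turn
        reports.append(test.run())
    return reports

print(regression([DirectedTest(), RandomTest()]))
```

The loop never mentions a concrete test class; swapping in a new derived test requires no change to the "top", which is exactly the reuse the post describes.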
-
#molar #rust #vibecoding

In the last few weeks I've been experimenting a lot with Claude Code in my MolAR Rust code base. I'm still convinced that "vibe coding", in the sense of instructing an LLM to write code while you have no clue what's going on, is a curse and has to be avoided. Go learn some programming first, please. However, using an LLM coding assistant when you actually *know* what you are doing is not only an insane productivity bonus but also changes the way you think about your project.

For example, I'm a big fan of projects with minimal dependencies, simply because dependency management sucks and always will, by definition. With LLMs it's ridiculously easy to write the exact functionality you need from scratch, taking existing implementations as templates. They might be in different languages and have awful APIs, but this doesn't matter: your assistant will grab the algorithm and rewrite it to be perfectly in line with your project's style and architecture (if properly instructed to do so, of course).

Out of curiosity, I've implemented some rather complex things this way in MolAR:
* Reader/writer of Gromacs TRR files (using low-level routines from the Molly XTC reader).
* Simple reader/writer of PDB files (from scratch).
* Custom implementation of the DSSP secondary structure prediction algorithm.
* Custom implementation of simple sequence alignment for "fuzzy" RMSD fitting.

All this took less than a day of Claude agent work. The result: zero additional dependencies and a very clean code base. I must admit that "rewrite it in Rust" is now easier than ever :)

In addition, assistants are amazing at generating any kind of boilerplate for CI/CD. I have zero knowledge of GitHub Actions (and zero motivation to learn it), but I managed to deploy automatic building of MolAR's Python bindings (creatively called pymolar) and their automatic publishing on PyPI. You can now finally do "pip install pymolar" without compiling it.

This is much-deserved automation of complex, boring, and repetitive things that require zero creativity: a perfect job for AI. You are welcome to check out the new MolAR: https://lnkd.in/dimWEGpF
-
I'm currently developing Trammel, an open-source LLM-powered harness aimed at making larger code changes and refactors with AI more systematic and less error-prone. Key ideas so far: • Dependency-graph aware task decomposition • Multiple beam search planning strategies • Incremental sandboxed verification (including running tests) • Memory for reusable recipes and constraints It's still early and actively being refined alongside my other tools. Would love honest feedback or suggestions from the community! https://lnkd.in/g3HCuDWJ #LLM #AI #CodeRefactoring #DevTools #OpenSource #Python #SoftwareEngineering #AgenticAI
-
The "If you want to learn Python in 2026, do not do it the manual way" guy seems to be everywhere. I cannot watch a single video without seeing him first!

First and foremost, people promoting AI for coding who are not hardcore developers themselves seem to have a notion that developers type everything that is needed, while AI will let them generate it instead. Far from the facts. For example, in 2003, when Microsoft released the beta version of Office 2003 (the first time they moved from an internal binary format to XML for storing documents), a client needed their website to appear within Outlook, so that their application would be just another folder in Outlook. At that time, CodeGuru and CodeProject were the go-to places for code. Not just sample stuff: full working code in VC++, Visual Basic, ATL, MFC, etc. I got a fully working Outlook plug-in, added proxy capabilities to it, and completed the job.

AI frameworks probably have access to much more, through a conversational interface. That does not exactly make them more productive. The AI approach of generating code at a micro level against a TDD spec, in contrast to a developer downloading a full working app or module and then chipping away the unneeded parts to get a framework or app skeleton for what he or she wants, is not exactly a more productive process, for people who really understand how developers work.

Not saying there is no benefit. There surely is, but definitely not the way many articles paint it. The productivity benefits are elsewhere, and far more strategic and impactful. We are probably missing the forest for the trees. Peace!
-
I switched from n8n to Python + Claude Code mid-project. Best call I made all quarter. Here's the honest comparison.

n8n is not the automation tool you think it is. It's perfect for 3-step workflows. It becomes a debugging nightmare past that. I've built workflows in both; here's the honest breakdown.

n8n wins when:
→ The workflow is small (under 5 nodes)
→ Speed to first result matters more than everything else
→ The person building it isn't a developer

But complexity changes the math fast. A 20-node workflow breaks. You open the visual editor to find the problem. Half your afternoon is gone. And the AI token cost while building medium to large flows? Every tweak, every node adjustment burns more than you'd expect. It compounds quietly.

That's where OpenClaw (or Claude Code) + Python changes everything. For medium to large workflows:
→ Debugging is just reading code: no visual maze
→ Building is faster, with less back-and-forth with the AI
→ Token usage drops significantly

The visual layer feels like a feature when you start. It becomes friction when the workflow grows. Code doesn't have that problem.

My rule now:
→ Quick, simple automations → n8n
→ Everything from medium up → Python + Claude Code

(And I am NOT a Python developer! I can just read the generated code. But that is not the point: I only have to specify what I want, and if anything breaks, say what broke and how it is supposed to behave. With n8n, on the other hand, debugging is a nightmare. Try it out!)

The tool you prototype with isn't always the one you should scale with.

Follow me for more honest takes on AI tooling. What's your experience been? Drop your thoughts below.
-
How to Build a Secure Local-First Agent Runtime with OpenClaw Gateway, Skills, and Controlled Tool Execution In this tutorial, we build and operate a fully local, schema-valid OpenClaw runtime. We configure the OpenClaw gateway with strict loopback binding, set up authenticated model access through environment variables, and define a secure execution environment using the built-in exec tool. We then create a structured custom skill that the OpenClaw agent can discover and invoke deterministically. Instead of manually running Python scripts, we allow OpenClaw to orchestrate model reasoning, skill selection, and controlled tool execution through its agent runtime....
-
Python looks simple. It isn't.

Example:

a = [1, 2, 3]
b = a
b.append(4)
print(a)

Output: [1, 2, 3, 4]

Why? Because variables store references, not actual values.

Fix:

b = a.copy()

Now a remains unchanged.

This concept alone explains:
- Unexpected bugs
- Shared state issues
- Side effects in functions

If this isn't intuitive, you will struggle in real systems.

Practice Python edge cases here: https://lnkd.in/gAxHAqji
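(One caveat worth adding to the fix above, my addition rather than part of the original post: list.copy() makes a shallow copy, so nested lists are still shared. A runnable sketch:)

```python
import copy

a = [1, 2, 3]
b = a.copy()   # shallow copy: b is a new list
b.append(4)
print(a)       # [1, 2, 3], a is unchanged, as the post says

# But with nested lists, the inner objects are still shared:
nested = [[1, 2], [3]]
shallow = nested.copy()
shallow[0].append(99)
print(nested)  # [[1, 2, 99], [3]], the inner list is shared!

# copy.deepcopy duplicates the inner lists too:
deep = copy.deepcopy(nested)
deep[0].append(7)
print(nested)  # still [[1, 2, 99], [3]], unaffected by the deep copy
```

Rule of thumb: copy() is enough for flat lists; reach for copy.deepcopy when the structure nests.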
-
Nice, Google ADK development reached interesting milestones in the last few days: the Go and Java versions reached 1.0.0, and the Python 2.0 alpha introduces graph-based (dynamic) workflows as well as collaborative agents. https://lnkd.in/dY7JJkMs https://lnkd.in/dpd-22GX #adk #ai #llm #gcpweekly
Very cool! I wonder how easily this could be generalized to support a broader range of languages…