🚀 Built Something Useful for Every Claude Developer

While working with Claude Code, I realized one big gap — there’s no clear visibility into usage, tokens, or costs. So I built a solution 👇
🔗 https://lnkd.in/g7kCBnCn

💡 Claude Usage Dashboard
A lightweight, local-first tool to track, analyze, and optimize your Claude usage in real time.

✨ What it does:
• Tracks token usage across sessions
• Estimates API costs
• Provides a clean dashboard + CLI insights
• Detects anomalies & suggests optimizations
• Includes a budget guard (yes, it can even stop overspending)

⚡ Best part: No setup headache. No dependencies. Just run it with Python.

🧠 Why I built this: When you're building with LLMs, visibility = control. This tool gives you exactly that.

If you're working with Claude or exploring AI tools, this might help you 👇
Would love your feedback, ideas, or contributions 🙌

#AI #LLM #Claude #OpenSource #Developers #Python #BuildInPublic #GitHub
Claude Usage Dashboard for Developers
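To make the budget-guard idea concrete, here is a minimal sketch of cost tracking with a spending cap. The price constants and class names are invented for illustration and are not taken from the dashboard itself:

```python
# Hypothetical sketch of token-cost tracking with a budget guard.
# PRICE_PER_MTOK values and the BudgetGuard class are illustrative, not the project's.

PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # USD per million tokens (example rates)

class BudgetGuard:
    def __init__(self, monthly_limit_usd: float):
        self.limit = monthly_limit_usd
        self.spent = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> float:
        # Convert token counts to dollars and add to the running total.
        cost = (
            input_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
            + output_tokens / 1_000_000 * PRICE_PER_MTOK["output"]
        )
        self.spent += cost
        return cost

    def check(self) -> None:
        # Raise once the running total passes the cap.
        if self.spent >= self.limit:
            raise RuntimeError(f"Budget exceeded: ${self.spent:.2f} of ${self.limit:.2f}")

guard = BudgetGuard(monthly_limit_usd=25.0)
guard.record(input_tokens=12_000, output_tokens=3_500)
guard.check()
```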
More Relevant Posts
-
I’ve been building a side project: a web-based combat tracker for a custom TTRPG. You can check out the repo here: https://lnkd.in/dZrM-mhe.

I ran the full delivery loop, requirements through tests, while tightening agentic pipelines so they could run on trial-tier models and still land close to what I'd get from heavier ones. The bet was that clearer prompts and smaller scopes would do more than burning tokens, and that's where most of the learning actually happened.

On the app itself: I drafted and refined requirements and scope in markdown in the repo (requirements-done, backlog notes) so changes could be checked against written intent. I used those pipelines to turn ideas into small, agent-ready stories. For design, Stitch let me iterate on layout and tone early; screens were then built as Flask templates and static assets so they still matched real routes, forms, and Socket.IO events.

The stack is Flask + SQLAlchemy + SQLite, with Socket.IO for live updates. I added pytest where it helped, plus browser automation only where it paid off, and a one-command DB init so a fresh clone isn’t blocked on missing tables. The Python backend is mine line by line, with AI used in a teaching / review mode rather than "write the app for me" mode, which for me beat a generic paid course.

This isn't evidence that agents replace engineers. It's one more example of using AI as leverage on a loop you still own. If you're trying something similar, the README and branch layout are meant to read without insider context; you're welcome to reuse the Skills in the repo if they help. If you’re using Cursor or similar tools, the practical suggestion is the same: treat AI as leverage on that loop, not as a substitute for thinking.

#Python #Flask #Cursor #AgenticAI #OpenSource #TTRPG
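A minimal sketch of the Flask + Socket.IO pattern the post describes: a route updates state, then pushes a live event to connected clients. The route and event names here are hypothetical, not the repo's actual API:

```python
# Illustrative Flask + Socket.IO live-update route (names are invented, not from the repo).
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@app.route("/combatant/<int:cid>/damage", methods=["POST"])
def apply_damage(cid):
    amount = int(request.form["amount"])
    # ... update the combatant row via SQLAlchemy here ...
    # Broadcast the change so every open tracker view updates immediately.
    socketio.emit("hp_update", {"id": cid, "delta": -amount})
    return {"ok": True}

if __name__ == "__main__":
    socketio.run(app, debug=True)
```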
-
Excited to Share My Latest Project!

I’m proud to present SmartCodeFixer – AI-Based Code Error Detection & Fixing System 💻
This project is designed to help developers automatically detect coding errors and provide intelligent suggestions to fix them, improving efficiency and reducing debugging time.

🔹 Tech Stack:
• Python
• Machine Learning / AI
• Flask / Backend Integration
• HTML, CSS, JavaScript (Frontend)

🔹 Key Features:
• Automatic code error detection
• Smart suggestions for bug fixing
• Clean and user-friendly interface
• Faster debugging workflow

🔹 What I Learned:
• Applying AI concepts to real-world problems
• Building full-stack applications
• Improving problem-solving and debugging skills

🔗 GitHub Repository: https://lnkd.in/gmjfqJ2v

#ArtificialIntelligence #MachineLearning #Python #WebDevelopment #Innovation
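As a rough illustration of where the detection step hooks in, here is a plain syntax check built on Python's ast module. The real project layers ML-based suggestions on top, so treat this only as a stand-in, not SmartCodeFixer's actual approach:

```python
# Stand-in for the "detect errors" step: a plain syntax check with ast.
import ast

def detect_syntax_errors(source: str) -> list[str]:
    """Return a list of human-readable syntax problems (empty if the code parses)."""
    try:
        ast.parse(source)
        return []
    except SyntaxError as err:
        return [f"line {err.lineno}: {err.msg}"]

# Reports the offending line number and parser message for the broken snippet.
print(detect_syntax_errors("def f(:\n    pass"))
```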
-
i built something small. it might save your team from a massive headache.

every time an AI writes code for you, it leaves behind zero documentation of why. six months later, nobody, not even the AI, can explain the decision. that's AI tech debt. and it's compounding silently in most codebases right now.

so i built maylang-cli - a tiny Python CLI that enforces one rule: every meaningful change ships with a .may.md file that documents:
→ what you intended
→ what the contract is
→ what invariants must hold
→ how to verify it works
→ how to debug it when it breaks

one command. one file. lives in git. reviewable like code.

pip install maylang-cli
may new --id MC-0001 --slug auth-cache --risk low --owner "your-team"

you can also enforce it in CI — block any PR that touches auth/ or db/migrations/ without a change package. zero-friction adoption.

it's open source, MIT licensed, and on PyPI right now.

if you've ever inherited a codebase and had no idea why something was built the way it was - this is for you.

🔗 https://lnkd.in/eMV28g27
🔗 https://lnkd.in/eSNVrpGM

#opensource #python #developer #aitools #softwaredevelopment #devtools #engineering
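A sketch of what that CI rule can look like under simple assumptions (diffing against origin/main, guarding auth/ and db/migrations/). This is not maylang-cli's own code, just the shape of the check:

```python
# Rough CI gate: fail the build if protected paths changed but no .may.md change package did.
import subprocess
import sys

GUARDED = ("auth/", "db/migrations/")

# List files changed on this branch relative to main (assumes a standard git checkout in CI).
diff = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

touched_guarded = [f for f in diff if f.startswith(GUARDED)]
has_change_package = any(f.endswith(".may.md") for f in diff)

if touched_guarded and not has_change_package:
    sys.exit(f"Changes to {touched_guarded} require a .may.md change package")
```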
-
Small surprise this morning: Renjith Ravindranathan wrote about Ogham MCP on Medium, pairing it with safishamsi's Graphify for Claude Code. He ran our init wizard with Voyage at 1024 dims, every technical detail checks out, and the screenshots show him pulling memory from kiro-cli into Claude Code -- cross-client memory working in the wild, without us asking for it. Thank you, Renjith. Genuinely didn't see this coming.

If you're fighting context bloat in Claude Code, the article is worth a read. Graphify compresses the codebase; Ogham holds the memory across sessions. Two Python MCP servers, no overlap.

Article: https://lnkd.in/evMfMw5J
Graphify: https://lnkd.in/emm8Qd_V
Ogham MCP: https://ogham-mcp.dev
-
Developers are finding new ways to tame the complexity of LLM and agent workflows. At the heart of this effort is hieuchaydi/RepoBrain, a local-first codebase memory engine for AI coding assistants. RepoBrain indexes repositories, retrieves grounded evidence, traces logic flows, and ranks the safest files to inspect or edit before code generation. This is a critical step forward because teams are trying to make agent behavior more reliable, not just more powerful.

What sets RepoBrain apart is its ability to provide actionable insights without requiring a hosted backend or API key. Its capabilities include:
- local index + evidence-backed retrieval
- route/service/job flow hints for faster codebase orientation
- ranked edit targets with confidence and warnings
- built with Python

The momentum behind RepoBrain looks earned because the project is easy to place inside a real workflow, not just admire from a distance. It lands in high-interest areas like agent, ai-agents, and llm, and recent commits make it feel active instead of abandoned. The project still feels early, which gives it some discovery momentum.

Repo: https://lnkd.in/ggAjSMGY

#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Python #RepoBrain #Agent #AiAgents
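To picture what "ranked edit targets with confidence" could mean mechanically, here is a toy keyword-overlap ranker. RepoBrain's actual scoring is more involved; this is only an illustration of the idea, with invented function names:

```python
# Toy illustration of ranking candidate files for an edit, with a crude confidence score.
from pathlib import Path

def rank_edit_targets(repo: str, query: str, top_k: int = 5):
    terms = set(query.lower().split())
    scored = []
    for path in Path(repo).rglob("*.py"):
        text = path.read_text(errors="ignore").lower()
        hits = sum(1 for t in terms if t in text)
        if hits:
            # Fraction of query terms found in the file, used as a confidence in [0, 1].
            scored.append((path.as_posix(), hits / len(terms)))
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

print(rank_edit_targets(".", "user login session token"))
```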
-
I implemented Claude Code from scratch... well, sort of.

I got deep into reverse-engineered breakdowns of how Claude Code works under the hood, and one weekend I just started building. What began as "let me see if I can replicate the agent loop" turned into a full published Python package called Klauso.

Here's what I actually built:
→ A single-process async agent harness around the Anthropic Messages API
→ A master agent loop that orchestrates streaming, parallel tool execution, and delegation each turn
→ YAML-based permission gating for risky tool calls
→ An event bus with lifecycle hooks (logs tools, errors, policy denials)
→ Persisted sessions with context compaction when the thread grows
→ Background shell jobs, todos, a task graph with dependency mapping, and skill guides loaded from markdown
→ Parallel subagents that fan out and merge summaries
→ Specialist handoff via mailbox messaging between agents
→ MCP (stdio) tool bridge with dynamic tool registration
→ Git worktrees for isolated agent workspaces

No agent frameworks. No LangChain. No shortcuts. Every layer was reasoned through: how do you handle a tool batch concurrently while keeping conversation state coherent? How do you inject cooperative interrupts mid-stream without breaking the turn lifecycle? How do you gate permissions at the tool dispatcher level without polluting your agent logic?

The hardest part honestly wasn't the code. It was understanding what agentic engineering actually demands at the systems level. Tool dispatch, permission policy, context pressure, subagent coordination - these aren't solved by wrapping an API call. They require deliberate design.

What's next:
◆ Parallel subagents with bounded fan-out and cancellation
◆ Webhook-based task hooks for external schedulers
◆ Remote MCP beyond stdio (HTTP/SSE)

It's on PyPI now: pip install klauso

I had a lot of fun building this and I'm genuinely open to critique. If you see architectural decisions you'd have made differently, or patterns I missed, I want to hear it. Dropping the repo link in the comments.

#AI #AgenticAI #Python #OpenSource #LLM #ClaudeAI #SoftwareEngineering #MachineLearning #BuildInPublic
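A condensed sketch of permission gating at the tool dispatcher, the pattern described above. The policy format, tool names, and approve callback are invented for illustration; Klauso's real configuration differs:

```python
# Illustrative async tool dispatcher with allow / ask / deny gating (not Klauso's code).
import asyncio

POLICY = {"run_shell": "ask", "read_file": "allow", "delete_file": "deny"}

async def dispatch(tool: str, args: dict, approve) -> str:
    # Permission check happens here, so agent logic never sees denied calls.
    rule = POLICY.get(tool, "deny")
    if rule == "deny" or (rule == "ask" and not await approve(tool, args)):
        return f"[denied] {tool}"
    return await TOOLS[tool](**args)

async def read_file(path: str) -> str:
    return f"(contents of {path})"  # stub tool for the sketch

TOOLS = {"read_file": read_file}

async def main():
    async def approve(tool, args):
        return False  # stand-in for an interactive "allow this tool call?" prompt

    print(await dispatch("read_file", {"path": "README.md"}, approve))   # allowed
    print(await dispatch("run_shell", {"cmd": "rm -rf /"}, approve))     # denied

asyncio.run(main())
```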
-
I just pushed a repo I've been sitting on for a while — five applied AI apps in one codebase, sharing one foundation.
https://lnkd.in/g86ZX_qU

The apps themselves are not the point. A mini chatbot, a prompt-chaining pipeline, a YouTube summarizer, a text-to-SQL tool, and a multi-doc RAG app with citations. Each one is the kind of thing you've seen a hundred times on LinkedIn.

The reason I built them this way was that I was tired of seeing AI "portfolios" that were actually five disconnected repos with five different styles of spaghetti. A folder per tutorial. No shared abstractions. A new way of calling the model in every project. You learn very little from building that, and a reviewer learns even less from reading it.

So I gave myself a constraint. One repo. One shared LLMClient. One config pattern. One UI framework. Provider-agnostic from day one — OpenAI or local Ollama, swap with a line of config. No LangChain, because I wanted the patterns to be readable, not hidden behind four layers of framework.

Then I built the five apps on top of that foundation, each one demonstrating a different core pattern:
- chat state and provider abstraction
- prompt chaining and structured reasoning
- external data ingestion and summarization
- natural language to SQL with safety guardrails
- retrieval-augmented generation with source attribution

What I learned writing it this way was the thing I didn't expect: the foundation forced me to be honest about what was actually the same between these apps and what was actually different. The "same" part was much bigger than I'd assumed. Most "AI apps" are 80% plumbing you've already written and 20% the actual idea. Once the plumbing is shared, you can see the idea clearly.

A portfolio of five demos teaches you how to call an API five times. A codebase teaches you architecture. I wanted the second one.

#BuildInPublic #AIEngineering #AppliedAI #LLM #Python #SoftwareEngineering #OpenSource
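The provider-agnostic idea reduces to a small interface plus a factory keyed by config. The class and method names below are illustrative, not the repo's actual LLMClient API:

```python
# Sketch of a provider-agnostic client: one interface, concrete backends chosen by config.
from abc import ABC, abstractmethod

class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIClient(LLMClient):
    def complete(self, prompt: str) -> str:
        # Would call the OpenAI chat completions endpoint here.
        raise NotImplementedError

class OllamaClient(LLMClient):
    def complete(self, prompt: str) -> str:
        # Would POST to the local Ollama HTTP API here.
        raise NotImplementedError

def make_client(provider: str) -> LLMClient:
    # The "swap with a line of config" part: the rest of the app only sees LLMClient.
    return {"openai": OpenAIClient, "ollama": OllamaClient}[provider]()

client = make_client("ollama")
```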
-
Update #2 on my RAG AI chatbot project!

The most enjoyable part so far has been the trial and error. Not always getting the results I want, but actually understanding why — whether it's a retrieval problem, a chunking problem, or programming errors because I'm new to Python and this dang snake language has no respect for brackets and semicolons.

I've enjoyed learning about the different reranking strategies, the different retrieval methods and understanding when and why I'd choose one over the other.

I recently started a dev blog where I vent my frustrations, document my learnings, and indulge in memes through this project buildout process! If you want to follow along: 👉 https://lnkd.in/gN9ejAB7

Quick recap of what I've been building - all tested against completely made up, fictitious documentation by the way:

Swapped the file system for a real database - migrated from saving embeddings to a .pkl file to pgvector, a Postgres extension that handles vector similarity search natively in SQL.

Added hybrid search - pure vector search finds meaning, not exact words. Combined it with Postgres full text search and used Reciprocal Rank Fusion (RRF) to merge the results. Specific config keys and class names are now actually findable.

Fixed table parsing - Confluence tables were coming through completely jumbled. Wrote a parser that pairs each cell with its header before chunking, so the context actually makes sense.

#RAG #BuildingInPublic
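Reciprocal Rank Fusion itself is small enough to show in full. This is the textbook formulation (each document scores 1/(k + rank) in every result list it appears in, and the scores are summed), not the project's exact code:

```python
# Reciprocal Rank Fusion: merge several ranked lists into one.
def rrf_merge(result_lists: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            # Higher-ranked documents contribute more; k dampens the effect of top ranks.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_c", "doc_b"]   # e.g. from pgvector similarity search
keyword_hits = ["doc_b", "doc_a", "doc_d"]  # e.g. from Postgres full-text search
print(rrf_merge([vector_hits, keyword_hits]))
```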
-
🚀 New open-source project: playwright-ai-pilot

I'm building an AI-powered test automation framework using Playwright, Python, and Claude.

Three AI pillars so far:
🔧 Self-healing locators — when a selector breaks, Claude finds an alternative automatically
🤖 AI test generation — point it at a URL and get a full pytest test file back
📋 AI test planning — give it a user story and get a structured test plan with risks and automation notes

Work in progress — Windows/NTLM and MFA authentication mocking coming next.

GitHub: https://lnkd.in/g7hiN6Fq

Would love feedback from other automation engineers. What would you add?

#TestAutomation #Playwright #Python #AI #SDET #OpenSource
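The self-healing idea reduces to a retry wrapper: try the known selector, and on failure ask the model for a replacement derived from the live DOM. This sketch uses Playwright's sync API with a stubbed model call; it is not the framework's actual implementation, and the function names are invented:

```python
# Shape of a self-healing click (illustrative only).
from playwright.sync_api import Page, TimeoutError as PlaywrightTimeout

def ask_model_for_selector(html: str, description: str) -> str:
    """Stand-in for a Claude call that returns a CSS selector for the described element."""
    raise NotImplementedError

def healing_click(page: Page, selector: str, description: str) -> None:
    try:
        page.locator(selector).click(timeout=3_000)
    except PlaywrightTimeout:
        # Selector broke: ask the model for an alternative built from the current page HTML.
        healed = ask_model_for_selector(page.content(), description)
        page.locator(healed).click(timeout=3_000)
```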
-
Stop Guessing, Start Visualizing: Introducing codegraph-viz!

Ever joined a new team and felt completely lost in a massive, sprawling codebase? Or spent hours tracing a bug only to realize you broke a dependency you didn't even know existed? I’ve been there, and that’s exactly why I built codegraph-viz.

codegraph-viz is a zero-config tool that turns any Python project into an interactive, D3.js-powered map. It’s designed to help you understand complex architectures in seconds, not days. No more digging through endless directories just to see how things connect.

🌟 Key Features:
• Interactive Dependency Graphs: Click any node to see imports, dependents, and full source code without leaving the browser.
• 4 Layout Modes: Switch between Force, Grid, Hierarchy, and Radial views to find the perspective that makes sense for your project.
• Impact Analysis: Instantly see which files will be affected if you change a specific module. No more "I only changed one line" production incidents.
• LLM-Ready Exports: Generate a token-efficient JSON index that helps AI agents understand your architecture for 90% fewer tokens.

🛠️ How to Get Started:
You can install and run it directly from your terminal right now:
1️⃣ Install: pip install codegraph-viz
2️⃣ Scan Your Project:
cd your-project
codegraph scan

Your browser will automatically open with a full interactive map of your codebase. No configuration, no databases, no accounts—just your code, visualized.

Whether you're a new engineer onboarding, a tech lead catching architecture violations, or using AI to help you code, codegraph-viz is built for you.

I've put the PyPI link in the first comment below! 👇

#Python #OpenSource #SoftwareArchitecture #DeveloperTools #DataVisualization #Coding #PythonProgramming #codegraph
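The core of a Python dependency map is an AST walk that records imports per module. The import_graph function below is an illustrative version of that underlying idea, not codegraph-viz internals:

```python
# Build a module -> imported-modules map by walking each file's AST.
import ast
from pathlib import Path

def import_graph(project_dir: str) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = {}
    for path in Path(project_dir).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(errors="ignore"))
        except SyntaxError:
            continue  # skip files that don't parse
        deps: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        graph[path.as_posix()] = deps
    return graph

print(import_graph("."))
```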
-
the budget guard is smart, but most devs won't touch it until they get hit with a surprise bill. the real win isn't preventing overspend, it's surfacing costs early enough that behavior shifts before guardrails are even needed.