Just published a new open-source tool to GitHub. Claude Code has no built-in way to track how many tokens you're burning across sessions, so I built one.

It's a two-file setup: a Python server reads the JSONL session logs Claude Code writes to ~/.claude/projects/ and serves the data to a local browser dashboard. It breaks down input, output, cache-write, and cache-read tokens per session and calculates estimated cost using current Sonnet 4 rates. The whole thing auto-refreshes every 30 seconds and exports to CSV. No external dependencies, no cloud, nothing leaves your machine.

Two features are already in the pipeline. First, a configurable token budget alert that warns you in the dashboard when you cross a daily or monthly threshold. Second, an n8n workflow that pulls from the local API and delivers a usage digest by email on whatever schedule you want.

#claudecode #aitools #anthropic #ITAutomation #DeveloperTools #opensource
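The core of a tool like this is small: sum the per-record token counts from the JSONL logs and multiply by per-million-token rates. A minimal sketch, assuming each log record carries a `usage` dict with the four token fields (the real log schema and the rates below are assumptions — check current Anthropic pricing):

```python
import json

# Assumed per-million-token rates in USD; verify against current pricing.
RATES = {"input": 3.0, "output": 15.0, "cache_write": 3.75, "cache_read": 0.30}

def summarize(jsonl_lines):
    """Sum token counts across JSONL records and estimate total cost."""
    totals = {k: 0 for k in RATES}
    for line in jsonl_lines:
        usage = json.loads(line).get("usage", {})  # field name is an assumption
        totals["input"] += usage.get("input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
        totals["cache_write"] += usage.get("cache_creation_input_tokens", 0)
        totals["cache_read"] += usage.get("cache_read_input_tokens", 0)
    cost = sum(totals[k] * RATES[k] / 1_000_000 for k in RATES)
    return totals, round(cost, 4)
```

Grouping records by session file before summing gives the per-session breakdown the dashboard shows.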
Excited to share something I've been building — a GitHub Repository Analyzer MCP Server! It connects Claude AI directly to any GitHub repository through 13 powerful tools, letting you analyze codebases, browse files, search code, inspect commits, and more — all through natural language.

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗰𝗮𝗻 𝗱𝗼:
→ Deep structural analysis of any repo (language breakdown, entry points, tooling detection)
→ Read files, search code patterns, browse full file trees
→ Fetch commits, branches, contributors, issues & PRs
→ Compare branches and get file URLs instantly
→ Clone repos locally (local mode)

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀:
The server is built with FastMCP + Python and deployed on Vercel as an async HTTP server. Anyone can connect it to Claude.ai in seconds by adding the MCP URL to the Connectors section. Each tool call accepts your own GitHub Personal Access Token via a github_token parameter: your token is never stored, never logged, and masked from all error output. It lives only for the duration of that single request, so you stay in full control of your credentials at all times.

No setup. No installation. Just paste the link, pass your token, and start analyzing repositories. Want to self-host? Clone the repo, run vercel --prod, and you have your own instance live in minutes.

𝗧𝗲𝗰𝗵 𝘀𝘁𝗮𝗰𝗸: Python · FastMCP · anyio · PyGithub · Vercel Serverless

The project is fully open source under the MIT license. Contributions, PRs, and feedback are very welcome! 🙌

🔗 Try it now — add this to Claude.ai Connectors: https://lnkd.in/gbpfr59Y
🔑 You'll need a GitHub PAT — generate one at github.com/settings/tokens (public_repo scope is enough for public repos)
📂 GitHub repo: https://lnkd.in/gWgRQAFa

#MCP #Claude #AI #Python #OpenSource #GitHub #DeveloperTools #BuildInPublic
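The "masked from all error output" guarantee boils down to one habit: scrub the credential from any message before it is surfaced. A minimal sketch of that pattern (illustrative only, not the project's actual code):

```python
def mask_token(message: str, token: str) -> str:
    """Redact a credential from error text before logging or returning it.

    Sketch of the masking pattern described in the post; the real server
    applies this to every error path that could echo user input.
    """
    if not token:
        return message
    return message.replace(token, "***REDACTED***")
```

Applying this at the single choke point where errors are serialized means no new code path can leak the token by accident.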
Hello all! Please check out my latest GitHub repo. Try it, let me know how it worked for you; feedback is welcome. If you like it, give it a ⭐.

Agent Diaries is a lightweight, zero-dependency local SDK that prevents your AI agents from getting stuck in loops. By providing a persistent "diary" memory, your agents can remember their past actions, avoid repetitive tasks, and reflect on their results over time.

✨ Features
🚫 Duplicate Prevention: Automatically filter out tasks your agent has already processed across different sessions.
💾 Local-First Storage: Operates entirely locally. Diary entries are stored as lightweight JSON files—no expensive cloud KV or vector databases required.
⚡ Zero Dependencies: Fast, secure, and built entirely with native Node.js APIs.
🔌 Highly Extensible: Comes with an easy-to-use StorageAdapter interface so you can swap local files for SQLite, Redis, or anything else.
🛡️ 100% TypeScript: Built-in type safety and excellent developer experience.

📦 Installation
Install the package via npm:
npm install @swapwarick_n/agent-diaries

https://lnkd.in/dpb6urpP
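The duplicate-prevention idea is simple enough to sketch. Below is a concept illustration in Python (the actual SDK is a Node.js/TypeScript package with its own API; the class and file format here are invented for the sketch): a local JSON file remembers processed task IDs across sessions, so a fresh agent run can skip them.

```python
import json
import os

class Diary:
    """Concept sketch of a persistent 'diary' that filters already-seen tasks."""

    def __init__(self, path):
        self.path = path
        self.seen = set()
        if os.path.exists(path):  # reload memory from a previous session
            with open(path) as f:
                self.seen = set(json.load(f))

    def filter_new(self, task_ids):
        """Return only the tasks this agent has never processed."""
        return [t for t in task_ids if t not in self.seen]

    def record(self, task_id):
        """Mark a task as done and persist immediately."""
        self.seen.add(task_id)
        with open(self.path, "w") as f:
            json.dump(sorted(self.seen), f)
```

A second `Diary` opened on the same file sees the first session's entries, which is what breaks the loop.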
I recently decided to do a serious deep dive into writing MCP Servers with production-grade quality in mind. I knew nothing about the Model Context Protocol when I started a couple of months ago. Besides studying the official MCP documentation and the best-in-class Python library for implementing servers (FastMCP), I also followed a wonderful course from Alejandro AO on Udemy.

As a kind of capstone project, I implemented a simple MCP Server wrapping the Yahoo Finance API, so that your host (e.g., Claude Code, GitHub Copilot, etc.) can fetch stock prices at different time granularities over a given period. The focus was on writing production-grade code using asyncio, factory functions, unit tests, etc., but also on running FastMCP on top of FastAPI, with Scalekit as the auth server and Render as the PaaS for containerized deployment via Docker (repo link: https://lnkd.in/daNbdMfa).

I hope anybody willing to learn MCP Server development in Python finds the repo useful for taking their first steps!
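The async factory-function pattern mentioned above is worth a quick sketch: instead of constructing a client inline everywhere, one function owns configuration and validation. This is an illustrative stand-in with a stubbed fetch, not the repo's actual code; the names `PriceClient` and `make_client` are invented:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class PriceClient:
    """Stand-in for a Yahoo Finance API client."""
    base_url: str

    async def fetch_close(self, ticker: str) -> float:
        # Real code would make an async HTTP request; stubbed for the sketch.
        await asyncio.sleep(0)
        return {"AAPL": 190.0}.get(ticker, 0.0)

async def make_client(base_url: str = "https://example.invalid") -> PriceClient:
    """Factory function: one place to configure and validate the client."""
    if not base_url.startswith("https://"):
        raise ValueError("base_url must use https")
    return PriceClient(base_url)
```

Centralizing construction this way makes the client trivial to swap for a fake in unit tests, which is most of the point of the pattern.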
Found a GitHub repo today that fixes the most expensive mistake in Claude Code setups.

It's called Claude Context, built by Zilliz (the team behind Milvus).

Here's the problem it solves: when you run Claude Code on a large repo, the typical response is to dump entire directories into context. The costs spiral. A single refactoring session can burn $30-50 in tokens because Claude keeps reading files that aren't relevant to the task.

Claude Context fixes the root cause, not the symptom. It indexes your entire codebase into a vector database. When Claude needs context, it runs a semantic search and retrieves only the relevant code chunks. No directory dumps. No manual file curation. Session costs stay flat as your repo grows.

Here's how it works technically:
1. Index once: run the setup command and Claude Context builds a vector index of your entire codebase
2. Plug into Claude Code via MCP: one config line and it's live
3. Every Claude Code session now retrieves semantically relevant files instead of reading everything

What I like most: it works with any MCP-compatible agent. Cursor, Windsurf, Claude Code. Not locked to one tool. It also ships as a VS Code extension and npm packages if you prefer that integration path.

7,500+ stars on GitHub. Trending #1 today. MIT license.

The detail that makes this practical: you self-host the vector DB (Milvus) or use Zilliz Cloud. Your code never leaves your environment.

For anyone building on large codebases, this is worth 20 minutes to set up. (Link to repo in the comments)

---

🎁 Bonus: get the best AI news, tips, tutorials, and resources in my newsletter. Join today and receive:
• AI Playbook
• 60+ free AI courses
• 3000+ helpful prompts
100% FREE 👉 https://lnkd.in/gmbECShG
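The retrieval step in point 3 can be illustrated with a toy version. Real systems embed chunks with a learned model and search a vector database like Milvus; this sketch substitutes a bag-of-words cosine similarity purely to show the shape of "retrieve only the relevant chunks":

```python
import math
from collections import Counter

def _vec(text):
    """Toy 'embedding': a word-count vector."""
    return Counter(text.lower().split())

def _cos(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=2):
    """Return the k chunks most similar to the query, instead of all of them."""
    q = _vec(query)
    return sorted(chunks, key=lambda c: _cos(_vec(c), q), reverse=True)[:k]
```

The cost benefit falls out of `k`: the context sent to the model is bounded by the top-k chunks, not by repo size.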
Just shipped DIALED, a Claude Code skill I built to solve a problem I kept hitting: every new AWS project needed the same deploy pipeline wired from scratch. DIALED scaffolds it for you:

• Per-PR preview environments. Open a pull request, a real AWS stack comes up. Close it, the stack tears down.
• Staged testing: unit → integration → tf apply → wait-ready → system tests → merge → prod (auto-promoted).
• GitHub OIDC federation: zero long-lived AWS credentials in GitHub secrets.
• fck-nat by default (~$5/mo) instead of a managed NAT Gateway (~$32/mo).
• Stack-shape agnostic: Go, Python, Node, whatever. DIALED scaffolds the pipeline and the wiring; you fill in terraform/stack/ with your resources.
• Add per-PR Postgres with one command (dialed:add-module database): each PR gets its own logical database inside a shared RDS instance, dropped on PR close.

DIALED = "Deploying Infrastructure with A Low Effort Delivery." The name is a wink at dialing in structure for vibe-coded side projects. Generalized from a pattern that was already running in production on a Go/Lambda app.

https://lnkd.in/gxSUzVPg
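The OIDC federation bullet is the piece worth seeing in config form. A typical GitHub Actions fragment for the pattern (a generic sketch of OIDC role assumption, not DIALED's actual scaffolded output; the role ARN and region are placeholders):

```yaml
# The job requests a short-lived OIDC token from GitHub and exchanges it
# for temporary AWS credentials. No access keys live in GitHub secrets.
permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/pr-preview-deployer
          aws-region: us-east-1
      # terraform / deploy steps follow, using the temporary credentials
```

The IAM role's trust policy pins the GitHub repo (and optionally branch or PR context), which is what makes the credentials both short-lived and scoped.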
Just released: WasteLens AWS, a lightweight open-source Python tool that scans an AWS account for a small set of common cost-waste signals and generates a simple HTML report with estimated monthly savings. I will also do custom builds on request (email williamshehan@gmail.com for a quote).

Features:
• Live AWS scanning using boto3
• HTML report generation
• Estimated monthly savings summary
• Windows-friendly install and run scripts
• Automated test suite
• Readable local logs

https://lnkd.in/gKU-93iN
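One classic cost-waste signal is unattached EBS volumes. A sketch of the estimation step, assuming the scan (which in the real tool happens via boto3, e.g. describing volumes) has already produced a list of records; the record shape, function name, and per-GB price below are all assumptions:

```python
# Assumed gp3 price per GB-month in USD; verify against current AWS pricing.
GB_MONTH_USD = 0.08

def unattached_volume_savings(volumes):
    """Estimate monthly savings from deleting unattached EBS volumes.

    Each record is assumed to be {"size_gb": int, "attachments": list};
    an empty attachments list means the volume is billing but unused.
    """
    waste = [v for v in volumes if not v.get("attachments")]
    total_gb = sum(v["size_gb"] for v in waste)
    return {"count": len(waste), "est_monthly_usd": round(total_gb * GB_MONTH_USD, 2)}
```

The same pattern (collect records, filter on a waste predicate, price the remainder) generalizes to idle load balancers, old snapshots, and the other signals a scanner like this checks.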
bare-agent v0.5.0 — MCP Bridge

bare-agent is a lightweight agent orchestration library. You give it an LLM, tools, and a goal; it runs the think/act/observe loop until done. Zero deps, ~2000 lines, composable components you can use independently.

Until now, you wired tools manually. v0.5.0 changes that. The new MCP Bridge auto-discovers servers from your IDE configs (Claude Code, Cursor, Claude Desktop), connects via stdio, and exposes every tool in bare-agent's standard format. Your agent doesn't know it's talking to MCP; it just sees tools. If you have barebrowse, baremobile, filesystem, GitHub, Slack, or any other MCP server configured, bare-agent can orchestrate all of them. One agent, every tool.

This is exciting and terrifying in equal measure. We've all seen the OpenClaw demos: autonomous agents clicking, typing, navigating with real hands. The MCP ecosystem is giving agents those same hands: file access, browser control, messaging, database operations.

The problem is obvious. You can't give an autonomous agent 25+ tools across multiple servers and hope it doesn't delete something, send something, or break something.

That's why the bridge ships with a governance layer. The first run writes a .mcp-bridge.json listing every discovered tool as "allow". Change any to "deny" and the tool disappears entirely: the LLM never sees it, can't call it, doesn't know it exists. One JSON file is the single source of truth for what your agent is allowed to touch. Claws clipped, hands still functional.

No code changes to restrict tools. No redeployment. Edit a JSON file. Deny entries survive refresh. Agents with hands need governance by default, not as an afterthought.

npm install bare-agent@0.5.0
https://lnkd.in/dQvgdeWE
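The governance mechanism described above can be sketched in a few lines. This is a concept illustration in Python (bare-agent itself is a Node.js library; the function name and exact file semantics here are invented): write the policy file on first run with everything allowed, then on every load hide any tool marked "deny" before the agent ever sees the tool list.

```python
import json
import os

def load_policy(path, discovered_tools):
    """First run: persist every discovered tool as "allow".
    Later runs: return only tools the policy file does not deny.

    Sketch of the allow/deny pattern; not the library's actual code.
    """
    if not os.path.exists(path):
        with open(path, "w") as f:
            json.dump({t: "allow" for t in discovered_tools}, f, indent=2)
    with open(path) as f:
        policy = json.load(f)
    # Unknown tools default to deny, so a new server can't sneak tools past
    # the policy file between edits.
    return [t for t in discovered_tools if policy.get(t, "deny") != "deny"]
```

Because filtering happens before the tool schema is handed to the LLM, a denied tool is genuinely invisible rather than merely blocked at call time.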
💥 Just completed an end-to-end CI/CD pipeline integrating Jenkins with AWS CodeBuild and CodeDeploy!

Building on my previous work with AWS Copilot and ECS, I wanted to deepen my understanding of pipeline orchestration, this time with Jenkins as the central coordinator. The goal: automate the journey from a git push to a running Flask application on EC2.

The architecture:
GitHub → Jenkins (Poll SCM) → AWS CodeBuild → S3 → AWS CodeDeploy → EC2
Jenkins orchestrates, while AWS services handle the heavy lifting.

What I built:
✅ Jenkins server on Amazon Linux 2023 with AWS CodeBuild, CodeDeploy, File Operations, and HTTP Request plugins
✅ Four IAM roles with least-privilege scoping
✅ CodeBuild project that pulls from GitHub, runs unit tests, and outputs artifacts to S3
✅ CodeDeploy in-place deployment across two tagged EC2 app servers
✅ Jenkins freestyle project with SCM polling and a CodeDeploy post-build action

The real learning came from troubleshooting:
🔧 Java version mismatch: the current Jenkins LTS requires Java 21, but the user data script installed Java 17. Diagnosed via journalctl, patched with a systemd override.
🔧 Plugin UI drift: the AWS CodeBuild Jenkins plugin now requires selecting "Use Project source" via a radio button, not the legacy dropdown.
🔧 Python version incompatibility: sample scripts called python3.7 and bare python, neither of which exists on AL2023. Patched with sed and pushed a fix.
🔧 CodeDeploy state corruption: failed deployments cache scripts in /opt/codedeploy-agent/deployment-root/, causing the agent to run OLD ApplicationStop scripts before downloading new bundles. Resolved by clearing the archive and restarting the agent.
🔧 File collision protection: CodeDeploy refuses to overwrite existing files. Cleaning /web/* on both app servers got past it.

Key takeaways:
🔵 CodeDeploy lifecycle event logs are the fastest path to diagnosing failures: drill three clicks deep and the error is always there.
🔵 Tutorials age faster than the underlying tools. Java versions, plugin UIs, and distro defaults all change; the fundamentals stay the same.
🔵 Jenkins's flexibility is a double-edged sword: managing plugin compatibility is ongoing work, but the trade-off is portability across providers.
🔵 The CodeDeploy agent's caching behavior is a real gotcha: one failed deployment can block all future ones until the cache is cleared.

Code on GitHub: https://lnkd.in/g54d7BYD

Big thanks and a shoutout to the AWS docs and the Jenkins community for the troubleshooting breadcrumbs!

#AWS #DevOps #CICD #Jenkins #CodeBuild #CodeDeploy #CloudComputing #InfrastructureAsCode #Automation
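For context on the lifecycle hooks involved in the state-corruption and file-collision issues, here is the minimal shape of a CodeDeploy appspec.yml for an EC2 in-place deployment (paths and script names are illustrative placeholders, not this project's actual files):

```yaml
# appspec.yml — CodeDeploy reads this from the bundle root.
# Note: ApplicationStop runs from the PREVIOUSLY deployed bundle, which is
# exactly why stale cached scripts in deployment-root/ can wedge the agent.
version: 0.0
os: linux
files:
  - source: /
    destination: /web
hooks:
  ApplicationStop:
    - location: scripts/stop_app.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_app.sh
      timeout: 60
      runas: ec2-user
```

The `files` destination is also where the collision protection bites: anything already present under the destination that CodeDeploy did not itself deploy will fail the install step.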
For people developing in #Ada and #SPARK: LLMs are a fantastic tool to help you write code and lift code to SPARK Silver and above. Here are two skills, one for #Alire and one for lifting to SPARK: https://lnkd.in/eGVKTTKV

Open VS Code, open your favourite LLM, point it at the skill and say:
Use the /alire skill to create a project on Raspberry Pi Pico

What other skills do you think would be useful? I have a few in my head.