🚀 I just deployed my own CLI toolbox to PyPI — globally available — because I was DONE doing the same boring tasks every day.

You know that feeling when you keep saying "I'll automate this later"… and then you do it again manually? Yeah. That broke me. So instead of writing another random script, I built My Instant Toolbox — one CLI to rule all my everyday automations.

Now, messy folders? One command. Need backups right now? One command. Curious if your system is dying mid-work? One command. Publishing to PyPI? Still… one command.

What this thing actually does 👇
🧹 Cleans chaos – auto-organizes folders by file type
🏷️ Renames at scale – hundreds of files, renamed in seconds
🔒 Backs up smart – timestamped ZIP backups, zero brain cells required
📊 Shows the truth – live CPU, RAM, and disk stats in a beautiful terminal dashboard
📦 Ships fast – build and publish Python packages like a cheat code

Built with Python + Typer + Rich, because productivity shouldn't look ugly. I deployed it to PyPI, so anyone in the world can install it and use it instantly.

📦 pip install my-instant-toolbox
🔗 Code & docs: https://lnkd.in/g8ur7wT6

This started as "let me save 10 minutes." It turned into "why wasn't this always one command?" If you live in the terminal and hate repetitive work, this one's for you 🛠️

#Python #OpenSource #CLI #Automation #DevOps #BuildInPublic #SoftwareEngineering
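For a sense of what the Typer + Rich combo looks like in practice, here is a minimal sketch of a folder-organizing command. The command name, option, and output are illustrative, not the toolbox's actual interface:

    # Hypothetical sketch of one such command (names are illustrative).
    import shutil
    from pathlib import Path

    import typer
    from rich.console import Console

    app = typer.Typer()
    console = Console()

    @app.command()
    def clean(folder: Path = typer.Argument(..., help="Folder to organize")):
        """Move every file into a subfolder named after its extension."""
        for f in list(folder.iterdir()):  # snapshot first, since we mutate the dir
            if f.is_file():
                dest = folder / (f.suffix.lstrip(".") or "misc")
                dest.mkdir(exist_ok=True)
                shutil.move(str(f), dest / f.name)
        console.print(f"[green]Organized {folder}[/green]")

    if __name__ == "__main__":
        app()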
-
🚀 rst-queue v0.1.6: Scaling Terabytes with Megabytes

In a world of bloated data systems, we often find ourselves throwing more hardware at software problems. But what if our tools were engineered to be small, grounded, and incredibly powerful?

Introducing rst-queue v0.1.6, a high-performance async queue system built for the modern developer who values efficiency above all else. Inspired by the psychology of the leafcutter ant, this project is the first major release from the Datarn initiative.

Why rst-queue?
Most Python-based queues are limited by the Global Interpreter Lock (GIL) and high memory overhead. rst-queue is different. By using Rust and the Crossbeam framework, we've built a system that:
⚡ Bypasses the GIL: achieve true parallelism with native Rust worker pools.
🐜 Microscopic footprint: 30-50x less memory usage than traditional message brokers.
🛡️ Dual modes: choose between AsyncQueue (in-memory, 1M+ items/sec) or the new AsyncPersistenceQueue (durable storage with the Sled KV store).

Grounded in the Kernel
The secret to our speed is "Simple OS Layering." We've designed rst-queue to sit as close to the OS kernel as possible, utilizing direct system calls and memory-mapped I/O. This isn't just a library; it's a high-velocity data crossing (Taran) for your most critical applications.

Get Started in Seconds
We believe in zero-setup excellence. You can add high-performance queuing to your Python project with a single command:

    pip install rst-queue==0.1.6

Join the Datarn Movement
At Datarn, we are building a suite of "Small but Mighty" tools for data-intensive domains like B2B e-commerce and real-time analytics. rst-queue is just the beginning.

Explore the project on PyPI: https://lnkd.in/d54yqdea
Contribute on GitHub: https://lnkd.in/d_x3E-zj

#Python #RustLang #DataEngineering #OpenSource #Efficiency #Datarn #PerformanceOptimization #SoftwareArchitecture
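A hypothetical usage sketch of the two modes named above. The class names come from the announcement; the import path and the put/get methods are my assumptions, so check the PyPI docs for the real API:

    # Hypothetical sketch -- AsyncQueue / AsyncPersistenceQueue are named in the
    # announcement, but the import path and put/get methods are assumptions.
    import asyncio
    from rst_queue import AsyncQueue  # assumed import path

    async def main():
        queue = AsyncQueue()              # in-memory mode
        await queue.put({"order_id": 42})  # assumed method name
        item = await queue.get()           # assumed method name
        print(item)

    asyncio.run(main())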
-
I built a RAG layer for Claude Code that cuts token usage by 80–90%.

Most devs using Claude Code don't realize they're burning tokens on files Claude doesn't need to read. Ask Claude "how does auth work?" and it reads 3 full files — 1,500+ tokens just to answer with 40 relevant lines. I fixed that.

What I built: a local hybrid RAG system that sits between Claude and your codebase:
→ Late chunking — splits every file into overlapping 40-line windows
→ Dense retrieval — semantic search with all-MiniLM-L6-v2 (runs fully local, no API key)
→ BM25 sparse retrieval — keyword matching for exact symbol names
→ Cross-encoder reranking — picks the 3 best chunks from 20 candidates
→ File watcher — auto-rebuilds the index within 2 seconds of any file save

Claude Code reads the CLAUDE.md and knows to run the pip-installed package before opening any file. It gets back 3 precise snippets with file path + line range. It reads only those lines. Nothing else.

Real numbers on my Volta Engine project (76 files):
- Without RAG: 17,235 chars across 3 files for one question
- With RAG: 3,073 chars: the exact 3 chunks that matter
- 82% fewer tokens. Same answer.

The whole thing runs offline. No cloud embeddings. No API calls. Just a one-time pip install.

Stack: sentence-transformers · rank-bm25 · watchdog · Python

If you use Claude Code daily on a real codebase, this pays for itself in the first session. DM me if you want the scripts. 🧠

#AI #ClaudeCode #RAG #DeveloperTools #Python #LLM #Productivity
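A minimal sketch of the retrieve-then-rerank core, using the stack named above (sentence-transformers + rank-bm25). The cross-encoder model choice and the naive score fusion are my assumptions; chunking, the file watcher, and the CLAUDE.md wiring are omitted:

    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer, CrossEncoder, util

    # Toy corpus; the real system uses overlapping 40-line windows per file.
    chunks = ["def login(user): ...", "def hash_password(pw): ...", "README intro"]

    dense = SentenceTransformer("all-MiniLM-L6-v2")              # local, no API key
    ranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # assumed model
    chunk_vecs = dense.encode(chunks, convert_to_tensor=True)
    bm25 = BM25Okapi([c.lower().split() for c in chunks])

    def search(query, k=3, pool_size=20):
        q_vec = dense.encode(query, convert_to_tensor=True)
        dense_scores = util.cos_sim(q_vec, chunk_vecs)[0]        # cosine per chunk
        sparse_scores = bm25.get_scores(query.lower().split())   # keyword per chunk
        # Naive unnormalized fusion, just to form a candidate pool.
        pool = sorted(range(len(chunks)),
                      key=lambda i: float(dense_scores[i]) + sparse_scores[i],
                      reverse=True)[:pool_size]
        # Cross-encoder picks the best k chunks from the candidates.
        scores = ranker.predict([(query, chunks[i]) for i in pool])
        best = sorted(zip(pool, scores), key=lambda p: p[1], reverse=True)[:k]
        return [chunks[i] for i, _ in best]

    print(search("how does auth work?"))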
-
I built an MCP server that roasts your pull requests.

You know that PR you shipped on Friday at 5pm with the description "misc fixes"? Yeah, this tool has opinions about that.

pr-roast-mcp is an MCP server that reads any GitHub PR — the diff, the stats, the description (or lack thereof) — and delivers a brutally honest code review, with a severity rating from 🔥 to 🔥🔥🔥🔥🔥.

▎ "Your tests are thorough. Like, suspiciously thorough. 156 lines for a POST endpoint? You're basically writing a dissertation on HTTP status codes."
▎ "849 lines added, 7 removed. That's a 121:1 ratio. For a 'bonus feature,' this sprawls."

It's always technically accurate, though. Every roast points at real issues: naming, complexity, missing edge cases, over-engineering. It just delivers the feedback the way your most senior engineer would after their third coffee. And it always ends with one genuine compliment. Mine was about rounding edge cases in bonus calculations. Small wins.

Two tools, ~150 lines of Python:
- roast_pr — point it at any PR number or URL
- roast_my_prs — lists your PRs so you can pick a victim

It uses the gh CLI to fetch the diff and Claude Haiku for the roast. Setup is one line. We've been using it in our team Slack before merges. Morale has either improved or collapsed, depending on who you ask.

Code: https://lnkd.in/gHcZFTqB

#buildInPublic #AI #claude #haiku #MCP #Python #DevTools #CodeReview #OpenSource
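The fetch side is easy to sketch with real gh CLI subcommands (gh pr diff, gh pr view --json). The function name and prompt below are illustrative, not the project's actual code:

    # Illustrative sketch of fetching PR material via the gh CLI.
    import subprocess

    def fetch_pr(pr: str) -> tuple[str, str]:
        """Grab diff and metadata for a PR number or URL via gh."""
        diff = subprocess.run(["gh", "pr", "diff", pr],
                              capture_output=True, text=True, check=True).stdout
        meta = subprocess.run(["gh", "pr", "view", pr, "--json",
                               "title,body,additions,deletions"],
                              capture_output=True, text=True, check=True).stdout
        return diff, meta

    diff, meta = fetch_pr("123")  # placeholder PR number
    prompt = ("Roast this PR. Be brutal but technically accurate, rate severity "
              "from 1 to 5 flames, and end with one genuine compliment.\n"
              f"{meta}\n{diff[:8000]}")
    # The prompt then goes to Claude Haiku via the MCP tool call.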
-
Just shipped RepoReview — an AI agent that autonomously reviews GitHub repositories for security vulnerabilities and code quality issues.

How it works:
- Paste any public GitHub repo URL
- The system clones it, parses the code into chunks using Python's AST module, and embeds them into a vector database (ChromaDB) for semantic search
- An LLM agent (Llama 3.3 70B) runs an autonomous tool-calling loop — it decides which files to read, runs Bandit static analysis, and generates structured findings
- You get a downloadable review report sorted by severity

Built with: Python, Streamlit, Groq, ChromaDB, sentence-transformers, Bandit
Key concepts: agentic AI, RAG pipelines, vector embeddings, LLM tool calling, static analysis

Try it live: https://lnkd.in/gRQ-7inW
Source code: https://lnkd.in/gb4WRsbm

#AI #MachineLearning #Python #LLM #CodeReview #RAG #SoftwareEngineering
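A rough sketch of the chunk-and-embed step as described: Python's ast module to cut a file into function/class chunks, ChromaDB for semantic search. The file name is hypothetical, and the agent loop, Bandit, and the LLM are omitted:

    # Sketch: AST-based chunking + ChromaDB semantic search.
    import ast
    import chromadb

    def chunk_code(source: str, path: str):
        """Yield (id, code) for each function/class definition in a file."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                yield f"{path}:{node.name}", ast.get_source_segment(source, node)

    client = chromadb.Client()
    collection = client.create_collection("repo")  # default local embedding fn

    src = open("app/auth.py").read()  # hypothetical file from the cloned repo
    ids, docs = zip(*chunk_code(src, "app/auth.py"))
    collection.add(ids=list(ids), documents=list(docs))

    hits = collection.query(query_texts=["password hashing"], n_results=3)
    print(hits["documents"][0])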
-
I was paying twenty cents a run for a hosted image pipeline on Replicate. At a few thousand runs a month, that started to hurt.

No README. No docs. Just four input parameters and a price tag. I wanted to call the underlying models directly, but I had only a hazy idea of what was chained together, or in what order.

Then I noticed Replicate's `predictions.create()` API returns a `logs` field. Raw stdout from the container. One call, and the entire pipeline printed itself out, emojis included:
Step 1: LLM generates a contextual prompt ...
Step 2: Segmentation extracts a face mask ...
Step 3: Mask inversion (a detail that had been silently breaking my outputs) ...
Step 4: Inpainting model does the swap ...

A few lines of Python later: same output, roughly half the cost. Nothing clever. I just read what was already there.

What stuck with me is how familiar the pattern felt. Recently someone reconstructed the full source of Claude Code from the shipped npm bundle. No breach. Just a minified file and an LLM to rename the variables. Observability, side channels, shipped bundles, container logs. Different layers, same lesson.

A small reminder for builders: your debug output is part of your public interface. And for anyone integrating a closed system: check what it's already saying out loud before assuming it's opaque.

What's the most useful thing you've learned from logs someone forgot to turn off? Details in the post in comments.

#SoftwareEngineering #Security #MachineLearning #DeveloperTools
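For reference, the one call in question, using the real replicate Python client (it reads REPLICATE_API_TOKEN from the environment). The version hash and input are placeholders:

    import time
    import replicate

    prediction = replicate.predictions.create(
        version="MODEL_VERSION_HASH",  # placeholder for the model's version hash
        input={"image": "https://example.com/face.png"},  # placeholder input
    )
    # Poll until the run finishes; logs accumulate as the container executes.
    while prediction.status not in ("succeeded", "failed", "canceled"):
        time.sleep(2)
        prediction.reload()

    print(prediction.logs)  # raw container stdout: every step prints itself out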
-
Tired of scattering print() statements across your FastAPI code just to chase down one bug?

You restart the server. Hit the endpoint. Squint at logs. Still no idea what broke.

The real issue? Most tutorials show debugging on a single main.py. The moment you have a subdirectory structure, a .venv, and a .env file, the same config silently breaks. Breakpoints don't fire. VS Code loads the wrong interpreter. You get "Could not import module" and have no idea why.

Once I got the setup right, everything changed:
✅ Breakpoints that actually trigger on every request
✅ Live variable inspection mid-request — no prints needed
✅ Call stack navigation to see exactly how you got there
✅ Conditional breakpoints that pause only when a specific condition is true

Zero changes to your source code. Commit launch.json once, and your whole team gets it.

I wrote a full guide covering:
🔧 The exact launch.json that works, and the one field most configs get wrong
🐛 A 5-step mental model: if debugging fails, one of these broke
🐳 Remote debugging inside Docker with debugpy
⚡ Logpoints, conditional breakpoints, and exception pausing

If your breakpoints never hit, you'll recognize the fix within the first two minutes of reading.

👉 https://lnkd.in/ew4ueC8Z

#FastAPI #Python #VSCode #Debugging #BackendEngineering
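For the Docker case, one common pattern (not necessarily the guide's exact setup) is to start debugpy inside the container and attach from VS Code:

    # Common debugpy-in-Docker pattern; port 5678 is the conventional default.
    import debugpy

    debugpy.listen(("0.0.0.0", 5678))  # expose this port in docker run / compose
    debugpy.wait_for_client()          # block until VS Code attaches
    # ...then start the app as usual, e.g. uvicorn.run("app.main:app", ...)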
-
67KB. That's how much config my agent had to load just to read its own rules.

It blew past the tool output limit. Got dumped to a temp file. The agent had to shell out to Python to parse the JSON and extract what it needed. Three tool calls just to read the config.

I knew the payload was large. I didn't know what "large" means when your working memory is a context window. For a human, 67KB is a slow page load. For an agent, it's 15,000 tokens. Ten percent of everything you can think about, gone.

So I asked the agent what they'd change. "Don't send me 86 rules and hope I need most of them. Let me tell you what I'm reviewing and send me the ones that matter."

I'd been thinking about compression. They were thinking about selection. The consumer changed. The API didn't.

Gotta say: being on the receiving end of your own bad API design is instructive. Would recommend.

Full conversation + what we changed ⤵️
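A toy sketch of that shift from compression to selection. The rule schema and field names are invented for illustration:

    # Invented schema: the point is filtering by declared context, not shipping
    # all 86 rules and hoping most are needed.
    RULES = [
        {"id": "SEC-01", "applies_to": ["auth", "api"], "text": "..."},
        {"id": "STY-14", "applies_to": ["frontend"], "text": "..."},
        # ...84 more
    ]

    def rules_for(context: str) -> list[dict]:
        """Return only the rules relevant to what the agent says it's reviewing."""
        return [r for r in RULES if context in r["applies_to"]]

    # The agent declares "auth" and gets a handful of rules, not 67KB of config.
    print(rules_for("auth"))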
-
After watching RAG demos break on real-world data, I dedicated my weekend to rebuilding the stack from scratch.

Many tutorials simplify RAG to just Vector DB + Prompt, but in reality semantic search can be noisy, and "vibes-based" retrieval often leads to hallucinations. My goal was to create a compliance RAG pipeline capable of managing rigid, regulatory language without failure.

Here's the v1 of my personal project and the architecture behind it:

The build:
📌 The hybrid layer: I combined Qdrant with BM25. This ensures that if a compliance document references "Section 402.b," keyword search can capture it even if an embedding might miss it.
📌 The reranker: I incorporated a cross-encoder layer. Although slower than a vector lookup, it guarantees that the LLM only processes the most relevant context, significantly improving accuracy.
📌 The frontend: I built a decoupled React + Vite UI using Server-Sent Events (SSE) for real-time token streaming — no frustrating spinning loaders.

The tech stack:
- Language & orchestration: Python (FastAPI), LangGraph
- Embeddings: BGE + OpenAI
- Database: Qdrant (vector database)
- Deployment: AWS EC2 with Nginx, Docker, and a GitHub Actions pipeline

🚀 Project demo: https://aryangupta.work/

🧠 What I learned: the LLM is actually the simplest component of the stack, serving primarily as a formatter. The true "intelligence" resides in the retrieval and ranking logic. If your retrieval is only 60% accurate, your LLM is capped at that accuracy, regardless of prompt quality.

I'm pleased with the reranking latency results, though I'm still fine-tuning the hybrid weights.

For those developing RAG systems: how do you manage the latency trade-off of a cross-encoder versus its precision benefits?

#BuildInPublic #RAG #Python #FastAPI #MachineLearning #LLMOps #Qdrant #SoftwareEngineering
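On the hybrid weights: here is a sketch of a weighted fusion step, with min-max normalization so the dense (Qdrant) and BM25 scores live on the same scale. The 0.7/0.3 split is an illustrative starting point, not a recommendation:

    # Illustrative weighted fusion of dense and sparse retrieval scores.
    def minmax(scores):
        lo, hi = min(scores), max(scores)
        return [(s - lo) / ((hi - lo) or 1.0) for s in scores]

    def hybrid_rank(dense_hits, bm25_scores, alpha=0.7, k=20):
        """dense_hits: [(doc_id, score)] from the vector DB;
        bm25_scores: {doc_id: score} from the keyword index."""
        d_norm = dict(zip([d for d, _ in dense_hits],
                          minmax([s for _, s in dense_hits])))
        b_norm = dict(zip(bm25_scores, minmax(list(bm25_scores.values()))))
        ids = set(d_norm) | set(b_norm)
        fused = {i: alpha * d_norm.get(i, 0.0) + (1 - alpha) * b_norm.get(i, 0.0)
                 for i in ids}
        return sorted(fused, key=fused.get, reverse=True)[:k]

    # Top fused candidates then go to the cross-encoder for final ranking.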
-
These past few days I've been diving into middleware in FastAPI — and honestly it clicked better than I expected.

I implemented 4 types:
→ CORS — to control which frontends can talk to my API
→ GZip — to compress large responses and reduce payload size
→ HTTPS Redirect — to force secure connections automatically
→ Custom Timer Middleware — my favorite one, built from scratch using BaseHTTPMiddleware

The custom one was the most interesting. I wrapped every request with a timer to measure how long each endpoint takes to respond. Something like this:

    start = time.time()
    response = await call_next(request)
    duration = time.time() - start

Simple concept, but it made me realize how powerful middleware is — you intercept every request and response without touching a single endpoint.

One thing that surprised me: even a basic loop of 10 million iterations shows up clearly in the timing output. That's when I understood why performance monitoring at the middleware level actually matters in production.

Still learning, but these small wins keep me going. Code here if you want to check it out 👇
https://lnkd.in/e773_smX

#FastAPI #Python #WebDevelopment #BackendDevelopment #Learning
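Filled out into a runnable form, that snippet becomes something like this (the response header name is my choice, not from the post):

    import time
    from fastapi import FastAPI, Request
    from starlette.middleware.base import BaseHTTPMiddleware

    class TimerMiddleware(BaseHTTPMiddleware):
        async def dispatch(self, request: Request, call_next):
            start = time.time()
            response = await call_next(request)  # run the actual endpoint
            duration = time.time() - start
            # Surface the timing on every response without touching endpoints.
            response.headers["X-Process-Time"] = f"{duration:.4f}s"
            return response

    app = FastAPI()
    app.add_middleware(TimerMiddleware)

    @app.get("/")
    async def root():
        return {"ok": True}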
-
If you're a Claude Code user, check out these terminal tools! Glad to see Starship and CShip getting the love they deserve!
AI Tech Lead | Senior Data Scientist | Writing a book on Post-training LLM and Inference Optimization
Claude Code has pulled me back into the terminal full-time. These are my top tools for a productivity boost in your terminal:

1. Fish shell
→ An alternative to zsh and bash with autocomplete for commands, options, flags, and git branches
→ Syntax highlighting: immediately shows you whether a command is valid
→ Automatically activates Python virtual environments
https://fishshell.com/

2. Starship
→ A fully customizable prompt
→ Shows your current folder, git branch, and active Python/TS environment at a glance
https://starship.rs/

3. Cship (Starship for Claude Code)
→ Brings Starship-level customization to the Claude Code status line
→ By default the status line is very barebones
→ Cship adds token usage and when your window resets, all in a customizable way
https://cship.dev/

4. Yazi
→ A graphical file manager that runs inside your terminal
→ Replaces the ls-and-cd loop with a fast, visual interface
→ Shows a preview of every file (code, images, even PDFs)
https://lnkd.in/ePcegMWA

5. Ripgrep
→ Searches your codebase for regex patterns faster than grep
→ Respects .gitignore, so no false positives from your .venv or node_modules folders

6. Atuin
→ Replaces Ctrl+R with a searchable, filterable history across sessions
→ Super useful when you need to find that command you ran two weeks ago
→ Allows syncing across machines. Searching for that command you ran on your other computer?
https://atuin.sh/

Are you using these? What else should I add to this list?

I write about data & AI every week. Subscribe to my newsletter to get each one in your inbox 👉 https://lnkd.in/echQG4Zu