i built something small. it might save your team from a massive headache.

every time an AI writes code for you, it leaves behind zero documentation of why. six months later, nobody, not even the AI, can explain the decision. that's AI tech debt. and it's compounding silently in most codebases right now.

so i built maylang-cli, a tiny Python CLI that enforces one rule: every meaningful change ships with a .may.md file that documents:
→ what you intended
→ what the contract is
→ what invariants must hold
→ how to verify it works
→ how to debug it when it breaks

one command. one file. lives in git. reviewable like code.

pip install maylang-cli
may new --id MC-0001 --slug auth-cache --risk low --owner "your-team"

you can also enforce it in CI — block any PR that touches auth/ or db/migrations/ without a change package. zero-friction adoption.

it's open source, MIT licensed, and on PyPI right now.

if you've ever inherited a codebase and had no idea why something was built the way it was, this is for you.

🔗 https://lnkd.in/eMV28g27
🔗 https://lnkd.in/eSNVrpGM

#opensource #python #developer #aitools #softwaredevelopment #devtools #engineering
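for readers wondering what one of these change packages might look like, here's a hypothetical .may.md sketch built from the five documented fields above. every identifier, command, and env var in it is invented for illustration; the actual template maylang-cli generates may differ:

```markdown
# MC-0001: auth-cache

**Intent:** cache verified auth tokens to cut p95 login latency.
**Contract:** `get_token(user_id)` returns a valid token or raises `AuthError`.
**Invariants:** cached tokens are never served past their `exp` claim.
**Verify:** `pytest tests/test_auth_cache.py` passes; p95 drops in staging.
**Debug:** set `AUTH_CACHE_LOG=debug` and grep for `cache-stale` entries.
```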
Mayank Katulkar’s Post
More Relevant Posts
Developers are finding new ways to tame the complexity of LLM and agent workflows. At the heart of this effort is hieuchaydi/RepoBrain, a local-first codebase memory engine for AI coding assistants. RepoBrain indexes repositories, retrieves grounded evidence, traces logic flows, and ranks the safest files to inspect or edit before code generation. This matters because teams are trying to make agent behavior more reliable, not just more powerful.

What sets RepoBrain apart is that it provides actionable insights without requiring a hosted backend or API key. Its capabilities include:
- local index + evidence-backed retrieval
- route/service/job flow hints for faster codebase orientation
- ranked edit targets with confidence scores and warnings
- built with Python

The momentum behind RepoBrain looks earned: the project is easy to place inside a real workflow, not just admire from a distance. It lands in high-interest areas like agent, ai-agents, and llm, and recent commits make it feel active rather than abandoned. The project still feels early, which gives it some discovery momentum.

Repo: https://lnkd.in/ggAjSMGY

#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Python #RepoBrain #Agent #AiAgents
Most developers are sleeping on this AI dev stack that quietly 10x’d my output.

I stopped opening 7 tabs, 3 docs, and 12 StackOverflow threads per task. Instead, I wired 3 “under-the-radar” tools into my daily workflow:

- **Continue.dev** → VS Code/Cursor-style inline AI without sending your whole codebase to the cloud.
- **smol-developer** → auto-generates small, focused codebases from specs (great for boring boilerplate).
- **Codspeed** → AI-powered benchmark runner that actually tells you *where* your Python is slow.

How I use it in practice:
1️⃣ Draft the feature spec in Markdown.
2️⃣ Use smol-developer to generate the boring scaffolding.
3️⃣ Refactor + implement logic with Continue.dev in-editor.
4️⃣ Run Codspeed to hunt the real bottlenecks instead of guessing.

This combo feels illegal because it removes 80% of the “grunt work” we’ve been gaslit into thinking is “real engineering.”

Hot take: if you’re still doing everything manually “for learning,” you’re optimizing for ego, not impact.

Which underrated dev tool changed the way *you* code? Drop it below so we can all steal it.

Follow @flazetech for more.

#Developers #AItools #Python #VSCode #Productivity #DevTools #Programming
I didn’t break my code. I broke my environment. And that lesson changed how I build software forever.

For the past few days, I was working on an OCR-based backend system. Everything looked correct: the logic, the APIs, the flow. But nothing worked. Errors kept changing:
• “No module named paddle”
• “set_optimization_level not found”
• “NumPy ABI mismatch”
• “PyMuPDF build failed”

At first, I thought: my code is wrong. But the truth was harsher, and more important:
👉 In real-world systems, code is only 50% of the problem. The other 50% is environment, dependencies, and compatibility.

Here’s what I learned (the hard way):

🔹 Version mismatch can break everything
Even if your code is perfect, incompatible library versions will crash your system.

🔹 Python version matters more than you think
Some ML libraries still don’t support newer versions (like 3.12).

🔹 Virtual environments are not optional
If you don’t isolate dependencies, you’ll chase ghosts for hours.

🔹 NumPy 2.0 broke half the ML ecosystem
Real-world lesson: “latest” is not always “stable”.

After fixing everything, the system finally worked. Not because I wrote better code, but because I understood the system behind the code.

💡 Biggest takeaway:
A good developer writes code. A great developer understands the environment it runs in.

If you’re building in AI/ML or backend systems, remember this:
👉 Your real skill is not just solving problems.
👉 It’s debugging chaos.

#SoftwareEngineering #BackendDevelopment #AI #MachineLearning #Debugging #Python #DeveloperJourney #BuildInPublic
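One habit that follows from this lesson: check the environment before debugging the code. A minimal sketch using only the standard library (the `numpy` pin below is just an example of guarding against the 2.0 ABI break, not a recommendation):

```python
import sys
from importlib import metadata


def check_env(requirements):
    """Compare installed package versions against expected version prefixes.

    Running a check like this *before* debugging saves hours of chasing
    "code bugs" that are really environment bugs.
    """
    problems = []
    for name, wanted in requirements.items():
        try:
            have = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if not have.startswith(wanted):
            problems.append(f"{name}: have {have}, want {wanted}*")
    return problems


# e.g. guard against the NumPy 2.0 break described above:
print("Python", sys.version.split()[0])
print(check_env({"numpy": "1."}))
```

Dropping a script like this into CI turns "works on my machine" into a reproducible failure message.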
The AI dev stack got fast at writing code. It didn't get any better at showing you what that code does after deploy. New post: how we merged Vercel + Supabase logs into a single Gonzo terminal session for real-time cross-platform debugging. No log drains, no third-party platform, four lines of bash. → https://lnkd.in/gi_4V_hz #vibecoding #observability #devtools #opensource
🚀 Built Something Useful for Every Claude Developer

While working with Claude Code, I realized one big gap — there’s no clear visibility into usage, tokens, or costs. So I built a solution 👇

🔗 https://lnkd.in/g7kCBnCn

💡 Claude Usage Dashboard
A lightweight, local-first tool to track, analyze, and optimize your Claude usage in real time.

✨ What it does:
• Tracks token usage across sessions
• Estimates API costs
• Provides a clean dashboard + CLI insights
• Detects anomalies & suggests optimizations
• Includes a budget guard (yes, it can even stop overspending)

⚡ Best part: No setup headache. No dependencies. Just run it with Python.

🧠 Why I built this: When you're building with LLMs, visibility = control. This tool gives you exactly that.

If you're working with Claude or exploring AI tools, this might help you 👇

Would love your feedback, ideas, or contributions 🙌

#AI #LLM #Claude #OpenSource #Developers #Python #BuildInPublic #GitHub
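The cost-estimation and budget-guard ideas boil down to a few lines. A hedged sketch (the model name and per-million-token prices here are placeholder values, not the dashboard's actual tables; always check the provider's current pricing page):

```python
# Hypothetical prices in USD per million tokens; placeholders only.
PRICE_PER_MTOK = {
    "example-model": {"input": 3.00, "output": 15.00},
}


def estimate_cost(model, input_tokens, output_tokens):
    """Rough spend estimate for one call, the core of any usage tracker."""
    p = PRICE_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000


def budget_guard(spent_so_far, call_cost, budget):
    """Return True if the next call still fits the budget, else block it."""
    return spent_so_far + call_cost <= budget


cost = estimate_cost("example-model", 12_000, 3_000)
print(f"estimated call cost: ${cost:.4f}")
```

Summing these per-session estimates is what makes anomaly detection and overspend blocking possible downstream.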
🔥 The Claude Code source code just leaked. And what’s inside changes everything.

59.8MB. 512,000 lines of TypeScript. 1,900 files. All of it accidentally shipped inside an npm package update this morning — and the internet moved fast. One GitHub repo hit 50,000 stars in under 2 hours.

Here’s what the AI community found inside — and why it matters for every builder:

1️⃣ KAIROS — the autonomous agent Anthropic never announced
Claude Code has a fully built “daemon mode.” It runs in the background while you’re idle, performing memory consolidation, merging observations, removing contradictions, and compressing context. When you return, your agent is smarter than when you left. This isn’t vaporware. It’s compiled code behind a feature flag.

2️⃣ Anti-distillation: fake tools to poison competitors
There’s a flag called ANTI_DISTILLATION_CC. When enabled, Anthropic injects fake tool definitions into API requests — specifically to corrupt training data if someone records Claude’s outputs to train a rival model. This is competitive AI warfare written directly into production code.

3️⃣ 44 hidden feature flags. 20 unshipped.
The roadmap Anthropic never published is now public: persistent background agents, remote control from your phone, cross-session memory that studies its own mistakes.

4️⃣ Buddy. A literal Tamagotchi.
I’m not joking. There’s a full companion pet system with species rarity, shiny variants, and a soul description written by Claude on first hatch. Gated behind a BUDDY compile flag. Someone at Anthropic is having the time of their life.

5️⃣ This is Anthropic’s second leak in a week.
Days earlier, Fortune reported 3,000 internal files were publicly accessible — including a draft blog post about an unreleased model codenamed “Capybara.” The Claude Code leak confirmed it.

The internet already has it. Go grab it and go absolutely bonkers with it 👇
🔗 https://lnkd.in/gjJCyZ-V

#AI #ClaudeCode #AgenticAI #ProductManagement #ArtificialIntelligence #Anthropic
Curious what #AI can practically do as the opportunities grow daily, I kept checking new arrivals, digested the best of the hyped #ClaudeCode skills, hammered them on real projects, and turned the result into a workbench I rely on every day. What started as a private dotfiles tweak became a pillar of my daily #OpenSource workflow.

Out of the box, agents are role descriptions in a prompt. Mine carry years of OSS maintainer judgment and real software-dev practice, embedded as rules they actually enforce and skills they run by, and I keep refining them as we go.

Under the hood: a blend of Anthropic #Claude and OpenAI #Codex — Claude for coding and long-horizon work, Codex as a second-opinion reviewer before anything ships and as a hand-off coder. Cross-vendor peer review turns out to be a surprisingly strong quality signal, and it buys real #autonomy: I brief once and get finished work, instead of babysitting through several rounds of "did you cover X?"

🤖 Meet AI-Rig — five composable plugins:

🏭 #foundry — 8 calibrated specialist agents (engineer, QA, perf, architect, docs, lint, web, mentor) plus a self-distillation loop so corrections actually stick
🌱 #oss — maintainer survival kit: triage, 6-lens parallel PR review with a Codex pre-pass, feedback resolution, SemVer-correct releases
🛠️ #develop — validate-first discipline: no feature without a demo test, no fix without a failing regression test, no refactor without characterization coverage
🔬 #research — structured ML loop: literature → spec → methodology judge → automated runs with auto-rollback on regression
🗂️ #codemap — one-shot structural index for Python projects; a lightweight CLI (not yet another heavy MCP) that saves a pile of tokens and lets agents finish tasks that used to choke on large codebases

The thread tying them together: each plugin is a gate, not a generator. Annoying at first, then quietly indispensable.

Give it a spin and let me know what you think 👉 https://lnkd.in/dQSttJ9E

#AIEngineering #PythonDev #DeveloperTools #Agents #MLOps #python #caveman #RTK #AgenticAI
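The #codemap idea, a one-shot structural index that agents read instead of whole files, can be sketched with the stdlib `ast` module. This toy illustrates the concept only; it is not the plugin's actual implementation:

```python
import ast


def index_source(source):
    """Toy structural index: top-level classes and functions of one module.

    Agents reading this cheap summary instead of the full file is what
    saves tokens on large codebases.
    """
    tree = ast.parse(source)
    entries = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            entries.append(f"class {node.name}: {', '.join(methods)}")
    return entries


print(index_source("class A:\n    def run(self): pass\n\ndef main(argv): pass\n"))
# → ['class A: run', 'def main(argv)']
```

A real index would walk the whole project tree and persist the result, but the per-module summary above is the core of the trick.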
Developers are constantly seeking ways to streamline their workflows and make the most of their time. In LLM and agent workflows, teams often struggle to balance reliability and power, and many rely on cumbersome server-side solutions that are difficult to scale and maintain. This is where ComposioHQ/awesome-codex-skills comes in: a curated list of practical Codex skills for automating workflows across the Codex CLI and API.

At its core, the repository collects Python-based skills that improve the reliability and efficiency of agent behavior. By offering practical skills that slot easily into existing workflows, it helps developers make agents more reliable, not just more powerful.

Key highlights:
- bernstein – a multi-agent orchestrator with a Codex CLI adapter; runs parallel Codex agents in isolated git worktrees with quality gates
- "What Are Codex Skills?" – an explainer covering the fundamentals of how these skills work
- built with Python

The traction makes sense: a repository sitting at #3 with around 637 new stars in the current trending window is usually solving a problem people can feel immediately. With its focus on making fast-moving AI workflows easier to steer and reuse in real projects, it's no wonder ComposioHQ/awesome-codex-skills is getting attention.

Repo: https://lnkd.in/eTmpF-UT

#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Python #AwesomeCodexSkills #Awesome #AwesomeLists
Let’s be honest: developers love writing code, but we hate updating the documentation. 😅

So, I spent the last few days building an AI agent to do it for me. I’m calling it Watchtower 🏰: an autonomous AI agent that keeps technical documentation in sync with your code changes, in real time.

Here is how the architecture currently works:

1️⃣ The Ears: A FastAPI server listens for GitHub webhooks whenever someone pushes code.
2️⃣ The Hands: It uses PyGithub to fetch the exact git diff. (I added a "Noise Filter" so it ignores massive files, deleted files, and image assets to save API costs!)
3️⃣ The Brain: Powered by LangGraph and GPT-4o, it analyzes the code logic. If the change is trivial (like fixing a typo), it ignores it.
4️⃣ The Action: If the change matters, it creates a new branch and automatically opens a pull request with the updated documentation for human review.

Tech stack: Python, FastAPI, LangGraph, LangChain, PyGithub.

✅ Phase 1 is complete: the end-to-end pipeline (Webhook → AI → PR) is working perfectly!

Now I need your feedback to make it better. I’m planning the next phase and considering adding Graph RAG to map file dependencies (so if you change auth.py, it knows to update payment.py docs too).

For the senior engineers and open source maintainers out there: 👇
What edge cases should I look out for? What features would make you actually want to install this on your team's repo?

Let me know in the comments!

#BuildInPublic #AI #LangGraph #Python #SoftwareEngineering #OpenSource #OpenAI
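The "Noise Filter" step above is worth a sketch, since it is where most of the API-cost savings come from. A minimal version (the suffix list and size threshold are hypothetical choices for illustration, not Watchtower's actual values):

```python
# Files we never send to the LLM: deleted files, binary assets, huge diffs.
IGNORED_SUFFIXES = (".png", ".jpg", ".gif", ".svg", ".ico", ".lock")
MAX_PATCH_CHARS = 20_000  # cap diff size to keep token costs bounded


def is_noise(filename: str, status: str, patch: str = "") -> bool:
    """Decide whether a changed file should be skipped before the LLM step."""
    if status == "removed":  # deleted files need no doc update
        return True
    if filename.lower().endswith(IGNORED_SUFFIXES):  # image/binary assets
        return True
    return len(patch) > MAX_PATCH_CHARS  # massive diffs


changed = [("auth.py", "modified"), ("logo.png", "added"), ("old.py", "removed")]
relevant = [name for name, status in changed if not is_noise(name, status)]
print(relevant)
# → ['auth.py']
```

In the real pipeline the `(filename, status, patch)` triples would come from the webhook payload via PyGithub; only the survivors reach the LangGraph brain.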