I’ve been using the chore/agent-friendly-repo-foundation branch (https://lnkd.in/gtypVfA9) in ExcelAlchemy as an experiment in agent-friendly engineering. The interesting part is not just asking an agent to write code. It’s shaping the repo so an agent can actually navigate it: clear docs, explicit architecture, stable public boundaries, linked plans, tests, invariants, and examples. That idea maps closely to OpenAI’s recent post on [harness engineering](https://lnkd.in/gtHiDZbU): humans define intent and constraints; agents execute inside a well-structured environment. ExcelAlchemy itself is a schema-driven Python library for typed Excel workflows with Pydantic: template generation, workbook validation, error mapping, locale-aware result workbooks, and storage-backed integration. This branch is our small testbed for a bigger question: not “can agents code?” but “can we build repos they can reliably understand?” #AgentEngineering #HarnessEngineering #DeveloperTools #RepositoryDesign #Python #OpenSource #Pydantic #ExcelAutomation
Agent-Friendly Repo Foundation in ExcelAlchemy
More Relevant Posts
I built a tool that lets you ask questions about your codebase in plain English. 🧠 Like literally just type — "where is the FAISS vector store initialized?" — and it finds the exact file, function, and code for you. No more ctrl+F. No more digging through 20 files manually. It's called CodeMind. Getting started is super simple too — just paste your GitHub repo link and it'll clone it automatically, or upload a ZIP file if you prefer. That's it, you're ready to start asking questions. Here's how it works under the hood: → Loads your entire codebase → Breaks it into chunks and converts them into embeddings → Stores everything in a FAISS vector store → When you ask something, it pulls the most relevant code and sends it to Groq LLM for a proper answer Built with Python · LangChain · FAISS · Groq · Streamlit 🔗 Try it: https://lnkd.in/gYV8UfC8 🐙 GitHub: https://lnkd.in/gk3F5kZf Still a lot to improve but happy with how v1 turned out. Would love honest feedback from anyone into AI or dev tooling! 🙌 #RAG #LangChain #GenerativeAI #Python #OpenSource #BuildInPublic #AIEngineering
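For intuition, here is a dependency-free sketch of that retrieval loop. It swaps the real embedding model and FAISS index for a toy bag-of-words similarity (my stand-in, not what CodeMind ships), but the shape is the same: chunk, embed, rank, return top-k.

```python
import math
from collections import Counter

def chunk(text, size=60):
    # Naive fixed-size splitter; the real pipeline uses smarter splitting.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=2):
    # Rank chunks by similarity to the question, like a FAISS top-k search,
    # then hand the winners to the LLM as context.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Tiny fake "codebase" to query (illustrative, not from the real repo):
code = (
    "def init_store():\n    store = FAISS.from_texts(docs, embeddings)\n"
    "def unrelated():\n    return 42\n"
)
chunks = chunk(code)
top = retrieve("where is the FAISS vector store initialized?", chunks, k=1)
```

Replacing `embed` with a real embedding-API call and the sorted list with a FAISS index is what turns this toy into the production version.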
Stop spending days reading code alone. 🚀 I built CodeBoard—an AI-Powered Repository Intelligence Platform that acts as your personal technical consultant. 🧠 Whether you are onboarding to a new project or auditing complex business logic, CodeBoard leverages Python AST parsing and Llama 3.3 (via Groq API) to give you instant insights. Key Highlights in this Demo: ✅ Real-time Logic Auditing: Identifying API handlers and function flows in seconds. ✅ Contextual Understanding: No more "hallucinations"—the AI understands the actual code structure. ✅ Onboarding Efficiency: Perfect for developers who want to skip the manual line-by-line review. Built with: Python, FastAPI, React, and Groq. 🛠️ Check out the demo below! 👇 #FullStack #AI #GenerativeAI #Python #WebDevelopment #OpenSource
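The AST-grounding idea can be sketched with Python's built-in `ast` module. This is my illustration of the general technique, not CodeBoard's actual code; the sample source and handler names are made up.

```python
import ast

SOURCE = '''
@app.get("/users")
def list_users():
    return db.query("SELECT * FROM users")

def _helper():
    pass
'''

def find_functions(source):
    # Parse to an AST and collect every function with its decorator
    # expressions, so the LLM is grounded in real structure, not raw text.
    tree = ast.parse(source)
    out = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            out[node.name] = [ast.unparse(d) for d in node.decorator_list]
    return out

funcs = find_functions(SOURCE)
# funcs maps each function name to its decorators, which is enough
# to flag list_users as an API handler and _helper as internal.
```

Feeding the LLM this structured map instead of raw file text is what keeps the answers anchored to code that actually exists.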
3 RAG chunking strategies. 2 cost the same. 1 costs double. Here's why 👇 Building my own RAG pipeline, I hit a wall. Everyone talks about WHAT chunking is. Nobody talks about WHERE the money goes. Here's what I figured out: 💰 The $ only comes from ONE place — the embedding model API call. Splitting → FREE (plain Python) Storage in FAISS/CSV → FREE Embedding API call → costs money Ingestion → ONE TIME cost only Semantic costs 2x because it calls the embedding API twice: → Once to find topic shifts → Once to store vectors in FAISS Rule of thumb: Start with Document Aware. Upgrade to Semantic only when your data has no clear structure. Ingestion cost = one time only. Query cost? That's a whole different story. 🔜 Follow so you don't miss it. Which chunking strategy are you using? Drop it below 👇 #RAG #AIEngineering #DataEngineering #LLM #Python #VectorDatabase #OpenAI #LangChain #Claude
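The arithmetic behind that 2x can be written down directly. A minimal sketch, using a hypothetical per-1k-token embedding price (not any real provider's rate):

```python
def ingestion_cost(tokens, price_per_1k, strategy):
    # Splitting and FAISS/CSV storage are free; only embedding-API
    # calls cost money. Semantic chunking embeds the corpus twice:
    # once to detect topic shifts, once to store the final vectors.
    passes = 2 if strategy == "semantic" else 1
    return passes * (tokens / 1000) * price_per_1k

# 1M tokens at a made-up $0.02 per 1k tokens:
fixed = ingestion_cost(1_000_000, 0.02, "document_aware")
semantic = ingestion_cost(1_000_000, 0.02, "semantic")
```

Same corpus, same price, double the passes: semantic ingestion costs exactly twice as much, and that premium is one-time, not per-query.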
Installing an AI agent runtime shouldn't require reading 3 docs pages. So we made it one command.

curl -fsSL https://lnkd.in/dZSQ5Mqc | bash

That's it. pydantic-deepagents — the modular agent runtime for Python — installed.

Here's what the install experience looked like before:
1. Install Python (which version? 3.10? 3.12?)
2. Create a virtual environment
3. pip install pydantic-deep[cli]
4. Figure out why `pydantic-deep` isn't in your PATH
5. Realize textual wasn't included in the base install
6. Install again with the right extras
7. Try again

Seven steps. Fifteen minutes minimum. That's before you've even seen the tool.

What install.sh actually does:
1. Detects whether `uv` is already installed
2. If not: installs `uv` automatically from astral.sh
3. Runs `uv tool install "pydantic-deep[cli]"` — no virtualenv management needed
4. Verifies the binary is accessible
5. Prints PATH instructions if needed (and explains exactly how to fix it)

After the first install, updating is one command too:

pydantic-deep update

That's brew upgrade for AI agents. It uses `uv tool upgrade` if uv is available and falls back to pip. No version juggling.

We also added startup update notifications. Every time pydantic-deep starts, it quietly checks PyPI for a newer version (2-second timeout, never blocks startup). If one exists, you see a notice at startup. Checks are cached for 24 hours so it doesn't hit PyPI on every single invocation.

The features we built this week — context window awareness, smarter history search, agent loop detection — none of it matters if the install experience drives people away at step 1.

Tomorrow: same agent, different environment. Docker sandbox.

What's the most painful install experience you've had with an AI or ML tool? (I'll go first: a certain GPU-accelerated library that required matching CUDA versions, driver versions, and Python versions to a compatibility matrix that was 2 versions out of date.)

Pydantic | Vstorm #Python #DeveloperExperience #OpenSource
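The 24-hour cached update check described above can be sketched in a few lines. This is a hypothetical reconstruction, not pydantic-deep's actual code: `fetch_latest` stands in for the short-timeout PyPI request, and the cache file name is made up.

```python
import json
import time
from pathlib import Path

CACHE = Path("update_check_cache.json")  # hypothetical cache location
CACHE_TTL = 24 * 3600                    # re-check PyPI at most once a day

def check_for_update(current, fetch_latest, now=time.time):
    # Serve from the cache while it is fresh; otherwise ask PyPI once and
    # record the answer. The real tool also wraps the network call in a
    # ~2-second timeout so startup is never blocked.
    if CACHE.exists():
        cached = json.loads(CACHE.read_text())
        if now() - cached["checked_at"] < CACHE_TTL:
            latest = cached["latest"]
            return latest if latest != current else None
    latest = fetch_latest()
    CACHE.write_text(json.dumps({"checked_at": now(), "latest": latest}))
    return latest if latest != current else None
```

The second call within 24 hours never touches the network, which is exactly the "doesn't hit PyPI on every single invocation" behavior.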
STOP USING the BranchPythonOperator, Airflow 3.2 has something NEW for you 👇 When you use the BranchPythonOperator, you write a function that returns the task ID of the branch you want to execute. While it works, you're hardcoding the routing logic yourself with if/else statements. But, you don't have to! Here is the new @task.llm_branch 🤩 Instead of writing the routing logic, you describe the situation in a prompt and let an LLM decide which downstream task should run 😎 When the task executes, it looks at the DAG topology and discovers all the downstream task IDs automatically. It builds a Python Enum from those task IDs and uses pydantic-ai structured output to force the LLM to pick from that enum. So the LLM can only return valid task IDs. Your function just returns the prompt string and the LLM picks the right task. Obviously, you can pass data to this prompt that comes from upstream tasks. Oh, and there's also an allow_multiple_branches parameter if you wanna pick multiple tasks. Anywhere you'd normally write a bunch of if/else conditions that are hard to maintain and don't generalize well, use llm_branch instead. Enjoy ❤️ #airflow #apacheairflow #dataengineer #dataengineering #ai
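The mechanism can be approximated in plain Python. This is a sketch of the Enum-constrained choice, not Airflow's actual @task.llm_branch implementation; `stub_llm` and the task IDs stand in for the pydantic-ai structured-output call and the discovered DAG topology.

```python
from enum import Enum

def build_branch_enum(downstream_ids):
    # Turn the discovered downstream task IDs into an Enum: the only
    # values a structured-output LLM call is allowed to return.
    return Enum("Branch", {tid: tid for tid in downstream_ids})

def llm_branch(prompt, downstream_ids, llm):
    Branch = build_branch_enum(downstream_ids)
    choice = llm(prompt, Branch)   # structured output: must be a Branch member
    return Branch(choice).value    # invalid task IDs cannot slip through

# Stub standing in for the real LLM call (keyword match, not a model):
def stub_llm(prompt, Branch):
    if "refund" in prompt:
        return Branch["handle_refund"]
    return Branch["handle_other"]

picked = llm_branch(
    "Customer asks about a refund",
    ["handle_refund", "handle_other"],
    stub_llm,
)
```

Because the Enum is built from the DAG itself, a hallucinated task ID fails loudly at the boundary instead of routing the DAG nowhere.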
Yesterday I got the API working and posted from a Python script. Today I'm building the landing page. Here's what StreakPosts actually is: You sign up. Every morning you get a question by email or text. You reply in plain English. AI turns your answer into a polished social post and schedules it automatically. No dashboard. No blank page. No excuses. The goal is simple — show up consistently for 90 days and watch what happens to your audience. Anyone can do it. Most people just need a system. That's what I'm building. Show up raw. Every day. 🔥 #buildinpublic #StreakPosts #indiehacker
I built something I'm genuinely proud of this week. 🛠️ Clipper Maker — an AI-powered YouTube clip extractor built entirely in Python. How it works: → Paste any YouTube URL → Whisper AI transcribes the audio with timestamps → librosa scores each moment by energy and speech activity → ffmpeg cuts the top 5–6 clips automatically → Download them individually or as a ZIP What I learned building this: ✅ How to work with audio signals and RMS energy scoring ✅ Running AI models locally with no cloud dependency ✅ Calling command-line tools (ffmpeg) from Python ✅ Building a full web app in pure Python with Streamlit ✅ Structuring a real project with modular, clean code Every line of code written from scratch. Every bug fixed. Every phase tested. This is what learning by building looks like. 💪 🔗 https://lnkd.in/gmqXjgm6 🔗 https://lnkd.in/gWSpczsp #Python #AI #Whisper #BuildInPublic #OpenSource #Developer #SideProject #MachineLearning
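The energy-scoring step is simple enough to sketch without librosa. A pure-Python version of frame-wise RMS (frame size and sample values here are illustrative, not the project's real parameters):

```python
import math

def rms_energy(samples, frame=4):
    # Score each frame by root-mean-square energy, the same signal
    # librosa exposes via librosa.feature.rms; loud frames are the
    # candidate moments for clips.
    return [
        math.sqrt(sum(x * x for x in samples[i:i + frame]) / frame)
        for i in range(0, len(samples) - frame + 1, frame)
    ]

def top_frames(samples, k=1, frame=4):
    # Indices of the k highest-energy frames, i.e. where to cut.
    scores = rms_energy(samples, frame)
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]

# Quiet, loud, quiet: the loud middle frame should win.
audio = [0.1, -0.1, 0.1, -0.1, 0.9, -0.8, 0.9, -0.9, 0.2, -0.2, 0.1, -0.1]
```

In the full app these frame indices are combined with Whisper's timestamps and handed to ffmpeg as cut points.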
Paying per token for internal automation? There's a better way. I built a proxy that wraps Claude Code CLI as a standard Anthropic API — same format, same SDKs, zero per-token billing. The trick: Claude Code CLI runs under a flat-rate Claude Max subscription. We exposed it as POST /v1/messages on our internal server. Now every machine on the network calls it like the real API. One line change in Python: base_url = "https://your-server:port" # that's it Bonus: Claude CLI can read and fix actual files on the server — something the real API cannot do. Stack: Flask + Podman + config.json. Fully self-hosted. #ClaudeCode #DevOps #Anthropic #AI #CloudNative #PlatformEngineering
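The heart of the proxy is small: shell out to a CLI and wrap its stdout in an API-shaped response. A sketch, using `echo` as a stand-in command so it runs anywhere (the real server invokes the Claude Code CLI and sits behind Flask; the response fields here only approximate the Anthropic messages shape):

```python
import subprocess
import uuid

def messages_endpoint(prompt, cli=("echo",)):
    # What POST /v1/messages does internally: run the CLI with the
    # prompt, capture stdout, and return it in a messages-style dict.
    out = subprocess.run(
        [*cli, prompt], capture_output=True, text=True, check=True
    )
    return {
        "id": f"msg_{uuid.uuid4().hex[:8]}",
        "role": "assistant",
        "content": [{"type": "text", "text": out.stdout.strip()}],
    }

resp = messages_endpoint("hello")
```

Because clients only see the HTTP shape, swapping the per-token API for a flat-rate CLI behind it is invisible to every SDK on the network.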
Day 52 of 100 Days of AI — 🐍 Python Backend is Up Infrastructure was Day 51. Backend is Day 52. Got the FastAPI skeleton running today — the core of the entire newsletter pipeline. Subscriber management is live. You can subscribe, unsubscribe, and update your preferences. The ingestion endpoint is ready and waiting — the moment Cloudflare sends articles over, the backend knows exactly what to do with them. Database connected. Everything talking to each other. The boring part is done. Tomorrow is where it gets interesting. The AI agent that actually reads the sources, filters the noise, synthesizes multiple stories into one clean summary, and writes the final newsletter — that's tomorrow. That's the part I've been building toward for 52 days. Next: The synthesis agent — the actual brain of the newsletter. #100DaysOfAI #BuildInPublic #FastAPI #AIEngineering #Python #Newsletter #SideProject #OpenRouter #LangChain
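As a sketch, the three subscriber operations reduce to something like this in-memory model. The real backend persists to a database behind FastAPI endpoints; the function and field names here are hypothetical.

```python
# In-memory stand-in for the subscriber store.
subscribers = {}

def subscribe(email, topics):
    subscribers[email] = {"topics": list(topics), "active": True}

def update_preferences(email, topics):
    subscribers[email]["topics"] = list(topics)

def unsubscribe(email):
    # Soft-delete: keep the record so history and re-subscribes stay cheap.
    subscribers[email]["active"] = False
```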
📣 Bad API docs don't fail loudly. They waste hours quietly. Have a look at our API docs 📖 https://lnkd.in/dvr6Nyhx You install a framework. The README looks clean. The examples run. Then you try to do something slightly off the happy path and you're reading source code at midnight trying to figure out what the second argument actually does. This is why we treat SynapseKit's documentation as a first-class feature, not an afterthought shipped after the code. Every public API is documented. Every parameter explained. Every method shows you what it accepts, what it returns, and what happens when you pass the wrong thing. Architecture docs explain why decisions were made, not just what they are. The cookbook has real patterns, not toy examples. Because the engineers who evaluate a framework in the first 30 minutes are not the same engineers who maintain it six months later. The second group lives in the docs. #Python #LLM #OpenSource #AI #Documentation #DeveloperExperience #SynapseKit
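As a concrete picture of that standard, here is a hypothetical function documented to it: what it accepts, what it returns, and what happens when you pass the wrong thing. This is an illustration only, not SynapseKit's actual API.

```python
class Model:
    """Placeholder result type for the example."""
    def __init__(self, name):
        self.name = name

def load_model(name: str, timeout: float = 30.0) -> Model:
    """Load a model by registry name.

    Args:
        name: Registry identifier, e.g. "gpt-small". Must be non-empty.
        timeout: Seconds to wait for the registry before giving up.

    Returns:
        A ready-to-use Model instance.

    Raises:
        ValueError: If name is empty, so the mistake surfaces here
            instead of as a cryptic lookup failure later.
    """
    if not name:
        raise ValueError("model name must be non-empty")
    return Model(name)
```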