2 days of debugging. Fixed by adding 1 line. 😅

If you use FastAPI + GitHub Actions, read this.

I was building a Tier 2 Event Pipeline with FastAPI microservices. Everything worked perfectly on my machine — 8/8 tests passing, 96% coverage, Docker builds clean.

Pushed to GitHub Actions. Two jobs failed instantly.

❌ Python Checks (ingestion-service) — FAILED
❌ Python Checks (fusion-engine-service) — FAILED

The error:

RuntimeError: The starlette.testclient module requires the httpx package.

Both services used TestClient from FastAPI in their tests. TestClient has a hard runtime dependency on httpx. But httpx was nowhere in my pyproject.toml.

So why did it work locally? My Windows machine already had httpx installed — silently pulled in as a transitive dependency from earlier work. I never noticed because it was never declared.

GitHub Actions runners are clean Ubuntu containers. They install only what you declare. Nothing more.

Classic "works on my machine" — and I had no idea.

─────────────────────

The fix? One line in each pyproject.toml:

[project.optional-dependencies]
dev = [
    "pytest>=8.3,<10.0",
    "pytest-cov>=5.0,<6.0",
+   "httpx>=0.27,<1.0",   ← this
]

─────────────────────

All jobs green. ✅

The real lessons I'm keeping with me:

① If your test imports it → it's a dev dependency. No exceptions.

② Coverage at 2% and 39% wasn't the bug. It was a symptom. The tests couldn't even be collected. Always read the FIRST error, not the summary.

③ Before every push, test in a clean venv:

python -m venv .venv-clean
source .venv-clean/bin/activate
pip install -e ".[dev]"
pytest tests

This single habit would have saved me 2 days.

If you're using FastAPI + TestClient + GitHub Actions — double-check your pyproject.toml right now. You might have the same silent bomb waiting.

Ever lost hours to a "works on my machine" bug? Drop it in the comments 👇

#Python #FastAPI #GitHubActions #CI #DevOps #SoftwareEngineering #LessonsLearned #OpenSource #Testing
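For anyone who wants to reproduce the failure mode, here is a minimal sketch of the kind of test involved (the route and file name are made up, not the actual pipeline code). FastAPI's TestClient is re-exported from starlette.testclient, which raises the RuntimeError above at import time when httpx is missing, so pytest cannot even collect the file — which is exactly why the coverage numbers collapsed.

# test_health.py (illustrative only, not the actual service tests)
# Importing TestClient pulls in starlette.testclient, which needs httpx installed;
# without it this file fails at collection time, before any test runs.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

client = TestClient(app)

def test_health():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}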
More Relevant Posts
I got tired of Claude (and other LLMs) confidently giving me wrong function signatures, so I built a small tool that generates markdown API references from source and keeps them in your repo. (See the end for a comparison with context7.)

It works for Rust crates and Python packages (via runtime introspection). The output is a two-tier markdown file: curated patterns and gotchas on top, full machine-generated API reference on the bottom. Re-running the script updates the reference while keeping your curated notes intact.

There's a Claude Code plugin that makes it hands-free: just say "vendor docs for tokio" and it generates the guide. After that, Claude checks the vendored docs before reaching for the internet. No more invented method signatures.

Then you can just ask it to vendor a doc for you. For example, today I wanted to vendor the kube crate's docs:

"vendor the doc for kube"

Once done, you can ask it to compare your implementation against it and check if there are issues:

"compare our usage of kube with the vendored doc"

In a few seconds it'll do the check for you :)

How about context7, you may ask? The main difference is that everything is local and yours. The docs are generated from your actual lockfile version and committed to your repo. No API calls, no rate limits, no downtime; works offline, works in CI, works on a plane, etc.

The other thing is you own the content. You can add patterns, gotchas, and project-specific notes on top of the generated reference. When you upgrade a dependency, you regenerate and git shows you exactly what changed in the API. You can ask Claude to adapt the documentation by adding sections that fit your usage; you can make it learn from your usage.

If you're just doing a quick lookup, context7 is fine, but if you're spending weeks with a library it's nice to have the docs right there in the repo, version-locked, no external dependency.

https://lnkd.in/d9U2ambf
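For the curious, the Python half of this approach is easy to sketch. This is not the tool above — just a minimal illustration (the script name and CLI are invented) of how runtime introspection can turn a package's public surface into a markdown reference you commit and regenerate on upgrades.

# gen_api_md.py — illustrative sketch only, not the tool described above.
# Walks a package's public classes/functions via runtime introspection and
# writes their signatures plus first docstring line to a markdown file.
import importlib
import inspect
import sys

def dump_api(package_name: str, out_path: str) -> None:
    module = importlib.import_module(package_name)
    lines = [f"# {package_name} API reference (generated)\n"]
    for name, obj in sorted(vars(module).items()):
        if name.startswith("_") or not (inspect.isclass(obj) or inspect.isfunction(obj)):
            continue
        try:
            sig = str(inspect.signature(obj))
        except (TypeError, ValueError):
            sig = "(...)"  # some builtins/extensions have no introspectable signature
        summary = (inspect.getdoc(obj) or "").split("\n")[0]
        lines.append(f"## {name}{sig}\n{summary}\n")
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

if __name__ == "__main__":
    # e.g. python gen_api_md.py httpx docs/vendored/httpx.md
    dump_api(sys.argv[1], sys.argv[2])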
mini-claw-code

I rebuilt the core idea behind Claude Code in 68 lines of Python.

It's a while loop. LLM receives input, calls tools, gets results, loops until done. 2 tools: bash and todowrite. 3 prompt files. That's it.

The interesting part isn't the code. It's the context engineering. Tool descriptions aren't just API docs -- they're behavioral instructions in disguise. They tell the model WHEN to use a tool, WHEN NOT to, and HOW to think about it. Even the todowrite return message is a nudge to keep the model on track.

3 levers:
1. System prompt = who the agent is
2. Tool descriptions = how it behaves
3. Tool results = real-time context that the agent builds as it works

This is just the idea. The real Claude Code is thousands of engineering hours: sandboxing, permissions, streaming, caching, LSP, IDE integrations, and countless edge cases. Massive respect to the Anthropic team.

But if you want to understand the concept, 68 lines is enough.

Repo: https://lnkd.in/gxBxH2dN

#ClaudeCode #ContextEngineering #AI #AgenticLoop
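To make the loop concrete, here is a stripped-down sketch of the same idea — not the repo's actual code. It assumes the Anthropic Python SDK's messages-and-tools interface; the model name, system prompt, and tool description are placeholders.

# mini_loop.py — sketch of the agent loop described above, not the repo's code.
# Assumes "pip install anthropic" and ANTHROPIC_API_KEY in the environment.
import subprocess
import anthropic

SYSTEM_PROMPT = "You are a coding agent. Use the bash tool to inspect and change files."

TOOLS = [{
    "name": "bash",
    "description": (
        "Run a shell command and return its output. Use it to read files, run tests, "
        "and apply changes. Do not use it for long-running servers."
    ),
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_bash(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return (result.stdout + result.stderr)[-10_000:]  # cap what gets fed back to the model

def agent(task: str) -> str:
    client = anthropic.Anthropic()
    messages = [{"role": "user", "content": task}]
    while True:  # the whole trick: loop until the model stops asking for tools
        response = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model name
            max_tokens=4096,
            system=SYSTEM_PROMPT,
            tools=TOOLS,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            return "".join(b.text for b in response.content if b.type == "text")
        tool_results = []
        for block in response.content:
            if block.type == "tool_use" and block.name == "bash":
                output = run_bash(block.input["command"])
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": output,  # tool output becomes the model's next context
                })
        messages.append({"role": "user", "content": tool_results})

if __name__ == "__main__":
    print(agent("List the Python files in this directory and summarize them."))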
This morning I mentioned adding -vvv to a playbook and finally seeing the Python traceback that "FAILED" was hiding. But verbosity is just one debugging tool. Here's the full Ansible Debugging Toolkit I use when a playbook fails and the output isn't helping.

🔧 The Ansible Debugging Toolkit:

✔️ Verbosity Levels (Know What Each One Shows):
1️⃣ `-v` (task results: shows the return values from each task)
2️⃣ `-vv` (input parameters: shows the arguments passed to each module)
3️⃣ `-vvv` (connection details: shows SSH args, raw module output, file transfer paths)
4️⃣ `-vvvv` (full debug: shows the complete SSH negotiation, plugin loading, and internal Ansible operations)
⚠️ Start with `-vvv`. Only go to `-vvvv` if you suspect a connection or plugin-level issue. Four v's generate enormous output that's hard to read.

✔️ Inline Debugging (Inside the Playbook):
1️⃣ debug module: `- debug: var=my_variable` (print any variable's value mid-playbook)
2️⃣ debug with msg: `- debug: msg="The value is {{ my_variable }}"` (formatted output with context)
3️⃣ register + debug: register the output of any task, then print it on the next line to see exactly what came back
4️⃣ assert module: `- assert: that: my_variable == 'expected'` (fail the playbook with a clear message if a condition isn't met)

✔️ Execution Control (Isolate the Problem):
1️⃣ `--step` (pause before each task and ask y/n to continue. Walk through the playbook one task at a time.)
2️⃣ `--start-at-task="task name"` (skip everything before a specific task. Jump straight to the problem area.)
3️⃣ `--limit=failing_host` (run only against the failing hosts, skip the 200 that work fine)
4️⃣ `--check --diff` (dry run that shows what would change without changing it)

✔️ Environment Debugging:
1️⃣ `ansible --version` (shows Python version, config file path, module location)
2️⃣ `ansible -m setup target_host` (dumps all facts from the target: OS, Python, network info)
3️⃣ `ansible -m ping target_host` (tests connectivity and Python availability, not ICMP)
⚠️ `ansible -m ping` is not a network ping. It tests whether Ansible can connect, authenticate, and execute Python on the target. If ping fails, the problem is SSH, credentials, or Python, not network reachability.

🔍 The Debug Workflow:
1️⃣ Add `-vvv` to see what's actually happening
2️⃣ Use `--limit` to isolate the failing hosts
3️⃣ Add debug tasks to print variables at the point of failure
4️⃣ Use `--step` to walk through the playbook interactively
5️⃣ Run `ansible -m setup` on the failing host to compare its environment to a working one

Good engineers run playbooks and check the output. Great engineers have a systematic debugging workflow before the output even makes sense.

💾 Save this Ansible Debugging Toolkit. The next time a playbook says "FAILED" with no explanation, start at step 1 and work down.

#Ansible #NetworkAutomation #DevOps #DamnitRay #QOTD_FollowUp
Python build fails on CI but works locally. How do you resolve it?

It mainly happens due to differences in environment, missing dependencies, version mismatches, or missing credentials and permissions between your local machine and the CI environment.

Step-by-step troubleshooting:

1. Check the Python version
Your local machine might use 3.13 while the CI runner defaults to 3.10. Check with python --version, then pin the Python version in the CI configuration (e.g., GitHub Actions):

- uses: actions/setup-python@v4
  with:
    python-version: '3.13'

2. Check dependency installation
Dependencies need to be installed from requirements.txt or pyproject.toml. If CI doesn't install them, or installs different versions due to loose pinning, the build may fail. Easy fix:

pip install -r requirements.txt

3. Check for missing files or modules
CI starts from a clean environment. Ensure all required files (.env, config.yaml, local modules, etc.) are checked into git or provided securely.

4. Check for missing secrets
Locally, credentials (e.g., AWS, databases) might be stored in .env, but CI needs them as environment variables or secret-manager injections. Fix in GitHub Actions:

env:
  API_KEY: ${{ secrets.API_KEY }}

5. Run tests with verbose output
In CI, run pytest -v. It surfaces the exact failure, such as ImportError, ModuleNotFoundError, or permission issues.

6. Check for C-extension issues
Some Python packages require system-level dependencies (e.g., psycopg2, lxml). These might be installed on your machine but not available in the CI container. Fix: add them via apt before pip install:

sudo apt-get install -y libpq-dev
pip install psycopg2
I use Claude Code almost every day at work. So when I woke up this morning and saw "Anthropic leaked their own source code" trending — I had to read everything. Here's what actually happened, and why it's one of the wildest 12 hours in developer history:

The leak: Anthropic pushed a Claude Code update at 4AM. A debugging artifact — a source map file — was accidentally bundled in the npm package. That file contained 512,000 lines of TypeScript across 1,900 source files. The kind of thing you'd never ship intentionally.

Security researcher Chaofan Shou spotted it within minutes and posted the download link on X. 22 million people saw the thread. The entire codebase was downloaded, mirrored, and archived across GitHub before Anthropic's team had even woken up.

The countermove: Anthropic pulled the package and fired DMCA takedowns at every repo hosting it.

That's when Sigrid Jin — a Korean developer, and reportedly the most active Claude Code user on the planet (WSJ documented him using 25 billion tokens last year) — woke up at 4AM to his phone blowing up. His girlfriend was worried he'd get sued just for having the code on his machine.

So he did what engineers do under pressure. He rewrote the entire thing in Python from scratch. Before sunrise. Called it claw-code and pushed it to GitHub.👇
https://lnkd.in/gigYFKjx

A Python rewrite is a new creative work. DMCA can't touch it. The repo hit 30,000 stars faster than any repository in GitHub history. Then he started rewriting it in Rust.

The twist that made me pause: Anthropic had built a feature inside Claude Code called Undercover Mode — specifically designed to stop Claude from accidentally leaking internal codenames, unreleased features, and proprietary details in public commits. They built a leak-prevention system. Then they leaked their own source code through an npm config mistake.

What this actually reveals (the developer angle): The source code showed 44 hidden feature flags, 20 unshipped features, and Claude's actual internal system prompts. For anyone building on Claude Code or designing AI agent pipelines — this is architectural gold that Anthropic never meant to publish.

The code is now on decentralized platforms with a single message: "Will never be taken down." Anthropic cannot get it back.

#ClaudeCode #AI #Anthropic #OpenSource #DeveloperTools #SoftwareEngineering
A lot of Python teams use packages with binary parts every day, but rarely think about how those wheels are actually built across platforms and architectures.

There is a very practical example of using cibuildwheel around compiled parts in django-modern-rest with mypyc. What makes this setup interesting is that it turns a painful release problem into a fairly structured pipeline:

- define which wheels need to be built
- reuse the existing build system from pyproject.toml
- enable compiled builds only when needed
- test the built wheel, not just the source tree
- verify that compilation actually improves performance
- generate CI matrix configs automatically instead of maintaining them by hand
- publish releases cleanly through PyPI Trusted Publisher

A nice detail here is that compiled parts can stay optional, so it is still possible to keep native Python-only builds without forcing .so artifacts everywhere.

This is one of those tools that looks niche at first, but becomes very practical once you need repeatable binary builds at scale. For teams shipping Python packages across multiple environments, this kind of setup can remove a lot of manual CI pain.

Based on recent work shared by Nikita Sobolev around django-modern-rest. Has your team ever needed this outside open source?
Your AI code reviews are burning tokens they don't need to.

Every time Claude reviews your code, it re-reads the entire codebase. 200 files. 150,000 tokens. For a change that touched 8 files. That's not smart. That's expensive.

code-review-graph fixes this. It's an open-source tool that builds a persistent, incremental knowledge graph of your codebase using Tree-sitter and SQLite. Instead of dumping your entire repo into Claude's context, it sends only the changed files plus every file impacted by those changes.

The result? 5 to 10x fewer tokens per code review.
Before: 200 files scanned, ~150k tokens used.
After: 8 changed + 12 impacted files, ~25k tokens used.

Here's what makes it practical for real engineering teams:

Works natively with Claude Code via MCP (Model Context Protocol). No extra setup, no new workflow to learn.

Increments intelligently. After the first build (~10s for 500 files), subsequent updates take under 2 seconds. Only re-parses what changed.

Understands blast radius. It traces dependency chains so Claude knows not just what changed, but what else that change could break.

Supports 12+ languages out of the box: Python, TypeScript, JavaScript, Go, Rust, Java, C#, Kotlin, Swift, Ruby, PHP, and C/C++.

Needs no external database. SQLite is all it takes.

The architecture is clean: Tree-sitter parses your code into an AST, a SQLite + NetworkX graph stores the relationships, git diff drives incremental updates, and 8 MCP tools expose everything to Claude Code.

Three review workflows ship with it:
/code-review-graph:build-graph
/code-review-graph:review-delta
/code-review-graph:review-pr

Whether you're a junior engineer just getting into AI-assisted development or a senior architect thinking about LLM cost optimization at scale, this tool addresses a real problem: context window efficiency.

AI code review should be precise. Not brute-force.

Check it out: https://lnkd.in/giHvG8pR

#AIEngineering #ClaudeCode #LLM #TokenOptimization #CodeReview #OpenSource #DeveloperTools #SoftwareEngineering #MCP #GenAI
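The blast-radius part is the easiest piece to picture in isolation. Below is a toy sketch (not code-review-graph's implementation; the file names and graph are invented) of how a dependency graph plus git diff can narrow a review to changed files and their dependents.

# blast_radius.py — toy sketch, not code-review-graph itself.
# Given an import/dependency graph and the files changed in a diff, collect
# everything that transitively depends on the changed files, so a review only
# needs changed + impacted files instead of the whole repo.
import subprocess
import networkx as nx

def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line.endswith(".py")]

def blast_radius(graph: nx.DiGraph, changed: list[str]) -> set[str]:
    # Edge A -> B means "A depends on B"; ancestors of a changed file are its dependents.
    impacted: set[str] = set()
    for path in changed:
        if path in graph:
            impacted |= nx.ancestors(graph, path)
    return impacted - set(changed)

# Tiny hand-built graph standing in for what AST parsing would produce.
g = nx.DiGraph()
g.add_edge("api/routes.py", "core/models.py")    # routes.py imports models.py
g.add_edge("api/routes.py", "core/auth.py")
g.add_edge("services/report.py", "core/models.py")

print(blast_radius(g, ["core/models.py"]))
# -> {'api/routes.py', 'services/report.py'}: review these alongside the change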
Shipped ShipIt-agent v1.0.0

An open-source Python agent runtime for building powerful, production-style agents with a clean API.

What's in it:
- multiple LLM support: AWS Bedrock / OpenAI / Anthropic / Gemini / Groq / Together / Ollama via adapters
- prebuilt tools for web search, open URL, workspace files, code execution, memory, planning, verification, artifacts, AskUser, human review
- MCP support for remote tool discovery and tool execution
- connector-style tools for Gmail, Google Calendar, Google Drive, Slack, Linear, Jira, Notion, Confluence, and custom APIs
- session history, memory stores, trace stores, and structured streaming packets
- notebook test flows for no-tools, multi-tools, MCP, connectors, AskUser, HILT, streaming, and reasoning

Built so you can do things like:
- create an agent with llm, tools, mcps, prompt, history
- stream live runtime events and tool packets
- plug the agent into chat products or internal workflows
- inspect reasoning with visible planning / decomposition / synthesis / decision tools

GitHub: https://lnkd.in/dpUiYqzF

#python #ai #llm #agents #mcp #opensource #bedrock #toolcalling
I started learning Rust recently, using LLMs to help me along the way. At first, it was just curiosity. But something felt different. The code was stricter, clearer, and surprisingly… easier to trust. Strong typing and clean memory management made LLM outputs feel more reliable.

Then I stumbled on something unexpected. A Python package manager called uv (https://lnkd.in/diP_EW-2). It replaces pip, pip-tools, pipx, poetry, pyenv, twine, virtualenv and more. What caught my eye wasn't just that it's written in Rust. It was this:

~83k GitHub stars. Still not even version 1.0 (0.11.4 currently).

pip is around 10k. npm is under 5k. pnpm and yarn are around 40k, brew at 47k. Even Python itself is close - 72k.

So why is this new tool growing so fast? It didn't feel like hype. It felt like a signal.

Maybe developers are getting tired of "good enough." Maybe speed, simplicity, and better design are finally winning. And maybe… tools built with more care are getting noticed faster than ever.

Feels like something is changing. Are we seeing the beginning of a shift in how developers choose tools?

#Rust #Python #OpenSource #DeveloperTools #LLM