OpenTelemetry was overkill. A JSON logger was enough.

Everyone reaches for OpenTelemetry. We almost did too. We were working on a system with several integrations, the logs were unstructured, and our log provider couldn't query them properly. Someone suggested OpenTelemetry. On paper it made sense: industry standard, widely adopted, serious tooling.

But when I looked at what we actually needed, it didn't fit. We weren't dealing with dozens of services talking to each other. We just needed structured output, and pulling in a full observability SDK for that felt like overkill.

We went with python-json-logger instead. Same logging module underneath, same config style, same stdout. The output just became structured JSON. For request tracing we added asgi-correlation-id: one line in the logging config, and every log entry carries a trace_id you can follow through the whole request. When performance came up later, we swapped the default JSON encoder for msgspec. Still no OpenTelemetry.

The lesson I took from this: match your observability tooling to your actual system complexity. Ecosystem hype will push you toward solutions your architecture doesn't need yet.

If you're figuring out your Python logging stack, happy to share what worked. Drop a comment or connect.

#BackendEngineering #Python #Observability #SoftwareEngineering #OpenTelemetry
OpenTelemetry Overkill: Structured Logging with Python
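The stack described in the post (python-json-logger plus asgi-correlation-id) can be approximated with the stdlib alone. A minimal sketch of the same idea, where the contextvar stands in for what asgi-correlation-id manages per request:

```python
import contextvars
import json
import logging

# Hypothetical stand-in for what asgi-correlation-id provides: a context
# variable carrying the current request's correlation/trace id.
trace_id_var = contextvars.ContextVar("trace_id", default="-")

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line (stdlib only)."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "trace_id": trace_id_var.get(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()  # same stdout as plain logging
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

trace_id_var.set("req-123")   # middleware would set this per request
logger.info("user logged in")  # emitted as structured JSON
```

python-json-logger does the formatter part for you via `dictConfig`, and msgspec can replace `json.dumps` if encoding ever shows up in a profile.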
More Relevant Posts
-
🎬 MovieMate - AI-Powered Tracker (React/FastAPI - Python)

Live: https://lnkd.in/gdqynPRK
GitHub: https://lnkd.in/gniV-Xqq

Built a complete full-stack application to track movies and TV shows, with AI-powered features and production-grade infrastructure. Moving beyond Spring Boot, I explored a Python FastAPI backend to build this end-to-end application.

🚀 What I built:
→ REST API with FastAPI, SQLAlchemy ORM, and Pydantic schema validation
→ TMDB API integration
→ Season-wise episode progress tracking stored as JSON in PostgreSQL
→ AI review generator: rough notes → full review using Groq (Llama 3.3 70B)
→ AI recommendations based on watch history and ratings
→ Watch-time stats with Recharts bar charts by genre and platform

🏗 Infrastructure:
→ Multi-stage Docker build (builder + runtime) to minimize image size
→ Docker Compose with PostgreSQL healthchecks and service dependency ordering
→ Backend healthcheck via curl on the /docs endpoint
→ Deployed on Railway (backend + PostgreSQL) and Vercel (frontend)

#FastAPI #React #PostgreSQL #Docker #GroqAI #TMDB #FullStack #Python #TailwindCSS #Vercel #Railway #WebDevelopment #Backend
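A multi-stage build of the kind described might look like this. A sketch only: the file and module names (requirements.txt, main:app) are assumptions, not taken from the project:

```dockerfile
# --- builder stage: install dependencies into a virtualenv ---
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && \
    /venv/bin/pip install --no-cache-dir -r requirements.txt

# --- runtime stage: copy only the venv and app code, no build tools ---
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /venv /venv
COPY . .
ENV PATH="/venv/bin:$PATH"
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

The size win comes from the runtime stage never seeing pip caches or compiler toolchains, only the finished virtualenv.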
-
I spent too much time reconciling logs and traces until I understood how OpenTelemetry logging actually works.

🔑 The key insight: OTel doesn't try to be your logging library. It's a bridge. Your existing logger (Log4j, Python logging, winston) keeps working exactly as it does today. But behind the scenes, an appender automatically enriches every log record with trace context: the TraceId and SpanId from the active span.

✨ That's it. That's the whole idea. And it changes everything.

⚡ Suddenly, debugging is faster. You see logs in the context of their span. You see which logs caused a trace anomaly. Your backend (Jaeger, Tempo, Elastic, whatever) can now correlate logs to traces without you writing SQL joins or doing manual detective work.

📖 Just published a 16-minute technical guide walking through log formats, the unified LogRecord schema, the Logs API and SDK, processors, and exporters. Available on LearnObservability (link in comments).

#OpenTelemetry #Observability #DevOps #DistributedTracing #SRE #Logging
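What the appender does can be mimicked with a stdlib `logging.Filter`. A sketch of the idea only, not OTel's actual implementation: the real SDK reads trace context from the active span, whereas here a module-level dict stands in for it:

```python
import logging

# Hypothetical stand-in for the active span's context; the OTel appender
# pulls these values from its own context propagation, not a global.
ACTIVE_SPAN = {"trace_id": "4bf92f3577b34da6", "span_id": "00f067aa0ba902b7"}

class TraceContextFilter(logging.Filter):
    """Mimic the OTel appender: stamp every record with trace context."""
    def filter(self, record):
        record.trace_id = ACTIVE_SPAN["trace_id"]
        record.span_id = ACTIVE_SPAN["span_id"]
        return True  # never drops records, only annotates them

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(levelname)s %(message)s trace_id=%(trace_id)s span_id=%(span_id)s"))
handler.addFilter(TraceContextFilter())

logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("payment authorized")  # carries trace context automatically
```

The application code calls `logger.info(...)` exactly as before; the enrichment is invisible to it, which is the whole point of the bridge design.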
-
Half my context window was gone before I typed a single prompt.

Claude Code indexed my entire monorepo at session start: Python files, Airflow DAGs, three months of task logs. Then it generated a migration that referenced a table that doesn't exist.

I spent weeks rebuilding my project setup from scratch. Token usage dropped over 60%, but the real win was rework time going down significantly. Here's what actually moved the needle:

- permissions.deny in settings.json: the official way to block files Claude shouldn't read. Read(./.env), Read(./airflow/logs/), Read(./.venv/). The airflow/logs line alone cut 15%.
- .claudeignore: an unofficial shortcut that works like .gitignore. Not in the docs yet, but a lot of people use it. Same result, cleaner syntax.
- CLAUDE.md hierarchy: root file under 200 lines. Subdirectory files load only when needed. Past 200 lines, Claude starts treating your instructions as optional.
- MCP servers (BigQuery + Airflow): live database access without pre-loading schemas into context. Deferred by default, costing almost nothing until Claude actually queries one.
- Skills & agents: on-demand workflows at ~100 tokens each instead of 3,000-5,000 tokens baked into CLAUDE.md every session.
- /compact and /context: the two commands I run multiple times a day to manage what's eating my context window.

30 minutes of setup. Every session after that starts lean.

Full walkthrough with real configs from a data pipeline project: https://lnkd.in/gaNuSUta

What does your Claude Code project setup look like? Are you using permissions.deny or .claudeignore, or just letting it index everything?

#AICoding #SoftwareEngineering #DataEngineering #ClaudeCode #DeveloperTools #AIEngineering #SystemDesign
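For reference, deny rules like the ones described sit in a settings.json shaped roughly like this. The patterns are the ones the post lists; the exact file location (typically .claude/settings.json in the project root) and glob semantics may vary by version:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./airflow/logs/)",
      "Read(./.venv/)"
    ]
  }
}
```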
-
📣 SynapseKit just hit 1.0.0

A few weeks ago this was an idea. Today it's a production-grade Python framework that ships with everything you need to build real LLM applications, without the complexity that usually comes with it.

Here's what 1.0 looks like:
⚡ Async-native from day one: not retrofitted, not a wrapper. Every API is async/await first.
🌊 Streaming-first: token-level streaming across all 15 providers, identically.
🪶 2 hard dependencies: NumPy and rank-bm25. Everything else is opt-in.

What's inside:
🔌 15 LLM providers behind one interface: swap models without rewriting a line
🔍 18 retrieval strategies: from basic vector search to Self-RAG, Adaptive RAG, HyDE, FLARE
🤖 3 multi-agent patterns: Supervisor, Handoff Chain, Crew
🛠️ 32 built-in tools: search, code, files, databases, APIs, arXiv, PubMed, GitHub, and more
🔗 MCP client and server: native Model Context Protocol support
📊 Built-in RAG evaluation: Faithfulness, Relevancy, Groundedness metrics out of the box
🔍 Full observability: OpenTelemetry tracing, TracingUI dashboard, auto-trace for every LLM call
🛡️ Production guardrails: PII detection, content filters, topic restrictors
🤝 A2A protocol: agents that discover and talk to each other across services
🖼️ Multimodal: images and audio, with automatic format conversion across providers

1,011 tests. 2 dependencies. Apache 2.0 license. Built in the open. No VC. No team. No marketing budget. Just engineers who thought the Python LLM ecosystem deserved something better.

Thank you to every contributor, every person who opened an issue, every engineer who cloned it at 11pm to try something. This is yours too.

This is 1.0.0: the stable foundation. Everything from here gets built on top of it.

⚡ pip install synapsekit==1.0.0

#Python #AI #LLM #RAG #OpenSource #MachineLearning #Agents #MCP #BuildInPublic #SynapseKit
-
Anthropic Leaks Its Own Source Code

Anthropic ships Claude Code as an npm package. Someone runs `ls` on the source map. The entire codebase is just sitting there. Unobfuscated. Plugins, skills, tools, hooks, commands, everything. The internal architecture of the most hyped AI coding agent, fully readable.

Anthropic says nothing. Meanwhile, they're selling Enterprise contracts.

The source map was in the registry the whole time. Nobody checked. Security through obscurity lasted about 3 months.

Full code is here: https://lnkd.in/efajfgQ4
-
Let's talk about something fun and interesting I did quite a while ago. I optimized a keyword-driven query system, focusing on improving throughput and stability under constraints.

The core problem: maximize queries/hour while avoiding conflicts, throttling, and system instability.

Key optimizations:
• Parallel processing with controlled concurrency
• Keyword-based query pipeline for structured input distribution
• User-agent rotation to distribute request patterns
• Retry + backoff mechanisms for handling transient failures
• Idempotent execution to avoid duplicate processing

One tweak that made a noticeable difference: I introduced a keyword expansion strategy, combining each keyword with incremental alphabet variations (e.g., keyword + a, keyword + b, ...). This helped:
• Increase result coverage without changing the core keyword set
• Avoid repetitive query patterns
• Improve overall discovery efficiency per keyword

After multiple iterations, the system stabilized at ~70 leads/hour, up from ~15-20 leads/hour, with consistent performance.

This was one of the most interesting things I've worked on. It may not be flashy, but it's striking that such a small change can have such a great impact!

Curious to know your thoughts!

#Optimizations #Python #Software #SaaS
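The alphabet-expansion tweak can be sketched in a few lines. The seed keyword below is a hypothetical example, not from the original system:

```python
import itertools
import string

def expand_keywords(keywords, depth=1):
    """Expand each keyword with incremental alphabet suffixes
    ("python" -> "python a", "python b", ...) to widen result
    coverage without changing the core keyword set."""
    expanded = []
    for kw in keywords:
        for suffix in itertools.product(string.ascii_lowercase, repeat=depth):
            expanded.append(f"{kw} {''.join(suffix)}")
    return expanded

# One seed keyword becomes 26 distinct, non-repetitive query variants.
queries = expand_keywords(["data engineer"])
```

Each variant hits a different slice of the result space, which is why coverage per keyword goes up even though the core keyword set never changes.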
-
📣 SynapseKit v0.6.9 is live. Two graph features in this release that I think matter more than they look.

approval_node(): gates your graph on a human decision. The workflow hits a node, pauses, waits for a human to approve or reject, then continues. No polling, no hacks. One function call.

dynamic_route_node(): routes to completely different subgraphs at runtime based on whatever logic you write. Sync or async. Your graph decides where it goes next while it's running.

Together, these two make human-in-the-loop workflows actually practical to build. Not a demo. Production.

Also shipped:
💬 SlackTool: send messages via webhook or bot token
📋 JiraTool: search, create, and comment on issues via REST
🔍 BraveSearchTool: web search via the Brave API

All three are stdlib only. Zero new dependencies.

Where we stand: 32 tools · 15 providers · 18 retrieval strategies · 795 tests · 2 dependencies.

⚡ pip install synapsekit
🔗 https://lnkd.in/d2fGSPkX

#Python #LLM #RAG #OpenSource #AI #MachineLearning #Agents #SynapseKit
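The general pattern behind an approval gate can be sketched generically. This is not SynapseKit's API or implementation, just the human-in-the-loop idea in plain stdlib Python: block at a node until a decision arrives, then continue or stop:

```python
import queue

def approval_gate(payload, decision_queue):
    """Generic human-in-the-loop gate (illustrative, not SynapseKit):
    block until a human decision arrives, then continue or stop."""
    decision = decision_queue.get()  # blocks until "approve"/"reject" arrives
    if decision == "approve":
        return {"status": "continued", "payload": payload}
    return {"status": "stopped", "payload": payload}

decisions = queue.Queue()
decisions.put("approve")  # a reviewer clicks "approve" somewhere upstream
result = approval_gate({"draft": "email"}, decisions)
```

In a real graph framework the "queue" would be whatever channel delivers the human's decision back into the paused workflow; the control flow is the same.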
-
Most RAG systems answer "what does this code do?" TraceRoot answers "why does this code exist?", and that needs a completely different approach.

My friend Manikumar Garimella and I built TraceRoot, a vectorless-RAG codebase archaeologist that lets you ask natural-language questions about your codebase's decision history. Manikumar handled the reasoning and retrieval layer and wired up the Streamlit interface. Great experience building this together.

The problem: you're staring at a function that bypasses rate limiting. You can read what it does. You cannot read why it was written that way, who objected in the PR review, or what customer incident forced the decision.

Existing RAG tools fail here. They embed code into vectors and retrieve by similarity, and similarity is the wrong signal for causality. "This code looks similar to that code" is not the same as "this commit was caused by that issue."

So we built something different. TraceRoot ingests your GitHub history and builds a typed provenance graph. Every edge has a type: modified, authored-by, motivated-by, reviewed-by, fixed-by. When you ask a question, the system walks that graph backwards through time, following causal chains rather than similarity scores.

No embedding model. No vector database. BM25 + graph traversal + Groq Llama 3.3 for reasoning. The retrieval is entirely vectorless. Vector RAG cannot tell you that PR #88 was opened specifically to fix issue #421; it can only tell you they talk about similar things. Graph traversal follows the actual edge.

The repo is open source. The Groq API is free. Setup takes 10 minutes.

GitHub: https://lnkd.in/gK-nD3Cx

If you have tried something similar or have thoughts on the approach, I would like to hear them.

#RAG #LLM #Python #OpenSource #Groq #Streamlit #MachineLearning #SoftwareEngineering
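The difference between similarity retrieval and causal traversal is easy to see in miniature. A sketch of a typed provenance graph with illustrative data and edge names (not TraceRoot's actual schema), walked by edge type rather than by similarity:

```python
# Tiny typed provenance graph: edges are (source, edge_type, target).
# The PR/issue numbers echo the post's example but the data is made up.
EDGES = [
    ("commit:a1b2", "modified", "file:limiter.py"),
    ("pr:88", "contains", "commit:a1b2"),
    ("pr:88", "fixed-by", "issue:421"),
    ("issue:421", "motivated-by", "incident:outage-03"),
]

CAUSAL = {"contains", "fixed-by", "motivated-by"}  # edges worth following

def why(node, edges=EDGES, causal=CAUSAL):
    """Walk causal edges from a node, collecting the chain of reasons."""
    chain = []
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for src, etype, dst in edges:
            if src == current and etype in causal:
                chain.append((etype, dst))
                frontier.append(dst)
    return chain

history = why("pr:88")  # follows the actual edges, not lookalike text
```

A vector store would never produce the `motivated-by` hop: the incident report and the PR may share almost no vocabulary, but the edge connects them anyway.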
-
AI Second Brain is now Dockerised. 🐳 This was the last piece missing.

Before: clone repo → install Python → install PostgreSQL → install pgvector → configure everything → hope it works on your machine

After: git clone → docker compose up → FastAPI + PostgreSQL + pgvector running in 2 minutes. Anywhere.

Two containers. One command. Zero setup. This is what production-ready actually means: not just deployed, but reproducible.

Project is now complete.
✅ RAG pipeline from scratch
✅ Hybrid search (vector + keyword + RRF)
✅ RAGAs evaluation: faithfulness 1.0
✅ LangSmith observability: 0% error rate
✅ Token tracking: <$0.001 per query
✅ Deployed on Render
✅ Dockerised

On to the next one. 🚀

Site: https://lnkd.in/dPQQsX-x
GitHub: https://lnkd.in/dza9AeSZ

#Docker #RAG #BuildInPublic #AIEngineering #FastAPI #GenerativeAI #Python
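A compose file for this kind of two-container setup might look roughly like this. Service names, the image tag, and the credentials are assumptions, not the project's actual config:

```yaml
services:
  db:
    image: pgvector/pgvector:pg16   # PostgreSQL with pgvector preinstalled
    environment:
      POSTGRES_PASSWORD: example    # placeholder; use a secret in practice
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
  api:
    build: .                        # the FastAPI image from the Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy  # wait for Postgres before starting
```

The `service_healthy` condition is what makes `docker compose up` reproducible: the API container never races a database that isn't ready yet.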
-
🎯 Precision Engineering: Beyond Basic Queries

A great API doesn't just give you data; it gives you the right data, or a clear reason why it can't. 🛡️

Today I expanded my TodoApp by implementing path parameters. Moving beyond fetching all records, I've added logic to retrieve specific tasks by their ID.

Key technical highlights from this update:
✅ Input validation: used FastAPI's Path to ensure only valid IDs (greater than 0) are processed.
✅ Robust error handling: integrated HTTPException to return a clean 404 Not Found status if a user requests an ID that doesn't exist.
✅ Clean code: refactored using Annotated dependencies to keep the route handlers lean and readable.

Building a backend isn't just about the happy path; it's about handling every edge case with precision.

Next: implementing POST requests to allow users to create their own tasks! 🚀

#FastAPI #Python #BackendDevelopment #WebAPI #CleanCode #SoftwareEngineering
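The validation-then-404 pattern described boils down to a few lines of logic. A stdlib-only sketch, not FastAPI itself: the `HTTPError` class and sample data are hypothetical stand-ins for `HTTPException` and the real store:

```python
# In-memory stand-in for the app's task store (hypothetical data).
TODOS = {1: {"id": 1, "task": "write docs"}}

class HTTPError(Exception):
    """Minimal analogue of FastAPI's HTTPException."""
    def __init__(self, status_code, detail):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def read_todo(todo_id: int):
    if todo_id <= 0:              # mirrors Path(gt=0): reject invalid IDs
        raise HTTPError(422, "ID must be greater than 0")
    if todo_id not in TODOS:      # mirrors HTTPException(status_code=404)
        raise HTTPError(404, "Todo not found")
    return TODOS[todo_id]         # the happy path
```

In FastAPI the first check is declared, not written: `todo_id: Annotated[int, Path(gt=0)]` makes the framework return the 422 before the handler ever runs.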
-