We just open sourced OpenTrace. Apache 2.0. The core is free. It will stay free. When we build paid tiers for MCP and team collaboration, the graph explorer you're using today isn't going anywhere. → oss.opentrace.ai

What you're looking at in this screenshot is the OpenTelemetry repo - 2000 nodes, 2524 edges, Go, Python, TypeScript, C#, Java, Rust - fully rendered in a browser in seconds. No install. No account. No infrastructure. And those 14 open PRs on the right? OpenTrace pulls those in too. You can see exactly what each one touches before anyone merges it.

Here's just one little use case that one of our devs shared with me earlier: load two repos that solve the same problem and ask OpenTrace to compare the architectures. Which scales better? Where are the hidden dependencies? What's the better approach for our use case? It answers - grounded in the actual graphs of the two repos, not a guess.

Bring your own Anthropic or OpenAI API key to power the AI queries. Your code stays in your browser. Your queries never touch our infrastructure. That's a deliberate choice - your codebase is yours.

Go load a repo. The one you know best, or one you've always wanted to understand. Then tell us in the comments what you found. oss.opentrace.ai

#OpenSource #Apache2 #DeveloperTools #AIEngineering #CodeIntelligence #BuildInPublic
OpenTrace: Open Source Graph Explorer for Code Intelligence
More Relevant Posts
-
We've spent 20 years helping engineering teams see what's happening inside their systems. In that time, the single most consistent thing we've heard from customers is this: visibility isn't enough. You need context.

That's exactly the problem our sister company OpenTrace just shipped a fix for. OpenTrace OSS is live today. Free. Apache 2.0. No account. Runs in your browser. → oss.opentrace.ai

Here's why we think this matters for every engineering team using AI coding tools right now. Cursor, Claude Code, and Copilot are fast (almost magical), but they're blind to runtime behavior, service dependencies, and blast radius. They generate code without understanding how your system actually behaves in production. If you've ever merged an AI-generated change and watched something unexpected break - this is why.

OpenTrace builds a live knowledge graph of your entire codebase and connects it directly to your AI tools via MCP. Your AI stops guessing. You stop cleaning up after it.

Drop in any repo at oss.opentrace.ai and see what your codebase actually looks like when it's properly understood. We think you'll feel the difference immediately.
-
My entire Updates page was crashing and I couldn't figure out why.

The dashboard shows a visual timeline of every Azure Local platform update - installed, pending, in-progress. The page would load and immediately white-screen. The error: "is not a function" on a .toLowerCase() call.

I was calling .toLowerCase() on what I assumed was a string. It wasn't. It was the number 7.

Here's what happened: PowerShell's ConvertTo-Json serializes .NET enum values as integers, not strings. So the State field on update objects was arriving as 7 instead of "Installed". JavaScript doesn't care about your assumptions - (7).toLowerCase() throws a TypeError and your whole page dies.

The fix: I built enum resolution maps in the Python backend that convert every numeric state value to a human-readable string before it ever reaches the frontend. SOLUTION_UPDATE_STATE, SOLUTION_UPDATE_RUN_STATE - every possible enum gets resolved at the API layer.

Lesson: when you're bridging PowerShell → Python → TypeScript, trust nothing at the boundaries. Every value crossing a language boundary is a potential type bomb. Coercion is not the same as validation. String() everything at the edge.

#Azure #AzureLocal #Python #TypeScript #DevOps
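Here's a minimal sketch of that enum-resolution pattern. The map name and the 7 → "Installed" mapping come from the story above; the helper function and any other values are illustrative:

    # Resolve numeric .NET enum values (as serialized by ConvertTo-Json)
    # into the strings the TypeScript frontend expects.
    SOLUTION_UPDATE_STATE = {
        7: "Installed",  # the value that crashed the page
        # ...remaining enum members resolved the same way
    }

    def resolve_state(raw):
        # Coerce at the boundary: known ints become readable names,
        # everything else gets str(), so .toLowerCase() always sees a string.
        if isinstance(raw, int):
            return SOLUTION_UPDATE_STATE.get(raw, str(raw))
        return str(raw)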
-
🌾 Cut your RAG token costs in half - without losing the answer.

I've been quietly building Winnow, an open-source RAG prompt compression middleware, and I'm finally ready to share it publicly (still deep in testing, but the core is working!).

The idea clicked for me when I came across The Token Company (YC W26). They're tackling the token economy problem at scale, and it got me thinking: why are we sending bloated, noisy context to LLMs when most of it doesn't matter?

So I built Winnow to sit between your vector DB and your LLM. It uses Microsoft's LLMLingua-2 to score tokens against your actual query and drops the irrelevant ones - keeping the signal, cutting the noise.

The results on SQuAD benchmarks so far:
🗜️ ~50% token reduction at the Balanced preset
📉 Only a 2.3 F1-point drop at 50% compression
⚡ ~85ms average latency

What you can do with it:
→ pip install winnow-compress and compress in 3 lines
→ Drop-in OpenAI-compatible proxy, zero code changes
→ Native LangChain WinnowRetriever wrapper
→ Self-host with a single Docker command
→ MIT licensed, no API key needed

My favourite UI so far has been trywinnow.vercel.app: clean, no fluff, just shows you what compression actually does to your context in real time.

Still testing edge cases and hardening the API - would love feedback from anyone working with RAG pipelines. Drop a comment or open an issue on GitHub!

🌐 Live Demo: https://lnkd.in/gGmxEY3g
📦 PyPI: winnow-compress
🤗 HuggingFace Space
⭐ GitHub: itsaryanchauhan/Winnow

#RAG #LLM #OpenSource #AIEngineering #PromptEngineering #Python #BuildInPublic
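For a feel of the underlying technique, this is roughly what query-aware compression looks like if you call LLMLingua-2 directly - a sketch of the mechanism, not Winnow's own API, with a hypothetical query and chunks:

    from llmlingua import PromptCompressor

    # LLMLingua-2 scores each context token against the query
    # and keeps roughly `rate` of them.
    compressor = PromptCompressor(
        model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
        use_llmlingua2=True,
    )

    retrieved_chunks = [
        "First chunk retrieved from the vector DB...",
        "Second, mostly irrelevant chunk...",
    ]

    result = compressor.compress_prompt(
        retrieved_chunks,
        question="What does the document say about termination?",
        rate=0.5,  # keep ~50% of tokens, as in the Balanced preset
    )
    print(result["compressed_prompt"])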
-
Most backend projects stop at “it works.” I wanted to know - how well does it work under pressure? So I built a small performance lab comparing Flask vs FastAPI under real-world load.

What I did:
• Built identical REST APIs in both frameworks (same logic, same endpoints)
• Simulated concurrent users using k6
• Tested both lightweight and CPU-heavy endpoints
• Measured latency, throughput, and scalability behavior

What I found:
• Flask performance degrades sharply as concurrency increases
• FastAPI handles concurrent requests more efficiently (~50% higher throughput at peak load)
• Both frameworks struggle with CPU-bound workloads - async helps with concurrency, not heavy computation

The most interesting part wasn’t which framework is “faster” - it was seeing where systems break and why. This project shifted my focus from just building APIs to understanding how backend systems behave under load ⚡

GitHub Repo: https://lnkd.in/gUhFmp4i

#backend #fastapi #flask #python #systemdesign #performance #softwareengineering #webdevelopment
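To give a feel for the setup, here's the shape of a lightweight endpoint pair under the post's "same logic, same endpoints" premise - a sketch, not the repo's exact code:

    # Flask: sync handlers, one request per worker at a time
    from flask import Flask, jsonify

    flask_app = Flask(__name__)

    @flask_app.get("/items")
    def list_items():
        return jsonify({"items": list(range(100))})

    # FastAPI: async handlers multiplexed on an event loop (served by uvicorn)
    from fastapi import FastAPI

    fastapi_app = FastAPI()

    @fastapi_app.get("/items")
    async def list_items_async():
        return {"items": list(range(100))}

Under k6, the async app keeps accepting connections while requests await I/O, while the sync app ties up a worker per in-flight request - one likely source of the degradation under concurrency described above.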
-
How We Slashed ~40ms Latency with One Line of Django Config

We recently noticed something odd - our API response times were spiky. Even for simple GET requests, there was a consistent 30–50ms overhead that didn't add up. After digging into DB logs, we found: connection handshake exhaustion.

By default, Django sets CONN_MAX_AGE = 0, which means for every request, your app:
1. Opens a new TCP connection
2. Performs the SSL/TLS handshake
3. Authenticates with the database
4. Closes the connection immediately

We were spending more time opening the door than actually walking through it.

✅ The Fix: Persistent Connections

    DATABASES = {
        'default': {
            'CONN_MAX_AGE': 300,         # Reuse connections for 5 minutes
            'CONN_HEALTH_CHECKS': True,  # Ensures stale connections are handled
        }
    }

The Impact
📈 ~35ms drop in average latency
📉 ~15% reduction in DB CPU usage
⚡ Eliminated handshake timeout errors during traffic spikes

Pros:
✔️ Skips expensive handshakes
✔️ Reduces DB overhead
✔️ Improves consistency under load

Watchouts:
⚠️ Each worker holds a connection → can hit DB connection limits
⚠️ Needs proper pooling/limits in high-scale systems

#Django #BackendEngineering #PerformanceOptimization #WebDevelopment #DevOps #SoftwareEngineering #Python
-
You have no idea how unhealthy your codebase is. Iris makes it obvious.

Install it in VS Code and get an instant health score on every file you open. Complexity, code smells, unused imports, long functions, debug prints… everything surfaced automatically. Every finding is clickable, so you can jump straight to the line and fix it.

Runs fully locally. No AI. No cloud. Your code never leaves your machine. Works with JavaScript, TypeScript, Go, and Python.

Free to install. Upgrade to Pro for workspace-wide scans, a Problems tab, TODO aggregation, and more.

→ iriscode.co
Search “Iris — Code Health” on the VS Code Marketplace.
-
Stumbled across this open-source Claude Code configuration and honestly it's one of the most complete AI dev setups I've seen publicly shared. 91k+ stars. Called everything-claude-code.

Here's what's packed inside.

The scale of it:
→ 28 language-specific agents (TypeScript, Python, Go, Rust, Kotlin, Java, database and more)
→ 116 skills covering everything from TDD workflows to security reviews to frontend and backend patterns
→ 59 commands including /plan, /code-review, /e2e, /security-scan, /evolve
→ 34 rules across 6 languages

What makes it actually useful:
→ 15+ hooks that auto-load context at session start and save state at session end
→ SQLite state store tracking session history and skill confidence scores
→ 14 MCP configurations for GitHub, Supabase, Vercel, Railway and more
→ AgentShield security scanner running 1282 tests across 102 rules

What makes it practical:
→ Selective install so you only pull what you need
→ Cross-platform across Claude Code, Codex, Cursor and OpenCode

Built by an Anthropic hackathon winner. That detail matters.

GitHub link: https://lnkd.in/dwk28CSF

#AI #Dev #ClaudeCode #OpenSource #DeveloperTools
-
OpenTelemetry was overkill. A JSON logger was enough.

Everyone reaches for OpenTelemetry. We almost did too. We were working on a system with several integrations. Logs were unstructured and our log provider couldn't query them properly. Someone suggested OpenTelemetry. It made sense on paper: industry standard, widely adopted, serious tooling.

But when I looked at what we actually needed, it didn't fit. We weren't dealing with dozens of services talking to each other. We just needed structured output. Pulling in a full observability SDK for that felt like overkill.

We went with python-json-logger instead. Same logging module underneath, same config style, same stdout. The output just became structured JSON. For request tracing we added asgi-correlation-id: one line in the logging config, and every log entry carries a trace_id you can follow through the whole request. When performance came up later, we swapped the default JSON encoder for msgspec. Still no OpenTelemetry.

The lesson I took from this: match your observability tooling to your actual system complexity. Ecosystem hype will push you toward solutions your architecture doesn't need yet.

If you're figuring out your Python logging stack, happy to share what worked. Drop a comment or connect.

#BackendEngineering #Python #Observability #SoftwareEngineering #OpenTelemetry
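For anyone who wants the shape of it, here's a minimal sketch of that stack - the format string and uuid_length are illustrative, and note the package emits the field as correlation_id rather than trace_id:

    import logging

    from asgi_correlation_id import CorrelationIdFilter  # pip install asgi-correlation-id
    from pythonjsonlogger import jsonlogger              # pip install python-json-logger

    handler = logging.StreamHandler()  # same stdout as before
    handler.addFilter(CorrelationIdFilter(uuid_length=8))
    handler.setFormatter(jsonlogger.JsonFormatter(
        "%(asctime)s %(levelname)s %(name)s %(correlation_id)s %(message)s"
    ))
    logging.basicConfig(level=logging.INFO, handlers=[handler])

    # On the ASGI app side, the middleware sets the id per request:
    # app.add_middleware(CorrelationIdMiddleware)

    logging.getLogger(__name__).info("structured, queryable, traceable")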
-
Your Lambda is fast. The first request after idle isn't.

Cold starts aren't a bug - they're the cost of the execution model. For APIs under SLA, that first request is a silent breach.

🔹 Cold start = container init + runtime boot + module imports - all before your handler runs
🔹 On a Python function with heavy deps, that's easily 800ms–2s
🔹 Provisioned Concurrency keeps environments pre-initialized - zero init latency, predictable p99
🔹 Cost: you pay per concurrency-hour - wire it to Application Auto Scaling to avoid paying 24/7

Pin to a published alias - not $LATEST. Provisioned Concurrency silently does nothing on $LATEST.

#AWS #ServerlessArchitecture #Python #CloudEngineering #SoftwareArchitecture
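A minimal boto3 sketch of that setup - the function name, alias, and concurrency count are all hypothetical:

    import boto3

    lam = boto3.client("lambda")

    # Provisioned Concurrency only applies to a published version or alias,
    # not $LATEST - so publish a version and point an alias at it first.
    version = lam.publish_version(FunctionName="my-api-fn")["Version"]
    lam.create_alias(FunctionName="my-api-fn", Name="prod", FunctionVersion=version)

    lam.put_provisioned_concurrency_config(
        FunctionName="my-api-fn",
        Qualifier="prod",                   # the alias, never $LATEST
        ProvisionedConcurrentExecutions=5,  # pre-warmed execution environments
    )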
-
512,000 lines of TypeScript. One missing .npmignore. The fastest-growing GitHub repo in history.

The Claude Code source code leaked on March 31st - and the reaction told us something more interesting than anything inside the code itself. Within hours, developers weren't just reading it. They were cloning it, forking it, rebranding it. A repo called "claw-code" hit 100K GitHub stars in a single day.

Here's the take nobody's saying: the leak accidentally ran the most expensive open-source experiment in AI history - and the result was: the code isn't the moat.

Thousands of engineers tore through 512K lines and found genuinely brilliant engineering. A three-layer memory architecture. A plugin-based tool system. An unreleased "KAIROS" daemon mode for always-on background agents. Fascinating stuff.

But here's what they didn't find: the models. The training data. The RLHF. The alignment work. The thing that actually makes Claude useful isn't in any npm package. It never was.

The companies racing to clone Claude Code from a leaked skeleton are learning the hard way what Anthropic already knew: the CLI is the last mile. The model is the product.

Meanwhile, the real story - that a missing debug config file exposed 59MB of internal source in minutes - is a reminder that even the most sophisticated AI systems are still built by humans who forget to update .npmignore.

Which part of this surprises you more: what they found, or how easy it was to leak?

#AI #ClaudeCode #OpenSource #AIEngineering
-
The source code is open on GitHub: https://github.com/opentrace/opentrace