📣 SynapseKit v0.6.9 is live. Two graph features in this release that I think matter more than they look.

approval_node(): gates your graph on a human decision. The workflow hits a node, pauses, waits for a human to approve or reject, then continues. No polling, no hacks. One function call.

dynamic_route_node(): routes to completely different subgraphs at runtime based on whatever logic you write. Sync or async. Your graph decides where it goes next while it's running.

Together these two make human-in-the-loop workflows actually practical to build. Not a demo. Production.

Also shipped:
💬 SlackTool — send messages via webhook or bot token
📋 JiraTool — search, create, and comment on issues via REST
🔍 BraveSearchTool — web search via the Brave API

All three are stdlib only. Zero new dependencies.

Where we stand: 32 tools · 15 providers · 18 retrieval strategies · 795 tests · 2 dependencies.

⚡ pip install synapsekit
🔗 https://lnkd.in/d2fGSPkX

#Python #LLM #RAG #OpenSource #AI #MachineLearning #Agents #SynapseKit
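The post doesn't show the node API itself, so here is a plain-asyncio sketch of the pattern an approval gate implements: a workflow branch that suspends on a future which a human later resolves. All names here are illustrative, not SynapseKit's actual API.

```python
import asyncio

async def approval_gate(payload, decision):
    """Pause this branch until a human resolves the decision future."""
    approved = await decision          # no polling: the event loop just parks here
    if not approved:
        raise RuntimeError("rejected by reviewer")
    return payload

async def main():
    decision = asyncio.get_running_loop().create_future()
    task = asyncio.create_task(approval_gate({"doc": "draft"}, decision))
    # Later, a human (e.g. via a webhook handler or CLI prompt) decides:
    decision.set_result(True)
    return await task

result = asyncio.run(main())
print(result)  # {'doc': 'draft'}
```

The key property is that nothing spins while waiting: the branch is simply not scheduled until the decision arrives.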
SynapseKit v0.6.9: Human-in-the-Loop Workflows & New Tools
AI can write code — but it can't design your system.

Recently, I worked on a small project — a housing price prediction platform — to explore how different parts of a system come together. The setup was simple:
💠 A Python service running an ML model
💠 A Java backend handling APIs and market data
💠 A Next.js frontend combining everything into one portal

While building this, I also experimented with using AI tools to generate parts of the code. But one thing became very clear:
👉 AI helps you build faster
👉 But it doesn't replace thinking

The real work was in:
💠 Deciding how services should communicate
💠 Separating responsibilities between layers
💠 Designing clean data flow across the frontend and APIs

For example, instead of calling the ML service directly from the frontend, I routed everything through the backend layer. That small decision made the system cleaner and easier to manage.

So even though AI helped with implementation, the important part was still understanding the system and making the right design decisions.

I wrote a detailed breakdown of this project and what I learned here:
🔗 👉 https://lnkd.in/giAsVGt2
GitHub repo: https://lnkd.in/gMYbdCba

Curious how others are using AI in development — especially when balancing code generation vs. system design.

#SoftwareArchitecture #AI #WebDevelopment #FullStack #DeveloperExperience
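The routing decision described above can be sketched in a few lines: the frontend only ever talks to the backend, which validates input before forwarding to the ML service. The names (predict_price, FakeMLClient) and the validation rules are illustrative, not taken from the repo.

```python
# Hypothetical backend handler: validate the request, then delegate to the
# ML service through a single, controlled hop instead of a direct frontend call.
REQUIRED_FIELDS = {"sqft", "bedrooms", "zipcode"}

def predict_price(payload, ml_client):
    """Validate the incoming request, then ask the ML service for a prediction."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return {"error": f"missing fields: {sorted(missing)}"}
    price = ml_client.predict(payload)
    return {"price": price, "source": "ml-service"}

class FakeMLClient:
    """Stand-in for the HTTP call the backend makes to the Python ML service."""
    def predict(self, features):
        return 100_000 + features["sqft"] * 150.0

resp = predict_price({"sqft": 1200, "bedrooms": 3, "zipcode": "94107"}, FakeMLClient())
print(resp)  # {'price': 280000.0, 'source': 'ml-service'}
```

Because validation and the model call live in one place, the frontend stays thin and the ML service can change without touching the portal.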
📣 SynapseKit v1.4.7 + v1.4.8 just dropped. Back to back. Huge thanks to Dhruv Garg and Abhay Krishna, who drove most of this sprint. 🙌

Two themes in these releases: getting data in, and making workflows resilient.

Getting data in: 5 new loaders
The gap between "I have a RAG pipeline" and "I can actually feed it my company's data" is a loader problem. These close it:
📨 SlackLoader — pull channel messages directly into your pipeline
📝 NotionLoader — ingest pages and databases from Notion
📖 WikipediaLoader — single article or multiple, pipe-separated
📄 ArXivLoader — search arXiv, download PDFs, extract text automatically
📧 EmailLoader — any IMAP mailbox, stdlib only, zero extra dependencies

SynapseKit now has 24 loaders. Your data is probably already covered.

Better retrieval — ColBERT
ColBERTRetriever brings late-interaction ColBERT via RAGatouille. Instead of comparing a single query vector against a single document vector, ColBERT scores every query token against every document token (MaxSim). On long documents the recall improvement is significant: single-vector approaches lose detail in the compression. Token-level scoring doesn't.

Resilient graph workflows
Subgraph error handling now ships with three strategies — retry with backoff, fallback to an alternative graph, skip and continue. Production workflows break. The question is whether they break gracefully.

Where SynapseKit stands today: 27 providers · 9 vector backends · 42 tools · 24 loaders · 2 hard dependencies

⚡ pip install synapsekit==1.4.8
📖 https://lnkd.in/dvr6Nyhx
🔗 https://lnkd.in/d2fGSPkX

#Python #LLM #RAG #AI #OpenSource #MachineLearning #Agents #SynapseKit
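Of the three error-handling strategies, retry with backoff is the easiest to picture. A minimal stand-alone sketch (not SynapseKit's implementation) of what "retry a failing subgraph step with exponential backoff" means:

```python
import time

def run_with_retry(step, attempts=3, base_delay=0.01):
    """Run a step, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky_step():
    """Simulated subgraph step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = run_with_retry(flaky_step)
print(result, calls["n"])  # ok 3
```

Fallback and skip-and-continue are variations on the same except-block: route to an alternative graph, or swallow the error and move on.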
I built a tool that lets you ask questions about your codebase in plain English. 🧠

Like literally just type — "where is the FAISS vector store initialized?" — and it finds the exact file, function, and code for you. No more Ctrl+F. No more digging through 20 files manually.

It's called CodeMind. Getting started is super simple too — just paste your GitHub repo link and it'll clone it automatically, or upload a ZIP file if you prefer. That's it, you're ready to start asking questions.

Here's how it works under the hood:
→ Loads your entire codebase
→ Breaks it into chunks and converts them into embeddings
→ Stores everything in a FAISS vector store
→ When you ask something, it pulls the most relevant code and sends it to Groq LLM for a proper answer

Built with Python · LangChain · FAISS · Groq · Streamlit

🔗 Try it: https://lnkd.in/gYV8UfC8
🐙 GitHub: https://lnkd.in/gk3F5kZf

Still a lot to improve, but happy with how v1 turned out. Would love honest feedback from anyone into AI or dev tooling! 🙌

#RAG #LangChain #GenerativeAI #Python #OpenSource #BuildInPublic #AIEngineering
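The retrieval step above can be sketched without any heavy dependencies: a toy bag-of-words "embedding" and brute-force cosine similarity stand in for SentenceTransformer vectors and FAISS, but the chunk → embed → nearest-match flow is the same.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Chunk the codebase and embed each chunk (FAISS would index these vectors).
chunks = [
    "def init_vector_store(): store = faiss_from_documents(docs, embeddings)",
    "def render_sidebar(): st.sidebar.title('CodeMind')",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Embed the question, retrieve the closest chunk, and hand it to the LLM.
query = embed("where is the faiss vector store initialized")
best_chunk = max(index, key=lambda pair: cosine(query, pair[1]))[0]
print(best_chunk)
```

In the real tool the final step sends `best_chunk` (plus its file path) to the Groq-hosted LLM as context for the answer.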
Two claps → my entire workflow is ready 👏👏

I built a small automation using Python and Claude that listens for two consecutive snaps/claps and instantly sets up my working environment. Once triggered, it automatically:
• Opens Claude
• Launches Chrome with my main tabs (Outlook, Claude usage tracking, and Lovable for web development)

The idea was simple: reduce the friction of getting started and make my workflow faster and smoother. Instead of manually opening everything every time, it's now done in seconds with a single trigger.

Projects like this are helping me explore how AI and automation can be integrated into everyday tasks to improve efficiency and productivity. Looking forward to building more systems like this. 🚀

#AI #Python #Automation #Productivity #DeepLearning
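A minimal sketch of the trigger logic, assuming a stream of normalized microphone amplitudes (e.g. captured with a library like sounddevice); a synthetic list stands in here, and the thresholds are illustrative.

```python
CLAP_THRESHOLD = 0.6   # amplitude above which a sample counts as a clap
MAX_GAP = 20           # max samples allowed between the two claps

def detect_double_clap(samples):
    """Return True once two claps land within MAX_GAP samples of each other."""
    last_clap = None
    for i, amp in enumerate(samples):
        if amp >= CLAP_THRESHOLD:
            if last_clap is not None and i - last_clap <= MAX_GAP:
                return True
            last_clap = i
    return False

triggered = detect_double_clap([0.1, 0.05, 0.9, 0.1, 0.1, 0.85, 0.1])
quiet = detect_double_clap([0.1, 0.2, 0.1, 0.3])
print(triggered, quiet)  # True False
```

On a True result, the real script would launch the apps (e.g. with subprocess.Popen); a production version would also debounce so one long clap isn't counted twice.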
OpenAI just acquired Astral, the company behind Ruff and uv. Hundreds of millions of downloads per month. Every serious Python developer already depends on at least one Astral tool. That's the default Python toolchain, and OpenAI just bought it.

Same play Microsoft ran with GitHub: buy the tool everyone already uses, then embed your AI product. Copilot didn't win on quality. Distribution put it in every editor by default. Now Codex gets that advantage across linting, packaging, and type checking.

They say the tools stay open source. GitHub said that too.
Asking an LLM to explain your code is easy — if you know exactly what to give it. For a codebase of 50,000+ lines, that's the actual hard problem. Your context window fills up fast, and naive retrieval gives you the function but misses everything it depends on. So I built something that handles this differently.

🔍 Here's what it actually does:
You give it a GitHub URL or a local folder path. Then it:
1. Parses every Python file using AST to extract functions, classes, parameters, and what each function calls internally
2. Embeds all the code using SentenceTransformers for semantic search
3. Detects your query intent — specific function, full class, or vague question
4. Builds a dependency graph using NetworkX — ask about login() and it automatically pulls in hash_password() and generate_token(), because the graph knows what each function calls
5. Sends the structured, relevant context to Gemini and returns a developer-friendly explanation

💡 The key insight that changed everything: the hard part isn't the LLM call. It's building the right context before you make that call. A well-structured 50-line context beats a 5,000-line code dump every time.

🛠 Tech stack: Python · AST · SentenceTransformers · NetworkX · Google Gemini API · GitPython

🔗 GitHub: https://lnkd.in/g3J2JCb7

#Python #AI #LLM #MachineLearning #SoftwareEngineering #BuildInPublic
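Steps 1 and 4 can be sketched with the stdlib alone: parse with `ast`, record which names each function calls, then pull a function plus its direct dependencies as the LLM context. A plain dict stands in for the NetworkX graph, and the login/hash_password/generate_token names mirror the post's example.

```python
import ast

source = '''
def hash_password(pw): ...
def generate_token(user): ...
def login(user, pw):
    h = hash_password(pw)
    return generate_token(user)
'''

# Build a call graph: function name -> set of names it calls internally.
tree = ast.parse(source)
calls = {}
for node in tree.body:
    if isinstance(node, ast.FunctionDef):
        calls[node.name] = {
            n.func.id for n in ast.walk(node)
            if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
        }

def context_for(fn):
    """The function plus its direct dependencies: the context sent to the LLM."""
    return sorted({fn, *calls.get(fn, set())} & calls.keys())

print(context_for("login"))  # ['generate_token', 'hash_password', 'login']
```

A real version would walk the graph transitively and attach each function's source segment, but the principle is the same: the graph, not the embedding, decides what rides along with the match.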
AI Second Brain is now Dockerised. 🐳 This was the last piece missing.

Before: clone repo → install Python → install PostgreSQL → install pgvector → configure everything → hope it works on your machine

After: git clone → docker compose up → FastAPI + PostgreSQL + pgvector running in 2 minutes. Anywhere.

Two containers. One command. Zero setup. This is what production-ready actually means. Not just deployed. Reproducible.

The project is now complete:
✅ RAG pipeline from scratch
✅ Hybrid search (vector + keyword + RRF)
✅ RAGAs evaluation — faithfulness 1.0
✅ LangSmith observability — 0% error rate
✅ Token tracking — <$0.001 per query
✅ Deployed on Render
✅ Dockerised

On to the next one. 🚀

Site: https://lnkd.in/dPQQsX-x
GitHub: https://lnkd.in/dza9AeSZ

#Docker #RAG #BuildInPublic #AIEngineering #FastAPI #GenerativeAI #Python
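A two-container compose file for a setup like this typically looks something along these lines. The service names, ports, credentials, and image tag below are assumptions for illustration, not the project's actual file.

```yaml
services:
  api:
    build: .                         # the FastAPI app, built from the repo
    ports: ["8000:8000"]
    environment:
      DATABASE_URL: postgresql://app:app@db:5432/brain
    depends_on: [db]
  db:
    image: pgvector/pgvector:pg16    # PostgreSQL with the pgvector extension baked in
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: brain
```

The pgvector-enabled base image is what removes the "install PostgreSQL, then install pgvector" steps: the extension only needs a `CREATE EXTENSION vector;` at startup.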
🚀 Efficient Duplicate Detection with Hash Sets | LeetCode

Today, I tackled the Contains Duplicate problem. While the brute-force approach is often the first instinct, optimizing for time complexity is where the real fun begins!

💡 The Problem: Given an integer array nums, return true if any value appears at least twice in the array, and return false if every element is distinct.

⚡ My Approach: I utilized a hash set to track elements as I traversed the array. This allows for near-instantaneous lookups compared to nested loops.

👉 The Logic:
- Initialize an empty set seen.
- Iterate through the array once.
- For each number, check: "Have I seen this before?" (Is it in the set?)
- If yes → return True immediately.
- If no → add the number to the set and keep moving.

🔥 Complexity Analysis:
⏱ Time Complexity: O(n) – we only pass through the list once.
📦 Space Complexity: O(n) – in the worst case (all unique elements), we store all n elements in the set.

🏆 The Result:
✔️ Accepted: all 77 test cases passed.
✔️ Performance: 9 ms runtime, beating 73.44% of Python3 submissions!

📌 Key Takeaway: Using a set turns a potential O(n²) search into a sleek O(n) operation. Choosing the right data structure isn't just about passing tests; it's about writing scalable, "production-ready" code.

💻 Tech Stack: #Python | #DataStructures | #Algorithms

#leetcode #dsa #coding #programming #softwareengineering #100DaysOfCode #pythonprogramming #tech #growthmindset
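The logic above in runnable form (a standard rendering of the described approach, not the author's exact submission):

```python
def contains_duplicate(nums):
    """Return True if any value appears at least twice in nums."""
    seen = set()                 # hash set: O(1) average membership checks
    for n in nums:
        if n in seen:            # have I seen this before?
            return True          # yes -> duplicate found, stop early
        seen.add(n)              # no -> remember it and keep moving
    return False                 # one full pass, all elements distinct

dup = contains_duplicate([1, 2, 3, 1])
uniq = contains_duplicate([1, 2, 3, 4])
print(dup, uniq)  # True False
```

The early return matters: on inputs where the duplicate appears near the front, the loop never touches the rest of the array.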
Anthropic Leaks Its Own Source Code

Anthropic ships Claude Code as an npm package. Someone runs `ls` on the source map. The entire codebase is just sitting there. Unobfuscated. Plugins, skills, tools, hooks, commands — everything. The internal architecture of the most hyped AI coding agent, fully readable.

Anthropic says nothing. Meanwhile, they're selling enterprise contracts.

The source map was in the registry the whole time. Nobody checked. Security through obscurity lasted about 3 months.

Full code is here: https://lnkd.in/efajfgQ4
📣 SynapseKit just hit 1.0.0

A few weeks ago this was an idea. Today it's a production-grade Python framework that ships with everything you need to build real LLM applications without the complexity that usually comes with it.

Here's what 1.0 looks like:
⚡ Async-native from day one - not retrofitted, not a wrapper. Every API is async/await first.
🌊 Streaming-first - token-level streaming across all 15 providers, identically.
🪶 2 hard dependencies - numpy and rank-bm25. Everything else is opt-in.

What's inside:
🔌 15 LLM providers behind one interface: swap models without rewriting a line
🔍 18 retrieval strategies: from basic vector search to Self-RAG, Adaptive RAG, HyDE, FLARE
🤖 3 multi-agent patterns: Supervisor, Handoff Chain, Crew
🛠️ 32 built-in tools: search, code, files, databases, APIs, arXiv, PubMed, GitHub, and more
🔗 MCP client and server: native Model Context Protocol support
📊 Built-in RAG evaluation: Faithfulness, Relevancy, Groundedness metrics out of the box
🔍 Full observability: OpenTelemetry tracing, TracingUI dashboard, auto-trace every LLM call
🛡️ Production guardrails: PII detection, content filters, topic restrictors
🤝 A2A protocol: agents that discover and talk to each other across services
🖼️ Multimodal: images and audio, automatic format conversion across providers

1,011 tests. 2 dependencies. Apache 2.0 license. Built in the open.

No VC. No team. No marketing budget. Just engineers who thought the Python LLM ecosystem deserved something better.

Thank you to every contributor, every person who opened an issue, every engineer who cloned it at 11pm to try something. This is yours too.

This is 1.0.0. The stable foundation. Everything from here gets built on top of it.

⚡ pip install synapsekit==1.0.0

#Python #AI #LLM #RAG #OpenSource #MachineLearning #Agents #MCP #BuildInPublic #SynapseKit
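"Many providers behind one interface" is a pattern worth sketching. This is not SynapseKit's actual API, just the shape: a shared async protocol that concrete provider classes implement, so calling code never changes when the model does.

```python
import asyncio
from typing import Protocol

class LLMProvider(Protocol):
    """The one interface every provider implements."""
    async def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in for a real provider class (e.g. one backed by OpenAI)."""
    async def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ShoutProvider:
    """A second provider: swapping it in requires no caller changes."""
    async def complete(self, prompt: str) -> str:
        return prompt.upper()

async def answer(provider: LLMProvider, prompt: str) -> str:
    # Application code depends only on the protocol, never on a vendor SDK.
    return await provider.complete(prompt)

a = asyncio.run(answer(EchoProvider(), "hi"))
b = asyncio.run(answer(ShoutProvider(), "hi"))
print(a, b)  # echo: hi HI
```

The async-first claim falls out of the same design: because the protocol method is a coroutine, streaming and concurrency don't need to be retrofitted per provider.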