SynapseKit: Async-Native LLM Framework with Minimal Dependencies

📣 Every LLM framework eventually adds async support. SynapseKit started there.

There's a difference between async-retrofitted and async-native. Most frameworks started synchronous, bolted async on later, and shipped the seams: hidden event-loop management, sync wrappers that infect the core, bugs that only surface under concurrent load.

SynapseKit was designed async-first from the first commit. Every public API is async/await. No exceptions. No hidden sync layers underneath. If you understand Python and async, you understand SynapseKit.

What that means in practice:
→ Stream tokens from any of 33 providers identically – not a special mode, the default
→ Run parallel graph nodes via real asyncio.gather – not simulated concurrency
→ No event-loop surprises under load
→ Sync wrappers exist for scripts and notebooks – they call into the async layer, they don't replace it

And the dependency story: 2 hard dependencies, numpy and rank-bm25. That's it. Everything else – LLM providers, vector stores, document loaders, tools – is behind an optional install extra. You pay only for what you use. No transitive conflicts. No 267-package installs. No surprise breakage when a framework you didn't know you depended on ships a breaking change.

pip install synapsekit[openai]  # 2 deps + openai
pip install synapsekit[all]     # everything

Async-native. Minimal. Transparent. (A short sketch of the async-first call style is below.)

#Python #AsyncPython #LLM #RAG #OpenSource #AI #MLEngineering #SynapseKit
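To make the async-first claim concrete, here is a minimal sketch of what that call style can look like. The import path, the Client constructor, and the stream() method are illustrative assumptions, not SynapseKit's documented API – check the docs linked at the end of this page; only asyncio itself is standard library.

```python
import asyncio

# Hypothetical names: the import path, Client constructor, and stream()
# method are illustrative, not SynapseKit's documented API.
from synapsekit import Client

async def summarize(client: Client, text: str) -> str:
    # Streaming as the default call style: consume tokens as they arrive.
    parts = []
    async for token in client.stream(f"Summarize: {text}"):
        parts.append(token)
    return "".join(parts)

async def main() -> None:
    client = Client(provider="openai")  # assumed constructor signature
    docs = ["doc one ...", "doc two ...", "doc three ..."]
    # Real asyncio.gather: all three calls share one event loop, no threads.
    summaries = await asyncio.gather(*(summarize(client, d) for d in docs))
    print(summaries)

asyncio.run(main())
```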
More Relevant Posts
"Stop breaking your AI's logic with 'Blind' Chunking" Most RAG (Retrieval-Augmented Generation) systems fail because they treat code like plain text. When you slice a Python script every 500 characters, you risk cutting a function in half. This leads to LLM "hallucinations" because the model only sees partial logic. How it works: AST Parsing: Instead of raw text slicing, I use Python's Abstract Syntax Tree (AST) to identify logical boundaries (Classes and Functions). Token-Aware Packing: Using the tiktoken tokenizer, I calculate exact costs and pack these logical blocks into chunks that never exceed the model’s context window. Semantic Retrieval: I integrated ChromaDB to store these chunks as high-dimensional vectors, enabling semantic search instead of just keyword matching. The Result: A context-aware ingestion pipeline that ensures the LLM always receives complete, logical code blocks, drastically increasing accuracy and reducing errors. Tech Stack: FastAPI | ChromaDB | Sentence-Transformers | Tiktoken Check out the repo here: https://lnkd.in/gkVYmnwW #AI #GenerativeAI #Python #FastAPI #MachineLearning #RAG #SoftwareEngineering
🐍 PyCharm 2026.1 was just released, the first of three planned releases this year. Along with new features – `debugpy`, uv as a remote interpreter, free JavaScript/TypeScript/CSS features, improved AI integrations – the team also fixed 593 bugs. Yes, 593.

Looking closer at what was fixed, most of the bugs stem from the fast-moving Python ecosystem, not PyCharm itself. This is why a full team of engineers, product managers, and developer advocates works on PyCharm at JetBrains, with a separate team dedicated entirely to AI.

🗂️ How these bugs get prioritized

With hundreds of open issues at any given time, the PyCharm team uses a combination of internal triage, severity, and – importantly – community votes to decide what rises to the top.

🪂 Of the 593 bugs fixed in 2026.1, 105 had community votes. The most-voted fix in the entire release? PY-13276, with 60 votes – a bug where the call argument inspection didn't work correctly on decorated methods. It affected nearly every Python project that uses decorators, which is to say nearly every Python project.

🎁 A few other community favorites that finally got resolved:

PY-49946 (42 votes): Support for kw_only in dataclasses (Python 3.10)
PY-54269 (42 votes): Imports from a Poetry path dependency not resolving
PY-42057 (29 votes): "Unexpected argument" false positive for matplotlib.patches
PY-51768 (20 votes): Correct parameter inference with ParamSpec (PEP 612)
PY-50890 (19 votes): Invalid interpreter when a venv path contains non-ASCII characters on Windows

Some of these had been open for years. They moved to the front of the queue because the community made clear they mattered.

Your vote matters. If a bug is affecting your workflow right now, find it in YouTrack (https://lnkd.in/dd3A37hM) and vote for it. The earlier in a release cycle the team sees demand, the better its chances of making the next release – and as you read this, the minor releases and 2026.2 are already underway.
This was a fun post to write. PyCharm has three major releases a year, and each release fixes hundreds of bugs, most related not to the IDE itself but to the fast-changing Python ecosystem:

* Python updates
* Package and library updates
* New typing packages
* And so on

There's a lot of work required to maintain a professional Python IDE. And that's separate from all the AI work going on elsewhere within JetBrains, available across all the IDEs, including PyCharm.
LangChain has shipped langchain-text-splitters 1.1.2 with a security-focused update to URL-based text extraction, moving it to an SSRF-safe transport. The release also fixes silent data loss in RecursiveJsonSplitter for empty dict values, adds test support for Python 3.14, and rolls in several dependency and security maintenance updates. For teams running LangChain in production AI pipelines, this looks like a practical reliability and hardening release.
We benchmarked LLMs on their ability to analyze code files – 25 files across TypeScript, Python, Rust, Go, Markdown, and HTML – on parameters like search, graph, semantics, integration, security mapping, business context, and JSON handling. Models can top leaderboards like SWE-Bench Pro while failing at basic creative and practical tasks. Optimizing for the test is not the same as optimizing for the work.

Results:

claude-sonnet-4.6 ranked #1 with a weighted score of 121.2 at $1.66 – the only Pareto-optimal pick among premium models.
claude-opus-4.6 scored 117.0 but at $8.00; sonnet-4.6 beats it for a fraction of the price.
glm-5v-turbo emerged as the best budget Pareto-optimal pick at just $0.28, with a score of 113.0.
gpt-5.4-nano is the most cost-efficient Pareto-optimal option at $0.04, with a score of 103.3.

Notably, glm-5.1, despite ranking #1 on SWE-Bench Pro, scored only 106.7 here at $0.60, beaten by cheaper alternatives. Models like Gemini 3.1 Pro Preview and GPT-4o-mini ranked near the bottom despite their popularity.

The leaderboard confirms that benchmark performance rarely translates to real-world task quality. If you would like to reduce the cost of your AI agents, including your AI copilots, please get in touch with us.
I built an AI agent from scratch. No LangChain. No LangGraph. No CrewAI. Just Python, Gemini 2.5 Flash, and raw tool calling.

Here's what I learned that no framework tutorial will teach you:

1. The agentic loop is embarrassingly simple. Build messages → call LLM → if tool call → execute → feed result back → repeat. That's it. Every framework is just a wrapper around this. Once you see it raw, you can never unsee it. (A minimal runnable version of the loop follows below.)

2. Frameworks hide your bugs from you. When something breaks in LangChain, you're debugging the framework. When something breaks in raw Python, you're debugging your logic. Big difference. One makes you smarter. One makes you dependent.

3. Tool schema design is where agents actually fail. The LLM doesn't call the wrong tool because it's dumb. It calls the wrong tool because your schema description was ambiguous. Write your tool descriptions like you're explaining them to a junior dev on their first day. Precise. No assumptions.

4. 50 lines of Python is enough to go to production. My personal concierge agent – the one that lives on my portfolio, captures leads, and pings my phone instantly – is ~50 lines. No overhead. No magic. Just code I fully understand and can debug at 2 am.

5. You should build one without a framework at least once. Not because frameworks are bad – LangGraph is excellent, and I'm using it next – but if you've never written the raw loop yourself, you're flying blind, trusting abstractions you don't understand. Build it raw first. Then use the framework. You'll use it 10x better.

Full source code in the comments – ~50 lines, no magic, just the loop. Follow along if you're into agentic AI and building real things, not just demos.

#AgenticAI #Python #BuildingInPublic #LLM #SoftwareEngineering
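For reference, here is the raw loop from point 1 in runnable form. The fake call_llm stands in for a real chat-completion call (the post used Gemini 2.5 Flash); the message shapes and the toy tool registry are illustrative, not any provider's actual schema.

```python
import json

# Fake LLM so the loop runs end to end; swap in a real chat API that
# returns either a tool-call request or final text.
def call_llm(messages: list[dict]) -> dict:
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Lisbon"})}}
    return {"content": f"Done: {messages[-1]['content']}"}

TOOLS = {"get_weather": lambda city: f"Sunny in {city}"}  # toy tool registry

def run_agent(user_input: str, max_steps: int = 10) -> str:
    """The raw agentic loop: call LLM -> maybe run a tool -> feed result back."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):  # cap steps so a confused model can't loop forever
        reply = call_llm(messages)
        tool_call = reply.get("tool_call")
        if tool_call is None:
            return reply["content"]  # no tool requested: final answer
        # Execute the requested tool and append the result to the transcript.
        result = TOOLS[tool_call["name"]](**json.loads(tool_call["arguments"]))
        messages.append({"role": "assistant", "tool_call": tool_call})
        messages.append({"role": "tool", "content": str(result)})
    return "Step limit reached"

print(run_agent("What's the weather in Lisbon?"))
```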
I spent one afternoon building an AI agent from scratch. Why? Because I wanted to understand the Agent Client Protocol (ACP). If you haven't looked at it yet, think of it as HTTP, but for AI agents: it lets any agent talk to any client (IDE, terminal, script) in a universal language.

🛠️ The project: a SchemaCheck agent

I built an agent that validates data files in real time.
→ Mixed types in JSON? Caught.
→ Inconsistent CSV columns? Caught.
→ Missing fields or nulls? All caught.

The biggest surprise? The AI wasn't the hard part. It was understanding the protocol.

The "lightbulb" moment: I swapped the Gemini CLI for the GitHub Copilot CLI as my ACP server, and switching the backend took only two lines of code. That is the power of a standard. (A rough sketch of the client side is below.)

I've open-sourced the project on GitHub. Feel free to clone it, poke around, or contribute: https://lnkd.in/gpcmvYqH

#AI #AgentClientProtocol #BuildInPublic #Python #GenerativeAI #Coding
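To give a feel for what "HTTP for agents" means in practice: ACP runs JSON-RPC over the agent process's stdin/stdout, so an ACP client is mostly process plumbing. The sketch below is illustrative only – the agent command, the newline-delimited framing, and the method name and params are assumptions, not quoted from the ACP spec or from this repo.

```python
import json
import subprocess

# Assumed command line for an ACP-speaking agent; switching backends
# means changing only this list (e.g. to the Copilot CLI equivalent).
AGENT_CMD = ["gemini", "--experimental-acp"]

proc = subprocess.Popen(
    AGENT_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

def rpc(method: str, params: dict, msg_id: int) -> dict:
    """Send one JSON-RPC request and read one response (framing assumed)."""
    request = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# Method name and params are placeholders for whatever the spec defines.
print(rpc("initialize", {"protocolVersion": 1}, msg_id=1))
```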
Model serving is what turns an experiment into a real product.

You trained the ML model, evaluated it, and everything looks fine. What's next? You won't just hand the client an exported file (.onnx, .pkl, ...) – that's not useful to them on its own. How would they use it? They certainly won't write ML code to run it.

This is where the model serving step comes in: you expose endpoints (APIs) that run the model and return predictions. The client then integrates those APIs into their own solutions, and all the ML code stays wrapped behind a ready-to-use interface. (A minimal serving sketch follows below.)

The most common Python frameworks for this are:

1. FastAPI: widely popular for building high-performance APIs, with native async support and Pydantic validation.
2. Flask: still common in legacy systems for its simplicity, though largely replaced by FastAPI for new ML projects due to its lack of native async support.

Have you served and deployed machine learning models before? What frameworks or tools did you use?
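A minimal sketch of that serving step with FastAPI and Pydantic. The model.pkl path and the feature/label shapes are placeholders – any object exposing a .predict() method (an sklearn estimator, say) slots in the same way.

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Placeholder artifact path: any object exposing .predict() works here.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # Pydantic rejects malformed payloads before inference

class Prediction(BaseModel):
    label: int

@app.post("/predict", response_model=Prediction)
async def predict(features: Features) -> Prediction:
    # The client sees JSON in, JSON out -- no ML code on their side.
    label = model.predict([features.values])[0]
    return Prediction(label=int(label))
```

Run it with something like `uvicorn main:app`, and the client integrates a plain HTTP endpoint instead of a model file.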
My AI agent spent 20 minutes debugging the wrong file. I only know because I built the thing that caught it.

A few weeks ago I built agent-replay-debugger – a CLI that turns agent session traces into interactive timelines. v1 was basically a fancy log viewer: it told you what happened (the agent read 40 files) but not why it read 40 files when it only needed 3.

So I added --analyze. One flag, and every reasoning block gets classified by an LLM: is the agent planning? Investigating? Implementing? Or – my personal favorite – backtracking, because it just realized it's been editing the wrong file for the last 15 minutes? On a real 2-hour session with 600+ events, I got exactly 2 red flags. Those 2 flags were worth more than the other 598 events combined. Total cost of running the analysis: 2 cents.

What else is new: the viewer used to show one flat blob per session. Now each user message creates its own span – a 2-hour session becomes 33 clickable nodes in the DAG, each showing how long the agent spent and how many tool calls it made. You can instantly see that "PR 1" took 2 hours and 83 tool calls while "list issues" took 10 seconds and 1 call.

Also shipped a pick command because I got tired of copy-pasting UUIDs:

ard view $(ard pick chore-champions)

Still zero runtime dependencies. Still pure Python stdlib. 188 tests, 100% coverage enforced in CI. The --analyze flag talks to the Anthropic API using urllib – no SDK needed. (A sketch of that kind of stdlib-only call is below.)

Live demo (real session, LLM-annotated, all secrets auto-scrubbed): https://lnkd.in/gRFB7uWf
Code: https://lnkd.in/gPTUt4ue

#buildInPublic #AIAgents #LLM #Python #OpenSource #DevTools #AIEngineering
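As an aside on the "urllib, no SDK" point: a stdlib-only call to the Anthropic Messages API really is just a POST with three headers. This is a generic sketch, not code from agent-replay-debugger; the model id is an example and the classification prompt is made up.

```python
import json
import os
import urllib.request

def classify_block(reasoning: str) -> str:
    """Classify one agent reasoning block using only the stdlib."""
    body = json.dumps({
        "model": "claude-sonnet-4-5",  # example id; substitute a current model
        "max_tokens": 64,
        "messages": [{
            "role": "user",
            "content": "Classify this agent reasoning as planning, "
                       "investigating, implementing, or backtracking:\n"
                       + reasoning,
        }],
    }).encode()
    req = urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=body,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The Messages API returns {"content": [{"type": "text", "text": ...}]}.
        return json.load(resp)["content"][0]["text"]
```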
PSA: Check your AI-generated requirements files before they nuke production.

I've noticed a pattern: when you ask an AI to write a requirements.txt or environment.yml, it almost always reaches for >=:

flask>=2.3.0
sqlalchemy>=2.0.0
pydantic>=2.5.0

Looks reasonable, right? It's not. Here's what actually happens six months later when you deploy to a fresh server:

1. Pydantic 2.x → 3.x ships a breaking change. Your entire validation layer silently starts rejecting payloads that worked yesterday. No error on install. Just 500s at runtime.

2. SQLAlchemy quietly drops a deprecated API. Your ORM queries that ran fine for a year now throw AttributeError deep in a call stack. Good luck debugging that at 2 AM.

3. Flask upgrades, and one of its pinned sub-dependencies conflicts with yours. Now pip install itself fails and your CI/CD pipeline is just... red. Indefinitely. On code you never changed.

4. NumPy 2.0 lands. Half the scientific Python ecosystem isn't compatible yet. Your data pipeline that "just works" no longer does – on a Monday morning, naturally.

The fix is boring:

pip freeze > requirements.txt

Pin with ==. Every time. In production, reproducibility isn't a nice-to-have – it's the whole game.

If an AI generates your dependency file, treat it like any other code review. The convenience of >= is a deferred incident report.

#Python #DevOps #SoftwareEngineering #AI #LessonsLearned
🔗 github.com/SynapseKit/SynapseKit 📖 synapsekit.github.io/synapsekit-docs