The shortest agent framework tutorial ever written.

Step 1: Install it.

    pip install definable

Step 2: Build an agent.

    agent = Agent(model="openai/gpt-4o", instructions="You help with code reviews.")

Step 3: Give it hands.

    @tool
    def search_code(query: str) -> str:
        return search_codebase(query)

    agent = Agent(model="openai/gpt-4o", tools=[search_code])

Step 4: Give it memory.

    agent = Agent(model="openai/gpt-4o", memory=True)

Step 5: Give it knowledge.

    agent = Agent(model="openai/gpt-4o", knowledge="./docs/")

Step 6: Give it guardrails.

    agent = Agent(model="openai/gpt-4o", security=True)

Step 7: Run it.

    result = await agent.arun("Review my latest PR")

That's the whole tutorial. You just learned an entire agent framework in one LinkedIn post. Every step adds one organ: Model (the brain), Tools (the hands), Memory (continuity), Knowledge (understanding), Security (the immune system). Composition over configuration. Always.

https://lnkd.in/g_fCTVfC

#AIAgents #Tutorial #Python #ArtificialIntelligence #DeveloperTools #LLM #OpenSource #DefinableAI #LearnAI #BuildInPublic
Definable AI Tutorial: Build an Agent in 7 Steps
More Relevant Posts
📣 SynapseKit v0.6.9 is live. Two graph features in this release that matter more than they might seem.

approval_node(): gates your graph on a human decision. The workflow hits the node, pauses, waits for a human to approve or reject, then continues. No polling, no hacks. One function call.

dynamic_route_node(): routes to completely different subgraphs at runtime based on whatever logic you write. Sync or async. Your graph decides where it goes next while it's running.

Together, these two make human-in-the-loop workflows practical to build. Not a demo. Production.

Also shipped:
💬 SlackTool — send messages via webhook or bot token
📋 JiraTool — search, create, and comment on issues via REST
🔍 BraveSearchTool — web search via the Brave API

All three are stdlib-only. Zero new dependencies.

Where we stand: 32 tools · 15 providers · 18 retrieval strategies · 795 tests · 2 dependencies.

⚡ pip install synapsekit
🔗 https://lnkd.in/d2fGSPkX

#Python #LLM #RAG #OpenSource #AI #MachineLearning #Agents #SynapseKit
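The approval_node() idea above can be sketched in plain asyncio. This is an illustrative stand-in, not the SynapseKit API (the function and field names here are invented): a workflow step awaits a human decision callback and only continues once it resolves.

```python
import asyncio

# Minimal sketch of an approval gate: the step suspends on an awaited
# human decision, then either continues or short-circuits the workflow.
async def run_with_approval(payload, ask_human):
    draft = f"draft:{payload}"           # output of an upstream node
    approved = await ask_human(draft)    # the graph pauses here
    if not approved:
        return {"status": "rejected", "result": None}
    return {"status": "approved", "result": draft.upper()}

async def demo():
    async def auto_approve(item):
        # stand-in for a real prompt (Slack message, web form, CLI)
        return "ship" in item
    return await run_with_approval("ship-it", auto_approve)

result = asyncio.run(demo())
```

Because the gate is just an awaited coroutine, the same shape works whether the decision comes from a webhook, a queue, or a terminal prompt.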
🚀 Day 11/30 | LeetCode Problem: Merge Two Sorted Lists (21)

Problem: You are given the heads of two sorted linked lists, list1 and list2. Merge the two lists into one sorted linked list and return its head.

💡 Approach (Recursive)
Since both lists are already sorted:
- If one list is empty → return the other.
- Compare the current values of both lists.
- Attach the smaller node to the result.
- Recursively merge the remaining nodes.
This keeps the final list sorted automatically.

⏱ Complexity
Time Complexity: O(n + m)
Space Complexity: O(n + m) (due to the recursion stack)

🧠 Python Code

    class Solution:
        def mergeTwoLists(self, list1, list2):
            if not list1:
                return list2
            if not list2:
                return list1
            if list1.val < list2.val:
                list1.next = self.mergeTwoLists(list1.next, list2)
                return list1
            else:
                list2.next = self.mergeTwoLists(list1, list2.next)
                return list2

📌 Example
Input: list1 = [1,2,4], list2 = [1,3,4]
Output: [1,1,2,3,4,4]

🎯 Key Takeaway
When working with sorted data structures, compare-and-attach is a powerful pattern. Recursion also makes linked list problems elegant and clean.

✅ Accepted

#LeetCode #30DaysOfLeetCode #Day11 #Python #LinkedList #Recursion #DataStructures #Algorithms #ProblemSolving #CodingJourney
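The recursive solution above uses O(n + m) stack space. An iterative version with a dummy head gets the same O(n + m) time in O(1) extra space. Here is a self-contained sketch (the ListNode class and list helpers are only there to make it testable):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def merge_two_lists(l1, l2):
    # Dummy head lets us append without special-casing the first node.
    dummy = tail = ListNode()
    while l1 and l2:
        if l1.val <= l2.val:
            tail.next, l1 = l1, l1.next
        else:
            tail.next, l2 = l2, l2.next
        tail = tail.next
    tail.next = l1 or l2   # attach whichever list still has nodes
    return dummy.next

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = ListNode(x, head)
    return head

def to_list(node):
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out

merged = to_list(merge_two_lists(from_list([1, 2, 4]), from_list([1, 3, 4])))
```

For very long lists, the iterative form also avoids Python's recursion depth limit.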
📣 SynapseKit v0.6.8 is live. Your agents can now search PubMed, GitHub, and YouTube, send emails, and query your own vector store, mostly with zero new dependencies.

That last part matters more than it sounds: every tool you add to an agent is a potential point of failure. We built these to be stdlib-first wherever possible.

Also in this release: WebSocket streaming for graph workflows and structured execution tracing with timestamps. So when something breaks in production, you know exactly where it broke and how long each node took.

What SynapseKit looks like today:
⚡ 743 tests
🔌 15 LLM providers
🛠️ 29 built-in tools
🔍 18 retrieval strategies
🧠 8 memory backends
📄 14 document loaders
💾 4 cache backends
🔗 2 hard dependencies

Async-native from day one, not retrofitted. No hidden chains. No magic. Just Python you can actually read.

pip install synapsekit
🔗 https://lnkd.in/d2fGSPkX

#Python #LLM #RAG #OpenSource #AI #MachineLearning #Agents #SynapseKit
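The "structured execution tracing with timestamps" idea can be sketched generically. This is not SynapseKit's implementation, just the general pattern: wrap each node so its start time and duration land in a trace you can inspect when something is slow.

```python
import time

# Hedged sketch: a decorator that appends one structured record per
# node execution, so per-step timing survives into logs or dashboards.
def traced(trace):
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            trace.append({
                "node": fn.__name__,
                "started_at": start,
                "duration_ms": (time.time() - start) * 1000,
            })
            return result
        return inner
    return wrap

trace = []

@traced(trace)
def retrieve(query):
    return [query, "doc"]        # stand-in for a retrieval node

@traced(trace)
def generate(docs):
    return " ".join(docs)        # stand-in for a generation node

answer = generate(retrieve("hello"))
```

Each record is self-describing, so the trace can be dumped as JSON and correlated with production incidents.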
I wasted 3 days fighting a Python error. The fix took 10 minutes. If you're building AI agents in 2026, avoid these mistakes 👇

❌ Mistake 1: pyautogen on Python 3.13. Every version blocked; hours lost.
Fix: use ag2 — same library, new name, Python 3.13 support.

❌ Mistake 2: Paying for embeddings. OpenAI charges per token, and it adds up fast.
Fix: sentence-transformers runs locally. 90MB download. $0 forever.

❌ Mistake 3: AutoGen GroupChat for everything. Overcomplicated, and it breaks on newer versions.
Fix: direct Groq API calls per agent. Simpler, faster, more reliable.

❌ Mistake 4: No memory between sessions. Agents forget everything on restart.
Fix: MongoDB stores every decision, message, and learning permanently.

I learned all of this the hard way building a 7-agent autonomous system. You don't have to. Save this post. It'll save you days.

Which of these have you hit before? 👇

#AIAgents #OpenSource #Python #FreeTools #BuildInPublic #Groq #AutoGen #MachineLearning #SoftwareEngineering
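On the persistent-memory point: the post uses MongoDB, but the same idea can be sketched with stdlib sqlite3 so it runs anywhere without a server. The table and function names here are illustrative, not from any framework.

```python
import sqlite3

# Persistent agent memory: every message survives a restart because it
# lives in a database, not in the process's RAM.
def open_memory(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS memory (
        session TEXT, role TEXT, content TEXT,
        ts DATETIME DEFAULT CURRENT_TIMESTAMP)""")
    return db

def remember(db, session, role, content):
    db.execute("INSERT INTO memory (session, role, content) VALUES (?,?,?)",
               (session, role, content))
    db.commit()

def recall(db, session):
    # Rows come back in insertion order, ready to replay as context.
    return db.execute(
        "SELECT role, content FROM memory WHERE session=? ORDER BY rowid",
        (session,)).fetchall()

db = open_memory()
remember(db, "s1", "user", "review my PR")
remember(db, "s1", "agent", "looks good")
history = recall(db, "s1")
```

Swapping in MongoDB (or any store) only changes the three small functions; the agent loop stays the same.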
A ~550-word AGENTS.md reduced agent runtime by 28.64% and token usage by 16.58% on SWE-bench Verified. The trick wasn’t more context — it was less ambiguity.

I tested these ideas while refactoring agent docs for a production Python/FastMCP monorepo at NOS. What stuck with me:

AGENTS.md works when it’s executable onboarding. Setup + test commands beat prose (Lulla et al.).

AGENTS.md is becoming the interoperable default. 4,860 context files across GitHub; `.cursorrules` is basically legacy (Galster et al.).

Short beats comprehensive. Most files are <500 words; medians cluster around ~335–535 words (Chatlatanagulchai et al.).

Testing instructions are the highest-signal section. They show up in ~75% of high-quality files.

Auto-generated context can backfire. LLM-generated files dropped success by ~3% on average while raising cost >20% (Gloaguen et al.).

File localization is where agents fail first. If they edit the wrong file, everything downstream collapses (ContextBench).

What I did with this: one canonical AGENTS.md (~550 words, every snippet verified), CLAUDE.md + Copilot instructions as thin pointers, deleted `.cursorrules`, and 4 path-scoped instruction files that auto-inject context per folder.

Takeaway: context engineering is mostly negative space — remove contradictions, name the right files, and make “run tests” unmissable.

Sources:
https://lnkd.in/eM-HnnGs
https://lnkd.in/eN7pUsfY
https://lnkd.in/eHAarmSC
https://lnkd.in/e9Fx6UC7
https://lnkd.in/eJM2EHkh
https://lnkd.in/eTqgZZqK
https://lnkd.in/egk_dX8U

#ContextEngineering #AICoding #CodingAgents #SoftwareEngineering #MCP #LLMs #DeveloperTools
I just published my latest tutorial on Weights & Biases: a GPT-5.4 Python quickstart using the OpenAI API. The big shift with 5.4 is moving from “chat-first” workflows to professional-grade agentic workflows, and in the tutorial I show how to track the whole thing end-to-end with W&B Weave (inputs/outputs, token usage, reasoning metadata, tool calls, and computer-use trajectories).

What I cover:
🤔 Reasoning spectrum (none → xhigh) + when each level makes sense
🗣️ Verbosity controls (stop begging the model to “be concise”)
🏗️ Structured outputs with JSON + Pydantic validation
⚒️ Custom tools for raw payloads (code/SQL/diffs without JSON pain)
🔍 Tool Search for large tool ecosystems (deferred tool loading)
💻 Computer Use loop for UI agents (screenshot → actions → execute → repeat)
🗺️ Native Compaction to keep long-running agents stable

If you’re building agents, this should save you a lot of time (and a lot of tokens). Link in the comments. 👇
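On the structured-outputs point: the tutorial uses Pydantic, but the core pattern (parse the model's JSON, then fail loudly on missing or invalid fields) can be shown with stdlib dataclasses as a stand-in. The ReviewResult schema below is invented for illustration.

```python
import json
from dataclasses import dataclass

# Hypothetical schema a code-review agent might be constrained to.
@dataclass
class ReviewResult:
    verdict: str
    issues: list

def parse_review(raw: str) -> ReviewResult:
    data = json.loads(raw)
    # KeyError/TypeError here surfaces malformed model output immediately.
    result = ReviewResult(verdict=data["verdict"], issues=data["issues"])
    if result.verdict not in {"approve", "request_changes"}:
        raise ValueError(f"unexpected verdict: {result.verdict}")
    return result

# e.g. a model response constrained to this schema:
parsed = parse_review('{"verdict": "approve", "issues": []}')
```

Pydantic adds type coercion and richer error messages on top, but the contract-enforcement idea is the same.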
Stop using brittle JSON tool calling. Let the LLM write Python that is the plan.

✅ Expressivity: logic (filter → validate → mutate) is more robust in code than in rigid schemas.
✅ State management: handles multi-step transactions (e.g., TinyDB updates) atomically.
✅ Sandboxing: exec(code, SAFE_GLOBALS) allows secure, restricted execution.
✅ Observability: capture stdout plus before/after state snapshots for full auditability.

Code doesn't just describe the plan: it is the execution.
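A minimal sketch of this pattern, with the caveat that a real sandbox needs much more (timeouts, import blocking, process isolation). SAFE_BUILTINS here is a hypothetical allowlist; the point is the shape: restricted exec, captured stdout, and before/after state snapshots.

```python
import contextlib
import copy
import io

# Only these names are visible to the generated code.
SAFE_BUILTINS = {"len": len, "range": range, "print": print, "sorted": sorted}

def run_plan(code, state):
    before = copy.deepcopy(state)          # snapshot for the audit trail
    safe_globals = {"__builtins__": SAFE_BUILTINS, "state": state}
    out = io.StringIO()
    with contextlib.redirect_stdout(out):  # capture everything it prints
        exec(code, safe_globals)
    return {"before": before, "after": state, "stdout": out.getvalue()}

# A plan the model might emit: filter, validate, mutate in one pass.
plan = """
valid = [u for u in state["users"] if "@" in u["email"]]
state["users"] = sorted(valid, key=lambda u: u["email"])
print(len(state["users"]), "valid users")
"""
report = run_plan(plan, {"users": [
    {"email": "a@x.com"}, {"email": "bad"}, {"email": "b@x.com"}]})
```

The before/after diff is exactly the audit artifact the post describes: you can log it, replay it, or reject the mutation.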
I built trinops: a terminal-native monitoring tool for Trino. If you run Trino queries, you've probably alt-tabbed to the web UI more times than you'd like to admit. trinops puts that information where you already are: your terminal. trinops top gives you a live TUI dashboard with query stats, sorting, and detail drill-down. trinops queries --json pipes query metadata to jq or scripts. There's also a Python library for progress tracking during long-running queries, and it ships with a Claude Code skill so your AI assistant can debug query performance directly. uvx runnable, pip-installable, Python 3.10+, zero config beyond pointing it at your coordinator. uvx trinops https://lnkd.in/geaV2AdJ
Asking an LLM to explain your code is easy — if you know exactly what to give it. For a codebase of 50,000+ lines, that's the actual hard problem. Your context window fills up fast, and naive retrieval gives you the function but misses everything it depends on. So I built something that handles this differently.

🔍 Here's what it actually does. You give it a GitHub URL or a local folder path. Then it:
1. Parses every Python file using AST to extract functions, classes, parameters, and what each function calls internally
2. Embeds all the code using SentenceTransformers for semantic search
3. Detects your query intent — a specific function, a full class, or a vague question
4. Builds a dependency graph using NetworkX — ask about login() and it automatically pulls in hash_password() and generate_token(), because the graph knows what each function calls
5. Sends the structured, relevant context to Gemini and returns a developer-friendly explanation

💡 The key insight that changed everything: the hard part isn't the LLM call. It's building the right context before you make that call. A well-structured 50-line context beats a 5000-line code dump every time.

🛠 Tech stack: Python · AST · SentenceTransformers · NetworkX · Google Gemini API · GitPython
🔗 GitHub: https://lnkd.in/g3J2JCb7

#Python #AI #LLM #MachineLearning #SoftwareEngineering #BuildInPublic
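The parsing and dependency-graph steps can be sketched with the stdlib alone. The post uses NetworkX; a plain dict of call edges is enough to show the idea, and the sample source mirrors the post's login() example.

```python
import ast

# Sample code to analyze; function names follow the post's example.
SOURCE = '''
def hash_password(p):
    return p[::-1]

def generate_token(u):
    return u + "-token"

def login(user, password):
    hashed = hash_password(password)
    return generate_token(user)
'''

def call_graph(source):
    # Map each function name to the set of names it calls, so a query
    # about login() can pull in its dependencies automatically.
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {c.func.id for c in ast.walk(node)
                     if isinstance(c, ast.Call)
                     and isinstance(c.func, ast.Name)}
            graph[node.name] = sorted(calls)
    return graph

graph = call_graph(SOURCE)
```

From here, a breadth-first walk over the edges gives exactly the "function plus everything it depends on" context the post describes (attribute calls like obj.method() would need extra handling).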
While transforming my API from synchronous to asynchronous, I ran into an error:

    author Input should be a valid dictionary or instance of UserOut

While debugging, I discovered something interesting: the issue was related to lazy loading in SQLAlchemy. By default, relationships in SQLAlchemy are lazy-loaded, meaning related objects like author or comments are not fetched until they are accessed. When my Pydantic schemas tried to serialize the response, they threw an error because those related objects had not yet been loaded.

This error pushed me to explore lazy loading versus eager loading. Lazy loading fetches only the main object in the initial query, while eager loading fetches the related objects as well. Along the way, I also learned about the N+1 query problem and how loading strategies can impact an API’s performance.

I resolved the issue by using selectinload() in my query, although joinedload() can also work depending on the situation. What started as a confusing error became a great lesson in how ORMs fetch data and why controlling loading strategies matters when building APIs.

#FastAPI #Python #BackendDevelopment
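The N+1 problem mentioned above is easy to see with raw SQL. This stdlib sqlite3 sketch is not SQLAlchemy, but it counts queries under the two strategies: lazy loading issues one extra query per post's author, while a selectin-style strategy batches all needed authors into one extra query.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
INSERT INTO users VALUES (1, 'amal'), (2, 'rita');
INSERT INTO posts VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 1, 'c');
""")

def lazy_load(db):
    # Lazy: fetch posts, then one author query per post (N+1 total).
    queries = 1
    posts = db.execute("SELECT id, author_id FROM posts").fetchall()
    for _pid, author_id in posts:
        db.execute("SELECT name FROM users WHERE id=?", (author_id,))
        queries += 1
    return queries

def selectin_load(db):
    # Selectin-style: fetch posts, then all distinct authors at once.
    posts = db.execute("SELECT id, author_id FROM posts").fetchall()
    ids = tuple({a for _, a in posts})
    placeholders = ",".join("?" * len(ids))
    db.execute(f"SELECT id, name FROM users WHERE id IN ({placeholders})", ids)
    return 2  # posts query + one batched authors query

lazy_queries = lazy_load(db)
selectin_queries = selectin_load(db)
```

With 3 posts the lazy path runs 4 queries versus 2 for the batched path; with thousands of rows that gap is the performance problem the post describes.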