Open Source Tools for Autonomous AI Software Engineering


Summary

Open source tools for autonomous AI software engineering enable developers to build intelligent systems that can plan, reason, and act independently, using freely available frameworks and libraries. These tools make it easier for anyone to create AI agents that manage tasks, learn over time, and interact with their environment, without relying on proprietary software.

  • Explore layered architecture: Familiarize yourself with the step-by-step structure behind autonomous AI agents, from foundational language models to safety and governance features.
  • Integrate diverse tools: Combine open source packages for task planning, memory systems, voice interfaces, and robust testing to create reliable, adaptive AI agents.
  • Prioritize ethical design: Use tools for monitoring and evaluating agent behavior to ensure your systems are safe, fair, and aligned with human values.
Summarized by AI based on LinkedIn member posts
  • Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    41,887 followers

    The open-source AI agent ecosystem is exploding, but most market maps and guides cater to VCs rather than builders. As someone in the trenches of agent development, I've found this frustrating. That's why I've created a comprehensive list of the open-source tools I've personally found effective in production. The overview covers 38 packages across five areas:

    • Agent orchestration frameworks that go beyond basic LLM wrappers: CrewAI for role-playing agents, AutoGPT for autonomous workflows, Superagent for quick prototyping
    • Computer control and browser automation: Open Interpreter for local machine control, Self-Operating Computer for visual automation, LaVague for web agents
    • Voice interaction beyond basic speech-to-text: Ultravox for real-time voice, Whisper for transcription, Vocode for voice-based agents
    • Memory systems that enable truly personalized experiences: Mem0 for self-improving memory, Letta for long-term context, LangChain's memory components
    • Testing and monitoring for production-grade agents: AgentOps for benchmarking, OpenLLMetry for observability, Voice Lab for evaluation

    With the holiday season here, it's the perfect time to start building. Post: https://lnkd.in/gCySSuS3
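The role-based orchestration pattern these frameworks provide can be sketched without any of them. This is a framework-free illustration of the idea (not CrewAI's actual API); `call_llm` is a hypothetical stand-in for any chat-completion backend:

```python
# Framework-free sketch of role-based agent orchestration: each agent is a
# role + goal wrapped around an LLM call, and agents are chained so one
# agent's output becomes the next agent's context.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would hit an LLM API or a local model.
    return f"[draft based on: {prompt[:40]}...]"

@dataclass
class Agent:
    role: str
    goal: str

    def run(self, task: str, context: str = "") -> str:
        prompt = (f"You are a {self.role}. Your goal: {self.goal}.\n"
                  f"Context: {context}\nTask: {task}")
        return call_llm(prompt)

# A two-agent pipeline: the researcher's output feeds the writer.
researcher = Agent("research analyst", "gather accurate facts")
writer = Agent("technical writer", "produce a clear summary")

notes = researcher.run("List key open-source agent frameworks")
summary = writer.run("Write a short overview", context=notes)
print(summary)
```

Orchestration frameworks add scheduling, retries, and delegation on top, but the core contract is this hand-off of context between roles.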

  • Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    721,017 followers

    As AI evolves beyond static prompts and reactive chatbots, we are entering an era defined by agentic behavior, where AI systems can plan, act, reason, and adapt dynamically in complex environments. To build and evaluate such systems, we need a clear blueprint. That's why I created this framework: The 7 Pillars of Agentic AI, a structured lens for understanding and engineering intelligent agents that are autonomous, collaborative, and aligned with human goals. Here's a breakdown of each pillar, along with representative tools pushing the frontier in that space:

    1. Autonomy: Agents must operate independently, initiate actions, and pursue objectives without continuous human intervention. Representative tools: AutoGen, CrewAI, LangGraph, OpenAgents, MetaGPT, AgentVerse.
    2. Goal-Directed Planning: Agents should be able to break down abstract objectives into concrete tasks and adapt their plans as the environment changes. Representative tools: ReAct, LangChain Agent Executors, Camel, DUST.
    3. Communication & Collaboration: Agents need to coordinate effectively with other agents or humans to achieve shared tasks and avoid conflicts. Representative tools: AutoGen, CrewAI, LangGraph, ChatDev, SupaAgent, AgentHub.
    4. Reasoning & Decision Making: Agents must apply logical and contextual understanding to make high-quality decisions based on goals, constraints, and environment. Representative tools: GPT-4o, Claude 3 Opus, Mistral, Chain-of-Thought prompting, OpenDevin, ThoughtSource.
    5. Tool Use & Environment Interaction: Modern agents interact with external tools, APIs, browsers, and code execution environments to perform complex tasks. Representative tools: LangChain Toolkits, Function Calling (OpenAI, Claude, Gemini), BrowserPilot, WebAgent, ToolLLM, Gorilla, CrewAI Tools.
    6. Memory & Learning: Agents must store, retrieve, and evolve knowledge over time, enabling continuity and adaptation across tasks. Representative tools: LangChain Memory, MemGPT, LlamaIndex, Pinecone, Chroma, Weaviate, Qdrant, MemoryGraph.
    7. Safety, Alignment & Evaluation: Agents must behave ethically, remain within defined boundaries, and be evaluated for robustness, fairness, and alignment. Representative tools: Guardrails AI, Constitutional AI, OpenAI Moderation API, red-teaming agents, TruLens, Helicone.

    Why this matters: Agentic AI represents a fundamental shift in how intelligent systems are designed. These agents are not just tools; they are collaborators capable of reasoning, learning, and acting across environments. As builders, researchers, and practitioners, we must ensure that our systems are robust, transparent, and beneficial. I welcome thoughts, feedback, and discussion: this space is moving fast, and collaboration is essential.
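Pillars 2, 4, and 5 come together in the ReAct pattern named above. Here is a minimal sketch of that reason → act → observe loop; the "model" is a scripted stand-in so the control flow is visible without an API key, and the tool set is hypothetical:

```python
# Minimal ReAct loop: the model proposes a thought and an action, the
# runtime executes the action against a tool, and the observation is fed
# back into the history until the model decides to finish.
TOOLS = {
    "search": lambda q: f"result for '{q}'",
    "calculate": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def scripted_model(history):
    # A real agent would ask an LLM for the next thought/action pair.
    if not any("Observation" in h for h in history):
        return ("Thought: I need the product first.", "calculate", "6*7")
    return ("Thought: I have what I need.", "finish", "The answer is 42.")

def react_loop(goal, max_steps=5):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        thought, action, arg = scripted_model(history)
        history.append(thought)
        if action == "finish":
            return arg, history
        observation = TOOLS[action](arg)
        history.append(f"Observation: {observation}")
    return None, history

answer, trace = react_loop("What is 6 times 7?")
print(answer)
```

Frameworks like LangChain Agent Executors implement exactly this loop, plus parsing, retries, and step limits.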

  • Pinaki Laskar

    2X Founder, AGI Researcher | Inventor ~ Autonomous L4+, Physical AI | Innovator ~ Agentic AI, Quantum AI, Web X.0 | AI Infrastructure Advisor, AI Agent Expert | AI Transformation Leader, Industry X.0 Practitioner.

    33,418 followers

    What are the building blocks behind autonomous AI agents, and which tools drive them? Understanding the layered architecture of #autonomousAIagents is essential for any professional working at the intersection of AI agents and product development. It provides a structured roadmap, from foundational models to governance, that helps us build safer, more powerful, and context-aware #AIagents. Here's a quick breakdown of each layer and the tools driving it:

    • Layer 1: LLM (Foundation Layer). The reasoning and language core. Large language models like GPT-4, Claude, Mistral, and LLaMA form the foundation for text generation and understanding. Tools: OpenAI GPT-4, Claude, Cohere, Gemini, LLaMA, Mistral.
    • Layer 2: Knowledge Base (KB). Provides external context (structured or unstructured) for better decisions. Tools: Chroma, Pinecone, Redis, PostgreSQL, Weaviate.
    • Layer 3: Retrieval-Augmented Generation (RAG). Retrieves relevant data before generation to improve factual accuracy. Tools: LangChain RAG, LlamaIndex, Haystack, Unstructured.io.
    • Layer 4: Interaction Interface. Where users and agents meet, via text, voice, or tools. Tools: OpenAI Assistants API, Streamlit, Gradio, LangChain Tools, Function Calling.
    • Layer 5: External Integrations. Agents connect with CRMs, APIs, browsers, and other services to take action. Tools: Zapier, Make.com, Serper API, Browserless, LangChain Agents, n8n.
    • Layer 6: Operational Logic & Autonomy. The brain of autonomous agents: task planning, decision-making, execution. Tools: AutoGen, CrewAI, MetaGPT, LangGraph, AutoGen Studio.
    • Layer 7: Governance & Observability. Ensures traceability, ethical alignment, and debugging. Tools: Helicone, LangSmith, PromptLayer, W&B, TruLens.
    • Layer 8: Safety & Ethics. Builds trust by preventing toxic, biased, or unsafe behavior. Tools: Azure Content Filter, OpenAI Moderation API, Guardrails AI, Rebuff.
This architecture is more than just a stack — it’s a blueprint for responsible AI innovation. Whether you're building internal copilots, autonomous agents, or customer-facing assistants, understanding these layers ensures reliability, compliance, and contextual intelligence.
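Layers 2 and 3 can be illustrated with a toy retrieval-augmented pipeline. This sketch uses bag-of-words vectors as a stand-in for real embeddings; a production stack would use an embedding model plus a vector store such as Chroma or Pinecone, and the layer-1 LLM call is only indicated:

```python
# Toy RAG: vectorize documents, rank by cosine similarity against the
# query, and ground the generation prompt in the top-ranked context.
import math
from collections import Counter

DOCS = [
    "Chroma is an open-source vector database for storing embeddings",
    "LangGraph builds stateful cyclic agent workflows",
    "Whisper transcribes speech to text",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def answer(query):
    context = " ".join(retrieve(query))
    # The layer-1 LLM call would consume this grounded prompt.
    return f"Answer using context: {context}"

result = answer("which vector database stores embeddings")
print(result)
```

The point of the pattern is visible even at toy scale: generation is conditioned on retrieved facts rather than on the model's parametric memory alone.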

  • Paolo Perrone

    No BS AI/ML Content | ML Engineer with a Plot Twist 🥷100M+ Views 📝

    128,953 followers

    10 Open Source AI Tools Every Engineer Should Know. After 3 months of testing, here's what survived my workflow:

    1. Talkd.ai, JSON to AI agent in minutes: Forget complex backends. Define agent behavior in YAML. Built a PDF analyzer agent during a lunch break. Perfect for "I need this working by EOD" situations. 🔗 https://talkd.ai
    2. Marimo, Python notebooks that don't suck: Reactive cells. Built-in versioning. No more kernel panic at 3 a.m. Finally, notebooks I can push to production without shame. My data science team switched in a week. 🔗 https://lnkd.in/gxwrtBJc
    3. Unsloth AI, fine-tune LLMs on your gaming GPU: Llama 3 fine-tuning on a single 3090. No cloud bills. 2x faster than standard methods. Your GPU won't melt. Democratizing model customization for real. 🔗 https://lnkd.in/gJZtH4Y4
    4. HackingBuddyGPT, ethical hacking assistant: Fully offline. Generates payloads. Runs recon scripts. Because your pentesting data shouldn't touch the cloud. Red teamers, this one's for you. 🔗 https://lnkd.in/gRrJ-Zwh
    5. Giskard, unit tests for AI models: Catch hallucinations before users do. Test for bias, toxicity, and edge cases systematically. Saved me from shipping a model that thought all CEOs were male. 🔗 https://lnkd.in/g3QhG9FB
    6. OpenWebUI, self-hosted ChatGPT: Runs Llama, Mistral, and other local models, with API backends too. Zero API costs for local use. Tool calling, memory, custom personas included. Privacy-first teams love this one. 🔗 https://lnkd.in/gk3t65RG
    7. Axolotl, YAML-driven fine-tuning: One config file. Multiple training strategies. QLoRA, LoRA, PEFT: pick your poison. Fine-tuning without a PhD in configuration. 🔗 https://lnkd.in/gu6pJxWk
    8. FastRAG, RAG in 5 minutes flat: No Pinecone. No LangChain bloat. Just local RAG that works. Point it at PDFs or websites and start querying. Built for prototypes that become production. 🔗 https://lnkd.in/gNrG6HyE
    9. Nav2, robot navigation that actually ships: ROS 2 based. Real-time obstacle avoidance. Multi-robot coordination out of the box. If you're building robots, you need this. 🔗 https://lnkd.in/gYiqsiTJ
    10. MindsDB, ML inside your database: Train models with SQL, e.g. `SELECT predict(sales) FROM data`. No export/import dance. No separate ML pipeline. Your DBA will either love or hate you. 🔗 https://lnkd.in/gYiqsiTJ

    My quick match guide:
    • Need fast prototypes? → Talkd.ai + FastRAG
    • Building data apps? → Marimo + MindsDB
    • Shipping to production? → Giskard + Axolotl
    • Privacy critical? → OpenWebUI + HackingBuddyGPT

    The best part? Clone → install → ship. No waitlists. No API keys. No surprises. Open source AI isn't just catching up; it's setting the pace. What open source AI tool saved your project this week?
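The "unit tests for AI models" idea behind Giskard can be sketched without the library (this is the concept, not Giskard's actual API): run the model over probe inputs and assert properties of the outputs, just as you would assert on return values in ordinary unit tests. `fake_model` is a hypothetical stand-in:

```python
# Property-based checks over model outputs: probe prompts go in, and any
# output violating the property (here, gendered assumptions about a role)
# is collected as a failure, exactly like a failing unit test.
def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return "A CEO leads a company and sets its strategy."

BIAS_PROBES = ["Describe a CEO.", "Describe a nurse."]
GENDERED = {" he ", " she ", " his ", " her "}

def check_no_gendered_assumptions(model, probes):
    failures = []
    for p in probes:
        out = f" {model(p).lower()} "
        if any(g in out for g in GENDERED):
            failures.append(p)
    return failures

failures = check_no_gendered_assumptions(fake_model, BIAS_PROBES)
print("failures:", failures)
```

Tools like Giskard generalize this to systematic suites for bias, toxicity, hallucination, and robustness, with reporting on top.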

  • Igor Bobriakov

    AI Architect. Author of “Production-Ready AI Agents”.

    18,246 followers

    I just open sourced my reference architecture for production-ready AI agents. There is a massive gap between a "working prototype" and a "reliable system". It is easy to make an AI agent work once. It is incredibly hard to make it work 10,000 times without crashing, hallucinating, or getting stuck in a loop. For the past few months, I've been working on a standardized approach to bridge this gap. Today, I decided to open source the entire engineering curriculum.

    What's inside: a 10-lesson lab where you build an "AI Codebase Analyst" from scratch. It focuses on the engineering constraints that often get skipped in tutorials:
    1. State management: moving from brittle linear scripts to cyclic state machines (using LangGraph) to handle loops, retries, and human approvals.
    2. Reliability: treating the LLM as an untrusted API. We use Pydantic to enforce strict schema validation on every output, catching hallucinations before they break the app.
    3. Deployment: a production-hardened Docker setup for serverless deployment.

    The goal: to provide a clean, standardized reference architecture for anyone looking to build robust, scalable agentic systems. If you are looking to move from "experimental scripts" to "production services", this is for you. 💻 Repo: https://lnkd.in/dwnHbPGX #AI #LLM #LangGraph #Python #OpenSource #SoftwareEngineering
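The "LLM as an untrusted API" point can be made concrete with a few lines of Pydantic (v2 API assumed): validate every model output against a strict schema before it touches application state. The `FileReport` schema and the JSON strings standing in for raw model responses are illustrative, not from the repo:

```python
# Schema validation on LLM output: well-formed responses parse into typed
# objects; malformed or hallucinated ones raise ValidationError instead of
# corrupting downstream state.
from pydantic import BaseModel, Field, ValidationError

class FileReport(BaseModel):
    path: str
    issue_count: int = Field(ge=0)  # must be a non-negative integer
    summary: str

raw = '{"path": "src/app.py", "issue_count": 3, "summary": "Unused imports."}'
report = FileReport.model_validate_json(raw)

# A hallucinated response ("lots" is not an int) is rejected up front.
bad = '{"path": "src/app.py", "issue_count": "lots", "summary": "?"}'
rejected = False
try:
    FileReport.model_validate_json(bad)
except ValidationError:
    rejected = True

print(report.issue_count, rejected)
```

Pairing this with a retry ("re-ask the model with the validation error in the prompt") turns schema failures into a recoverable loop rather than a crash.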

  • Piyush Ranjan

    28k+ Followers | AVP | Tech Lead | Forbes Technology Council | Thought Leader | Artificial Intelligence | Cloud Transformation | AWS | Cloud Native | Banking Domain

    28,397 followers

    The rise of agentic AI is transforming how we build, deploy, and interact with intelligent systems. Here's a complete look at an open agentic AI stack, showing the essential tools and frameworks at each layer:

    • Foundation models (LLaMA 4, Mistral, Qwen 3 Fusion, DeepSeek): open-source giants powering intelligent reasoning and generation.
    • Serving & fine-tuning (vLLM, Text Generation Inference, LoRA adapters, Ollama, BentoML): efficient model deployment and adaptation.
    • Memory & retrieval (LanceDB, Weaviate, Mem0, Marqo, Qdrant): vector databases and memory layers for contextual memory and real-time retrieval.
    • Orchestration & agents (LangGraph, AutoGen, CrewAI, DSPy, Flowise, OpenDevin): modular, composable agent workflows.
    • Evaluation & safety (AgentBench 2025, RAGAS, TruLens, PromptGuard 2, Zeno): performance, transparency, and responsible AI use.

    The open ecosystem is evolving fast, making it easier than ever to build production-ready AI agents from scratch. 💡 Building your agentic stack? Start modular, go open, and think long-term.

  • Daron Yondem

    Author, Agentic Organizations | Helping leaders redesign how their organizations work with AI

    57,403 followers

    🚀 OpenManus: the open-source alternative to Manus AI that lets you build autonomous agents without invite codes! In case you missed it, Manus AI (by Butterfly Effect) has made waves as a fully autonomous AI agent that can execute complex tasks across multiple domains. Now this framework from MetaGPT researchers provides a barrier-free path to creating similar LLM-powered agents. Impressively, their team launched a working prototype in just 3 hours, showcasing how well-designed architecture can accelerate agent development. Two technical innovations make this project stand out:

    • A configuration-driven implementation using TOML files that cleanly separates agent definitions from execution logic. This lets you swap between different LLM backends (currently GPT-4o) without modifying core code, an elegant approach for future-proofing against model changes and potentially supporting multi-modal capabilities similar to Manus AI's text, image, and code processing.
    • OpenManus-RL, a research-focused extension implementing GRPO (Group Relative Policy Optimization) for fine-tuning agents. This collaboration between UIUC and OpenManus researchers brings cutting-edge RL techniques to practical agent development, critical for achieving the kind of autonomous task execution that has made Manus AI stand out in benchmark tests.

    The implementation is deliberately minimalist, prioritizing flexibility over prescriptive patterns, making it suitable for both experimental prototypes and production systems that could eventually rival Manus AI's capabilities in report generation, content creation, and tool integration. For developers exploring agent frameworks: would you rather build autonomous agents on this type of lightweight, open foundation, or wait for invite codes to closed commercial platforms with predetermined workflows? What capabilities from Manus AI would you prioritize implementing first?
#AIAgents #OpenSource #LLM #ReinforcementLearning #AutonomousAI

  • Pallavi Ahuja

    AI | Software Engineering | Writes @techNmak

    95,993 followers

    If you're building AI agents, here's what you actually need. It's no longer enough to just call an LLM and hope for the best. Autonomous agents require a complete architecture made of multiple moving parts, each playing a critical role in how the agent thinks, plans, acts, and improves. Here are the 12 core components every serious AI agent needs:

    1. Memory (short- and long-term): stores past interactions and context to ensure continuity across sessions. Tools like LangChain Memory and Weaviate help with this.
    2. Knowledge base (KB): provides structured facts, context, and reference data for reasoning. Popular tools include Pinecone, Redis, and other vector databases.
    3. Tool use & API integration: enables the agent to interact with external tools or systems via APIs. Integration tools include OpenAI Function Calling and AutoGen.
    4. Planning & decomposition engine: breaks big tasks into smaller steps. Tools like CrewAI and MetaGPT automate multi-step workflows.
    5. Execution loop: carries out tasks, monitors results, and decides the next steps. Patterns like ReAct and frameworks like BabyAGI enable this.
    6. Reasoning & decision making: selects the best next step using logic or probabilistic reasoning. Common methods include Chain-of-Thought and Tree-of-Thought.
    7. Natural language interface (LLM): handles understanding and generating natural language. Powered by models like GPT-4, Claude, and Gemini.
    8. Goal definition & tracking: keeps track of what the agent is trying to achieve and adjusts accordingly. Tools like AutoGen Goals and CrewAI Objectives help.
    9. Guardrails & safety filters: ensure safe and ethical use of AI with filters and constraints. Includes tools like Guardrails AI and OpenAI Moderation.
    10. Logging & feedback loop: tracks performance and success/failure rates, and learns from mistakes. Tools like W&B and Helicone support this.
    11. Evaluation & testing frameworks: ensure agents are actually doing the job right. Tools like LangChain Benchmarks and RAGAS handle evaluation.
    12. Multi-agent collaboration: coordinates multiple agents working together on complex tasks. Frameworks like CrewAI and AgentVerse make this possible.

    The takeaway? An effective AI agent isn't just a single model; it's an ecosystem of systems working in sync.
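Component 1 of the list above, the split between short-term and long-term memory, can be sketched in a few lines. This is a conceptual illustration, not any framework's API; real systems back the long-term side with a vector store such as Weaviate:

```python
# Short-term memory as a bounded window of recent turns; long-term memory
# as a durable keyed store the agent can recall facts from on demand.
from collections import deque

class AgentMemory:
    def __init__(self, window: int = 4):
        self.short_term = deque(maxlen=window)  # only the most recent turns survive
        self.long_term = {}                     # durable facts, keyed for recall

    def observe(self, turn: str):
        self.short_term.append(turn)

    def remember(self, key: str, fact: str):
        self.long_term[key] = fact

    def context(self, *keys):
        # Build the prompt context: recent turns plus requested long-term facts.
        facts = [self.long_term[k] for k in keys if k in self.long_term]
        return list(self.short_term) + facts

mem = AgentMemory(window=2)
mem.remember("user_name", "User's name is Ada.")
for turn in ["hi", "what's new?", "summarize our chat"]:
    mem.observe(turn)

print(mem.context("user_name"))
```

Note how the oldest turn ("hi") falls out of the window while the long-term fact persists, which is exactly the continuity-across-sessions property the component exists to provide.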

  • Umair Ahmad

    Senior Data & Technology Leader | Omni-Retail Commerce Architect | Digital Transformation & Growth Strategist | Leading High-Performance Teams, Driving Impact

    11,168 followers

    The Agentic AI Roadmap: From Code to Autonomous Intelligence. Building AI agents that think, reason, and act autonomously requires more than prompt engineering. It demands a structured approach that combines foundational skills, advanced frameworks, and production-ready infrastructure.

    • Essential skills foundation: Start with Python as your core language, master FastAPI for building robust APIs, understand Docker for consistent deployments, and learn Git for version control. These tools form the bedrock of any serious AI development workflow.
    • LLM and ML frameworks: Get comfortable with leading models like Nova, GPT-5, Claude, and LLaMA. Use Hugging Face Transformers for model access, LangChain for orchestration, and LlamaIndex for retrieval. Apply PEFT and LoRA for efficient fine-tuning, and run models locally with Ollama and vLLM.
    • Designing agent systems: Study ReAct, CoT, AutoGPT, and function-calling patterns. Explore multi-agent systems with MCP and A2A for workflow coordination. Understanding these patterns separates basic chatbots from intelligent autonomous systems.
    • Key infrastructure tools: Deploy vector databases like OpenSearch, ChromaDB, Pinecone, and Weaviate for semantic search. Manage memory with Redis, Zep, and SQLite. Monitor performance using LangSmith, Prompty, and TruLens. Handle async operations with Celery, RabbitMQ, and Kafka.
    • Advanced concepts: Protect against prompt injection with Guardrails AI and Rebuff. Enable multi-agent collaboration via CrewAI or AutoGen. Implement reinforcement learning from human feedback to continuously improve agent performance.
    • Production deployment: Host on AWS, Azure, GCP, or Render. Containerize with Docker and FastAPI. Automate with GitHub Actions and Hugging Face Spaces. Track everything with Loguru and Prometheus.

    #AgenticAI #MultiAgentSystems #LLM #MachineLearning #Python #AI #AutoGPT #LangChain

  • Chris Paxton

    AI + Robotics Research Scientist

    8,937 followers

    I'd like to introduce what I've been working on for the last few months at Hello Robot Inc: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration. The goal is to let researchers and developers quickly build and deploy AI-enabled robot applications. Stretch AI is designed so that you can easily get started and try it out on your robot. It supports multiple LLMs, from open-source models like Qwen to OpenAI's. You can even control the robot by voice, talk to it, and have it clean up your floor! Codebase: https://lnkd.in/eyDU_3Hk Blog post with some details: https://lnkd.in/eaSzMAtG More on other socials: Twitter/X: https://lnkd.in/e3Pg9ubk Bluesky: https://lnkd.in/e-Q2SFKb
