🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune & deploy on their robots!

π₀.₅ is a Vision-Language-Action model that represents a significant evolution from π₀, addressing a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

Generalization must occur at multiple levels:
- Physical level: understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
- Semantic level: understanding task semantics, such as where to put clothes and shoes (the laundry hamper, not the bed) and which tools are appropriate for cleaning spills
- Environmental level: adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
- Multimodal web data: image captioning, visual question answering, object detection
- Verbal instructions: humans coaching robots through complex tasks step by step
- Subtask commands: high-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
- Cross-embodiment robot data: data from various robot platforms with different capabilities
- Multi-environment data: static robots deployed across many different homes
- Mobile manipulation data: ~400 hours of mobile robot demonstrations

This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously. Huge thanks to the Physical Intelligence team & contributors!

Model: https://lnkd.in/eAEr7Yk6
LeRobot: https://lnkd.in/ehzQ3Mqy
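As a rough idea of what "experiment with & deploy in LeRobot" can look like, here is a minimal inference sketch. The module path, repo id ("lerobot/pi0"), observation keys, and tensor shapes are assumptions that vary between LeRobot releases and robot configs; check the model card for the canonical snippet.

```python
# A minimal sketch of loading a ported π₀ checkpoint through LeRobot for inference.
# Module path, repo id, observation keys, and dims below are placeholders
# (assumptions), not guaranteed to match your LeRobot release or robot config.
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy  # path may differ per release

policy = PI0Policy.from_pretrained("lerobot/pi0")  # repo id is an assumption
policy.eval()

# Build an observation batch matching the policy's expected camera/state keys.
observation = {
    "observation.images.top": torch.zeros(1, 3, 224, 224),  # placeholder camera key/size
    "observation.state": torch.zeros(1, 14),                 # placeholder state dim
    "task": ["fold the towel"],                               # language instruction
}
with torch.no_grad():
    action = policy.select_action(observation)
print(action.shape)
```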
Open-Source Robotics Solutions for Practical Applications
Explore top LinkedIn content from expert professionals.
Summary
Open-source robotics solutions for practical applications make advanced robot technologies accessible by sharing software, data, and models freely with the community. These efforts help robots perform complex tasks in real-world environments, from home automation to warehouse management, while enabling rapid innovation and collaboration.
- Experiment freely: Download, modify, and deploy open-source robotics software to test robot behaviors, simulation tools, and navigation systems without expensive licensing.
- Share and learn: Tap into community datasets, models, and research to improve your robot’s performance and contribute your own findings for others to build on.
- Adapt for real-world tasks: Use open frameworks and datasets to teach robots practical skills like object manipulation, navigation, and interaction in everyday environments.
-
Nav2Bot: ROS 2 Autonomous Navigation in Ignition Gazebo
➡ Differential drive robot simulation using ROS 2 Humble
➡ Autonomous navigation using the Nav2 stack
➡ LiDAR-based obstacle detection and environment perception
➡ AMCL-based localization for accurate robot positioning
➡ Global and local path planning with real-time execution
➡ Complete TF tree (map → odom → base_link → lidar_link)
➡ RViz visualization for costmaps, paths, and robot pose
➡ Keyboard teleoperation support for manual control

✨ Why this matters: Autonomous navigation is one of the core challenges in robotics, where a robot must perceive its environment, determine its position, and plan a safe path to a goal without human intervention. This project demonstrates a complete ROS 2 Nav2 pipeline that integrates localization, planning, and control into a unified system. By combining LiDAR data, odometry, and costmaps, the robot can intelligently navigate through unknown environments while avoiding obstacles in real time. These principles are widely used in real-world robotics applications such as autonomous vehicles, warehouse automation systems, delivery robots, and service robotics.

📊 Key Highlights:
✔ Full ROS 2 Navigation Stack (Nav2) integration
✔ LiDAR-based perception and obstacle avoidance
✔ AMCL localization for accurate positioning
✔ Global and local path planning
✔ Real-time costmap generation
✔ Gazebo simulation with realistic robot behavior
✔ RViz-based monitoring and debugging

💡 Future Potential: This framework can be extended to:
➡ Multi-robot navigation systems
➡ SLAM + Nav2 integration for unknown environments
➡ AI-based dynamic obstacle detection
➡ Reinforcement learning for path optimization
➡ Real-world deployment on mobile robots

🔗 For students, engineers & robotics enthusiasts: this project provides a complete hands-on implementation of autonomous navigation using ROS 2, making it ideal for understanding how intelligent robots perceive, plan, and act in real environments.

🔁 Repost to support robotics research & engineering education!

#ROS2 #Nav2 #Robotics #AutonomousSystems #Gazebo #Mechatronics #EngineeringProjects #Lidar #RViz #Automation #Navigation #AI #STEM #EngineeringEducation #RobotSimulation
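To make the "perceive, plan, act" pipeline concrete, here is a small sketch of sending a goal to a running Nav2 stack using nav2_simple_commander, which ships with Nav2. Frame names and coordinates are placeholders, not values from the Nav2Bot repository.

```python
# A minimal sketch of sending a navigation goal to a running Nav2 stack.
# Assumes Nav2 (AMCL, planners, controllers) is already launched, e.g. in Gazebo.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # block until AMCL and the Nav2 lifecycle nodes are up

goal = PoseStamped()
goal.header.frame_id = "map"     # goals are expressed in the map frame
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 2.0       # placeholder target coordinates
goal.pose.position.y = 1.0
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()  # e.g. distance remaining, recovery count

if navigator.getResult() == TaskResult.SUCCEEDED:
    print("Goal reached")
rclpy.shutdown()
```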
-
Big shift in robotics: NVIDIA just open-sourced Isaac Sim and Isaac Lab.

Isaac Sim has already been a cornerstone for high-fidelity robotics simulation: RTX-accelerated physics, realistic lidar/camera simulation, domain randomization, ROS/URDF support, and synthetic data pipelines. Now it's all on GitHub with full source access.

But the real multiplier? The release of Isaac Lab, a modular, open reinforcement learning and robot control framework built directly on top of Isaac Sim. It comes with ready-to-use robots (Franka, UR5, ANYmal), training loops, and environments for manipulation, locomotion, and more.

What's different now:
* You're no longer limited to APIs: developers can modify physics, sensors, and control logic at the source level.
* Isaac Lab provides a training-ready foundation for sim-to-real robotics, speeding up learning pipelines dramatically.
* Debugging, benchmarking, and custom integrations are now transparent, flexible, and community-driven.
* Collaboration across research and industry just got easier, with reproducible environments, tasks, and results.

We've used Isaac Sim extensively, and this open-source release is going to accelerate innovation across the robotics community.

GitHub: https://lnkd.in/gcyP9F4H
-
🚀 Amazon FAR (Frontier AI & Robotics) introduces OmniRetarget: teaching humanoids to interact with objects and their environment, just like humans do.

𝘏𝘦𝘳𝘦'𝘴 𝘵𝘩𝘦 𝘮𝘢𝘪𝘯 𝘪𝘥𝘦𝘢 (𝘴𝘪𝘮𝘱𝘭𝘪𝘧𝘪𝘦𝘥): In robotics, teaching humanoids complex skills means showing them how humans move and interact. But simply copying human motions (or using them as kinematic references) doesn't work cleanly. Human body vs. robot body: not the same shape, not the same joints, not the same kinematics. On top of that, interactions (touching objects, walking on surfaces) are often lost or distorted during retargeting (the process of adapting human motions to robot bodies). OmniRetarget fixes these problems.

What is OmniRetarget?
A system that converts human motion + human scenes into robot-compatible motion while preserving interactions (contacts, spatial relations) with objects and terrain. It uses an interaction mesh to model where contacts happen (hand touching box, feet on ground) and keeps them consistent when mapping to a robot. From one demonstration (a recording of a human performing the task), it can generate many variations: different robots, object positions, and terrains.

Why is it better than older approaches?
Older methods often ignore interaction preservation, leading to artifacts like foot sliding or unrealistic motions. OmniRetarget enforces both robot limits (joints, geometry) and real interactions (which part touches what) at the same time. It produces 8+ hours of high-quality trajectories, beating baselines in realism and consistency. Reinforcement learning (RL) policies trained on these trajectories can now perform long, complex tasks (up to 30 seconds) on a physical humanoid (Unitree G1).

📖 Open-source contribution
They are releasing the OmniRetarget Dataset, over 8 hours of humanoid loco-manipulation and interaction data, freely available on Hugging Face: https://lnkd.in/eYBn2hfe

Why this matters: Robots don't just need to move, they must interact with the world. High-quality, interaction-aware data has been a major bottleneck. OmniRetarget makes this data available to the community, helping researchers and companies build humanoids that can operate in cluttered, object-rich environments.

📖 Full paper: https://lnkd.in/ej2But4W
👩💻 GitHub: https://lnkd.in/ejmUahtr
👩🔬 Authors: Lujie Yang, Xiaoyu Huang, Zhen Wu, Angjoo Kanazawa, Pieter Abbeel, Carmelo Sferrazza, C. Karen Liu, Rocky Duan, Guanya Shi

Thank you, Lujie Yang, for giving permission to use the video. Video: Unitree G1 humanoid carries a chair, climbs, leaps, and rolls, all in real time, using only its own body senses (no vision or LiDAR). A big step toward agile, human-like loco-manipulation.
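Since the dataset is hosted on the Hugging Face Hub, a minimal download sketch is shown below. The actual repository id sits behind the shortened link in the post, so the id used here is a placeholder, and the file layout is an assumption; check the dataset card for the real loading instructions.

```python
# A minimal sketch of fetching the OmniRetarget trajectories from the Hugging Face Hub.
# The repo id below is a PLACEHOLDER (the real id is behind the post's shortened link).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="amazon-far/omniretarget",  # placeholder, replace with the real dataset id
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)

# From here you would load the individual trajectory files with whatever format
# the dataset card specifies (motion clips, contact annotations, etc.).
```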
-
💥 A 450M model just beat bigger VLAs on real robot tasks, and… 100% open source! [📍 bookmark for later]

Came across SmolVLA, a new vision-language-action model for robotics that's compact, fast, and trained entirely on open community datasets from LeRobot via Hugging Face. What stood out to me is how it matches or outperforms much larger models like ACT using noisy, real-world community data instead of giant private datasets.

Why it's worth a look
✅ 26% performance boost from pretraining on open-source data
✅ Runs on consumer hardware, even a MacBook
✅ 30% faster responses with async inference and smart architecture tweaks
✅ Strong results across Meta-World, LIBERO, SO100, and SO101
✅ Fully open source: weights, code, training pipeline, eval stack

They also introduced smart efficiency tricks, like using fewer visual tokens, pulling outputs from an intermediate layer, and separating perception from action, to make it all run fast.

Useful links
📘 Blog: https://lnkd.in/dnZSHdqU
📦 Model: https://lnkd.in/dUZMzTDN
📄 Paper: arxiv.org/abs/2506.01844

SmolVLA is a strong case for what can happen when the robotics community shares data and builds in the open. Definitely worth keeping an eye on.
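A quick way to sanity-check the "runs on consumer hardware" claim is to time the policy's action step locally. The sketch below assumes the LeRobot policy class, module path, repo id ("lerobot/smolvla_base"), and observation keys, all of which may differ between releases; the model card has the canonical usage.

```python
# A rough sketch of measuring SmolVLA's per-step inference latency on local hardware.
# Module path, repo id, observation keys, and dims are assumptions (placeholders).
import time
import torch
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")
policy.eval()

batch = {
    "observation.images.front": torch.zeros(1, 3, 256, 256),  # placeholder camera key/size
    "observation.state": torch.zeros(1, 6),                    # placeholder state dim
    "task": ["pick up the red cube"],
}

with torch.no_grad():
    policy.select_action(batch)                  # warm-up call
    start = time.perf_counter()
    n_steps = 20
    for _ in range(n_steps):
        policy.reset()                           # clear any cached action chunk so each
        policy.select_action(batch)              # call runs the full model
    avg_ms = (time.perf_counter() - start) / n_steps * 1e3
print(f"avg step latency: {avg_ms:.1f} ms")
```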
-
⭐️ We're releasing a comprehensive, hands-on recipe for teaching robots to fold clothes 🤟 … a 25-minute read with a full breakdown of modern end-to-end robot learning, from hardware to training to evaluation, all open-sourced with LeRobot and Hugging Face 🤗.

→ Built from 131 hours of teleoperation data, 5k+ GPU hours, 8 robot setups, and a set of practical findings we didn't expect 👀

We trained language-conditioned vision-action policies for bimanual cloth folding, reaching 90% success on arbitrary t-shirts on real hardware. But the most interesting result wasn't the model. With architecture and training held fixed, performance moved from 40% → 90% almost entirely by changing the data:
– making demonstrations more consistent (same strategy each time)
– selecting higher-quality trajectories instead of using everything
– giving the model a notion of "progress" through the task (SARM)
– adding examples of how to recover from mistakes (DAgger-style)

This suggests a useful lens: for long-horizon, contact-rich tasks, we are not yet model-limited. Performance depends heavily on how we structure and supervise interaction data over time. Concretely:
– consistency helps more than showing many different ways of doing the task
– learning which parts of a trajectory matter is more important than treating every step equally
– teaching the model how to recover from failure is as important as showing successful executions

We wrote this as a detailed, reproducible system for others to build on. Hope it's useful if you're working on real-world robot learning.

Blog: https://lnkd.in/dW_8JKD9
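The data-centric ideas (keep only higher-quality episodes, weight steps by task progress) can be illustrated with a generic curation sketch. This is not the authors' pipeline; the field names, thresholds, and weighting scheme are hypothetical and only show the shape of the idea.

```python
# A hypothetical sketch of data curation for long-horizon demonstrations:
# filter low-quality episodes and up-weight steps where task progress advances.
# Field names and thresholds are illustrative, not the authors' actual code.
from dataclasses import dataclass

@dataclass
class Trajectory:
    episode_id: str
    quality: float          # e.g. a reviewer score or automatic success metric
    progress: list[float]   # per-step task-progress labels in [0, 1] (SARM-like)

def curate(trajectories: list[Trajectory], min_quality: float = 0.7):
    """Drop low-quality episodes and compute per-step training weights."""
    kept = [t for t in trajectories if t.quality >= min_quality]
    weighted = []
    for traj in kept:
        # Steps where progress actually advances get more weight; near-idle steps less.
        deltas = [max(b - a, 0.0) for a, b in zip(traj.progress, traj.progress[1:])]
        weights = [0.1 + d for d in deltas]  # small floor so no step is ignored entirely
        weighted.append((traj.episode_id, weights))
    return weighted

# Example: one clean episode and one sloppy, stalled episode.
demos = [
    Trajectory("ep_001", quality=0.9, progress=[0.0, 0.2, 0.5, 0.9, 1.0]),
    Trajectory("ep_002", quality=0.4, progress=[0.0, 0.1, 0.1, 0.2, 0.3]),
]
print(curate(demos))  # only ep_001 survives, with per-step weights
```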
-
Control your ROS 2 autonomous robots with text commands!

Alberto Tudela Roldán just released nav2_mcp_server, an open-source Model Context Protocol (MCP) server that lets you control and monitor your Nav2 navigation stack directly. It's interesting for developers building intelligent robot interfaces or integrating Nav2 with AI agents.

What it offers:
✅ Navigate to poses, follow waypoints, dock/undock, or spin in place
✅ Manage costmaps and the Nav2 lifecycle from a single interface
✅ Get real-time robot pose and status feedback
✅ Easy to run with Python, uv, or Docker
✅ Async navigation operations with progress monitoring

It's a great example of how the MCP ecosystem can bridge ROS 2 and modern AI tooling for interactive robot control.

🔗 Give it a try and step up the human-robot interaction of your autonomous systems!

What is your pipeline for interacting with your autonomous ROS 2 robots? Let's connect and share robotics tips 🔽

#Robotics #ROS2 #AI #Nav2
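For readers new to MCP, here is a hypothetical sketch of what an MCP tool bridging text commands to Nav2 could look like, using the official MCP Python SDK's FastMCP helper. This is not nav2_mcp_server's actual code; the tool name and behavior are illustrative only.

```python
# A hypothetical sketch of an MCP tool that would forward a navigation goal to Nav2,
# using the official MCP Python SDK (FastMCP). NOT nav2_mcp_server's real implementation.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("nav2-demo")

@mcp.tool()
def navigate_to_pose(x: float, y: float, yaw: float = 0.0) -> str:
    """Send the robot to (x, y) in the map frame with the given heading."""
    # In a real bridge this would hand the goal to the Nav2 action server
    # (e.g. via nav2_simple_commander, as in the goal-sending sketch above).
    return f"Navigation goal accepted: x={x}, y={y}, yaw={yaw}"

if __name__ == "__main__":
    mcp.run()  # exposes the tool over stdio so an MCP client / LLM agent can call it
```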
-
Researchers from Princeton University, Stanford University, and Dexterity, Inc. have introduced 𝐓𝐢𝐝𝐲𝐁𝐨𝐭++, an open-source mobile robot designed to make learning tasks easier and faster. It is inexpensive, sturdy, and flexible, making it useful for many real-world household jobs. TidyBot++ can support different robot arms, allowing it to perform various tasks, such as picking up objects or cleaning.

What makes it special is its powered casters. These allow the robot to move smoothly in all directions simultaneously, making it more maneuverable. This feature removes tricky movement limitations seen in other robots, which often waste time with complicated motions.

The team created an easy-to-use mobile phone control system to collect data for teaching the robot. This tool allows people to guide and show the robot how to complete tasks. The researchers used this system to gather data and teach TidyBot++ how to perform common household tasks successfully.

TidyBot++ is a step forward for robot learning and shows how simple, innovative design can make robots more beneficial for everyday jobs.

📝 Research Paper: https://lnkd.in/eVjwy9Ru
📊 Project Page: https://lnkd.in/eZ8nvdGx

#robotics #research
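The advantage of a holonomic (powered-caster) base is that it accepts forward, sideways, and rotational velocity commands at the same time. The generic ROS 2 sketch below illustrates that; the topic name and message type are just the common convention, not TidyBot++'s actual interface.

```python
# A generic sketch of commanding a holonomic base: unlike a differential drive,
# it can translate in x and y while rotating. /cmd_vel and Twist are the usual
# ROS 2 convention, assumed here, not TidyBot++'s actual control interface.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class HolonomicDemo(Node):
    def __init__(self):
        super().__init__("holonomic_demo")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.send_cmd)  # 10 Hz command loop

    def send_cmd(self):
        cmd = Twist()
        cmd.linear.x = 0.2    # forward (m/s)
        cmd.linear.y = 0.1    # sideways strafe: impossible on a diff-drive base
        cmd.angular.z = 0.3   # rotate while translating (rad/s)
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(HolonomicDemo())

if __name__ == "__main__":
    main()
```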
-
How can we bridge the gap between simulation and reality in robotics?

Developed by a team from UC Berkeley, Google DeepMind, and other leading institutions, MuJoCo Playground is a fully open-source framework revolutionizing robotic learning and deployment. This tool enables rapid simulation, training, and 𝘇𝗲𝗿𝗼-𝘀𝗵𝗼𝘁 𝘀𝗶𝗺-𝘁𝗼-𝗿𝗲𝗮𝗹 𝘁𝗿𝗮𝗻𝘀𝗳𝗲𝗿 across diverse robotic platforms.

MuJoCo Playground supports quadrupeds, humanoids, dexterous hands, and robotic arms; trains reinforcement learning policies in minutes on a single GPU; and streamlines vision-based and state-based policy training with integrated batch rendering and a powerful physics engine. The framework's real-world success is evidenced by its deployment on platforms like the Unitree Go1, the LEAP hand, and the Franka arm within 8 weeks. Its efficiency and simplicity empower researchers to focus on innovation. A simple 'pip install playground' will do!

Congratulations to the team, Kevin Zakka, Baruch Tabanpour, Qiayuan Liao, Mustafa Haiderbhai, Samuel Holt, Carmelo (Carlo) Sferrazza, Yuval Tassa, Pieter Abbeel and collaborators, for this game-changing contribution to robotics!

🔗 Check out their website here https://lnkd.in/g7mbZtXg for their paper, GitHub, live demo, and even a Google Colab setup for an easy start!

💬 What do you think is the next big challenge for sim-to-real transfer in robotics? Let's discuss below!

P.S. Excited to share an open-source framework I've been experimenting with recently!

#Robotics #AI #Simulation #MachineLearning #Engineering #Innovation #ReinforcementLearning
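For a feel of the workflow, here is a minimal rollout sketch in the project's registry-and-step style. The registry call, task name, and env methods follow README-style usage and may change between releases, so treat the exact names as assumptions; the project's Colab is the authoritative starting point.

```python
# A minimal sketch of rolling out a MuJoCo Playground environment with JAX.
# Task name and API details are assumptions based on README-style usage.
import jax
import jax.numpy as jnp
from mujoco_playground import registry

env = registry.load("CartpoleBalance")   # one of the registered task names
jit_reset = jax.jit(env.reset)
jit_step = jax.jit(env.step)

state = jit_reset(jax.random.PRNGKey(0))
for _ in range(100):
    action = jnp.zeros(env.action_size)  # zero action, just to step the physics
    state = jit_step(state, action)
print("final reward:", state.reward)
```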
-
Hugging Face is leading the way in open-source robotics.

In May 2024, they launched LeRobot, providing the models, datasets, and tools for robotics (all open-source). This lowered the barriers for enthusiasts to experiment with robotic policies. Since its inception, LeRobot has taken the robotics world by storm, reaching 12k GitHub stars.

But robots are both hardware and software! So they also 'open-sourced' the process of 3D printing your own SO-100 robot arms.

Now, they're diving even deeper into hardware. They just acquired Pollen Robotics, maker of Reachy 2, an open-source and VR-friendly humanoid used in labs worldwide for embodied AI research. Hugging Face plans to expand on this open-source ethos, selling the robot and allowing engineers to download, modify, and suggest improvements to its code.

To increase the rate of progress in robotics, we need a vibrant open-source community. The Transformer architecture was open-sourced in 2017, which (after many dominoes) led to the incredibly powerful LLMs we use today. I hope the same phenomenon happens in robotics!

Keep shipping, Remi Cadene and team 🦾👏🏾