Sim2Real Techniques for Bipedal Robot Engineering

Explore top LinkedIn content from expert professionals.

Summary

Sim2Real techniques for bipedal robot engineering involve training robots in computer simulations before deploying them in the real world, aiming to bridge the gap between virtual models and actual performance. These methods help robots learn complex movements like walking, jumping, and manipulating objects without relying exclusively on costly real-world experiments.

  • Refine simulation models: Invest time in improving your simulation’s physics and randomness so your robot’s movements translate more accurately when tested on real hardware.
  • Utilize large-scale training: Harness the power of cloud computing and GPUs to run millions of simulated trials, allowing your robot to practice and adapt to a wide variety of scenarios.
  • Map joint behaviors: Analyze each robot joint separately to identify which ones need more precise tuning for real-world stability, rather than applying blanket adjustments across the board.
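
To make the first bullet concrete, here is a minimal sketch of per-episode physics randomization at environment reset. All parameter names and ranges are illustrative assumptions, not values taken from any of the posts below.

```python
import numpy as np

# Illustrative nominal physics parameters; real values come from your robot's model.
NOMINAL = {"friction": 1.0, "link_mass_scale": 1.0, "motor_kp": 80.0, "latency_s": 0.010}

# Hypothetical randomization ranges, multiplicative unless noted.
RANGES = {
    "friction": (0.5, 1.25),
    "link_mass_scale": (0.9, 1.1),
    "motor_kp": (0.85, 1.15),
    "latency_s": (0.0, 0.015),  # additive, seconds
}

def randomize_physics(rng: np.random.Generator) -> dict:
    """Sample one episode's physics parameters around the nominal model."""
    params = dict(NOMINAL)
    params["friction"] *= rng.uniform(*RANGES["friction"])
    params["link_mass_scale"] *= rng.uniform(*RANGES["link_mass_scale"])
    params["motor_kp"] *= rng.uniform(*RANGES["motor_kp"])
    params["latency_s"] = rng.uniform(*RANGES["latency_s"])
    return params

rng = np.random.default_rng(0)
print(randomize_physics(rng))  # apply these to the simulator before each episode
```

A policy trained against many such draws cannot overfit to one exact physics model, which is the mechanism behind the "randomness" the bullet refers to.
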
  • View profile for Javi Lopez

    Founder at Magnific AI 🪄 (Acq. by Freepik) | Angel Investor 😇 | Founder and former CEO at Erasmusu (Acq. by Spotahome)

    17,093 followers

    I just watched this UMV clip way too many times. At first I genuinely thought it was CGI or some clever edit, because the moves look so "unreal." But nope → it's the Robotics and AI Institute, Marc Raibert's lab (he founded Boston Dynamics), showing a real two-wheeled robot doing hops, weird out-of-plane balance saves, and even level-ground flips.

    And the part that matters is not the stunt. It's the training recipe. They basically did:
    - Train the control policy with reinforcement learning
    - Feed it millions of physics-based simulations
    - Then deploy it with zero-shot transfer to the real robot (a minimal export sketch follows this post)

    If you've ever tried to get a real robot to do anything dynamic, you know how insane this is, because physics does not care about your demo video. One tiny mismatch in friction, mass, torque constants, wheel bounce, or battery sag → and your "perfect" sim policy eats the floor. Their blog calls this out directly and explains how they fight the sim-to-real gap with better models and randomized sim parameters, plus they're using NVIDIA Isaac Lab to train policies faster.

    My take: robotics is finally getting the same scaling curve GenAI got. Not because robots can chat, but because robots can learn athletic control from absurd amounts of simulation data → and then survive contact with the real world. Maybe this is the blueprint for the next decade of mobility robots.

    Source: https://lnkd.in/eEXmb5mw
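
The "zero-shot transfer" step in that recipe usually reduces to freezing the trained network and running its forward pass on the robot at a fixed control rate. A hedged sketch of that hand-off, assuming a simple PyTorch MLP policy; the observation and action sizes are made-up placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical observation/action sizes for a wheeled-legged robot.
OBS_DIM, ACT_DIM = 45, 6

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ELU(),
    nn.Linear(256, 128), nn.ELU(),
    nn.Linear(128, ACT_DIM), nn.Tanh(),  # actions normalized to [-1, 1]
)
# ... train in simulation with RL ...

# Freeze and export: the deployed controller is just this forward pass.
policy.eval()
scripted = torch.jit.script(policy)
scripted.save("policy.pt")

# On the robot, the control loop reduces to (I/O side left as pseudocode):
# obs = read_sensors(); act = scripted(torch.from_numpy(obs).float()); send_commands(act)
```

"Zero-shot" means nothing about the network changes between the last sim step and the first hardware step; all the robustness has to come from training, which is why the randomized sim parameters matter so much.
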

  • View profile for Tairan He

    Robotics PhD at CMU, Research Intern at NVIDIA GEAR

    1,183 followers

    Is real-world data still the bottleneck for robot learning? We just flipped the script. Zero real-world data ➔ autonomous humanoid loco-manipulation in reality.

    I'm excited to introduce VIRAL: Visual Sim-to-Real at Scale. The robotics community has long relied on expensive, slow, human-collected data. We took a different path. By training entirely inside NVIDIA Isaac Lab, we achieved 54 autonomous cycles (walk, stand, place, pick, turn) in the real world using a simple recipe: RL + Simulation + GPUs.

    Here is how we achieved photorealistic sim-to-real transfer without a single drop of real-world data:

    1. The Pipeline (Teacher ➔ Student). We accelerate physics to 10,000x real time. We train a privileged teacher with full state access in sim, then distill it into a vision-based student policy using DAgger and Behavior Cloning (a toy sketch of this step follows the post).

    2. Scale is not optional. We scaled visual sim-to-real compute up to 64 GPUs. We found that for long-horizon tasks like loco-manipulation, large-scale simulation is strictly necessary for convergence and robustness.

    3. Bridging the reality gap. To handle complex hardware (like 3-fingered dexterous hands), we performed rigorous System Identification (SysID). The difference in physics matching was night and day.

    4. The "free lunch." Sim-to-real is incredibly hard to build (it took us 6 months of infrastructure work), but once solved, you get generalization for free. VIRAL handles diverse spatial arrangements and visual variations without any real-world fine-tuning.

    Check out the full breakdown:
    📄 Paper: https://lnkd.in/eZE6GzEd
    🌐 Website: https://lnkd.in/euRajeVm

    A huge congratulations to the incredible team behind this work: Tairan He*, Zi Wang*, Haoru Xue*, Qingwei Ben*, Zhengyi Luo, Wenli Xiao, Ye Yuan, Xingye Da, Fernando Castañeda, Shankar Sastry, Changliu Liu, Guanya Shi. GEAR Leads: Jim Fan†, Yuke Zhu†
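
A toy rendition of the teacher → student step described in point 1: a privileged teacher relabels the states the student visits (the DAgger idea), and the student is regressed onto those labels (behavior cloning). The linear policies and toy dynamics below are stand-ins for the paper's actual networks and simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, OBS_DIM, ACT_DIM = 8, 4, 2

# Privileged teacher: sees the full state (in practice, trained with RL in sim).
W_teacher = rng.normal(size=(ACT_DIM, STATE_DIM))
teacher = lambda s: W_teacher @ s

# Student sees only a partial observation (stand-in for rendered camera images).
C = rng.normal(size=(OBS_DIM, STATE_DIM))   # toy observation model
W_student = np.zeros((ACT_DIM, OBS_DIM))
student = lambda o: W_student @ o

def rollout(policy, steps=200):
    """Collect (state, obs) pairs while the *student* drives the dynamics (DAgger)."""
    s, states, obses = rng.normal(size=STATE_DIM), [], []
    for _ in range(steps):
        o = C @ s
        states.append(s); obses.append(o)
        # Toy dynamics: decay plus the student's action on the first joints.
        s = 0.9 * s + 0.1 * np.r_[policy(o), np.zeros(STATE_DIM - ACT_DIM)]
    return np.array(states), np.array(obses)

for it in range(5):
    S, O = rollout(student)                      # student visits the states
    A = np.array([teacher(s) for s in S])        # teacher relabels them
    W_student = np.linalg.lstsq(O, A, rcond=None)[0].T  # behavior-clone the labels
    err = np.mean((O @ W_student.T - A) ** 2)
    print(f"iter {it}: imitation MSE {err:.4f}")
```

The key DAgger detail is that the *student's* own rollouts generate the training states, so the distillation data covers the states the student will actually encounter, not just the teacher's.
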

  • View profile for Ojas Shukla

    Exited my software co to SAP at 19, data for Citadel, humanoid robots at Gerra

    2,961 followers

    99.3% success in simulation. Face-plant by step three on real concrete. Everyone training humanoid policies knows the sim2real gap exists. But here's the thing: comprehensive per-joint characterization for full-size bipedal humanoids is absent from the literature. So we ran the analysis:

    → 284,000 joint samples
    → 123 episodes
    → 61 minutes of real-world data

    What we found changes how you should think about domain randomization: uniform randomization across all joints is wrong. The full breakdown is in the article below:

    → Which joints you can trust
    → Which ones you can't
    → Exact DR ranges based on observed error (a toy version of this idea is sketched after the post)
    → Why contact dynamics is the real bottleneck

    ♻️ Repost if you know someone training humanoid policies in sim.
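
The post's headline finding, that uniform randomization across joints is wrong, suggests a simple recipe one could sketch: measure per-joint tracking error on hardware, then size each joint's randomization range from its observed error rather than using one global range. A hedged illustration with fabricated numbers standing in for real logs; the joint names and the 3-sigma rule are assumptions, not the article's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
joints = ["hip_pitch", "hip_roll", "knee", "ankle_pitch", "ankle_roll"]

# Stand-ins for logged commanded-vs-measured joint positions (rad) on hardware.
cmd = rng.normal(0.0, 0.4, size=(5000, len(joints)))
meas = cmd + rng.normal(0.0, [0.01, 0.015, 0.03, 0.05, 0.06], size=cmd.shape)

# Per-joint tracking-error statistics.
err_std = np.std(meas - cmd, axis=0)

# Per-joint randomization half-width: proportional to observed error,
# clipped to a floor/ceiling, instead of one uniform range for every joint.
half_width = np.clip(3.0 * err_std, 0.01, 0.25)

for name, e, w in zip(joints, err_std, half_width):
    print(f"{name:12s} err_std={e:.3f} rad  ->  DR offset range ±{w:.3f} rad")
```

Joints the hardware tracks tightly get narrow ranges (so the policy is not needlessly conservative there), while sloppy joints get wide ranges where the robustness is actually needed.
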

  • OmniXtreme — Breaking the Generality Barrier in High-Dynamic Humanoid Control 🗒️ https://lnkd.in/gCNjveZh

    Dynamic whole-body behaviors such as sprinting, jumping, and rapid directional changes remain a central challenge for humanoid robotics because of the generality gap between specialized controllers and fluid real-world motion. OmniXtreme proposes a unified control framework that closes this divide by training high-dynamic locomotion policies that generalize across diverse movement regimes and perform stably both in simulation and on hardware.

    ■ Key contributions:
    • Unified control for high-dynamic behaviors — running, jumping, and agile maneuvers under one model
    • Robust sim-to-real transfer across tasks that traditionally require task-specific tuning
    • Demonstrated generalization to unseen scenarios with minimal adaptation overhead

    This work is a step toward truly general-purpose humanoid controllers capable of handling a wide spectrum of dynamic tasks — a crucial milestone for embodied AI operating in unpredictable environments.

    === "Generalizing high-dynamic control of humanoid robots 🤖🤸♂️": a single control model executes diverse motions (running, jumping, sharp turns) stably and smoothly. 🔗 https://lnkd.in/gM2byi8c

    #humanoidrobot #ControlSystems #DynamicMotion #SimToReal #RoboticsResearch #EmbodiedAI #OmniXtreme

  • View profile for Marc Theermann

    Chief Strategy Officer and GTM Leader at Boston Dynamics (Building the world’s most capable mobile #robots and Embodied AI)

    65,673 followers

    Another robotics masterpiece from our friends at Disney Research! Recent progress in physics-based character control has improved learning from unstructured motion data, but it is still hard to create a single control policy that handles diverse, unseen motions and works on real robots.

    To solve this, the team at Disney proposes a new two-stage technique (sketched after this post). In the first stage, an autoencoder learns a latent-space encoding from short motion clips. In the second stage, this encoding is used to train a policy that maps kinematic input to dynamic output, ensuring accurate and adaptable movements. By keeping the two stages separate, the method benefits from better motion encoding and avoids common issues like mode collapse.

    The technique has been shown to be effective in simulation and has successfully brought dynamic motions to a real bipedal robot, marking an important step forward in robot control. You can find the full paper here: https://lnkd.in/d-kzexdJ

    What Markus Gross, Moritz Baecher and the rest of the gang are bringing to life is unbelievable!
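
A compact sketch of the two-stage shape described above: stage one trains an autoencoder on short motion windows, stage two trains a policy head on top of the frozen encoder. All dimensions and the supervised placeholder loss are illustrative assumptions; the actual paper trains the policy against physics, not a regression target.

```python
import torch
import torch.nn as nn

WINDOW, DOF, LATENT = 8, 23, 32  # hypothetical: 8-frame clips, 23 joints

# Stage 1: autoencoder over short motion clips -> latent motion space.
enc = nn.Sequential(nn.Flatten(), nn.Linear(WINDOW * DOF, 256), nn.ELU(), nn.Linear(256, LATENT))
dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ELU(), nn.Linear(256, WINDOW * DOF))
opt1 = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

clips = torch.randn(512, WINDOW, DOF)  # stand-in for retargeted mocap windows
for _ in range(100):
    recon = dec(enc(clips)).view_as(clips)
    loss = nn.functional.mse_loss(recon, clips)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: freeze the encoder; the policy consumes the latent target motion
# plus proprioception and outputs joint targets. (Trained with RL in the paper;
# a supervised placeholder loss is used here only to make the sketch run.)
for p in enc.parameters():
    p.requires_grad_(False)

PROPRIO = 37
policy = nn.Sequential(nn.Linear(LATENT + PROPRIO, 256), nn.ELU(), nn.Linear(256, DOF))
opt2 = torch.optim.Adam(policy.parameters(), lr=3e-4)

obs = torch.randn(512, PROPRIO)
target = torch.randn(512, DOF)
act = policy(torch.cat([enc(clips), obs], dim=-1))
loss2 = nn.functional.mse_loss(act, target)
opt2.zero_grad(); loss2.backward(); opt2.step()
```

Keeping the stages separate means the latent space is shaped purely by motion reconstruction, so the policy inherits a clean, well-covered encoding instead of collapsing onto a few modes during control training.
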

  • View profile for Rangel Isaías Alvarado Walles

    Robotics & AI Engineer | AI Engineer | Machine Learning | Deep Learning | Computer Vision | Agentic AI | Reinforcement Learning | Self-Driving Cars | IoT | IIoT | AIOps | MLOps | LLMOps | DevOps | Cloud | Edge AI

    4,591 followers

    Articulated-Body Dynamics Network: Dynamics-Grounded Prior for Robot Learning
    Arxiv: https://lnkd.in/ePQe8ZuF
    Project: [Link not provided]

    🔁 At a Glance
    💡 Goal: Incorporate the dynamics structure of articulated robots into control policies to improve learning efficiency.
    ⚙️ Approach:
    - Inertia propagation: adapted from the Articulated Body Algorithm, propagating inertial quantities.
    - Learnable parameters: replace physical quantities with learnable ones.
    - Graph neural network: embeds dynamics-propagation physics into the policy architecture.
    - Bottom-up message passing: mimics forward-dynamics accumulation.

    📈 Impact (Key Results)
    🧪 Sample efficiency & generalization: outperforms baselines across diverse robots & tasks. Validation on real robots shows robust sim-to-real transfer.
    🔄 Robustness to dynamics shifts: maintains performance with increased mass & different terrains. Visualizations show learnt link representations capture meaningful physical relationships.
    🤖 Model extensions & efficiency: compatible with model-based RL & dynamics prediction; computationally efficient inference suitable for real-time control.

    🔬 Experiments
    🧪 Benchmarks: Genesis, SAPIEN, MuJoCo, ManiSkill.
    🎯 Tasks: locomotion, velocity tracking, standing.
    🦾 Setup: sim-to-real on Unitree G1 & Go2, NVIDIA RTX 4090 hardware.
    📐 Inputs: proprioception, velocity commands, foot contacts, images (future work).

    🛠 How to Implement (see the sketch after this post)
    1️⃣ Extract the robot's kinematic tree.
    2️⃣ Encode observations into link features.
    3️⃣ Perform dynamics-inspired bottom-up message passing.
    4️⃣ Decode actions from link representations.
    5️⃣ Train with PPO & orthogonality regularization.

    📦 Deployment Benefits
    ✅ Improved sample efficiency & robustness.
    ✅ Real-time inference on onboard hardware.
    ✅ Enhanced generalization to dynamics variations.
    ✅ Compatible with sim-to-real transfer pipelines.

    📣 Takeaway
    This physics-grounded GNN architecture provides an effective inductive bias for articulated robot control. It captures inertial propagation, boosting learning speed, robustness, and transferability. Advances in physics-informed policies open new horizons for efficient, adaptable robot behaviors.

    Follow me to know more about AI, ML and Robotics!
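
A toy rendition of steps 1-3 of the implementation outline: per-link features are updated from children toward the root, mirroring how the Articulated Body Algorithm accumulates inertial quantities leaf-to-root. The tree, feature sizes, and MLPs are invented for illustration and are not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical kinematic tree: index -> parent index (root = -1).
#      0 (torso)
#     / \
#    1   2      (hips)
#    |   |
#    3   4      (knees)
PARENT = [-1, 0, 0, 1, 2]
FEAT = 16

link_enc = nn.Linear(8, FEAT)                           # per-link observation encoder
msg_fn = nn.Sequential(nn.Linear(2 * FEAT, FEAT), nn.ELU())

def bottom_up(link_obs: torch.Tensor) -> torch.Tensor:
    """Accumulate features leaves -> root, like forward-dynamics inertia propagation."""
    h = [link_enc(o) for o in link_obs]                 # one feature vector per link
    # Visit links children-before-parents (PARENT is topologically ordered).
    for child in reversed(range(len(PARENT))):
        parent = PARENT[child]
        if parent >= 0:
            # Parent absorbs a message computed from (parent, child) features.
            h[parent] = h[parent] + msg_fn(torch.cat([h[parent], h[child]]))
    return torch.stack(h)

obs = torch.randn(len(PARENT), 8)   # stand-in per-link proprioception
feats = bottom_up(obs)              # step 4 would decode actions from these
print(feats.shape)                  # torch.Size([5, 16])
```

Because messages only flow along the physical kinematic tree, the network is forced to respect the robot's structure, which is the inductive bias the post credits for the sample-efficiency gains.
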

  • View profile for Akshet Patel 🤖

    Robotics Engineer | Creator

    53,281 followers

    Cristiano Ronaldo, Kobe Bryant, LeBron James! What if humanoid robots could execute gymnastics-like, whole-body moves with natural agility, even when simulation falls short of reality?

    [⚡Join 2500+ Robotics enthusiasts - https://lnkd.in/dYxB9iCh]

    A team from Carnegie Mellon University and NVIDIA (Tairan He, Jiawei Gao, Wenli Xiao, Yuanhang Zhang, Zi Wang, Jiashun Wang, Zhengyi Luo, Guanqi He, Nikhil Sobanbabu, Chaoyi Pan, Zeji Yi, Guannan Qu, Kris Kitani, Jessica Hodgins, Linxi "Jim" Fan, Yuke Zhu, Changliu Liu, and Guanya Shi) introduces ASAP, a two-stage framework that bridges sim-to-real physics for agile whole-body skills.

    They pre-train motion-tracking policies in simulation using retargeted human motion data, then deploy these policies on a real Unitree G1 robot to collect rollouts. A delta action model is learned from the real-world data to compensate for dynamics mismatches (sketched after this post). Finally, they fine-tune the policy with this model embedded, enabling smooth, high-agility motions that match real-world physics better than SysID, domain randomisation, or baseline residual-dynamics methods.

    ASAP consistently reduces tracking error and enables agile whole-body movements, like jumps and kicks, previously unattainable on real humanoids. This marks a key advance in sim-to-real transfer, making bold humanoid motions a practical reality.

    If we can now transfer complex human-like athletic movements to real robots, what should agile humanoids focus on next?

    Paper: https://lnkd.in/eeNmWqnf
    Project Page & Code: https://lnkd.in/eehaJb4F

    #HumanoidRobotics #Sim2Real #WholeBodyControl #AgileRobots #RSS2025
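
The heart of ASAP as summarized above is a learned residual: a delta-action network fitted on real rollouts so that simulating the corrected action reproduces the real next state, after which the policy is fine-tuned inside that corrected simulator. A schematic sketch; the linear toy dynamics and the injected "mismatch" are fabricated for illustration.

```python
import torch
import torch.nn as nn

STATE, ACT = 12, 6

# Toy stand-ins: "sim" and "real" differ by an unmodeled action-dependent term.
A_dyn = 0.95 * torch.eye(STATE)
B = torch.randn(STATE, ACT) * 0.1
mismatch = torch.randn(STATE, ACT) * 0.02   # the part sim doesn't know about

sim_step = lambda s, a: s @ A_dyn.T + a @ B.T
real_step = lambda s, a: s @ A_dyn.T + a @ (B + mismatch).T

# Delta action model: predict a correction so that sim(s, a + delta) ≈ real(s, a).
delta_net = nn.Sequential(nn.Linear(STATE + ACT, 64), nn.ELU(), nn.Linear(64, ACT))
opt = torch.optim.Adam(delta_net.parameters(), lr=1e-3)

for step in range(500):
    s = torch.randn(256, STATE)
    a = torch.randn(256, ACT)
    s_real = real_step(s, a)                      # "collected on hardware"
    a_corr = a + delta_net(torch.cat([s, a], -1))
    loss = nn.functional.mse_loss(sim_step(s, a_corr), s_real)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final matching loss: {loss.item():.5f}")
# Fine-tuning then reuses sim_step(s, a + delta_net(...)) as the training dynamics.
```

Compared with plain domain randomization, this directs the correction at the *measured* mismatch instead of hedging against every possible one, which is why the post reports lower tracking error.
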

  • View profile for Sid Gore

    AI & Robotics Systems Architect | Staff Engineer & Project Manager, Lockheed Martin | Leading complex system integration & test | Writing on robotics, simulation, and AI fluency

    3,834 followers

    A humanoid robot costs $90K to break once. AI lets you break thousands... and learn from every fall.

    My background is mechanical engineering, robotics, and integration & test. But this field is moving so fast with AI that reading articles wasn't cutting it anymore. I felt out of the loop, so I recently upgraded my personal setup to support AI training workloads and ran my first experiment: teaching a bipedal (two-legged) humanoid robot to navigate a custom parkour course using reinforcement learning in NVIDIA Isaac Lab 5.1.

    But before I share what I learned, let me explain what's actually happening under the hood. A GPU-accelerated AI agent runs thousands of virtual robots in parallel (a toy version of this pattern is sketched after the post); each one learns from its own falls and successes simultaneously. The AI develops a "control policy," the brain that tells a robot how to move through the physical world. Why does this matter? Because what once required million-dollar labs and months of physical testing can now run on a single AI-capable GPU in hours. Robotics R&D is becoming software-first.

    Here's what that looked like for this experiment: 76 minutes of CUDA-accelerated training time, 393 million training steps, and 4,096 robots learning in parallel on my RTX 5080.

    So what did I learn so far? Three things stood out to me:

    》The setup before you can hit "Run" is a challenge. It took me seven hours to troubleshoot versioning, packages, and dependencies before I could run anything. I forced myself to do it manually because I wanted to understand what's under the hood. YouTube tutorials hit their limit quickly, but thankfully the NVIDIA developer forums saved me.

    》The cost case is undeniable. A Unitree H1 costs around $90K. I *virtually* crashed thousands of them. My damage bill? $0. Simulation lets you fail forward at scale. This gets you to a solid starting point for physical testing, but...

    》The Sim-to-Real gap is real. This policy works well in simulation, but I couldn't get a feel for stress points, sensor behavior, or true stability. Failure is not predictable and happens at the edges. The next step would be to transfer this policy to a physical robot, gather real-world data, and continuously align the simulation to close that gap.

    The key thing here is: testing real hardware is expensive; simulation in software is cheap. How can you leverage both, intelligently? The benefit isn't limited to cost savings. This workflow also compresses development cycles and lets you field systems faster.

    Do you think virtual simulation is a game-changer that is here to stay, or a fad? How would you build confidence in a robotic control policy that is trained in a virtual world?

    #robotics #ai #nvidia #omniverse #isaaclab

    Citations:
    NVIDIA IsaacLab -> https://lnkd.in/ekVMDnDc
    RSL-RL -> https://lnkd.in/eJye3XTW
    Unitree H1 -> unitree.com/h1/

    Note: this is an educational personal project. Opinions are my own, no affiliation or endorsement.
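
The "thousands of robots in parallel" idea mentioned above is, at its core, batched state: each environment is one row of an array, and a single vectorized step advances all of them at once. A minimal CPU toy of the pattern; Isaac Lab does this on the GPU with full rigid-body physics, while the dynamics, policy, and reward here are placeholders.

```python
import numpy as np

N_ENVS, OBS, ACT = 4096, 16, 6
rng = np.random.default_rng(0)

states = rng.normal(size=(N_ENVS, OBS))   # one row per simulated robot
W = rng.normal(size=(ACT, OBS)) * 0.1     # stand-in linear policy

def step_all(states):
    """One control step for every environment at once; no per-robot Python loop."""
    actions = np.tanh(states @ W.T)                   # (N_ENVS, ACT)
    states = 0.99 * states                            # placeholder dynamics
    states[:, :ACT] += 0.01 * actions
    rewards = -np.linalg.norm(states, axis=1)         # placeholder reward
    dones = rng.random(N_ENVS) < 0.01                 # ~1% of envs "fall" each step
    states[dones] = rng.normal(size=(dones.sum(), OBS))  # auto-reset fallen robots
    return states, rewards, dones

for _ in range(100):   # 100 steps x 4,096 envs = 409,600 samples
    states, rewards, dones = step_all(states)
print(f"mean reward: {rewards.mean():.3f}, resets this step: {dones.sum()}")
```

The sample counts in the post fall out of this multiplication: thousands of environments times millions of steps is how 393 million training steps fit into 76 minutes.
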

  • View profile for Arpit Gupta

    Applied Scientist AI Robotics | Ex Boston Dynamics

    4,631 followers

    Simulation lets us train millions of trajectories. Reality tests whether any of them actually work. The Sim2Real gap isn't one problem — it's three:

    1️⃣ Visual shift (real ≠ synthetic). Real scenes have noise, clutter, glare, shadows, messy backgrounds. Sim rarely does.
    2️⃣ Physics shift (approximation ≠ reality). Small errors in friction, damping, mass, or latency → huge drift in behavior.
    3️⃣ Embodiment shift (robot ≠ robot-in-sim). Morphology, joint limits, actuator dynamics — nothing matches perfectly.

    What works today? (A toy System Identification sketch follows this post.)
    • Domain Randomization — vary textures, lights, physics, and noise until the policy generalizes by force
    • Domain Adaptation — align real and sim feature distributions
    • System Identification — tune the sim from real sensor measurements
    • Real-to-Sim Feedback Loops — use a tiny amount of real data to anchor the model

    As robotics foundation models scale, most of their data will come from simulation. The teams who master domain adaptation will be the ones who can actually deploy these models on physical robots — not just in demos.

    I added my favorite papers, frameworks, and tools in the comments 👇
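
Of the four tools listed, System Identification has the most self-contained shape: choose the simulator parameters that minimize the gap between simulated and logged real trajectories. A hedged sketch on a toy damped joint; the "real" log is synthesized here, and real SysID fits far richer models to actual sensor data.

```python
import numpy as np
from scipy.optimize import minimize

DT, STEPS = 0.01, 300
rng = np.random.default_rng(0)

def simulate(damping, stiffness):
    """Toy 1-DoF joint: qdd = -stiffness*q - damping*qd, integrated with Euler."""
    q, qd, traj = 1.0, 0.0, []
    for _ in range(STEPS):
        qdd = -stiffness * q - damping * qd
        qd += DT * qdd
        q += DT * qd
        traj.append(q)
    return np.array(traj)

# Stand-in for a logged hardware trajectory: "true" params plus sensor noise.
real_traj = simulate(damping=0.8, stiffness=25.0) + rng.normal(0, 0.002, STEPS)

# Fit sim parameters by minimizing trajectory mismatch (physics shift, item 2).
loss = lambda p: np.mean((simulate(p[0], p[1]) - real_traj) ** 2)
result = minimize(loss, x0=[0.3, 10.0], method="Nelder-Mead")
print(f"identified damping={result.x[0]:.3f}, stiffness={result.x[1]:.2f}")
```

The same fit-sim-to-reality loop scales up to full robots: log real trajectories, parameterize the simulator, and optimize until simulated rollouts match the logs.
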

  • View profile for Miguel Fierro

    I help people bridge the gap from learning AI theory to getting AI results using my method “Reverse Learning” • xMicrosoft • 4x AI Founder

    78,857 followers

    During my PhD I worked on full-body movements for humanoid robotics. People don't understand how difficult it is to make these movements. The development of Optimus in the last 4 years is absolutely crazy. Here is an analysis:

    How it works: zero-shot Sim2Real RL. Optimus trains in simulation with RL, executing tasks like dancing without real-world tuning. It optimizes actions via rewards, achieving seamless real-world transfer.

    Simulation: a virtual training ground. High-fidelity simulators mimic physics, enabling safe, rapid training. Optimus tests countless scenarios, mastering moves without hardware risks.

    Bridging the reality gap. Domain randomization varies parameters like friction, ensuring robustness. This helps Optimus handle real-world uncertainties.

    End-to-end learning: flexibility. Optimus maps sensor data directly to actions, enabling adaptability for tasks like dancing or future applications.

    Why it matters: scalability and precision. Sim2Real RL ensures agility and robustness, cutting costs and scaling skills for real-world tasks, with no significant hardware changes.

    If you are a robot builder, you know how crazy this is. If you are not a robot builder, trust me, it is very difficult. I'm really excited about the field of robotics in general!

    #robotics #artificialintelligence #innovation
