A humanoid robot costs $90K to break once. AI lets you break thousands... and learn from every fall.

My background is mechanical engineering, robotics, and integration & test. But this field is moving so fast with AI that reading articles wasn't cutting it anymore. I felt out of the loop, so I recently upgraded my personal setup to support AI training workloads and ran my first experiment: teaching a bipedal (two-legged) humanoid robot to navigate a custom parkour course using reinforcement learning in NVIDIA Isaac Lab 5.1.

Before I share what I learned, let me explain what's actually happening under the hood. A GPU-accelerated AI agent runs thousands of virtual robots in parallel. Each one learns from its own falls and successes simultaneously. The AI develops a "control policy": the brain that tells a robot how to move through the physical world.

Why does this matter? Because what once required million-dollar labs and months of physical testing can now run on a single AI-capable GPU in hours. Robotics R&D is becoming software-first.

Here's what that looked like for this experiment:
- 76 minutes of CUDA-accelerated training time
- 393 million training steps
- 4,096 robots learning in parallel on my RTX 5080

So what did I learn so far? Three things stood out to me:

》The setup before you can hit "Run" is a challenge. It took me seven hours to troubleshoot versioning, packages, and dependencies before I could run anything. I forced myself to do it manually because I wanted to understand what's under the hood. YouTube tutorials hit their limit quickly, but thankfully the NVIDIA developer forums saved me.

》The cost case is undeniable. A Unitree H1 costs around $90K. I *virtually* crashed thousands of them. My damage bill? $0. Simulation lets you fail forward at scale. This gets you to a solid starting point for physical testing, but...

》The Sim-to-Real gap is real. This policy works well in simulation, but I couldn't get a feel for stress points, sensor behavior, or true stability. Failure is not predictable and happens at the edges. The next step would be to transfer this policy to a physical robot, gather real-world data, and continuously align the simulation to close that gap.

The key thing here is: testing real hardware is expensive. Simulation in software is cheap. How can you leverage both, intelligently? The benefit isn't limited to cost savings: this workflow also compresses development cycles and lets you field systems faster.

Do you think virtual simulation is a game-changer that is here to stay, or a fad? How would you build confidence in a robotic control policy that is trained in a virtual world?

#robotics #ai #nvidia #omniverse #isaaclab

~~~~~~~~

Citations:
NVIDIA Isaac Lab -> https://lnkd.in/ekVMDnDc
RSL-RL -> https://lnkd.in/eJye3XTW
Unitree H1 -> unitree.com/h1/

Note: this is an educational personal project. Opinions are my own; no affiliation or endorsement.
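To make the "thousands of virtual robots in parallel" idea concrete, here is a minimal Python sketch of the pattern: one policy network receives a batch of observations (one row per simulated robot), and every robot's rewards and falls feed the same update. This is not the author's Isaac Lab / RSL-RL setup; the "simulator" below is a random stand-in, and the sizes and update rule are purely illustrative.

```python
# Minimal sketch (assumptions throughout): massively parallel rollouts feeding one policy.
import torch
import torch.nn as nn

NUM_ENVS, OBS_DIM, ACT_DIM = 4096, 48, 19          # sizes are illustrative

policy = nn.Sequential(                             # the "control policy" -- the robot's brain
    nn.Linear(OBS_DIM, 256), nn.ELU(),
    nn.Linear(256, 128), nn.ELU(),
    nn.Linear(128, ACT_DIM),
)

def fake_sim_step(obs, act):
    """Stand-in for the physics engine: next observations, rewards, and 'fell over' flags."""
    next_obs = obs + 0.01 * torch.randn_like(obs)
    reward = -act.pow(2).mean(dim=1)                 # e.g. penalize large joint commands
    fell = torch.rand(obs.shape[0]) < 0.01           # a fraction of robots fall each step
    return next_obs, reward, fell

obs = torch.zeros(NUM_ENVS, OBS_DIM)                 # one row per simulated robot
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

for step in range(100):                              # real runs use millions of steps
    act = policy(obs)
    next_obs, reward, fell = fake_sim_step(obs, act)
    loss = -reward.mean()                            # stand-in for the PPO-style update RSL-RL would do
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    obs = next_obs.detach()
    obs[fell] = 0.0                                  # fallen robots reset and keep learning
```

The real pipeline swaps the stand-in step function for the GPU physics simulation and the naive loss for a proper RL algorithm, but the batched observe-act-update loop is the core of the workflow.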
Simulation-Based Methods for Robot Training
Explore top LinkedIn content from expert professionals.
Summary
Simulation-based methods for robot training use virtual environments to teach robots new skills before deploying them in the real world. These techniques allow robots to learn through trial and error in software, reducing physical risks and speeding up development.
- Explore virtual testing: Try running robot control experiments in simulation to quickly identify strengths and weaknesses without risking expensive hardware.
- Utilize parallel learning: Train multiple robots simultaneously in software to speed up skill development and gather more useful data for improvement (see the sketch after this list).
- Integrate human guidance: Use VR or telepresence platforms to demonstrate tasks to robots, making it easier to transfer real-world expertise into virtual training.
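Here is a minimal sketch of the "parallel learning" idea above, using gymnasium's standard vector-environment API. CartPole stands in for a robot task; this snippet is not taken from any of the posts below.

```python
# Many copies of an environment stepping in lockstep -- a sketch, not a full training run.
import gymnasium as gym

envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(8)]   # 8 copies learning side by side
)
obs, info = envs.reset(seed=0)
for _ in range(200):
    actions = envs.action_space.sample()                   # random policy as a placeholder
    obs, rewards, terminated, truncated, infos = envs.step(actions)
    # finished episodes auto-reset, so a failed "robot" immediately tries again
envs.close()
```

Real robot-learning stacks replace CartPole with a physics simulator and the random policy with an RL algorithm, but the batched reset/step pattern is the same.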
Tired of waiting hours for humanoids to learn to walk? Our new technical report shows how to train sim-to-real humanoid locomotion in 15 minutes with FastSAC and FastTD3! The full pipeline is open-source in the newly released Holosoma codebase. Project page: https://lnkd.in/dxiytcs9

The original FastTD3 showed strong off-policy RL potential, but only on a 12-DoF T1 humanoid with a "frozen" upper body. We scale up FastSAC and FastTD3 to full-body humanoid locomotion trained in 15 minutes, significantly outperforming PPO.

With careful design choices and minimalist reward functions, FastSAC and FastTD3 enable rapid end-to-end training of humanoid locomotion. Robots learn to walk in any direction and stay robust to pushes, all from just 15 minutes of end-to-end training on a single RTX 4090.

Our results go beyond locomotion: we demonstrate sim-to-real deployment of whole-body tracking with off-policy RL algorithms. FastSAC can complete a full dancing-motion sequence lasting more than 2 minutes!

We see clear signs of scalability: performance improves with more parallel simulation and more gradient steps. Even better, the Holosoma codebase fully supports multi-GPU and multi-node training.

We kept things intentionally simple, and every implementation is available in the Holosoma repo. Can't wait to see what the community builds from our recipe!

Work done at Amazon FAR with Younggyo Seo, Juyue Chen, Guanya Shi, Rocky Duan, and Pieter Abbeel.

Arxiv link: https://lnkd.in/dYeGjtNe
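For readers curious what a "minimalist reward function" for humanoid locomotion might look like, here is a hedged sketch. The terms (velocity tracking, upright bonus, torque penalty) and the weights are assumptions for illustration, not the reward actually used by FastSAC/FastTD3 or Holosoma.

```python
# Illustrative batched locomotion reward over N parallel humanoids (all names/weights assumed).
import torch

def locomotion_reward(base_lin_vel, cmd_vel, base_height, joint_torque,
                      target_height=0.98):
    track = torch.exp(-torch.sum((base_lin_vel[:, :2] - cmd_vel) ** 2, dim=1))  # follow the commanded velocity
    upright = torch.exp(-10.0 * (base_height - target_height) ** 2)             # keep the torso at walking height
    effort = 1e-4 * torch.sum(joint_torque ** 2, dim=1)                         # discourage thrashing the motors
    return track + 0.2 * upright - effort

# Example with 4 robots and 19 actuated joints:
r = locomotion_reward(
    base_lin_vel=torch.zeros(4, 3),
    cmd_vel=torch.tensor([[1.0, 0.0]] * 4),
    base_height=torch.full((4,), 0.95),
    joint_torque=torch.zeros(4, 19),
)
```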
-
🤖👓 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐑𝐨𝐛𝐨𝐭𝐬 𝐢𝐧 𝐭𝐡𝐞 𝐌𝐞𝐭𝐚𝐯𝐞𝐫𝐬𝐞 𝐨𝐟 𝐌𝐨𝐭𝐢𝐨𝐧

What if teaching a robot to handle a wrench, stack a shelf, or guide a patient's hand didn't require lines of code, but instead a headset?

Across labs and factories, VR-enabled headsets are becoming a bridge between human expertise and robotic capability. Instead of manually programming every grasp or path, operators can step into immersive virtual environments, demonstrate the task naturally, and let the robot learn from their movements in real time.

This approach isn't just faster. It opens the door to:
⚡ Rapid skill transfer from human to machine
🧠 Better data for training embodied AI models
🌍 Remote collaboration: an expert in Detroit can "teach" a robot in Singapore
🦺 Safer learning, since robots can practice in virtual worlds before entering the real one

As robots move into more complex, unstructured environments (construction sites, warehouses, even homes), the combination of VR and telepresence could be the key to scaling human-robot collaboration.

We're not just programming machines anymore. We're coaching them. That's a profound shift.

🎯 Selected articles on the topic:

"𝐇𝐨𝐥𝐨-𝐃𝐞𝐱: 𝐓𝐞𝐚𝐜𝐡𝐢𝐧𝐠 𝐃𝐞𝐱𝐭𝐞𝐫𝐢𝐭𝐲 𝐰𝐢𝐭𝐡 𝐈𝐦𝐦𝐞𝐫𝐬𝐢𝐯𝐞 𝐌𝐢𝐱𝐞𝐝 𝐑𝐞𝐚𝐥𝐢𝐭𝐲" - A framework that lets a human teacher in VR teleoperate a robotic hand to collect demonstrations. The system learns dexterous tasks (in-hand rotation, bottle opening, etc.) from those demonstrations. (https://lnkd.in/ewQkvRmP)

"𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐟𝐫𝐨𝐦 𝐝𝐞𝐦𝐨𝐧𝐬𝐭𝐫𝐚𝐭𝐢𝐨𝐧𝐬: 𝐀𝐧 𝐢𝐧𝐭𝐮𝐢𝐭𝐢𝐯𝐞 𝐕𝐑 𝐞𝐧𝐯𝐢𝐫𝐨𝐧𝐦𝐞𝐧𝐭 𝐟𝐨𝐫 𝐢𝐦𝐢𝐭𝐚𝐭𝐢𝐨𝐧 𝐥𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐨𝐟 𝐜𝐨𝐧𝐬𝐭𝐫𝐮𝐜𝐭𝐢𝐨𝐧 𝐫𝐨𝐛𝐨𝐭𝐬" - A VR setup for expert demonstration (via hand/pose tracking) to train construction robots using behavior cloning + RL. (https://lnkd.in/e2wTTRiy)

"𝐕𝐑 𝐂𝐨-𝐋𝐚𝐛: 𝐀 𝐕𝐢𝐫𝐭𝐮𝐚𝐥 𝐑𝐞𝐚𝐥𝐢𝐭𝐲 𝐏𝐥𝐚𝐭𝐟𝐨𝐫𝐦 𝐟𝐨𝐫 𝐇𝐮𝐦𝐚𝐧–𝐑𝐨𝐛𝐨𝐭 𝐃𝐢𝐬𝐚𝐬𝐬𝐞𝐦𝐛𝐥𝐲 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐚𝐧𝐝 𝐒𝐲𝐧𝐭𝐡𝐞𝐭𝐢𝐜 𝐃𝐚𝐭𝐚 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧" - A VR training system for human-robot collaborative tasks (e.g. disassembly), bridging simulation and real robot control via ROS, body tracking, and predictive models. (https://lnkd.in/egQre5Na)

"𝐎𝐧 𝐭𝐡𝐞 𝐄𝐟𝐟𝐞𝐜𝐭𝐢𝐯𝐞𝐧𝐞𝐬𝐬 𝐨𝐟 𝐕𝐢𝐫𝐭𝐮𝐚𝐥 𝐑𝐞𝐚𝐥𝐢𝐭𝐲-𝐛𝐚𝐬𝐞𝐝 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐟𝐨𝐫 𝐑𝐨𝐛𝐨𝐭𝐢𝐜 𝐒𝐞𝐭𝐮𝐩" - Compares VR training with conventional training approaches in robotic setup tasks, showing that VR-trained participants had better spatial awareness and reproducibility. (https://lnkd.in/eeHArQFQ)

👉 I'd love to hear: where do you see VR-based robot training making the biggest impact first? Manufacturing, healthcare, or somewhere unexpected?
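As a rough illustration of how demonstrations captured in VR become robot behavior, here is a minimal behavior-cloning sketch: log (observation, operator action) pairs from the teleop stream, then fit a network to imitate them. The data below is random stand-in data, and the dimensions and network are assumptions, not any of the cited systems.

```python
# Behavior cloning from teleop demonstrations -- a sketch with stand-in data.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_DEMOS = 32, 7, 5000
demo_obs = torch.randn(N_DEMOS, OBS_DIM)     # e.g. joint angles + object pose, logged during teleop
demo_act = torch.randn(N_DEMOS, ACT_DIM)     # the operator's commanded motion at each step

policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                       nn.Linear(128, ACT_DIM))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    idx = torch.randperm(N_DEMOS)
    for batch in idx.split(256):
        pred = policy(demo_obs[batch])
        loss = nn.functional.mse_loss(pred, demo_act[batch])   # imitate the human demonstrator
        opt.zero_grad(); loss.backward(); opt.step()
```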
-
Big shift in robotics: NVIDIA just open-sourced Isaac Sim and Isaac Lab.

Isaac Sim has already been a cornerstone for high-fidelity robotics simulation: RTX-accelerated physics, realistic lidar/camera simulation, domain randomization, ROS/URDF support, and synthetic data pipelines. Now it's all on GitHub with full source access.

But the real multiplier? The release of Isaac Lab, a modular, open reinforcement learning and robot control framework built directly on top of Isaac Sim. It comes with ready-to-use robots (Franka, UR5, ANYmal), training loops, and environments for manipulation, locomotion, and more.

What's different now:
- You're no longer limited to APIs; developers can modify physics, sensors, and control logic at the source level.
- Isaac Lab provides a training-ready foundation for sim-to-real robotics, speeding up learning pipelines dramatically.
- Debugging, benchmarking, and custom integrations are now transparent, flexible, and community-driven.
- Collaboration across research and industry just got easier, with reproducible environments, tasks, and results.

We've used Isaac Sim extensively, and this open-source release is going to accelerate innovation across the robotics community.

GitHub: https://lnkd.in/gcyP9F4H
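One reason source-level access matters is domain randomization: giving each parallel environment slightly different physics so the learned policy doesn't overfit one simulator. Here is a minimal, generic sketch of the idea; the parameter names and ranges are illustrative guesses, not Isaac Lab defaults.

```python
# Per-environment physics randomization -- a generic sketch, not Isaac Lab API.
import numpy as np

rng = np.random.default_rng(0)

def randomize_physics(num_envs):
    return {
        "friction":   rng.uniform(0.4, 1.2,  size=num_envs),   # ground contact friction
        "base_mass":  rng.uniform(-2.0, 2.0, size=num_envs),   # added payload mass, kg
        "motor_gain": rng.uniform(0.9, 1.1,  size=num_envs),   # actuator strength scale
        "push_force": rng.uniform(0.0, 50.0, size=num_envs),   # random external shoves, N
    }

params = randomize_physics(4096)   # in practice, re-sampled at every episode reset
```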
-
Build your first robot in simulation! 👾

📌 If you're self-learning robotics, this is genuinely one of the better repos to save for later. NVIDIA Robotics released a "Getting Started with Isaac Sim" tutorial series covering everything from building your first robot to hardware-in-the-loop deployment.

What's inside?

→ Building Your First Robot
Explore the Isaac Sim interface, construct a simple robot model (chassis, wheels, joints), configure physics properties, implement control mechanisms using OmniGraph and ROS 2, integrate sensors (RGB cameras, 2D lidar), and stream sensor data to ROS 2 for real-time visualization in RViz.

→ Ingesting Robot Assets
Import URDF files, prepare simulation environments, add sensors to existing robot models, and access pre-built robots to accelerate development.

→ Synthetic Data Generation
Learn perception models for dynamic robotic tasks, understand synthetic data generation, apply domain randomization with Replicator, generate synthetic datasets, and fine-tune AI perception models with validation.

→ Software-in-the-Loop (SIL)
Build intelligent robots, implement SIL workflows, use OmniGraph for robot control, master Isaac Sim Python scripting, deploy image segmentation with ROS 2 and Isaac ROS, and test with and without simulation.

→ Hardware-in-the-Loop (HIL)
Understand HIL fundamentals, learn the NVIDIA Jetson platform, set up the Jetson environment, and deploy Isaac ROS on Jetson hardware.

The progression makes sense: start with the basics (build a robot), add perception (sensors and data), generate training data (synthetic generation), develop software (SIL), then deploy to hardware (HIL). Each module builds on the previous one.

For robotics teams, this is the path to faster iteration: simulate first, validate in software-in-the-loop, generate synthetic training data at scale, then deploy to hardware with confidence.

🎓 If this helps at least one engineer become more fluent in the world of robotics, it means a lot to me! 🫶🏼

Here's the course (it's free): https://lnkd.in/dRYdkmdi

♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
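As a rough sketch of the synthetic data generation step described above, here is a generic randomize-render-label loop. The `render_scene` function is a hypothetical stand-in; in Isaac Sim this role would be played by the renderer and Replicator, whose actual APIs differ from this sketch.

```python
# Synthetic-data loop: randomize the scene, "render", save labels -- all stand-ins.
import json
import numpy as np

rng = np.random.default_rng(0)

def render_scene(object_pose, light_intensity):
    """Hypothetical renderer: returns a fake RGB image and a bounding-box label."""
    image = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
    label = {"bbox": [100, 120, 200, 260], "class": "target_part"}
    return image, label

dataset = []
for i in range(1000):
    pose = rng.uniform(-0.5, 0.5, size=3).tolist()    # randomize object position
    light = float(rng.uniform(500, 5000))              # randomize lighting intensity
    image, label = render_scene(pose, light)           # a real pipeline would also write `image` to disk
    dataset.append({"frame": i, "pose": pose, "light": light, "label": label})

with open("synthetic_labels.json", "w") as f:
    json.dump(dataset, f)
```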
-
3-DOF Robotic Arm Kinematics & PID-Based Trajectory Tracking in MATLAB

➡ User-selectable trajectories: Infinity (∞), Circle, Rectangle, Helix
➡ Analytical inverse kinematics for efficient joint computation
➡ Forward kinematics visualization with real-time 3D animation
➡ Dynamic joint angles & end-effector coordinate frame display
➡ Closed-loop PID control for accurate trajectory tracking

✨ Why this matters:
In robotics, understanding the mapping between joint space and Cartesian space is fundamental for automation, pick-and-place operations, and intelligent robotic systems. This 3-DOF simulation demonstrates how precise kinematic modeling combined with PID control enables smooth and stable trajectory tracking. Beyond visualization, the model reinforces core concepts in control systems, error minimization, and manipulator motion planning, making it highly valuable for both academic learning and practical prototyping.

📊 Key highlights:
✔ Analytical IK for fast computation and stability
✔ Smooth PID-based joint-space control
✔ Realistic 3D animation with labeled links, joints & coordinate frames
✔ Continuous end-effector path tracing
✔ Adjustable link lengths (L1, L2, L3)
✔ Tracking-error monitoring for performance evaluation

💡 Future potential: this framework can be extended toward:
➡ Gravity compensation & dynamic modeling
➡ Computed-torque or model-based control
➡ Jacobian-based velocity control
➡ ROS integration for hardware deployment
➡ AI-based trajectory optimization

🔗 For students, engineers & robotics enthusiasts: this simulation is a ready-to-use MATLAB project for learning, teaching, and prototyping advanced robotics concepts.

🔁 Repost to support robotics innovation & engineering learning! 🔁

#Robotics #MATLAB #Automation #3DOF #RobotArm #Kinematics #TrajectoryTracking #PIDControl #ControlSystems #Mechatronics #EngineeringProjects #Simulation #ForwardKinematics #InverseKinematics #3DAnimation #STEM #RoboticsEngineering #TechInnovation
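For readers who want the core math without MATLAB, here is a short Python sketch of the same two ideas: analytical inverse kinematics for a 3-DOF arm (base yaw plus two planar links) and per-joint PID tracking of a circular path. The link lengths, gains, and the simplified first-order joint dynamics are illustrative, not the project's values.

```python
# Analytical IK + PID setpoint tracking for a 3-DOF arm -- illustrative parameters only.
import numpy as np

L1, L2 = 0.4, 0.3                      # planar link lengths (m), assumed

def inverse_kinematics(x, y, z):
    """Joint angles (base yaw, shoulder, elbow) for a Cartesian target."""
    q1 = np.arctan2(y, x)              # base rotation toward the target
    r = np.hypot(x, y)                 # horizontal reach in the arm's vertical plane
    c3 = (r**2 + z**2 - L1**2 - L2**2) / (2 * L1 * L2)
    q3 = np.arccos(np.clip(c3, -1.0, 1.0))                                      # elbow
    q2 = np.arctan2(z, r) - np.arctan2(L2 * np.sin(q3), L1 + L2 * np.cos(q3))   # shoulder
    return np.array([q1, q2, q3])

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0
    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
pids = [PID(20.0, 0.5, 1.0, dt) for _ in range(3)]
q = np.zeros(3)                                         # current joint angles
for t in np.arange(0, 2 * np.pi, dt):                   # trace a small circle in space
    target = inverse_kinematics(0.3 + 0.1 * np.cos(t), 0.1 * np.sin(t), 0.2)
    q += np.array([p.step(e) for p, e in zip(pids, target - q)]) * dt
```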
-
Is real-world data still the bottleneck for robot learning? We just flipped the script.

Zero real-world data ➔ autonomous humanoid loco-manipulation in reality.

I'm excited to introduce VIRAL: Visual Sim-to-Real at Scale.

The robotics community has long relied on expensive, slow, human-collected data. We took a different path. By training entirely inside NVIDIA Isaac Lab, we achieved 54 autonomous cycles (walk, stand, place, pick, turn) in the real world using a simple recipe: RL + Simulation + GPUs.

Here is how we achieved photorealistic sim-to-real transfer without a single drop of real-world data:

1. The Pipeline (Teacher ➔ Student)
We accelerate physics by 10,000x real-time. We train a privileged teacher with full state access in sim, then distill that into a vision-based student policy using DAgger and Behavior Cloning.

2. Scale Is Not "Optional"
We scaled visual sim-to-real compute up to 64 GPUs. We discovered that for long-horizon tasks like loco-manipulation, large-scale simulation is strictly necessary for convergence and robustness.

3. Bridging the Reality Gap
To handle complex hardware (like 3-fingered dexterous hands), we performed rigorous System Identification (SysID). The difference in physics matching was night and day.

4. The "Free Lunch"
Sim-to-real is incredibly hard to build (it took us 6 months of infrastructure work). But once solved, you get generalization for free. VIRAL handles diverse spatial arrangements and visual variations without any real-world fine-tuning.

Check out the full breakdown:
📄 Paper: https://lnkd.in/eZE6GzEd
🌐 Website: https://lnkd.in/euRajeVm

A huge congratulations to the incredible team behind this work: Tairan He*, Zi Wang*, Haoru Xue*, Qingwei Ben*, Zhengyi Luo, Wenli Xiao, Ye Yuan, Xingye Da, Fernando Castañeda, Shankar Sastry, Changliu Liu, Guanya Shi. GEAR Leads: Jim Fan†, Yuke Zhu†
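A minimal sketch of the teacher-to-student distillation idea in point 1: a privileged teacher sees the full simulator state, while the student sees only robot-available observations and learns to match the teacher's actions (DAgger-style relabeling). Everything below, including the dimensions and random stand-in data, is illustrative rather than VIRAL's implementation.

```python
# Privileged teacher -> vision-based student distillation, sketched with stand-in data.
import torch
import torch.nn as nn

STATE_DIM, VISION_DIM, ACT_DIM, BATCH = 64, 128, 25, 4096

teacher = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ELU(), nn.Linear(256, ACT_DIM))
student = nn.Sequential(nn.Linear(VISION_DIM, 256), nn.ELU(), nn.Linear(256, ACT_DIM))
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

for it in range(100):
    state = torch.randn(BATCH, STATE_DIM)      # privileged state (only exists in sim)
    vision = torch.randn(BATCH, VISION_DIM)    # what the real robot's cameras would provide
    with torch.no_grad():
        target_act = teacher(state)            # "what the expert would have done here"
    loss = nn.functional.mse_loss(student(vision), target_act)
    opt.zero_grad(); loss.backward(); opt.step()
```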
-
Can a single neural network policy generalize over poses, objects, obstacles, backgrounds, scene arrangements, in-hand objects, and start/goal states?

Introducing Neural MP: a generalist policy for solving motion planning tasks in the real world 🤖

Quickly and dynamically moving around and in between obstacles (motion planning) is a crucial skill for robots to manipulate the world around us. Traditional methods (sampling, optimization, or search) can be slow and/or require strong assumptions to deploy in the real world. Instead of solving each new motion planning problem from scratch, we distill knowledge across millions of problems into a generalist neural network policy.

Our approach:
1) Large-scale procedural scene generation
2) Multi-modal sequence modeling
3) Test-time optimization for safe deployment

Data generation involves:
1) Sampling programmatic assets (shelves, microwaves, cubbies, etc.)
2) Adding realistic objects from Objaverse
3) Generating data at scale using a motion planner expert (AIT*): 1M demos!

We distill all of this data into a single, generalist policy.

Neural policies can hallucinate just like ChatGPT, and that might not be safe to deploy. Our solution: using the robot SDF, optimize for paths that have the least intersection of the robot with the scene. This technique improves deployment-time success rate by 30-50%!

Across 64 real-world motion planning problems, Neural MP drastically outperforms prior work, beating SOTA sampling-based planners by 23%, trajectory optimizers by 17%, and learning-based planners by 79%, achieving an overall success rate of 95.83%.

Neural MP extends directly to unstructured, in-the-wild scenes. From defrosting meat in the freezer and doing the dishes to tidying the cabinet and drying the plates, Neural MP does it all!

Neural MP generalizes gracefully to OOD scenarios as well. The sword in the first video is double the size of any in-hand object in the training set! Meanwhile, the model has never seen anything like the bookcase during training, but it's still able to safely and accurately place books inside it.

Since we train a closed-loop policy, Neural MP can perform dynamic obstacle avoidance as well. First, Jim tries to attack the robot with a sword, but it has excellent dodging skills. Then he adds obstacles dynamically while the robot moves, and it's still able to safely reach its goal.

This work is the culmination of a year-long effort at Carnegie Mellon University with co-lead Jiahui (Jim) Yang as well as Russell Mendonca, Youssef Khaky, Russ Salakhutdinov, and Deepak Pathak.

The model and hardware deployment code are open-sourced and on Hugging Face! Run Neural MP on your robot today:
Web: https://lnkd.in/emGhSV8k
Paper: https://lnkd.in/eGUmaXKh
Code: https://lnkd.in/e6QehB7R
News: https://lnkd.in/enFWRvft
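The SDF-based test-time optimization can be sketched in a few lines: sample several candidate trajectories, score each by how deeply its waypoints penetrate the scene's signed distance field, and execute the least-colliding one. The single sphere obstacle and the sampling scheme below are stand-ins, not Neural MP's actual procedure.

```python
# Pick-the-safest-path idea: score candidate trajectories by SDF penetration (illustrative scene).
import numpy as np

rng = np.random.default_rng(0)
obstacle_center, obstacle_radius = np.array([0.5, 0.0, 0.3]), 0.15

def scene_sdf(points):
    """Signed distance from each point to the (single, illustrative) obstacle; negative = inside."""
    return np.linalg.norm(points - obstacle_center, axis=-1) - obstacle_radius

def penetration_cost(trajectory):
    d = scene_sdf(trajectory)               # distance of each waypoint to the scene
    return np.clip(-d, 0.0, None).sum()     # only penalize waypoints inside obstacles

start, goal = np.zeros(3), np.array([1.0, 0.0, 0.5])
candidates = []
for _ in range(16):                          # e.g. 16 samples drawn from the policy
    waypoints = np.linspace(start, goal, 20) + 0.05 * rng.normal(size=(20, 3))
    candidates.append(waypoints)

best = min(candidates, key=penetration_cost)  # deploy the least-intersecting path
```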
-
𝗛𝗨𝗠𝗔𝗡𝗢𝗜𝗗 𝗥𝗢𝗕𝗢𝗧𝗜𝗖𝗦: 𝗪𝗵𝗲𝗿𝗲 𝗦𝗶𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗕𝗲𝗴𝗶𝗻𝘀 𝗮𝗻𝗱 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗔𝘄𝗮𝗸𝗲𝗻𝘀

Deep learning in 3D simulation is not a lab exercise. It is the moment we begin to teach machines how to exist. Not to repeat motions. Not to merely follow code. But to learn, adapt, balance, reason, and act with purpose.

In my project we are not just building robots. We are building a new class of intelligence that experiences the world before it ever touches reality.

In these simulation environments, gravity does not remain constant. Terrain does not always cooperate. Obstacles change shape. Sensors lie. Friction shifts. And the humanoid must still stand, walk, grasp, adjust, optimize, and choose its next step.

Domain randomization, reinforcement learning, hierarchical policies, and graph neural dependencies no longer sound like academic theory. They become survival tools. Machines begin to develop strategies. They learn how to carry payloads across unstable rubble. They learn energy discipline when battery is low and temperature is high. They learn trajectory planning not as geometry, but as survival logic.

When you combine photorealistic environments from Isaac Sim, contact-perfect physics in MuJoCo, embodied navigation in Habitat, and emergent behavior in Unity, you begin to see something different. You see machines build experience. You see memory. You see policy retention. You see adaptation. You see the beginning of abstract perception, where simulation is not just testing but education: the difference between teaching a robot how to walk and letting it discover how to navigate a collapsing environment with intelligence and intent.

This is where humanoid robotics becomes human-oriented. Robots that can open doors without templates. Carry supplies without pre-programmed routes. Coordinate convoys. Assist in evacuation. Make real-time physical decisions aligned with mission objectives, not static instructions.

Simulation gives us time compression. We can give a single humanoid what would have taken humans years of trial. We can compress thousands of failures into one informed policy.

This is how we transform capability. Not automation, but cognitive autonomy. Not motion planning, but motion intelligence. Not digital twins, but learning twins.

We are building humanoids that do not just survive the environment. They learn from it.

If you work in advanced simulation, deep learning pipelines, physics engines, reinforcement learning, biomechanics, embodied cognition, ROS 2, Isaac Sim, MuJoCo, Omniverse, Habitat, Unity, Unreal, LLM integration, perception, or policy optimization, then we should not be working apart. We should be building this together.

And for those ready to build the next generation of thinking humanoids: Singularity Systems is now accepting collaborators, researchers, engineers, architects, and visionaries.

Let's teach machines how to exist.

#changetheworld #3D #unity
-
We believe personalization is the key to unlocking real-world adoption of wearable robotic exoskeletons. Just as shoes come in different sizes, exoskeletons shouldn't be one-size-fits-all. Yet today, most exoskeleton controls are either generic or require long, resource-heavy calibration sessions.

So how can we quickly extract user-specific information and generate meaningful, personalized data without expensive motion capture?

Instead of relying on full-body motion capture, we used minimal motion data from a new user to generate a digital twin through physics-informed biomechanical simulation. We then trained a speed-adaptive walking agent using adversarial imitation learning, creating a personalized virtual agent that walks like the user across a range of walking speeds.

What's powerful about this approach is not just its biomechanical plausibility, but the potential to use this synthetic, user-specific motion data to personalize the underlying exoskeleton control.

Key innovations:
1. A synthetic gait generator built from open-source biomechanics data, producing realistic joint trajectories at variable speeds using minimal user input.
2. A training pipeline that combines imitation learning with curriculum learning to create adaptable locomotion policies.
3. An agent that achieves not only kinematic but also kinetic plausibility, opening the door to training user-specific exoskeleton models.

We're now extending this work to more complex locomotor tasks (like stair ascent), refining biomechanical reward functions, and integrating this virtual agent into real exoskeleton control tuning pipelines.

This project was led by Yi-Hung (Bernie) Chiu and Ung Hee Lee in collaboration with Manaen Hu and Changseob Song, and was presented at the ICORR Consortium RehabWeek.

Paper link: https://lnkd.in/e6nmnt3f

#WearableRobotics #Exoskeleton #ImitationLearning #Simulation #Biomechanics #MetaMobilityLab
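A minimal sketch of the adversarial imitation learning ingredient: a discriminator learns to tell reference gait frames from the agent's generated motion, and its output becomes a "style" reward for the policy. The random data and dimensions below are stand-ins, not the paper's setup.

```python
# GAIL-style discriminator and style reward -- a sketch with stand-in gait data.
import torch
import torch.nn as nn

FEAT_DIM = 24                                # e.g. joint angles + velocities per frame (assumed)
disc = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

for it in range(100):
    real = torch.randn(256, FEAT_DIM)        # frames from the user's reference gait data
    fake = torch.randn(256, FEAT_DIM)        # frames produced by the current walking policy
    logits_real, logits_fake = disc(real), disc(fake)
    loss = (nn.functional.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
            + nn.functional.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
    opt.zero_grad(); loss.backward(); opt.step()

# Style reward for the RL agent: higher when the discriminator thinks the motion looks like the user.
style_reward = -torch.log(1.0 - torch.sigmoid(disc(fake)).detach() + 1e-6)
```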