Applying Digital Simulations to Robotics Development

Summary

Applying digital simulations to robotics development means using computer-based environments to build, test, and teach robots before exposing them to the real world. This approach allows engineers to try out robot designs, train artificial intelligence, and collect vast amounts of data without risking expensive hardware or waiting for lengthy physical trials.

  • Experiment virtually: Build and test robot models in digital environments to quickly uncover flaws and refine designs without incurring real-world costs or delays.
  • Scale up training: Use simulated scenarios to generate large datasets, letting AI-powered robots learn from thousands of experiences that would be impossible or risky to repeat with physical machines.
  • Close the reality gap: After simulation, transfer learned behaviors to real robots, then gather real-world data and adjust the digital model to improve accuracy and reliability.
Summarized by AI based on LinkedIn member posts
  • Jim Fan

    NVIDIA Director of AI & Distinguished Scientist. Co-Lead of Project GR00T (Humanoid Robotics) & GEAR Lab. Stanford Ph.D. OpenAI's first intern. Solving Physical AGI, one motor at a time.

    238,093 followers

    Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let's break it down:

    1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human's hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

    2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen's keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placement. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

    3. Finally, we apply MimicGen, a technique that multiplies the above data even more by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

    To sum up: 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is how we trade compute for expensive human data with GPU-accelerated simulation. A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

    We are creating tools to enable everyone in the ecosystem to scale up with us:
    - RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source: http://robocasa.ai
    - MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, and we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
    - We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang's group's open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
    - Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

    Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
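
    A minimal sketch of the 1 -> N -> NxM multiplication described above. None of these helper functions are the real RoboCasa or MimicGen APIs; they are hypothetical stand-ins for "randomize the scene" and "regenerate the motion, keeping only successful rollouts".

```python
# Hypothetical sketch of the data-multiplication idea: one human demo is
# expanded across N randomized scenes and M regenerated motions per scene,
# with failed rollouts filtered out. No real RoboCasa/MimicGen calls here.
import random

def randomize_scene(scene, seed):
    """Stand-in for RoboCasa-style scene variation (textures, layout, objects)."""
    rng = random.Random(seed)
    return {**scene, "texture_id": rng.randint(0, 999), "layout_id": rng.randint(0, 99)}

def regenerate_motion(demo, scene, seed):
    """Stand-in for MimicGen-style trajectory regeneration; returns (trajectory, success)."""
    rng = random.Random(seed)
    trajectory = list(enumerate(demo["waypoints"]))   # placeholder for a recomputed trajectory
    return trajectory, rng.random() > 0.3             # pretend ~70% of regenerations succeed

def multiply_demo(demo, base_scene, n_scenes=100, m_motions=10):
    dataset = []
    for i in range(n_scenes):                         # N visual/layout variations
        scene = randomize_scene(base_scene, seed=i)
        for j in range(m_motions):                    # M motion variations per scene
            traj, ok = regenerate_motion(demo, scene, seed=i * m_motions + j)
            if ok:                                    # drop failed rollouts (e.g. dropped cup)
                dataset.append({"scene": scene, "trajectory": traj})
    return dataset

human_demo = {"waypoints": ["reach", "grasp", "lift", "place"]}
data = multiply_demo(human_demo, base_scene={"name": "kitchen"})
print(f"1 human demo -> {len(data)} simulated demos")
```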

  • Sid Gore

    AI & Robotics Systems Architect | Staff Engineer & Project Manager, Lockheed Martin | Leading complex system integration & test | Writing on robotics, simulation, and AI fluency

    3,833 followers

    A humanoid robot costs $90K to break once. AI lets you break thousands... and learn from every fall.

    My background is mechanical engineering, robotics, and integration & test. But this field is moving so fast with AI that reading articles wasn't cutting it anymore. I felt out of the loop, so I recently upgraded my personal setup to support AI training workloads and ran my first experiment: teaching a bipedal (two-legged) humanoid robot to navigate a custom parkour course using reinforcement learning in NVIDIA Isaac Lab 5.1.

    But before I share what I learned, let me explain what's actually happening under the hood. A GPU-accelerated AI agent runs thousands of virtual robots in parallel. Each one learns from its own falls and successes simultaneously. The AI develops a "control policy," which is the brain that tells a robot how to move through the physical world.

    Why does this matter? Because what once required million-dollar labs and months of physical testing can now run on a single AI-capable GPU in hours. Robotics R&D is becoming software-first. Here's what that looked like for this experiment: 76 minutes of CUDA-accelerated training time. 393 million training steps. 4,096 robots learning in parallel on my RTX 5080.

    So what did I learn so far? Three things stood out to me:

    》The setup before you can hit "Run" is a challenge. It took me seven hours to troubleshoot versioning, packages, and dependencies before I could run anything. I forced myself to do it manually because I wanted to understand what's under the hood. YouTube tutorials hit their limit quickly, but thankfully the NVIDIA developer forums saved me.

    》The cost case is undeniable. A Unitree H1 costs around $90K. I *virtually* crashed thousands of them. My damage bill? $0. Simulation lets you fail forward at scale. This gets you to a solid starting point for physical testing, but...

    》The Sim-to-Real gap is real. This policy works well in simulation, but I couldn't get a feel for stress points, sensor behavior, or true stability. Failure is not predictable and happens at the edges. The next step would be to transfer this policy to a physical robot, gather real-world data, and continuously align the simulation to close that gap.

    The key thing here is: testing real hardware is expensive. Simulation in software is cheap. How can you leverage both, intelligently? The benefit isn't limited to cost savings. This workflow also compresses development cycles and allows you to field systems faster.

    Do you think virtual simulation is a game-changer that is here to stay, or a fad? How would you build confidence in a robotic control policy that is trained in a virtual world?

    #robotics #ai #nvidia #omniverse #isaaclab

    Citations:
    NVIDIA Isaac Lab -> https://lnkd.in/ekVMDnDc
    RSL-RL -> https://lnkd.in/eJye3XTW
    Unitree H1 -> unitree.com/h1/

    Note: this is an educational personal project. Opinions are my own, no affiliation or endorsement.
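
    A toy illustration of the parallelism described above (not Isaac Lab or RSL-RL code): thousands of environments are stepped as one batched array, so a single policy forward pass controls every robot, and each environment resets independently when its robot falls.

```python
# Toy sketch of massively parallel RL rollouts. The environment dynamics and
# policy below are placeholders; the point is the batching pattern that makes
# hundreds of millions of experience steps feasible on one GPU.
import numpy as np

NUM_ENVS = 4096          # parallel robots, as in the experiment described above
OBS_DIM, ACT_DIM = 48, 12

rng = np.random.default_rng(0)
obs = rng.standard_normal((NUM_ENVS, OBS_DIM))              # batched observations
policy_W = rng.standard_normal((OBS_DIM, ACT_DIM)) * 0.01   # stand-in linear policy

total_steps = 0
for step in range(100):                              # a real run takes millions of iterations
    actions = np.tanh(obs @ policy_W)                # one forward pass controls all robots
    # Stand-in for the physics step; in a GPU simulator all envs advance together.
    obs = obs + 0.01 * rng.standard_normal(obs.shape)
    fell = rng.random(NUM_ENVS) < 0.001              # pretend a few robots fall each step
    obs[fell] = rng.standard_normal((int(fell.sum()), OBS_DIM))  # reset only those envs
    total_steps += NUM_ENVS                          # 4,096 experience steps per sim step
print(f"collected {total_steps} env steps")
```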

  • Lukas M. Ziegler

    Robotics evangelist @ planet Earth 🌍 | Telling your robot stories.

    243,816 followers

    Build your first robot in simulation! 👾

    📌 If you're self-learning robotics, this is genuinely one of the better repos to save for later. NVIDIA Robotics released a "Getting Started with Isaac Sim" tutorial series covering everything from building your first robot to hardware-in-the-loop deployment.

    What's inside?

    → Building Your First Robot
    Explore the Isaac Sim interface, construct a simple robot model (chassis, wheels, joints), configure physics properties, implement control mechanisms using OmniGraph and ROS 2, integrate sensors (RGB cameras, 2D lidar), and stream sensor data to ROS 2 for real-time visualization in RViz.

    → Ingesting Robot Assets
    Import URDF files, prepare simulation environments, add sensors to existing robot models, and access pre-built robots to accelerate development.

    → Synthetic Data Generation
    Learn perception models for dynamic robotic tasks, understand synthetic data generation, apply domain randomization with Replicator, generate synthetic datasets, and fine-tune AI perception models with validation.

    → Software-in-the-Loop (SIL)
    Build intelligent robots, implement SIL workflows, use OmniGraph for robot control, master Isaac Sim Python scripting, deploy image segmentation with ROS 2 and Isaac ROS, and test with and without simulation.

    → Hardware-in-the-Loop (HIL)
    Understand HIL fundamentals, learn the NVIDIA Jetson platform, set up the Jetson environment, and deploy Isaac ROS on Jetson hardware.

    The progression makes sense: start with basics (build a robot), add perception (sensors and data), generate training data (synthetic generation), develop software (SIL), then deploy to hardware (HIL). Each module builds on the previous one.

    For robotics teams, this is the path to faster iteration: simulate first, validate in software-in-the-loop, generate synthetic training data at scale, then deploy to hardware with confidence.

    🎓 If this helps at least one engineer become more fluent in the world of robotics, it means a lot to me! 🫶🏼

    Here's the course (it's free): https://lnkd.in/dRYdkmdi

    ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
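
    As a taste of the ROS 2 side of that first module, here is a minimal rclpy listener for a simulated camera stream. The topic name "/rgb" is an assumption; use whatever topic the camera publisher in your Isaac Sim scene is configured with (RViz subscribes to the same topic for visualization).

```python
# Minimal ROS 2 listener for sensor data streamed out of a simulator.
# The "/rgb" topic name is an assumption, not a fixed Isaac Sim default.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class SimCameraListener(Node):
    def __init__(self):
        super().__init__("sim_camera_listener")
        self.create_subscription(Image, "/rgb", self.on_image, 10)

    def on_image(self, msg: Image):
        # Each frame arrives as a standard sensor_msgs/Image, exactly as it would
        # from a real camera driver, which is what makes SIL testing possible.
        self.get_logger().info(f"frame {msg.width}x{msg.height}, encoding={msg.encoding}")

def main():
    rclpy.init()
    rclpy.spin(SimCameraListener())

if __name__ == "__main__":
    main()
```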

  • Aaron Lax

    Founder of Singularity Systems Defense and Cybersecurity Insiders. Strategist, DOW SME [CSIAC/DSIAC/HDIAC], Multiple Thinkers360 Thought Leader and CSI Group Founder. Manage The Intelligence Community and The DHS Threat

    23,826 followers

    𝗛𝗨𝗠𝗔𝗡𝗢𝗜𝗗 𝗥𝗢𝗕𝗢𝗧𝗜𝗖𝗦: 𝗪𝗵𝗲𝗿𝗲 𝗦𝗶𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗕𝗲𝗴𝗶𝗻𝘀 𝗮𝗻𝗱 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗔𝘄𝗮𝗸𝗲𝗻𝘀

    Deep learning in 3D simulation is not a lab exercise. It is the moment we begin to teach machines how to exist. Not to repeat motions. Not to merely follow code. But to learn, adapt, balance, reason, and act with purpose. In my project we are not just building robots. We are building a new class of intelligence that experiences the world before it ever touches reality.

    In these simulation environments, gravity does not remain constant. Terrain does not always cooperate. Obstacles change shape. Sensors lie. Friction shifts. And the humanoid must still stand, walk, grasp, adjust, optimize, and choose its next step. Domain randomization, reinforcement learning, hierarchical policies, and graph neural dependencies no longer sound like academic theory. They become survival tools.

    Machines begin to develop strategies. They learn how to carry payloads across unstable rubble. They learn energy discipline when the battery is low and the temperature is high. They learn trajectory planning not as geometry, but as survival logic.

    When you combine photorealistic environments from Isaac Sim, contact-rich physics in MuJoCo, embodied navigation in Habitat, and emergent behavior in Unity, you begin to see something different. You see machines build experience. You see memory. You see policy retention. You see adaptation. You see the beginning of abstract perception, where simulation is not just testing but education: the difference between teaching a robot how to walk, and letting it discover how to navigate a collapsing environment with intelligence and intent.

    This is where humanoid robotics becomes human oriented. Robots that can open doors without templates. Carry supplies without pre-programmed routes. Coordinate convoys. Assist in evacuation. Make real-time physical decisions aligned with mission objectives, not static instructions.

    Simulation gives us time compression. We can give a single humanoid what would have taken humans years of trial. We can compress thousands of failures into one informed policy. This is how we transform capability. Not automation. Cognitive autonomy. Not motion planning. Motion intelligence. Not digital twins. Learning twins. We are building humanoids that do not just survive the environment. They learn from it.

    If you are in advanced simulation, deep learning pipelines, physics engines, reinforcement learning, biomechanics, embodied cognition, ROS 2, Isaac Sim, MuJoCo, Omniverse, Habitat, Unity, Unreal, LLM integration, perception, or policy optimization… then we should not be working apart. We should be building this together.

    And for those ready to build the next generation of thinking humanoids: Singularity Systems is now accepting collaborators, researchers, engineers, architects, and visionaries. Let's teach machines how to exist.

    #changetheworld #3D #unity
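
    A hedged sketch of the per-episode randomization the post alludes to (gravity shifting, friction changing, sensors lying). No particular simulator API is assumed, and the parameter ranges are illustrative only.

```python
# Illustrative domain randomization: resample physics and sensing parameters
# every episode so a policy cannot overfit to one "perfect" simulated world.
import random

def sample_episode_params(rng: random.Random) -> dict:
    return {
        "gravity": rng.uniform(9.5, 10.1),            # gravity does not remain constant
        "friction": rng.uniform(0.4, 1.2),            # friction shifts
        "sensor_noise_std": rng.uniform(0.0, 0.05),   # sensors lie
        "payload_kg": rng.uniform(0.0, 5.0),          # payloads of unknown mass
        "terrain_roughness": rng.uniform(0.0, 0.1),   # terrain does not always cooperate
    }

rng = random.Random(42)
for episode in range(3):
    params = sample_episode_params(rng)
    # sim.reset(**params)  -> run the episode, update the policy (simulator-specific)
    print(f"episode {episode}: {params}")
```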

  • Sigrid Adriaenssens

    Professor at Princeton University & Director Keller Center for Innovation in Engineering Education --- The postings on this site are my own.

    9,878 followers

    New publication from the Form Finding Lab, now available in Computer Methods in Applied Mechanics and Engineering. Our paper presents an accelerated simulation and design optimization framework for multi‑stable elastic rod networks (ERNs), with applications in adaptive structures, aerospace engineering, and soft robotics.

    ERNs exhibit rich nonlinear and multi‑stable behavior, but their proximity to unstable equilibria makes conventional simulation and optimization approaches computationally challenging. To address this, we introduce a spline‑based least‑squares formulation for solving the Kirchhoff rod boundary value problem, enabling robust and efficient simulations. The framework is applied to networks composed of bistable bigons assembled into articulated bigon arms. Benchmarks demonstrate significant improvements in computational efficiency and robustness compared to traditional boundary value problem solvers.

    Building on this, we introduce a physics‑based shape optimization method that allows ERNs to be optimized to approximate target curves and end‑plane constraints. The approach is validated through numerical experiments and physical prototypes.

    Article link: https://lnkd.in/egNEhktw

    Reference: Larsson, A., Hayashi, K., Adriaenssens, S. (2026). Accelerated simulation and design optimization of elastic rod networks with a spline‑based least‑squares formulation. Computer Methods in Applied Mechanics and Engineering, 456, 118925. https://lnkd.in/ezV3VWH6

    Image credit: Axel Larsson
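
    To make "spline-based least-squares formulation" concrete without reproducing the paper's method, here is a toy collocation sketch for a much simpler rod problem: a planar cantilever elastica under a tip load, not the Kirchhoff rod networks studied in the paper. The bending angle is represented by a cubic B-spline whose coefficients are found by least squares on the ODE and boundary residuals; all values are made up.

```python
# Toy spline/least-squares collocation for a planar elastica (cantilever with
# a vertical tip load):  EI * theta''(s) = -P * cos(theta(s)),
# theta(0) = 0 (clamped root), theta'(L) = 0 (moment-free tip).
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares

L, EI, P = 1.0, 1.0, 2.0           # rod length, bending stiffness, tip load (toy values)
k, n_coef = 3, 8                   # cubic B-spline with 8 coefficients
t = np.concatenate(([0.0] * (k + 1),
                    np.linspace(0.0, L, n_coef - k + 1)[1:-1],
                    [L] * (k + 1)))                # clamped knot vector, len = n_coef + k + 1
s_col = np.linspace(0.0, L, 40)                    # collocation points along the rod

def residuals(c):
    theta = BSpline(t, c, k)
    dtheta, ddtheta = theta.derivative(1), theta.derivative(2)
    ode = EI * ddtheta(s_col) + P * np.cos(theta(s_col))        # equilibrium residual
    bc = np.array([theta(0.0), dtheta(L)], dtype=float)         # boundary residuals
    return np.concatenate([ode, bc])

sol = least_squares(residuals, x0=np.zeros(n_coef))
print("converged:", sol.success, "tip angle:", float(BSpline(t, sol.x, k)(L)))
```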

  • Jon Stresing

    At the intersection of AI, our Partners, and the Federal Government!

    19,555 followers

    For my Physical AI / Robotics / Autonomous Systems folks - over the past week I've been digging into the NVIDIA Cosmos platform, and I'm genuinely excited by what it means for Physical AI, robotics, and autonomy. With our newly published NVIDIA Cosmos Cookbook, we now have a practical, scalable path to generate high-fidelity synthetic data for real-world robotics, autonomous vehicles, and sensor-based systems... all without needing millions of hours of real-world data collection.

    First, what is Cosmos? Cosmos is a platform purpose-built for physical AI, featuring state-of-the-art generative world foundation models (WFMs), guardrails, and an accelerated data processing and curation pipeline for autonomous vehicle (AV), robotics, and AI agent developers. You may be asking what a WFM is - simply put, a World Foundation Model is a digital replica of the physical world where physical AI can safely learn and practice.

    Why does this matter? Synthetic data via Cosmos Transfer lets us vary background, lighting, weather, and other environmental parameters (as seen in the GIF I uploaded), generating realistic video and sensor data "at will." That means we can create rare or dangerous scenarios (hard to capture in the real world) in simulation: edge-case driving conditions, complex urban terrain, unusual lighting or weather, all of which matter for robust, safe AVs and robots.

    For anyone working in robotics, autonomy, DoD-grade AI systems, or hybrid physical-digital AI pipelines: now is a great time to take a hard look at Cosmos. I see huge potential from UARCs/FFRDCs to Research Laboratories to the FSI community.

    NVIDIA Blog -> https://lnkd.in/e2_RP_mR
    Cosmos GitHub repo -> https://lnkd.in/exUjD4T9
    Cosmos Hugging Face repo -> https://lnkd.in/eDyMm6Re
    NVIDIA Cosmos for Developers homepage -> https://lnkd.in/ehSnkmGc
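
    Purely illustrative (this is not the Cosmos API): the kind of parameter sweep the post describes, where one captured clip is conditioned on many weather, lighting, and scene descriptions to cover rare edge cases. The clip name and prompt format are hypothetical.

```python
# Hypothetical sweep of environmental conditions for synthetic data generation:
# one real clip, many conditioning descriptions. Not a real Cosmos interface.
from itertools import product

weather  = ["clear", "heavy rain", "fog", "snow"]
lighting = ["noon sun", "dusk", "night, headlights only"]
scenes   = ["urban intersection", "gravel road", "warehouse aisle"]

jobs = [
    {"source_clip": "drive_001.mp4",          # hypothetical input clip
     "prompt": f"{s}, {w}, {l}"}              # conditioning text for the world model
    for w, l, s in product(weather, lighting, scenes)
]
print(f"{len(jobs)} synthetic variants from one real clip")  # 4 * 3 * 3 = 36
```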

  • Arpit Gupta

    Applied Scientist AI Robotics | Ex Boston Dynamics

    4,631 followers

    Simulation lets us train millions of trajectories. Reality tests whether any of them actually work.

    The Sim2Real gap isn't one problem — it's three:

    1️⃣ Visual Shift (Real ≠ Synthetic)
    Real scenes have noise, clutter, glare, shadows, messy backgrounds. Sim rarely does.

    2️⃣ Physics Shift (Approximation ≠ Reality)
    Small errors in friction, damping, mass, or latency → huge drift in behavior.

    3️⃣ Embodiment Shift (Robot ≠ Robot-in-Sim)
    Morphology, joint limits, actuator dynamics — nothing matches perfectly.

    What works today?
    • Domain Randomization — vary textures, lights, physics, noise until the policy generalizes by force
    • Domain Adaptation — align real + sim feature distributions
    • System Identification — tune sim from real sensor measurements
    • Real-to-Sim Feedback Loops — use a tiny amount of real data to anchor the model

    As robotics foundation models scale, most of their data will come from simulation. Teams who master domain adaptation will be the ones who can actually deploy these models on physical robots — not just in demos.

    I added my favorite papers, frameworks, and tools in the comments 👇
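
    A toy sketch of the System Identification item above: tune a simulator parameter so the simulated motion matches a real sensor log. Here a single joint is modeled as a damped pendulum and the "real" log is synthetic; a real setup would fit several physical parameters against logged encoder data.

```python
# Toy system identification: recover a damping coefficient by least-squares
# matching of a simulated trajectory to a (here, synthetic) measurement log.
import numpy as np
from scipy.optimize import least_squares

dt, steps, g_over_l = 0.01, 300, 9.81

def simulate(damping, theta0=0.8):
    """Forward-simulate a joint angle under a candidate damping coefficient."""
    theta, omega, traj = theta0, 0.0, []
    for _ in range(steps):
        omega += dt * (-g_over_l * np.sin(theta) - damping * omega)
        theta += dt * omega
        traj.append(theta)
    return np.array(traj)

# Stand-in for real encoder data: a noisy trajectory with "true" damping 0.35.
real_log = simulate(damping=0.35) + np.random.default_rng(0).normal(0, 0.005, steps)

fit = least_squares(lambda p: simulate(p[0]) - real_log, x0=[0.1], bounds=(0.0, 5.0))
print(f"identified damping ≈ {fit.x[0]:.3f} (true value 0.35)")
```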

  • Srinivasan Vijayarangan

    Scientist (CMU) | Roboticist | Coach

    6,527 followers

    Faster simulation = faster robot intelligence. Not linearly. Exponentially. Here's what I mean.

    The Genesis simulator generated the simulation in this video. It can run a Franka arm simulation 430,000x faster than real time. That's not a speed improvement. That's 430,000 parallel experiments — different grasps, different failures, different corrections — in the time a real arm attempts it once.

    Think about what that unlocks. Evolution spent millions of years perfecting human manipulation. Every dropped object. Every awkward grip. Every learned correction. That's the dataset nature needed to wire our hands. We're now compressing that loop.

    What mesmerizes me is the trajectory. Simulation used to be a last resort — slow, limited, reserved for things too risky to test in the real world. Now it's outpacing reality by orders of magnitude. And every time that multiplier grows, the learning curve for robots steepens dramatically. More loops. More data. More intelligence.

    The ceiling on robot manipulation isn't hardware anymore. It's how fast we can run the experiments.

    ---

    Interested in starting your robotics career? Check out our free robotics career guide to get you started: https://lnkd.in/gpPVTPKE
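
    Back-of-the-envelope arithmetic behind that claim, assuming a 10-second grasp attempt (the 10 s figure is an assumption, not from the post):

```python
# Time-compression arithmetic for a 430,000x real-time simulator.
real_attempt_s = 10                 # assumed duration of one grasp attempt (my number)
speedup = 430_000                   # reported sim speed relative to real time
attempts_while_real_arm_does_one = speedup                       # one real attempt's wall-clock time
attempts_per_wallclock_day = speedup * 86_400 // real_attempt_s  # ~3.7 billion per day
print(f"{attempts_while_real_arm_does_one:,} simulated attempts vs. 1 real attempt")
print(f"≈ {attempts_per_wallclock_day:,} simulated attempts per wall-clock day")
```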
