We are excited to share our latest work, "Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation", where a policy learns to adapt in the real world to unknown disturbances within 5 seconds, both with and without explicit state estimation, directly from visual features. Code released! PDF: https://lnkd.in/eZpWW7dS Project Page: https://lnkd.in/e5bnnipt
Starting from a simple analytical dynamics model, the system continuously learns residual dynamics from real-world data and embeds the refined model into a differentiable simulator. This enables fast, gradient-based policy updates that are far more sample-efficient than classical #ReinforcementLearning.
We demonstrate rapid adaptation in <5 seconds in agile quadrotor control under challenging conditions, including added payloads, wind disturbances, and large sim-to-real gaps. In real-world experiments, our method reduces hovering error by up to 81% compared to L1-MPC and 55% compared to PPO-based adaptive methods. It also operates directly from visual features without explicit state estimation.
Reference: "Learning on the Fly: Rapid Policy Adaptation via Differentiable Simulation", IEEE Robotics and Automation Letters, 2026. PDF: https://lnkd.in/eZpWW7dS Video: https://lnkd.in/eSHeKdkr Code: https://lnkd.in/edidHJng Website: https://lnkd.in/e5bnnipt
Kudos to Michael Pan, Jiaxu Xing, Rudolf Reiter, Daniel (Yifan) Zhai, Elie Aljalbout! UZH Department of Informatics, UZH.ai, University of Zurich, UZH Innovation Hub, European Research Council (ERC), AUTOASSESS
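The core loop described above (analytic model plus learned residual, embedded in a differentiable simulator for gradient-based policy updates) can be sketched on a toy 1-D hover task. Everything here is an invented illustration, not the paper's implementation: the dynamics, the drift value standing in for wind/payload, and a finite-difference gradient standing in for the simulator's analytic gradients.

```python
import numpy as np

dt = 0.02
true_drift = -0.6                # unknown real-world disturbance (invented)

def real_step(x, u):             # "real world": analytic model + disturbance
    return x + dt * (u + true_drift)

def nominal_step(x, u, residual=0.0):
    return x + dt * (u + residual)   # analytic model + learned residual term

# 1) Fit the residual dynamics from a few real transitions.
xs = np.linspace(-1.0, 1.0, 10)
errs = [(real_step(x, 0.0) - nominal_step(x, 0.0)) / dt for x in xs]
residual = float(np.mean(errs))      # recovers the unmodeled drift

# 2) Gradient-based policy update through the refined simulator.
#    Policy: u = -k * x; minimize the rollout cost over the gain k.
def rollout_cost(k, x0=1.0, T=100):
    x, cost = x0, 0.0
    for _ in range(T):
        x = nominal_step(x, -k * x, residual)
        cost += x * x
    return cost

k = 1.0
for _ in range(50):   # finite differences here; the paper uses analytic gradients
    g = (rollout_cost(k + 1e-4) - rollout_cost(k - 1e-4)) / 2e-4
    k -= 0.05 * g
```

The learned residual closes the model gap, after which a handful of gradient steps on the simulated rollout sharpens the policy, which is the sample-efficiency argument versus model-free RL.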
Adaptive Control Techniques in Robotics
Explore top LinkedIn content from expert professionals.
Summary
Adaptive control techniques in robotics allow robots to learn and adjust their movements in real time, helping them cope with unexpected changes or disturbances in their environment. These methods use data and feedback to continually refine how robots move and interact, making them more reliable in complex or unpredictable situations.
- Embrace flexibility: Use adaptive methods so robots can handle new tasks or environments without retraining for every scenario.
- Monitor performance: Track how robots respond to changes, using feedback to update their control strategies and maintain stability during operation.
- Integrate learning tools: Combine adaptive control with data-driven approaches like machine learning to help robots quickly recover from errors and adjust to real-world challenges.
I am delighted to share an interesting example of stabilizing a car-like mobile robot (CMR) using a Nonlinear Model Predictive Controller (NMPC), integrated with the Artificial Potential Field (APF) method to avoid obstacles and overcome barrier limitations, and with an Extended Kalman Filter (EKF) to estimate longitudinal and lateral position, driving the mobile robot to track a given trajectory while adhering to environmental constraints. A CMR is typically modeled using its kinematic equations, capturing its nonholonomic constraints and motion characteristics. This often involves a bicycle model, where the robot is simplified to two wheels: a steerable front wheel and a fixed rear wheel. The state variables usually include the robot's position, orientation, and steering angle, while the control inputs are the linear and angular velocities. This model is essential for designing controllers. NMPC works by repeatedly solving an optimization problem over a finite prediction horizon at each control step. For a car-like robot, this involves using a dynamic model of the robot to predict its future states, such as position, orientation, and velocity, based on the current state and a sequence of control inputs. The `fmincon` solver in MATLAB solves this constrained nonlinear optimization; it minimizes a cost function that typically includes terms for trajectory-tracking error, control effort, and adherence to constraints like obstacle avoidance, actuator limits, or road boundaries. By solving this problem in real time, NMPC generates optimal control actions that drive the robot toward its goal while respecting system constraints and adapting to changes in the environment; if the robot detects an obstacle, NMPC can replan its trajectory to avoid collisions while still progressing toward the target. The integration of APF ensures smooth obstacle avoidance, while the EKF provides accurate state estimation for robust control.
This combination makes NMPC highly effective for CMRs operating in dynamic or uncertain environments, ensuring safe, efficient, and precise navigation. Overall, this approach showcases a powerful framework for autonomous navigation and control of a CMR. In the future, the proposed framework can be further enhanced by exploring alternative techniques and solvers. For instance, Reinforcement Learning or Deep Learning could be incorporated to improve obstacle avoidance and trajectory planning in highly dynamic environments, enabling the robot to learn from experience and adapt to complex scenarios. Solvers like IPOPT (for example, via the CasADi framework) could be tested alongside `fmincon` to improve computational efficiency and scalability, especially for large-scale problems. These advancements would not only improve the performance and robustness of the CMR but also expand its applicability to more challenging environments, such as urban autonomous driving or multi-robot coordination.
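As a rough illustration of the receding-horizon idea with an APF penalty inside the cost: the sketch below is not the post's MATLAB/`fmincon` pipeline. Exhaustive search over a coarse control grid stands in for the nonlinear solver, the EKF is omitted (states are assumed known), and the model, weights, and grids are all invented.

```python
import numpy as np
from itertools import product

dt, N = 0.2, 3                       # step size and short prediction horizon
goal = np.array([2.0, 0.0])
obstacle, r_obs = np.array([1.0, 0.25]), 0.3   # obstacle center and influence radius

def step(state, v, w):               # unicycle-style kinematic model
    x, y, th = state
    return np.array([x + dt * v * np.cos(th),
                     y + dt * v * np.sin(th),
                     th + dt * w])

def apf_penalty(p):                  # repulsive potential, active near the obstacle
    d = np.linalg.norm(p - obstacle)
    if d >= r_obs:
        return 0.0
    return 10.0 * (1.0 / max(d, 1e-3) - 1.0 / r_obs) ** 2

def cost(state, seq):                # tracking error + control effort + APF term
    c = 0.0
    for v, w in seq:
        state = step(state, v, w)
        c += np.sum((state[:2] - goal) ** 2)
        c += 0.01 * (v * v + w * w)
        c += apf_penalty(state[:2])
    return c

controls = list(product([0.0, 0.5, 1.0], [-1.0, 0.0, 1.0]))   # coarse (v, w) grid

def nmpc_action(state):              # solve over the horizon, apply only the first input
    best = min(product(controls, repeat=N), key=lambda seq: cost(state, seq))
    return best[0]

state = np.array([0.0, 0.0, 0.0])
min_d = np.inf
for _ in range(25):                  # receding-horizon loop
    v, w = nmpc_action(state)
    state = step(state, v, w)
    min_d = min(min_d, np.linalg.norm(state[:2] - obstacle))
```

The receding-horizon structure (optimize, apply the first input, re-solve) is exactly what `fmincon` performs in the post, just with a proper gradient-based solver and richer constraints.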
-
🚀 New paper from our lab: Sym2Real: a data-efficient way to train adaptive robot controllers. With only ~10 trajectories (80 seconds!!!) in total (most from a low-fidelity and untuned sim + 2–3 real runs), we achieve robust real-world control of a palm-sized drone and a 1/10 racing car. No expert priors on dynamics or heavy sim tuning needed. 📹 The video below shows the entire 80-second demo, from start to flight. The key intuition is to capture the shared core physics in a simplified setting with concrete equations (in our case, differential equations!), then adapt with a lightweight residual from just a handful of real-world samples. This continues our line of work on robot self-models for resilient behaviors: - Visual self-modeling of full bodies (Science Robotics 2022) - Self-modeling animatronic face control (ICRA 2021, Science Robotics 2024) 📄 Read our preprint: https://lnkd.in/eZrb6i3d 🎥 Video: https://lnkd.in/e3dMUs3J 🔬 Code + data: https://lnkd.in/e24ayhJ2 Led by our amazing Easop Lee at the General Robotics Lab at Duke University!
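The "shared core physics plus lightweight residual" recipe can be caricatured in a few lines. The dynamics, residual features, and sample counts below are invented for illustration; they are not Sym2Real's models or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sim_accel(v, u):                 # low-fidelity model: thrust minus linear drag
    return u - 0.5 * v

def real_accel(v, u):                # "reality" adds quadratic drag and a bias
    return u - 0.5 * v - 0.3 * v * np.abs(v) - 0.2

# A handful of real samples, as in "2-3 real runs": 12 noisy (v, u, a) points.
v_s = rng.uniform(-2.0, 2.0, 12)
u_s = rng.uniform(-1.0, 1.0, 12)
a_s = real_accel(v_s, u_s) + 0.01 * rng.standard_normal(12)

# Lightweight residual: linear least squares on a tiny assumed feature basis.
Phi = np.column_stack([np.ones_like(v_s), v_s * np.abs(v_s)])
coef, *_ = np.linalg.lstsq(Phi, a_s - sim_accel(v_s, u_s), rcond=None)

def adapted_accel(v, u):             # sim model + fitted residual
    return sim_accel(v, u) + coef[0] + coef[1] * v * np.abs(v)

v_test = np.linspace(-2.0, 2.0, 50)
err = np.max(np.abs(adapted_accel(v_test, 0.0) - real_accel(v_test, 0.0)))
```

Because the simulator already encodes the shared physics, the residual only has to explain a small, structured gap, which is why so few real samples suffice.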
-
Humanoid robots need to adapt to different tasks, like moving around, handling objects while walking, and working on tables, each requiring a unique way to control the robot’s body. For instance, moving around focuses on tracking how fast the robot's base is moving, while working at a table relies more on controlling the robot's arm movements. Many current methods train robots with specific controls for each task, making it hard for them to switch between tasks smoothly. This new approach suggests using whole-body motion imitation to create a common base that can work for all tasks, helping robots learn general skills that apply to different types of control. With this idea, researchers developed HOVER (Humanoid Versatile Controller), a system that combines different control modes into one shared setup. HOVER allows robots to switch between tasks without losing the strengths needed for each one, making humanoid control easier and more flexible. This approach removes the need to retrain the robot for each task, making it more efficient and adaptable for future uses. The diverse team of researchers that developed HOVER come from: NVIDIA, Carnegie Mellon University, University of California, Berkeley, The University of Texas at Austin, and UC San Diego. 📝 Research Paper: https://lnkd.in/eMatAxMu 📊 Project Page: https://lnkd.in/eY4gzmme #robotics #research
-
When models are wrong or dynamics change, most controllers struggle — we need controllers that learn online 🧠. We propose a certainty-equivalent MPC scheme that adapts to online system changes and provides strong robustness and performance guarantees ✅. 📄 Preprint: https://lnkd.in/ervSdn2n 💻 Code: https://lnkd.in/e6aFCDjX We combine a certainty-equivalent MPC with a least-mean-square (LMS) adaptation. The proposed approach relies on a novel MPC formulation that is easy to implement and provides strong stability and robustness guarantees. Theoretical guarantees provide suitable bounds on the tracking error with respect to output references and on state-constraint violations, given (unknown) time-varying parameters, measurement noise, process noise, and nonlinear dynamics. The figure below shows how the adaptive MPC navigates through obstacles while self-correcting the dynamics, whereas an approach without adaptation simply becomes unstable. I want to thank my former colleagues at ICS - Intelligent Control Systems, in particular Melanie Zeilinger, for fruitful discussions during my time at ETH Zürich. Imperial College London; Imperial Mechanical Engineering #MPC #AdaptiveControl #LearningBasedControl #Robotics #AutonomousSystems
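A minimal sketch of the certainty-equivalence idea on a scalar toy system (this is not the paper's formulation; the plant, step size, and excitation signal are invented): an LMS update tracks an unknown, time-varying parameter from one-step prediction errors, and the controller always acts on the current estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.3                        # LMS step size (assumed; must be small enough)
a_hat, x = 0.0, 1.0
errors = []
for k in range(400):
    a_true = 0.8 if k < 200 else 0.3           # dynamics change mid-run
    # Certainty-equivalent control: act as if a_hat were the true parameter.
    # The small sinusoid is persistent excitation, which keeps a identifiable.
    u = -a_hat * x + 0.1 * np.sin(0.5 * k)
    x_next = a_true * x + u + 0.01 * rng.standard_normal()   # process noise
    pred_err = x_next - (a_hat * x + u)        # one-step prediction error
    a_hat += mu * x * pred_err / (0.01 + x * x)   # normalized LMS update
    errors.append(abs(a_true - a_hat))
    x = x_next
```

The estimate converges, tracks the abrupt parameter change, and the closed loop stays well behaved throughout, which is the behavior the paper's theory bounds rigorously for the full MPC setting.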
-
CaDeLaC - Context-Aware Deep Lagrangian Networks (DLN) for Model Predictive Control (MPC)
Arxiv: https://lnkd.in/e37fcxTi
DLN (example, not from this paper): https://lnkd.in/e66qn6b2
Steve Brunton DLN explanation (not from this paper): https://lnkd.in/eAyKrKx7
Physics + Learning + Real-time Control. CaDeLaC unifies Deep Lagrangian Networks (DeLaN) with online context adaptation and Model Predictive Control (MPC) to deliver zero-shot robust control under changing dynamics, like variable payloads. It's a step toward agile, interpretable control in dynamic environments. 🦾
🔁 At a Glance
💡 Goal: Learn a physically consistent dynamics model that adapts to changing contexts in real time and integrates seamlessly with MPC.
⚙️ Approach:
- Extend DeLaN with residual learning to model only unknown dynamics
- Add context-awareness via a history-based LSTM encoder
- Integrate into MPC to control robots under time-varying external loads
📈 Impact (Key Metrics)
📉 39% reduction in end-effector tracking error vs. nominal model
🧠 Generalizes across 100+ payloads (0–4 kg, random CoM)
🚀 Zero-shot transfer to a real robot (Franka Emika Panda) after sim-only training
🔁 Outperforms an Extended Kalman Filter baseline on torque and position tracking
🔬 Experiments
Robot: 7-DOF Franka Emika Panda
Tasks: joint trajectory tracking; pick-and-place with dynamic payload changes; high-speed motion under varying loads (1 kg–3 kg)
Benchmarks: MPC (nominal), EKF-MPC, CaDeLaC
Metrics: torque RMSE, position/velocity tracking, end-effector path deviation
🛠 How to Implement
1️⃣ Residual DeLaN: learn torque residuals between the nominal model and real dynamics; predict mass/inertia-related errors with compact MLPs
2️⃣ Contextual Encoding: use an LSTM to encode recent joint states & torque residuals; feed the latent vector z into DeLaN to modulate model predictions
3️⃣ MPC Integration (CaDeLaC): use Acados + HPIPM + CasADi for real-time optimization; update torque predictions at 50 Hz, combined with a 1 kHz PD controller
4️⃣ Training Pipeline: collect 1M+ sim samples with randomized payloads in MuJoCo; use chirp signals for system excitation; jointly train DeLaN + LSTM offline, deploy online inference only
📦 Deployment Benefits
✅ Physics-informed learning with interpretable structure
✅ Context adaptation without retraining
✅ Data-efficient with residual modeling
✅ Real-time ready (latency <20 ms)
✅ Robust to load variations, friction, and modeling mismatch
📣 Takeaway
CaDeLaC is not just another neural MPC: it's a physics-aware, context-adaptive control framework that tracks precisely even under changing dynamics. Train in sim, adapt in real-time. Control with confidence. Follow me to know more about AI, ML and Robotics!
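A toy, numpy-only caricature of "context adaptation without retraining" (not the paper's DeLaN/LSTM implementation): a scalar context variable, updated online from the observed torque residual, modulates the feedforward model so that the same code handles very different payloads. All dynamics and gains are invented.

```python
import numpy as np

g, m_nom = 9.81, 1.0

def nominal_torque(q):               # physics-consistent nominal model (1-DOF arm)
    return m_nom * g * np.sin(q)

def true_torque(q, payload):         # real dynamics carry an unknown extra payload
    return (m_nom + payload) * g * np.sin(q)

def run(payload, steps=60):
    z, errs = 0.0, []                # z: scalar context (stand-in for the LSTM latent)
    for k in range(steps):
        q = 0.5 + 0.5 * np.sin(0.2 * k)          # joint trajectory
        tau_ff = nominal_torque(q) + z * np.sin(q)   # context-modulated prediction
        residual = true_torque(q, payload) - tau_ff  # observed torque mismatch
        errs.append(abs(residual))
        z += 1.0 * residual * np.sin(q)          # gradient step on squared residual
    return errs

errs_light = run(0.5)   # small payload
errs_heavy = run(2.0)   # large payload, same model, no retraining
```

In CaDeLaC the context is a learned latent from an LSTM over recent states and residuals, and the modulated model is a full DeLaN; the zero-shot behavior across payloads is the same idea at scale.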
-
Adaptive controller - the perfect fit for nonlinear systems. In a previous post I showed an example of identifying the drag coefficients of a vehicle. With a given mass and a roll-out dataset it is possible to obtain a suitable model description: https://lnkd.in/eCNnRMzh On the following slides you will find a procedure for controller design for nonlinear system behavior over wide operating-point ranges. Vehicle motion control is a perfect example that pretty much everyone has experience with, since each of us typically acts as the controller in the vehicle. The entire project is on Github: https://lnkd.in/eVMM7R_P
Slide 1: With parameter identification, we obtain coefficients to condition our model. A limited torque can be transferred into a maximum force along the drivetrain.
Slide 2: The well-known equation of motion (a Riccati-type ODE) is nonlinear due to its quadratic term. In order to use the methods of classical control engineering, linearization around an operating point is the first choice. For a given operating point, there is a fixed system behavior that can be described by a PT1 transfer function with a fixed time constant. To compensate the dynamic behavior exactly, a PI controller is the simplest choice.
Slide 3: The open-loop system can be designed in such a way that the time constant is eliminated. If you set the controller in this way, the closed control loop again behaves like a PT1 system. If you adjust the time constant to the current operating point, you achieve a piecewise linearization that fits like a tailor-made suit.
Slide 4: A fixed operating point with a speed equal to 0 results in rather sluggish behavior.
Slide 5: At a fixed operating point with a speed equal to the maximum speed of the driving profile, we obtain very aggressive system behavior.
Slide 6: Thanks to adaptive control, in which the time constant is adjusted via the speed, the system behavior is fast and smooth.
Slide 7: In direct comparison to the smallest fixed time constant at high speed, adaptive control has several advantages. ✅ lower stress on the drivetrain ✅ no power interruption due to controller overdrive ✅ slightly lower energy consumption ✅ more pleasant driving behavior
Slide 8: Test this adaptive controller example with some modifications. The example is shared via Google Colab: https://lnkd.in/eEnaEWkH It can be used to test the fixed time constants and the adaptive PI controller, as well as to expand it to a PID controller.
Conclusion: Adaptive controllers can be designed so that they control nonlinear systems as quasi-linear systems. The controller's sampling period must be significantly smaller than the system time constant. By tracking the system parameters, the controller can be continuously adapted to the current operating point. #engineering #controlsystems #controltheory #math #python #google
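The scheme on the slides can be sketched with invented numbers (the repo's parameters differ): near an operating speed v0, the drag dynamics m*dv/dt = F - c*v^2 behave like a PT1 system with time constant T(v0) = m/(2*c*v0), and a PI controller whose integral time tracks T(v) cancels the plant pole at every operating point, whereas a fixed tuning is only right at its design speed.

```python
import numpy as np

m, c, dt = 1200.0, 0.8, 0.01       # vehicle mass, drag coefficient, step size (assumed)
T_cl = 1.0                         # desired closed-loop time constant [s]
Kp = m / T_cl                      # pole cancellation makes the loop gain Kp/(m*s)

def simulate(adaptive, v_ref=30.0, steps=4000):
    v, integ = 0.1, 0.0
    for _ in range(steps):
        T = m / (2.0 * c * max(v, 1.0))              # time constant at current speed
        T_i = T if adaptive else m / (2.0 * c * 1.0)  # fixed tuning at v0 = 1 m/s
        e = v_ref - v
        integ += (Kp / T_i) * e * dt                 # Ki = Kp / T_i
        F = Kp * e + integ                           # PI force command (no actuator limits)
        v += dt * (F - c * v * v) / m                # nonlinear longitudinal dynamics
    return v

v_adapt = simulate(True)
v_fixed = simulate(False)
```

The adaptive loop behaves like the same PT1 system at every speed; the fixed tuning leaves a visible tracking offset at high speed because its integral action was sized for the wrong operating point.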
-
Teaching robots to build simulations of themselves allows the robot to detect abnormalities and recover from damage. We naturally visualize and simulate our own movements internally, enhancing mobility, adaptability, and awareness of our environment. Robots have historically been unable to replicate this visualization, relying instead on predefined CAD models and kinematic equations. Free Form Kinematic Self-Model (FFKSM) allows the 𝗿𝗼𝗯𝗼𝘁 𝘁𝗼 𝘀𝗶𝗺𝘂𝗹𝗮𝘁𝗲 𝗶𝘁𝘀𝗲𝗹𝗳: 1) Robots autonomously learn their morphology, kinematics, and motor control directly from 𝗯𝗿𝗶𝗲𝗳 𝗿𝗮𝘄 𝘃𝗶𝗱𝗲𝗼 𝗱𝗮𝘁𝗮 -> Like humans observing their reflection in a mirror 2) Robots perform precise 3D motion planning tasks 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗽𝗿𝗲𝗱𝗲𝗳𝗶𝗻𝗲𝗱 𝗸𝗶𝗻𝗲𝗺𝗮𝘁𝗶𝗰 𝗲𝗾𝘂𝗮𝘁𝗶𝗼𝗻𝘀 -> Simplifies complex manipulation and navigation tasks 3) Robots 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀𝗹𝘆 𝗱𝗲𝘁𝗲𝗰𝘁 morphological changes or damage and rapidly recover by retraining with new visual feedback -> Significantly enhances resilience. The model is also 𝗵𝗶𝗴𝗵𝗹𝘆 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁, requiring minimal memory resources of just 333 kB, making it broadly applicable for resource-constrained robotic systems. 𝗧𝗵𝗶𝘀 𝗶𝘀 𝗮𝗹𝘀𝗼 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗺𝗼𝗱𝗲𝗹 𝘁𝗼 𝗮𝗰𝗵𝗶𝗲𝘃𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗺𝗽𝗿𝗲𝗵𝗲𝗻𝘀𝗶𝘃𝗲 𝘀𝗲𝗹𝗳-𝘀𝗶𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝘂𝘀𝗶𝗻𝗴 𝗼𝗻𝗹𝘆 𝟮𝗗 𝗥𝗚𝗕 𝗶𝗺𝗮𝗴𝗲𝘀, 𝗲𝗹𝗶𝗺𝗶𝗻𝗮𝘁𝗶𝗻𝗴 𝗰𝗼𝗺𝗽𝗹𝗲𝘅 𝗱𝗲𝗽𝘁𝗵-𝗰𝗮𝗺𝗲𝗿𝗮 𝘀𝗲𝘁𝘂𝗽𝘀 𝗮𝗻𝗱 𝗶𝗻𝘁𝗿𝗶𝗰𝗮𝘁𝗲 𝗰𝗮𝗹𝗶𝗯𝗿𝗮𝘁𝗶𝗼𝗻𝘀. I believe the next phase of robotic automation inevitably comes with self-awareness of robots. Self-reflection is a major part of how we as humans improve ourselves; as 'general purpose robots' emerge, so would their self-reflection. This enables robots to continuously monitor and update their internal models, thereby refining their performance in real time. This is a huge step towards robot self-awareness! Congratulations to Yuhang Hu, Jiong Lin, and Hod Lipson on this impressive advancement! Paper link: https://lnkd.in/gJ-bkU8N I post the latest and most interesting developments in robotics—𝗳𝗼𝗹𝗹𝗼𝘄 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱!
-
HARRI: High-speed Adaptive Robot for Robust Interactions This video showcases some of the early testing footage of HARRI (High-speed Adaptive Robot for Robust Interactions), a next-generation proprioceptive robotic manipulator developed at the Robotics & Mechanisms Laboratory (RoMeLa) at UCLA. Designed for dynamic and force-critical tasks, HARRI leverages quasi-direct drive proprioceptive actuators combined with advanced control strategies such as impedance control and real-time model predictive control (MPC) to achieve high-speed, precise, and safe manipulation in human-centric and unstructured environments. Built with a lightweight, low-inertia structure and powered by highly back-drivable actuators, HARRI enables rapid, compliant interactions with its surroundings. By embedding proprioceptive sensing directly into the actuators, HARRI provides real-time feedback on position, velocity, and torque without relying on external sensors, greatly enhancing its adaptability and robustness in dynamic tasks. Demonstrations in this video include: • Catching a flying ball with high precision and compliant force control. • Catching a moving box, showcasing fast and adaptive manipulation of heavier and more irregular objects. • Safe direct physical interaction with a human, demonstrating compliant and controlled responses to intentional contact. • And plenty of blooper videos for fun! HARRI highlights the transition from traditional rigid position controlled robotic systems to agile, intelligent, and safe manipulators capable of working alongside humans. This research paves the way for future robotic systems that combine proprioception, real-time optimization, and adaptive control to handle increasingly complex and dynamic real-world challenges. https://lnkd.in/dR-Kpznb
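For readers unfamiliar with impedance control, a 1-D sketch (illustrative only, not HARRI's control stack; all parameters are invented): the commanded force makes the end effector behave like a virtual spring-damper about its target, so unexpected contact produces a bounded, compliant force instead of a rigid position fight. This is exactly what proprioceptive torque feedback makes possible without external sensors.

```python
import numpy as np

m = 2.0                     # effective end-effector mass [kg] (assumed)
K, D = 200.0, 40.0          # virtual stiffness [N/m] and damping [N*s/m]
dt = 0.001
x_d = 0.5                   # commanded position [m]
x, v = 0.0, 0.0
peak_contact = 0.0
for _ in range(3000):
    f_ext = -500.0 * max(x - 0.4, 0.0)   # stiff unexpected obstacle at 0.4 m
    u = K * (x_d - x) - D * v            # impedance control law
    v += dt * (u + f_ext) / m            # semi-implicit Euler integration
    x += dt * v
    peak_contact = max(peak_contact, -f_ext)
```

The robot settles where the virtual spring and the obstacle balance, with the contact force capped near K times the commanded penetration, rather than ramping up without bound as a stiff position controller would.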
-
Exciting news from xLAB at the University of Pennsylvania! 🚀 Our latest publication in IEEE Robotics and Automation Letters (RA-L): "SIT-LMPC: Safe Information-Theoretic Learning Model Predictive Control for Iterative Tasks." Robots operating in complex, uncertain environments face a constant trade-off: how do you push for high performance without compromising safety? 🤖⚖️ Our team—including Zirui Zang, Ahmad Amine, Nick-Marios T. Kokolakis, Truong Xuan Nghiem, Ugo Rosolia — has developed SIT-LMPC, a new framework designed to solve this exact challenge for iterative tasks like autonomous racing and agile maneuvers. What makes SIT-LMPC a game-changer? *Safe & Optimal Learning:* We’ve introduced an adaptive penalty method that ensures robots robustly satisfy system constraints while iteratively improving their performance. *Richer Uncertainty Modeling:* By using Normalizing Flows to learn value functions from previous trajectories, our model captures complex uncertainties far more effectively than traditional Gaussian priors. *Blazing Fast Execution:* Designed for massive GPU parallelization, SIT-LMPC achieves 100Hz+ real-time control, even on embedded platforms like the NVIDIA Jetson Orin AGX. From benchmark simulations to punishing hardware experiments on 1/5th scale off-road vehicles, SIT-LMPC consistently outperforms existing methods like LMPC and ABC-LMPC in both speed and safety. Check out the full paper and supplementary material here: https://lnkd.in/ePqsTRQW Website: https://lnkd.in/eCGp3isG Special thanks to the US DoT Safety21 National University Transportation Center and the NSF for supporting this research. To be presented at #ICRA2026 #Robotics #ControlSystems #MachineLearning #AutonomousVehicles #UPenn #xLAB #IEEE #MPC #SITLMPC