Using Uncertainty-Aware Priors in Robotics


Summary

Using uncertainty-aware priors in robotics means designing robot systems that not only predict what will happen next but also understand how confident those predictions are, especially when data is imperfect or environments are unpredictable. This approach helps robots make safer, more reliable decisions by recognizing and planning for uncertainty during learning and control tasks.

  • Build for safety: Always include methods that identify when a robot’s data or model is uncertain so decisions prioritize safety over risky assumptions.
  • Adapt to changing data: Use adaptive models that adjust as new information comes in, which lets the robot handle unexpected situations without breaking down.
  • Model different uncertainties: Don’t just assume noise is simple—make sure your robot’s system can handle complex, real-world errors, not just the textbook cases.
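The third point is easy to demonstrate: under heavy-tailed sensor noise, an estimator built on a Gaussian assumption (the sample mean) can be dragged far off by outliers, while a robust statistic (the median) stays close to the true value. A minimal toy sketch, not drawn from any of the posts below:

```python
import numpy as np

# Toy illustration: heavy-tailed (Student-t) sensor noise vs. the
# textbook Gaussian case. The sample mean is the optimal estimate only
# under Gaussian noise; under heavy tails the median is far more robust.
rng = np.random.default_rng(0)
true_value = 0.0

gaussian_noise = rng.normal(0.0, 0.1, size=500)
heavy_tailed_noise = rng.standard_t(df=1.5, size=500) * 0.1  # heavy tails

readings = true_value + heavy_tailed_noise
mean_est = readings.mean()        # assumes Gaussian noise
median_est = np.median(readings)  # robust to outliers

print(abs(mean_est), abs(median_est))
```

The heavy-tailed sample contains far larger outliers than the Gaussian one, which is exactly the regime where Gaussian-assuming filters and controllers degrade.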
  • Samir Mir

    Electrical and Industrial Systems Control Engineer | R&D | Battery Management Systems 🔋 | Nonlinear & Adaptive Control, State Estimation

    8,083 followers

    I am delighted to present an approach for enabling a Nonholonomic Wheeled Mobile Robot (WMR) to handle unstructured system uncertainties effectively. This is achieved by integrating a Performance Recovery Neuroadaptive Model Reference Adaptive Controller (PR-NMRAC) with Feedback Transformation (FT) into the optimal control problem. These techniques ensure that the mobile robot tracks a predefined trajectory, visiting all waypoints while adapting to environmental uncertainties and satisfying system constraints. Simulations validate the effectiveness of the proposed control strategy.

    A WMR is typically modeled using its kinematic and dynamic equations, which capture its nonholonomic constraints and motion characteristics. This often involves a bicycle model, where the robot is simplified to two wheels: a steerable front wheel and a fixed rear wheel. The state variables generally include the robot's position, orientation, and steering angle, subject to a maximum allowable steering angle constraint. The control inputs consist of force and angular velocity, which are crucial for designing an effective controller.

    LQR is employed to modify the closed-loop poles of the nominal control (after Feedback Transformation) to achieve the desired stability and performance objectives despite uncertainties and disturbances. This is formulated as an optimal control problem. To maintain high performance, especially in dynamic environments, the controller parameters are tuned online: an adaptive control law adjusts them automatically to compensate for changing conditions.

    Neuroadaptive control systems approximate the unstructured system uncertainties. A key design challenge is ensuring that the closed-loop system trajectories remain within a defined set, satisfying the universal function approximation property while maintaining overall system stability. In this scheme, the weight norms of consecutive neural network layers are tunable, which reduces computational complexity. The tuning law is derived using Lyapunov stability theory, ensuring global stability and bounded trajectories.

    This approach combines MRAC with Performance Recovery (PR) using an adaptive reference model. The underlying adaptive system is shown to be globally bounded for stable plants and to maintain bounded trajectories under certain initial conditions for unstable plants. The Adaptive Model Recovery mechanism introduces an additional degree of freedom, further enhancing closed-loop performance by suppressing adaptation learning errors.

    To keep the regulator optimally tuned, a genetic algorithm (GA) framework is employed to solve the resulting optimization problem and determine the optimal hyperparameters. While the GA requires careful parameter tuning, it is highly valuable: it finds near-optimal solutions, minimizing both absolute error and virtual control effort, without relying on explicit gradient information.
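As a much-simplified illustration of the model-reference adaptive idea underlying the post, here is a scalar Lyapunov-rule MRAC sketch. This is not the PR-NMRAC scheme itself; the plant, reference model, and gains below are invented for illustration.

```python
import numpy as np

# Minimal scalar MRAC sketch (illustrative only; not PR-NMRAC).
# Plant: x' = a*x + b*u with "unknown" a, b (sign of b assumed known).
# Reference model: xm' = am*xm + bm*r. Control: u = kx*x + kr*r, with
# Lyapunov-rule adaptation driving the tracking error e = x - xm to zero.
a, b = -1.0, 1.0          # unknown-to-the-controller plant parameters
am, bm = -4.0, 4.0        # stable reference model
gamma = 5.0               # adaptation gain
dt, steps = 1e-3, 30_000  # Euler integration over 30 s

x = xm = 0.0
kx = kr = 0.0             # adaptive gains; ideal values: kx = -3, kr = 4
r = 1.0                   # constant reference command
for _ in range(steps):
    u = kx * x + kr * r
    e = x - xm
    # Lyapunov-rule updates (sign(b) = +1)
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt
    xm += (am * xm + bm * r) * dt

print(e, kx, kr)
```

After the transient, the tracking error decays toward zero even though the controller never knows `a` or `b`; the neuroadaptive scheme in the post generalizes this idea to unstructured uncertainties approximated by a neural network.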

  • Exciting news from xLAB at the University of Pennsylvania! 🚀 Our latest publication in IEEE Robotics and Automation Letters (RA-L): "SIT-LMPC: Safe Information-Theoretic Learning Model Predictive Control for Iterative Tasks."

    Robots operating in complex, uncertain environments face a constant trade-off: how do you push for high performance without compromising safety? 🤖⚖️ Our team—including Zirui Zang, Ahmad Amine, Nick-Marios T. Kokolakis, Truong Xuan Nghiem, Ugo Rosolia—has developed SIT-LMPC, a new framework designed to solve this exact challenge for iterative tasks like autonomous racing and agile maneuvers.

    What makes SIT-LMPC a game-changer?
    *Safe & Optimal Learning:* We've introduced an adaptive penalty method that ensures robots robustly satisfy system constraints while iteratively improving their performance.
    *Richer Uncertainty Modeling:* By using Normalizing Flows to learn value functions from previous trajectories, our model captures complex uncertainties far more effectively than traditional Gaussian priors.
    *Blazing Fast Execution:* Designed for massive GPU parallelization, SIT-LMPC achieves 100 Hz+ real-time control, even on embedded platforms like the NVIDIA Jetson Orin AGX.

    From benchmark simulations to punishing hardware experiments on 1/5th-scale off-road vehicles, SIT-LMPC consistently outperforms existing methods like LMPC and ABC-LMPC in both speed and safety.

    Check out the full paper and supplementary material here: https://lnkd.in/ePqsTRQW
    Website: https://lnkd.in/eCGp3isG

    Special thanks to the US DoT Safety21 National University Transportation Center and the NSF for supporting this research. To be presented at #ICRA2026

    #Robotics #ControlSystems #MachineLearning #AutonomousVehicles #UPenn #xLAB #IEEE #MPC #SITLMPC
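The adaptive penalty idea can be sketched in miniature. The toy below is my own illustration, not the SIT-LMPC algorithm: it minimizes a quadratic subject to a constraint and grows the penalty weight whenever the constraint is still violated, so feasibility improves across outer iterations.

```python
# Toy adaptive penalty method (illustrative; not SIT-LMPC itself).
# Minimize f(x) = (x - 2)^2 subject to x <= 1 by adding a penalty
# mu * max(0, x - 1)^2 and increasing mu while the constraint is
# still violated.
def solve_inner(mu, x0, iters=2000):
    """Gradient descent on the penalized objective."""
    x = x0
    step = 1.0 / (2.0 + 2.0 * mu)  # stable step for this curvature
    for _ in range(iters):
        grad = 2.0 * (x - 2.0) + 2.0 * mu * max(0.0, x - 1.0)
        x -= step * grad
    return x

mu, x, tol = 1.0, 0.0, 1e-3
for _ in range(20):                  # outer adaptation loop
    x = solve_inner(mu, x)
    violation = max(0.0, x - 1.0)
    if violation <= tol:
        break
    mu *= 10.0                       # adapt: stiffen the penalty

print(x, mu)
```

Each outer iteration drives the solution closer to the constraint boundary; the same principle, applied to trajectory costs instead of a scalar objective, lets a learning MPC trade performance against robust constraint satisfaction.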

  • Chenhao Li

    Doctoral fellow at ETH AI Center | Robot learning at RSL & LAS | Prev. Massachusetts Institute of Technology, ETH Zurich, Max Planck Institute for Intelligent Systems.

    5,632 followers

    🧠 World models can predict, but controlling real robots from imagination has long failed because of hallucination.
    🌎 Introducing Uncertainty-Aware RWM: a black-box, end-to-end neural dynamics model with long-horizon uncertainty propagation.
    📢 Uncertainty-Aware Robotic World Model Makes Offline Model-Based Reinforcement Learning Work on Real Robots
    👥 Chenhao Li, Andreas Krause, Marco Hutter
    🎯 Project: https://lnkd.in/eeqYUixy
    📄 Paper: https://lnkd.in/eSqyZDu2

    ❗ World models hallucinate under distribution shift: in low-data regions, small errors compound over long autoregressive rollouts. Policy optimization then exploits these hallucinations, achieving high reward in imagination while failing catastrophically in the real world.

    🌎 The Uncertainty-Aware Robotic World Model (RWM-U) extends RWM by modeling not only what will happen, but how reliable those predictions are. We augment autoregressive world models with ensemble-based uncertainty estimation, explicitly capturing epistemic uncertainty from limited or biased offline data.

    🧠 Each ensemble member in RWM-U predicts a Gaussian distribution over the next observation. The predicted variance captures aleatoric uncertainty, while disagreement across the ensemble means estimates epistemic uncertainty.

    ✅ We bring MOPO to long-horizon world models, and to PPO. MOPO-PPO trains policies entirely in imagination, penalizing uncertain transitions to avoid hallucination. No real environment interaction → fast, fully offline learning. Starting from pure offline data, with no online interaction (not even a simulator), we train policies that are directly deployable on real hardware.
🏛️ ETH AI Center, Robotic Systems Lab, Department of Computer Science (D-INFK), ETH Zürich, Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, ETH Zürich #ai #robotics #humanoids #embodied_ai #machine_learning #reinforcement_learning #representation_learning #dynamics_learning #world_model #modelbased_reinforcement_learning #computer_graphics #computer_science
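The aleatoric/epistemic split described above can be illustrated with a toy ensemble. The sketch below uses bootstrap linear models rather than the paper's neural world model: ensemble disagreement grows far from the training data, and a MOPO-style penalty subtracts that disagreement from the reward to discourage exploiting uncertain regions.

```python
import numpy as np

# Toy ensemble-based epistemic uncertainty in the spirit of RWM-U / MOPO
# (illustrative; not the paper's model). Each "ensemble member" is a
# linear model fit to a bootstrap resample of 1-D data; a real world
# model would be a neural network predicting Gaussian next observations.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=200)      # data covers only [0, 1]
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=200)

ensemble = []
for _ in range(8):
    idx = rng.integers(0, len(x_train), len(x_train))  # bootstrap sample
    ensemble.append(np.polyfit(x_train[idx], y_train[idx], deg=1))

def epistemic_std(x):
    """Disagreement across ensemble means ~ epistemic uncertainty."""
    preds = np.array([np.polyval(coeffs, x) for coeffs in ensemble])
    return preds.std()

in_dist, out_of_dist = epistemic_std(0.5), epistemic_std(4.0)
print(in_dist, out_of_dist)   # disagreement grows far from the data

# MOPO-style pessimism: penalize reward by the uncertainty estimate, so
# a policy trained in imagination avoids hallucinated high-reward states.
lam, raw_reward = 1.0, 1.0
penalized = raw_reward - lam * epistemic_std(4.0)
```

The per-member predicted variance (the aleatoric part in RWM-U) is omitted here for brevity; the key mechanism is that out-of-distribution queries make the ensemble disagree, and the penalty turns that disagreement into pessimism.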

  • Rangel Isaías Alvarado Walles

    Robotics & AI Engineer | AI Engineer | Machine Learning | Deep Learning | Computer Vision | Agentic AI | Reinforcement Learning | Self-Driving Cars | IoT | IIoT | AIOps | MLOps | LLMOps | DevOps | Cloud | Edge AI

    4,576 followers

    Robust Model Predictive Control Design for Autonomous Vehicles with Perception-based Observers
    Arxiv: https://lnkd.in/ekGktfkY
    Github (not this paper): https://lnkd.in/eTvuTw2U
    Video (not this paper): https://lnkd.in/eQF7tqXn

    How can autonomous vehicles maintain stable control when their perception modules (cameras, CNNs) suffer from biased, heavy-tailed, non-Gaussian noise? This work introduces a perception-driven tube-based MPC framework that uses constrained zonotopes for uncertainty modeling and reformulates the controller as a linear program (LP), achieving real-time, stable, and safe performance under real-world perception noise.

    🔁 At a Glance
    💡 Goal: Ensure robust, perception-aware MPC that handles non-Gaussian sensor noise with provable stability.
    ⚙️ Approach:
      • Perception-based observer: a CNN estimates robot states from camera images.
      • Zonotopic modeling: captures biased and heavy-tailed noise beyond Gaussian assumptions.
      • Tube-based MPC: uses invariant sets to guarantee constraint satisfaction.
      • LP reformulation: a Minkowski–Lyapunov cost plus a slack variable avoids idle/deadbeat behavior.

    📈 Impact (Key Metrics)
    🧪 Simulation: bounded estimation error under Laplace (heavy-tailed) noise; outperforms Kalman-based Gaussian MPC in error stability.
    🤖 Real-world (Husarion ROSbot XL + ROS2): stable closed-loop control with perception in the loop; cost reduced by 26.8% vs. Gaussian MPC (9019 vs. 12320); control inputs respected constraints throughout.

    🔬 Experiments
    🦾 Robot: Husarion ROSbot XL with Jetson Orin Nano.
    📐 Perception: custom CNN (RobotPerceptionNet) trained on 3k samples → regresses 2D position from camera images.
    ⚡ Framework: ROS2-based pipeline fusing perception + MPC.

    🛠 How to Implement
    1️⃣ Train the CNN-based perception module → outputs noisy state estimates.
    2️⃣ Model perception + process noise as zonotopes.
    3️⃣ Design observer gain L and feedback gain K to bound the estimation error.
    4️⃣ Solve the LP-based MPC with tightened state/input sets + invariant terminal sets.

    📦 Deployment Benefits
    ✅ Robust to biased, heavy-tailed perception noise.
    ✅ Real-time feasible via the LP formulation.
    ✅ Ensures safe trajectory tracking under uncertainty.
    ✅ Generalizable to other AV + perception pipelines.

    Takeaway: This framework shows that Gaussian assumptions are not enough for AV control. By combining zonotopic observers with perception-aware MPC, vehicles achieve safer, more reliable autonomy in the wild.

    Follow me to learn more about AI, ML, and Robotics!
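The zonotope-based constraint tightening behind tube MPC can be illustrated in a few lines. The sketch below uses plain zonotopes and a simple box constraint (the paper uses constrained zonotopes and invariant sets): it computes a zonotope's interval hull and uses it as a per-axis margin.

```python
import numpy as np

# Toy zonotope constraint tightening (illustrative; not the paper's
# constrained-zonotope machinery). A zonotope
#   Z = {c + G @ xi : ||xi||_inf <= 1}
# has interval hull c +/- sum(|G|, axis=1), giving per-axis margins.
c = np.zeros(2)                          # error zonotope centered at 0
G = np.array([[1.0, 0.5],
              [0.0, 1.0]])               # generator matrix
margin = np.abs(G).sum(axis=1)           # per-axis half-widths: [1.5, 1.0]
hull_lo, hull_hi = c - margin, c + margin

# Tighten a box state constraint |x_i| <= bound_i by the error margin,
# so that if the nominal state satisfies the tightened bound, the true
# state (nominal + error zonotope) satisfies the original one.
state_bounds = np.array([5.0, 3.0])
tightened = state_bounds - margin        # -> [3.5, 2.0]
print(margin, tightened)
```

This is the essence of step 2️⃣ and 4️⃣ above: bound the estimation/process error as a set, then shrink the constraints the nominal MPC sees by exactly that set's extent.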
