New work on robot learning & teleop with force: FACTR: Force-Attending Curriculum Training for Contact-Rich Policy Learning. Paper + Code: https://lnkd.in/eqfvtRRE
Force information is crucial for contact-rich tasks, but behavior cloning policies tend to ignore force input when it is added naively, because they overfit to vision. Key idea: FACTR is a behavior cloning training curriculum that corrupts the vision input with decreasing intensity over training, which pushes the policy to properly attend to the force data. Our policies perform and generalize better:
· 46% improvement in success rate on unseen objects
· Emergent recovery behavior, not observed in baselines
We also add force feedback to a low-cost leader-follower teleop system, which is especially helpful for collecting data for contact-rich tasks:
· 64.7% higher task completion rate
· 37.4% reduced completion time
· 83.3% improvement in subjective ease of use
With Jason Liu, Yulong Li, Kenneth Shaw, Tony Tao, Deepak Pathak
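To make the curriculum idea concrete, here is a minimal sketch assuming Gaussian-noise corruption with a linearly decaying scale; the corruption type and schedule are assumptions, not necessarily FACTR's actual design:

```python
# Minimal sketch of a vision-corruption curriculum (corruption type and
# schedule are assumptions; see the FACTR paper for the actual design).
import torch

def corrupt_vision(images: torch.Tensor, step: int, total_steps: int,
                   max_noise: float = 1.0) -> torch.Tensor:
    """Add Gaussian noise whose scale decays linearly over training, so the
    policy must lean on force input early and can reclaim vision later."""
    progress = min(step / total_steps, 1.0)
    noise_scale = max_noise * (1.0 - progress)  # strong early, zero at the end
    return images + noise_scale * torch.randn_like(images)

# Inside a behavior-cloning loop (policy and batch are placeholders):
#   rgb = corrupt_vision(batch["rgb"], step, total_steps)
#   action = policy(rgb, batch["force"])
```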
Force Control in Robotic Applications
Summary
Force control in robotic applications refers to a robot’s ability to sense and adjust the amount of force it uses during tasks, allowing it to handle objects safely and adapt to unpredictable environments. This capability is crucial for tasks that require both precision and a gentle touch, such as assembling delicate components or interacting with humans.
- Integrate sensory feedback: Use tactile and force sensors to help robots adjust their grip and movement in real time, reducing the risk of damage or error.
- Train with human data: Gather and incorporate demonstrations from human experts to teach robots how much force to use in everyday manipulation tasks.
- Design adaptive control: Build control systems that can quickly adjust to changes, allowing robots to safely interact in both structured settings and unpredictable environments.
-
Presenting FEELTHEFORCE (FTF): a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Using a tactile glove to measure contact forces and a vision-based model to estimate hand pose, they train a closed-loop policy that continuously predicts the forces needed for manipulation. This policy is re-targeted to a Franka Panda robot with tactile gripper sensors using shared visual and action representations. At execution, a PD controller modulates gripper closure to track the predicted forces, enabling precise, force-aware control. This approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.
#research: https://lnkd.in/dXxX7Enw
#github: https://lnkd.in/dQVuYTDJ
#authors: Ademi Adeniji, Zhuoran (Jolia) Chen, Vincent Liu, Venkatesh Pattabiraman, Raunaq Bhirangi, Pieter Abbeel, Lerrel Pinto, Siddhant Haldar
New York University, University of California, Berkeley, NYU Shanghai
Controlling fine-grained forces during manipulation remains a core challenge in robotics. While robot policies learned from robot-collected data or simulation show promise, they struggle to generalize across the diverse range of real-world interactions. Learning directly from humans offers a scalable solution, enabling demonstrators to perform skills in their natural embodiment and in everyday environments. However, visual demonstrations alone lack the information needed to infer precise contact forces.
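A hedged sketch of the execution-time loop described above: a PD controller nudging gripper width to track the policy's predicted contact force. The gains and gripper interface are hypothetical, not FTF's actual implementation:

```python
# Sketch of PD force tracking on a parallel gripper. Gains, units, and the
# width-command interface are invented for illustration.
class ForceTrackingPD:
    def __init__(self, kp: float = 0.002, kd: float = 0.0005):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, target_force: float, measured_force: float,
             current_width: float, dt: float = 0.01) -> float:
        """Return a new gripper width command: close more when the measured
        force is below target, open when it overshoots."""
        error = target_force - measured_force
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        # Positive error (too little force) -> smaller width (close gripper).
        return current_width - (self.kp * error + self.kd * d_error)
```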
-
Robots tighten screws faster than humans for one reason: feedback speed. Robotic hands run 1–5 kHz control loops; humans react in ~200 ms. By the time a human feels resistance, a robot has already corrected 200+ times.
Electric actuators accelerate, brake, and reverse almost instantly. Force and tactile sensors measure torque in real time to within ±1–2%. Vision systems align parts with sub-millimeter precision, even when assemblies aren't perfect. Machine-learning policies trained on thousands of assemblies detect slip, recover from cross-threading, and flag anomalies as they happen. No fatigue. No hesitation. No variation between the first screw and the millionth.
The factory advantage:
• Higher throughput
• Fewer defects
• Less repetitive strain for humans
The future of manufacturing isn't stronger robots. It's faster feedback loops.
#Robotics #AI via @autopoiesis.ai #Manufacturing #Automation #IndustrialAI #FactoryOfTheFuture #ControlSystems #DigitalManufacturing
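The "200+ corrections" figure follows directly from the two rates quoted above; a quick back-of-envelope check:

```python
# Back-of-envelope check: control-loop cycles completed within one human
# reaction time, using the numbers quoted in the post.
control_rate_hz = 1_000        # low end of the quoted 1-5 kHz range
human_reaction_s = 0.2         # ~200 ms
print(int(control_rate_hz * human_reaction_s),
      "control cycles per human reaction")  # -> 200 (1000 at the 5 kHz end)
```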
-
Forget backflips. Watch the glass. This is Figure's latest demo running Helix 02: a fully autonomous humanoid loading a dishwasher. No teleoperation, no speed-ups, just raw feedback loops handling fragile glass. This represents a massive shift in how we define robotic capability: the "hard skills" (lifting heavy boxes) are solved; the "soft touch" was the bottleneck. Here is why this specific motion matters:
✔️ Pixels-to-Torque Control
In the past, robots followed rigid coordinates. If a glass was 1 mm off, it shattered. Helix 02 connects camera pixels directly to motor torque. It doesn't just "see" the glass; it learns the physics of fragility and modulates force in real time.
✔️ Whole-Body Intelligence
Watch the hips. The robot isn't just moving an arm; it stabilizes its entire frame to support the hand's precision. This is "System 0" at work: a unified neural network managing balance and manipulation simultaneously, replacing 100,000+ lines of hard-coded logic.
✔️ The "Messy" Reality
Industrial robots need structured assembly lines. Domestic robots need to handle the chaos of a sink. By mastering the dishwasher, a task with high variability and high consequences for failure, we are moving from "automation" (repeating a task) to "autonomy" (adapting to a task).
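Figure has not published Helix's internals, but "pixels-to-torque" generically means a single network mapping camera frames straight to torque commands, with no hand-written pose pipeline in between. A toy sketch of that pattern, with architecture and sizes invented for illustration:

```python
# Generic "pixels-to-torque" policy sketch: camera pixels in, joint torques
# out. Illustrative only; not Figure's Helix architecture.
import torch
import torch.nn as nn

class PixelsToTorque(nn.Module):
    def __init__(self, n_joints: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(            # pixels -> visual features
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_joints)      # features -> torque command

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(rgb))

policy = PixelsToTorque()
torques = policy(torch.rand(1, 3, 96, 96))       # one frame in, torques out
```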
-
#Safety is crucial in human-robot interaction, especially for #mobile #robots. Without safety, #certification is impossible and real-world applications are unfeasible. To address this, alongside our work on machine learning methods (which, despite their huge potential, are not yet certifiable), we use advanced #passivity and #powerbased control strategies to ensure optimal performance and safety. Recently, together with Theodora Kastritsi, we proposed a control strategy that decouples the desired #dynamics from unintentional motion, ensuring that changes in one direction do not affect the other. In the unintentional space, admittance parameters remain constant, while along the intended motion direction, inertia and damping gains adjust to provide compliance to the human user. We designed these variable terms to ensure a consistent response and perceived behavior, guaranteeing #strict #passivity under human force input for stable manipulation. In this video you can observe how smooth and robust the proposed controller's behavior is across various trajectories, in comparison to advanced baseline controllers. Also, here is a link to our (open access) work: https://lnkd.in/dGfi7mJX
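For readers unfamiliar with variable admittance control, here is an illustrative sketch of the decoupling idea only, not the authors' exact control law; all gains and the intent-direction input are invented:

```python
# Illustrative admittance step (M*xdd + D*xd = F) integrated separately
# along the intended motion direction (variable, compliant gains) and the
# orthogonal, unintentional directions (constant, stiffer gains).
import numpy as np

def admittance_step(f_human, xd, intent_dir, dt=0.001,
                    m_var=2.0, d_var=5.0, m_const=10.0, d_const=40.0):
    """One Euler step; returns the updated Cartesian velocity xd."""
    u = intent_dir / np.linalg.norm(intent_dir)
    f_along = np.dot(f_human, u) * u        # force along intended motion
    f_orth = f_human - f_along              # unintentional component
    xd_along = np.dot(xd, u) * u
    xd_orth = xd - xd_along
    # Light, low-damped response along the intended direction...
    xdd_along = (f_along - d_var * xd_along) / m_var
    # ...constant admittance parameters in the unintentional directions.
    xdd_orth = (f_orth - d_const * xd_orth) / m_const
    return xd + (xdd_along + xdd_orth) * dt
```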
-
🦿 Can legged robots learn to control force and position… without force sensors? [📍 bookmark paper for later]
This new work introduces a unified policy that enables legged robots to handle loco-manipulation tasks by learning both force and position control without using force sensors. It estimates contact forces from motion history and adapts in real time.
Why this matters
✅ Jointly learns force and position control in one policy
✅ Works without force sensors by estimating forces from past states
✅ Handles complex tasks like force tracking and compliant behaviors
✅ Boosts imitation learning success by ~39.5% in contact-rich tasks
Learn more
📄 Paper: https://lnkd.in/d2VnU4uE
📂 Project: https://lnkd.in/die5gyRA
This brings us one step closer to agile, adaptable legged robots that can walk, push, and manipulate, all through a single, force-sensor-free policy.
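A minimal sketch of what "estimating forces from past states" can look like in practice: a small network over a window of proprioceptive history. Dimensions and architecture are illustrative, not the paper's:

```python
# Sensor-free force estimation sketch: map a short history of signals the
# robot already has (positions, velocities, commanded torques) to an
# external-force estimate. Sizes are hypothetical.
import torch
import torch.nn as nn

HIST, N_JOINTS = 10, 12   # history length and joint count (made up)

force_estimator = nn.Sequential(
    nn.Flatten(),                              # (B, HIST, 3*N_JOINTS) -> flat
    nn.Linear(HIST * 3 * N_JOINTS, 128), nn.ELU(),
    nn.Linear(128, 3),                         # estimated contact force (x, y, z)
)

# Each history step holds [q, q_dot, tau_cmd] -- no force sensor involved.
history = torch.rand(1, HIST, 3 * N_JOINTS)
f_hat = force_estimator(history)
```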
-
I often get asked: how can robots sense and control the pressure needed to grab different objects? The answer lies at the intersection of vision models, VLA systems, and tactile sensing. SpikeATac combines two complementary types of sensing:
✨ Dynamic sensing, using a PVDF film that detects rapid pressure changes (the instant of contact).
✨ Static sensing, using capacitive sensors to measure sustained forces (the firmness of a grip).
The result? A fingertip that can distinguish between brushing against glass and holding it firmly, just like a human finger. In tests, this system could grasp fragile materials (like seaweed sheets) at high speed without damage, something traditional pressure sensors fail to do.
Even more impressive: the robot hand was trained using reinforcement learning with human feedback. A base policy learned from demonstrations, then refined its tactile sensitivity through human-labeled feedback ("good" vs "bad" grasps), learning over time to make softer, more adaptive grips. This kind of breakthrough is what bridges the gap between perception and action, a step closer to robots that don't just see, but also feel.
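A toy sketch of how the two channels could be fused in a grasp controller, with invented thresholds and gains; SpikeATac's actual policy is learned, not hand-coded like this:

```python
# Toy fusion of the two signal types: a fast transient (PVDF) channel flags
# contact or slip, a slow static (capacitive) channel regulates grip firmness.
def grip_update(pvdf_transient: float, static_force: float,
                grip_force: float, target_force: float = 0.5,
                contact_thresh: float = 0.1) -> float:
    """Return an updated grip-force command (all units normalized)."""
    if abs(pvdf_transient) > contact_thresh:
        # Sudden dynamic spike: likely slip or first contact -- react fast.
        return grip_force + 0.2
    # Otherwise, slowly servo the sustained force toward the target.
    return grip_force + 0.05 * (target_force - static_force)
```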
-
Robotics has a vision problem. We've spent years giving robots better cameras, but eyes alone aren't enough. Vision can guide a robot to a part. But can it tell when a connector seats? When two parts bind? When it's holding one item, or two? Is the package soft or hard? That's where force sensing comes in.
ATI Industrial Automation's next-generation 6-axis robotic force sensor brings a new level of touch awareness, built for industrial applications. It's faster (think Ethernet & EtherCAT fast). It's 5x more sensitive. It includes an IMU for weigh-in-motion and dynamic force tracking. And it integrates seamlessly with robots from Fanuc, Yaskawa, KUKA, ABB, UR, and more, right inside their control environments. This unlocks powerful applications:
* Bin picking with weighing and jam detection
* Part grasping: soft or hard material?
* Precision assembly with connection confirmation
* Automated product testing
* Weight checks on the fly
If you're still relying on vision alone, it might be time to give your robot a sense of touch. #robotics
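As one concrete example from the list above, here is a hedged sketch of jam detection during insertion from a 6-axis wrench reading; the thresholds, axis convention, and interface are placeholders, not ATI's API:

```python
# Jam detection sketch: a jam during insertion shows up as high axial
# resistance combined with off-axis torque (binding). Thresholds are made up.
import numpy as np

def insertion_is_jammed(wrench: np.ndarray,
                        f_axial_max: float = 20.0,      # N, hypothetical
                        t_lateral_max: float = 1.5) -> bool:  # Nm, hypothetical
    """wrench = [Fx, Fy, Fz, Tx, Ty, Tz], insertion along the Z axis."""
    axial_force = abs(wrench[2])
    lateral_torque = np.hypot(wrench[3], wrench[4])
    return axial_force > f_axial_max and lateral_torque > t_lateral_max

# Example: 30 N of resistance plus ~2 Nm of binding torque -> jammed.
print(insertion_is_jammed(np.array([0.0, 0.0, -30.0, 1.5, 1.4, 0.0])))
```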
-
HARRI: High-speed Adaptive Robot for Robust Interactions
This video showcases some of the early testing footage of HARRI (High-speed Adaptive Robot for Robust Interactions), a next-generation proprioceptive robotic manipulator developed at the Robotics & Mechanisms Laboratory (RoMeLa) at UCLA. Designed for dynamic and force-critical tasks, HARRI leverages quasi-direct-drive proprioceptive actuators combined with advanced control strategies such as impedance control and real-time model predictive control (MPC) to achieve high-speed, precise, and safe manipulation in human-centric and unstructured environments.
Built with a lightweight, low-inertia structure and powered by highly back-drivable actuators, HARRI enables rapid, compliant interactions with its surroundings. By embedding proprioceptive sensing directly into the actuators, HARRI provides real-time feedback on position, velocity, and torque without relying on external sensors, greatly enhancing its adaptability and robustness in dynamic tasks.
Demonstrations in this video include:
• Catching a flying ball with high precision and compliant force control.
• Catching a moving box, showcasing fast and adaptive manipulation of heavier and more irregular objects.
• Safe direct physical interaction with a human, demonstrating compliant and controlled responses to intentional contact.
• And plenty of blooper videos for fun!
HARRI highlights the transition from traditional rigid, position-controlled robotic systems to agile, intelligent, and safe manipulators capable of working alongside humans. This research paves the way for future robotic systems that combine proprioception, real-time optimization, and adaptive control to handle increasingly complex and dynamic real-world challenges. https://lnkd.in/dR-Kpznb
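For context, a minimal Cartesian impedance law of the kind compliant manipulators like HARRI build on; gains and dimensions are illustrative only, not HARRI's actual controller:

```python
# Cartesian impedance control sketch: tau = J^T (K (x_d - x) + D (xd_d - xd)).
# The arm behaves like a virtual spring-damper at the end-effector.
import numpy as np

def impedance_torques(J, x, xd, x_des, xd_des,
                      K=np.diag([300.0, 300.0, 300.0]),   # stiffness, N/m
                      D=np.diag([30.0, 30.0, 30.0])):     # damping, Ns/m
    """J: 3 x n positional Jacobian; returns n joint torques."""
    f_cart = K @ (x_des - x) + D @ (xd_des - xd)  # virtual spring-damper force
    return J.T @ f_cart                           # map to joint space

# Lower K and D give softer contact (a caught ball decelerates gently);
# higher values approach stiff, position-like tracking.
```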
-
Dexterity is coming closer! 🪬 What a start to the new year. During CES, Sharpa announced a tactile VLA for "last millimeter" manipulation called CraftNet.
Sharpa says the real bottleneck in robotics is hands. Robots can dance, box, and run. But the moment they touch an object, they become clumsy, because most policies are trained on trajectories with no force or tactile feedback. Their answer is CraftNet, a hierarchical VTLA (Vision-Tactile-Language-Action) model built specifically for fine manipulation. The architecture splits control into 3 layers:
→ System 2 (Reasoning Brain): vision + language planning (~1 Hz)
→ System 1 (Motion Brain): approach + pre-contact motion (~10 Hz)
→ System 0 (Interaction Brain): tactile + force-based micro-control during contact (~100 Hz)
That last part is the key. They're explicitly targeting what they call the "Last Millimeter Challenge": the tiny contact adjustments humans make automatically, such as tightening grip, sliding fingers, re-grasping, and correcting force. CraftNet tries to turn widely available video, sim, or teleop data into manipulation data with tactile signals, so training doesn't get stuck in the "data drought." One of the most awesome robotics dexterity showcases 🤯
♻️ Join the weekly robotics newsletter and never miss any news → ziegler.substack.com
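A toy illustration of the three-rate structure described above; the layer functions are stubs and only the timing pattern is the point:

```python
# Multi-rate hierarchy sketch: a slow planner (~1 Hz), a mid-rate motion
# layer (~10 Hz), and fast contact micro-control (~100 Hz), each refining
# the layer above. Purely illustrative, not CraftNet's implementation.
def run_hierarchy(steps: int = 300, base_hz: int = 100):
    plan, motion = None, None
    for t in range(steps):                     # t ticks at the 100 Hz base rate
        if t % base_hz == 0:
            plan = f"plan@{t}"                 # System 2: vision+language, ~1 Hz
        if t % (base_hz // 10) == 0:
            motion = f"motion({plan})@{t}"     # System 1: pre-contact, ~10 Hz
        micro = f"micro({motion})@{t}"         # System 0: tactile, every tick
    return micro

print(run_hierarchy())
```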