Happy to share our latest paper, "Enabling Novel Mission Operations and Interactions with ROSA: The Robot Operating System Agent". This work was led by Rob R. in collaboration with Marcel Kaufmann, Jonathan Becktor, Sangwoo Moon, Kalind Carpenter, Kai Pak, Amanda Towler, Rohan Thakker and myself. Please find the #OpenSource code, paper, and video demonstration linked below.

Operating autonomous robots in the field is often challenging, especially at scale and without the support of Subject Matter Experts (SMEs). Traditionally, robotic operations require a team of specialists to monitor diagnostics and troubleshoot specific modules. This dependency becomes a bottleneck when an SME is unavailable, making it difficult for operators not only to understand the system's functional state but also to leverage its full capability set. The challenge grows when scaling to 1-to-N operator-to-robot interactions, particularly with a heterogeneous robot fleet (e.g., walking, roving, flying robots).

To address this, we present the ROSA framework, which can leverage state-of-the-art Vision Language Models (VLMs), both on-device and online, to present the autonomy framework's capabilities to operators in an intuitive and accessible way. By enabling a natural language interface, ROSA helps bridge the gap for operators who are not roboticists, such as geologists or first responders, to interact effectively with robots in real-world missions.

In our video, we demonstrate ROSA using the NeBula Autonomy framework developed at NASA Jet Propulsion Laboratory to operate in JPL's #MarsYard. Our paper also showcases ROSA's integration with JPL's EELS (Exobiology Extant Life Surveyor) robot and the NVIDIA Carter robot in the IsaacSim environment (stay tuned for ROSA IsaacSim extension updates!). These examples highlight ROSA's ability to facilitate interactions across diverse robotic platforms and autonomy frameworks.

Paper: https://lnkd.in/g4PRjF4V
GitHub: https://lnkd.in/gwWXmmjR
Video: https://lnkd.in/gxKcum27

#Robotics #Autonomy #AI #ROS #FieldRobotics #RobotOperations #NaturalLanguageProcessing #LLM #VLM
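For readers curious how a natural-language operator interface typically hooks into ROS, here is a minimal, hypothetical sketch of the underlying tool-calling pattern: real ROS 2 CLI commands are wrapped as functions that a language-model agent can select and run. This illustrates the general pattern only; it is not ROSA's actual API (the function names and registry are invented), so see the GitHub link above for the real framework.

```python
# Illustrative sketch (not the ROSA API): expose ROS 2 introspection commands
# as "tools" a language-model agent can call, so an operator can ask questions
# in natural language instead of memorizing CLI syntax.
import subprocess

def list_topics() -> str:
    """Return the names of all active ROS 2 topics."""
    return subprocess.run(["ros2", "topic", "list"],
                          capture_output=True, text=True).stdout

def list_nodes() -> str:
    """Return the names of all running ROS 2 nodes."""
    return subprocess.run(["ros2", "node", "list"],
                          capture_output=True, text=True).stdout

# Hypothetical registry the agent selects from when answering an operator
# query such as "which sensors are publishing right now?"
TOOLS = {"list_topics": list_topics, "list_nodes": list_nodes}

def run_tool(name: str) -> str:
    # A real framework would let the model choose tools and compose an answer;
    # here we only dispatch by name to keep the sketch self-contained.
    return TOOLS[name]()

if __name__ == "__main__":
    print(run_tool("list_topics"))
```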
Robotics Engineering for Advanced Human-Robot Interaction
Explore top LinkedIn content from expert professionals.
Summary
Robotics engineering for advanced human-robot interaction focuses on designing robots that can communicate and collaborate naturally and safely with people, often using intuitive interfaces and human-inspired behaviors. This field combines engineering, artificial intelligence, and human-centered design to make robots more adaptable and responsive in everyday environments.
- Design intuitive interfaces: Create robot controls that use natural language or gestures so people without technical backgrounds can easily operate and communicate with robots.
- Model human behavior: Integrate learning systems that allow robots to recognize and adapt to the actions, moods, and safety needs of their human partners in real time.
- Embed safety measures: Incorporate planning and decision-making processes that help robots anticipate risks and prioritize safe interactions when working alongside people.
Researchers at Osaka University have developed a new technology enabling androids to express mood states, such as "excited" or "sleepy," through dynamic facial movements modeled as overlapping, decaying waves. Traditional androids have relied on pre-programmed scenarios to convey emotions, often resulting in unnatural, disconnected expressions that can make human interactions uncomfortable. The new method, led by Hisashi Ishihara, uses "waveform-based" synthesis to create real-time facial expressions. Individual waveforms represent gestures like blinking, yawning, and breathing, which are propagated across the android’s face and overlaid to produce complex, fluid movements. This eliminates the need for pre-configured action scenarios and minimizes abrupt movement transitions. Additionally, waveform modulation allows the robot’s internal state to influence facial expressions instantly, reflecting changes in mood and enhancing emotional communication. Senior researcher Koichi Osuka emphasizes that this innovation can help robots interact with humans in a more expressive and natural manner. Ishihara envisions future androids with deeply integrated emotional cues, making them feel more lifelike and capable of meaningful connections. This advancement has the potential to greatly improve communication robots in various settings. Read more: https://lnkd.in/eRwTGGne
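As a rough intuition for the waveform-overlay idea, the toy sketch below sums damped sinusoids (one per gesture primitive such as breathing or blinking) and scales the result with a mood-dependent gain. The parameters and the actuator mapping are invented for illustration and are not taken from the paper.

```python
# Toy sketch of the waveform-overlay idea (parameters are made up, not from
# the paper): each gesture is a decaying sinusoid, and the actuator command
# is the sum of all active waveforms plus a mood-dependent gain.
import numpy as np

def decaying_wave(t, amplitude, frequency, decay):
    """One gesture primitive, e.g. a blink or a breath, as a damped oscillation."""
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * frequency * t)

def actuator_command(t, gestures, mood_gain=1.0):
    """Overlay all active gesture waveforms; mood scales the overall amplitude."""
    return mood_gain * sum(decaying_wave(t, *g) for g in gestures)

t = np.linspace(0.0, 2.0, 200)             # two seconds of motion
gestures = [(1.0, 0.5, 0.8),               # slow "breathing"-like component
            (0.4, 4.0, 3.0)]               # quick "blink"-like component
command = actuator_command(t, gestures, mood_gain=1.2)   # e.g. an "excited" mood
```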
-
How can we enable robots to fluently collaborate with humans on physically demanding tasks? In our #HRI2025 paper, we focus on the task of human-robot collaborative transport, where a human and a robot work together to move an object to a goal pose. In the absence of explicit or a priori coordination, critical decisions such as navigating obstacles or determining object orientations become especially challenging. Our key insight is that a human and a robot can coordinate fluently by leveraging the transported object as a communicative medium. By encoding subtle, communicative signals into actions that affect the state of the transported object, the robot could effectively convey its intended strategy and role. To this end, we designed an inference mechanism that probabilistically maps observations of joint actions executed by the human and the robot to a set of joint strategies of workspace traversal, drawing from topological invariance. Integrated into a model predictive controller (IC-MPC), this mechanism enables a robot to estimate the uncertainty of its human partner over a traversal strategy, and take proactive corrective actions balancing uncertainty minimization and task efficiency. We deployed IC-MPC on a mobile manipulator (Hello Robot Stretch) and evaluated it in a within-subjects lab study (N = 24). IC-MPC enables greater team performance and empowers the robot to be perceived as a significantly more fluent and competent partner compared to baselines lacking a communicative mechanism. My fantastic PhD student, Elvin Yang, will present this work in the 1A: Human-Robot Collaboration Session on Tuesday, and in the X-HRI workshop today! paper: https://lnkd.in/gidfgq4W code: https://lnkd.in/gR8gAEud video: https://lnkd.in/gzkktKzf #robotics #humanrobotinteraction #artificialintelligence University of Michigan Robotics Department
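The inference step can be pictured as a simple Bayesian filter over a discrete set of joint traversal strategies. The sketch below is a minimal, made-up version: the strategy set, observation, and likelihood model are hypothetical and do not reproduce the IC-MPC implementation.

```python
# Hedged sketch of the inference idea (not the authors' implementation):
# maintain a belief over a small set of joint traversal strategies and update
# it from how the transported object's state evolves.
import numpy as np

STRATEGIES = ["pass_left", "pass_right"]       # example joint strategies

def likelihood(lateral_vel, strategy):
    """P(observation | strategy): a crude score of the object's lateral
    velocity (+ left, - right) against the side implied by the strategy."""
    sign = 1.0 if strategy == "pass_left" else -1.0
    return np.exp(sign * lateral_vel)

def update_belief(belief, lateral_vel):
    """One Bayesian update step over the strategy set."""
    posterior = np.array([belief[i] * likelihood(lateral_vel, s)
                          for i, s in enumerate(STRATEGIES)])
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])                  # start maximally uncertain
for obs in [0.1, 0.3, 0.4]:                    # object drifting to the left
    belief = update_belief(belief, obs)
# High entropy of `belief` would trigger a communicative, corrective action in
# the controller; low entropy lets it optimize purely for task efficiency.
```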
-
Special thanks to Prof. Ahmed H. Qureshi for the personalized tour of his CORAL Lab at Purdue University. CORAL’s research spans machine learning, robot planning and control, and my personal favorite area, safe human–robot collaboration. Together, these threads tackle one of the hardest problems in robotics today: how autonomous systems can learn, plan, and act effectively while operating alongside people in real, unstructured environments. What stood out to me is how the lab integrates learning and decision-making with explicit attention to safety, interaction, and shared spaces. CORAL explores how robots can reason about uncertainty, model human behavior, and adapt their plans in ways that support collaboration rather than conflict. This includes work on risk-aware planning, learning-based control, and interaction-aware decision-making that directly addresses how robots should behave around and with humans. Several of my favorite papers from the lab dive deeply into these themes, including work on safe and interactive planning, human-aware risk representations, and learning frameworks that support trustworthy collaboration between humans and robots. I’ve shared links to a few of these papers below for anyone who wants to explore further. As robots increasingly leave controlled environments and enter factories, hospitals, warehouses, and public spaces, this kind of research becomes foundational. Autonomy that ignores the human context will struggle to scale. Autonomy that understands and respects it has the potential to truly transform how we work and live. Many thanks again to Ahmed and the CORAL team for the warm welcome and the great conversations (remember: replace the banana with a beer bottle for social impact! 😉 ). It was energizing to see research that so clearly connects theory, algorithms, and real-world human impact. Purdue Computer Science ---- For those interested in going deeper, here are a few of my favorite papers from the CORAL Lab that really capture the breadth and impact of their work: 🔹 Safe and interactive planning for human–robot collaboration https://lnkd.in/eMMqED3r https://lnkd.in/eQJtSinY 🔹 Risk-aware representations and decision-making around humans https://lnkd.in/ep4dRPHM 🔹 Learning and control frameworks that enable safe, trustworthy interaction https://lnkd.in/eVjcQBpa https://lnkd.in/eSH-_CrV These papers do a great job of connecting learning, planning, and control with the realities of shared human–robot environments. Highly recommend a read if you’re working in robotics, autonomy, or human–robot interaction.
-
Presenting FEELTHEFORCE (FTF): a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Using a tactile glove to measure contact forces and a vision-based model to estimate hand pose, they train a closed-loop policy that continuously predicts the forces needed for manipulation. This policy is re-targeted to a Franka Panda robot with tactile gripper sensors using shared visual and action representations. At execution, a PD controller modulates gripper closure to track predicted forces, enabling precise, force-aware control. This approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.

#research: https://lnkd.in/dXxX7Enw
#github: https://lnkd.in/dQVuYTDJ
#authors: Ademi Adeniji, Zhuoran (Jolia) Chen, Vincent Liu, Venkatesh Pattabiraman, Raunaq Bhirangi, Pieter Abbeel, Lerrel Pinto, Siddhant Haldar
New York University, University of California, Berkeley, NYU Shanghai

Controlling fine-grained forces during manipulation remains a core challenge in robotics. While robot policies learned from robot-collected data or simulation show promise, they struggle to generalize across the diverse range of real-world interactions. Learning directly from humans offers a scalable solution, enabling demonstrators to perform skills in their natural embodiment and in everyday environments. However, visual demonstrations alone lack the information needed to infer precise contact forces.
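To make the force-tracking step concrete, here is a minimal PD sketch that adjusts gripper width so the measured contact force follows the policy's predicted force. Gains, units, and the interface are illustrative assumptions, not the authors' controller.

```python
# Minimal sketch of PD-based force tracking on a parallel gripper (gains and
# interfaces are illustrative, not the FEELTHEFORCE controller): close or open
# the gripper so measured contact force follows the policy's predicted force.
class ForcePD:
    def __init__(self, kp=0.002, kd=0.0005):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, target_force, measured_force, gripper_width, dt=0.01):
        """Return the next gripper width command in meters."""
        error = target_force - measured_force        # force error in newtons
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        # More force needed -> close further (smaller width), hence the minus.
        width = gripper_width - (self.kp * error + self.kd * d_error)
        return max(0.0, min(0.08, width))            # clamp to Panda's 0-8 cm range

pd = ForcePD()
width_cmd = pd.step(target_force=5.0, measured_force=3.2, gripper_width=0.04)
```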
-
The dark matter of robotics is “physical commonsense.” It’s everywhere, yet hard to pin down. From gently nudging an object to make space for fingers to grasp, to placing down a slipping object to get a better grip—these tiny corrections, recoveries, and “obvious” actions are subtle and automatic. We rarely notice them, but together they account for much of our extraordinary human ability to manipulate the physical world. This is the intelligence behind dexterity. And it can be learned on robots with data—but only if it’s the right data. Much of robot data today comes from remote control teleoperation, which often breaks the human sensorimotor loop: latency, limited tactile feedback, and unnatural interfaces push operators away from fast, reactive control (System 1 thinking) and towards slow, deliberate planning (System 2 thinking) e.g. “put one finger here… then another finger there…” The resulting trajectories are stiff, stilted, and slow. The exception is data collection so seamless that it preserves natural human behavior — as though the mind of the operator can act directly through instincts refined over millions of years. At Generalist, our foundation models like GEN-0 are trained on data from lightweight handheld, ergonomic devices that let people manipulate objects almost as they would with their own hands. These devices feel balanced, and the force feedback is there — after a few minutes of doing a task, operators stop “thinking” and start reacting. The results look different. People knit, peel potatoes, paint miniatures. Not only does it expand what tasks are possible to get robot data on—the data itself captures reflexes, micro-corrections, and real-time recovery. Our models trained on this data produce robot behaviors that people consistently describe as “human-like.” This is no accident. And it is scaling. Robots that ship with physical commonsense will be better at just about everything. I wrote about why it’s been so hard for machines to acquire physical commonsense, and why large-scale, real-world physical interaction data may finally change that. Read the full article here 👉 https://lnkd.in/gCjHP-qQ
-
'A roadmap for AI in robotics' - our latest article (https://rdcu.be/euQNq), published in Nature Machine Intelligence, offers an assessment of what artificial intelligence (AI) has achieved for robotics since the 1990s and proposes a research roadmap with challenges and promises. Led by Aude G. Billard, current president of the IEEE Robotics and Automation Society, this perspective article discusses the growing excitement around leveraging AI to tackle some of the outstanding barriers to the full deployment of robots in daily life.

It is argued that action and sensing in the physical world pose greater and different challenges for AI than analysing data in isolation, and therefore it is important to reflect on which AI approaches are most likely to be successfully applied to robots. Questions to address include how AI models can be adapted to specific robot designs, tasks and environments. It is argued that for robots to collaborate effectively with humans, they must predict human behaviour without relying on bias-based profiling. Explainability and transparency in AI-driven robot control are essential for building trust, preventing misuse and attributing responsibility in accidents. Finally, the article closes by describing the primary long-term challenges: designing robots capable of lifelong learning, guaranteeing safe deployment and usage, and ensuring sustainable development.

Happy to be a co-author of this great piece led by Aude G. Billard, with contributions from Alin Albu-Schaeffer, Michael Beetz, Wolfram Burgard, Peter Corke, Matei Ciocarlie, Danica Kragic, Ken Goldberg, Yukie NAGAI, and Davide Scaramuzza. Nature Portfolio IEEE

#robotics #robots #ai #artificial #intelligence #sensors #sensation #ann #roadmap #generativeai #learning #perception #edgecomputing #nearsensor #sustainability
-
Just collecting manipulation data isn't enough for robots - they need to be able to move around in the world, which has a whole different set of challenges from pure manipulation. And bringing navigation and manipulation together in a single framework is even more challenging. Enter HERMES, from Zhecheng Yuan and Tianming Wei. This is a four-stage process in which human videos are used to set up an RL sim-to-real training pipeline in order to overcome differences between robot and human kinematics, and used together with a navigation foundation model to move around in a variety of environments. To learn more, join us as Zhecheng Yuan and Tianming Wei tell us about how they built their system to perform mobile dexterous manipulation from human videos in a variety of environments. Watch Episode #45 of RoboPapers today, hosted by Michael Cho and Chris Paxton!

Abstract: Leveraging human motion data to impart robots with versatile manipulation skills has emerged as a promising paradigm in robotic manipulation. Nevertheless, translating multi-source human hand motions into feasible robot behaviors remains challenging, particularly for robots equipped with multi-fingered dexterous hands characterized by complex, high-dimensional action spaces. Moreover, existing approaches often struggle to produce policies capable of adapting to diverse environmental conditions. In this paper, we introduce HERMES, a human-to-robot learning framework for mobile bimanual dexterous manipulation. First, HERMES formulates a unified reinforcement learning approach capable of seamlessly transforming heterogeneous human hand motions from multiple sources into physically plausible robotic behaviors. Subsequently, to mitigate the sim2real gap, we devise an end-to-end, depth image-based sim2real transfer method for improved generalization to real-world scenarios. Furthermore, to enable autonomous operation in varied and unstructured environments, we augment the navigation foundation model with a closed-loop Perspective-n-Point (PnP) localization mechanism, ensuring precise alignment of visual goals and effectively bridging autonomous navigation and dexterous manipulation. Extensive experimental results demonstrate that HERMES consistently exhibits generalizable behaviors across diverse, in-the-wild scenarios, successfully performing numerous complex mobile bimanual dexterous manipulation tasks. (A toy sketch of the PnP alignment step follows the episode link below.)

Project Page: https://lnkd.in/e-aEbQzn
arXiv: https://lnkd.in/eemU6Pwa
Watch/listen: YouTube: https://lnkd.in/erzbkYjz Substack: https://lnkd.in/e3ea76Q8
Ep#45: HERMES: Human-to-Robot Embodied Learning From Multi-Source Motion Data for Mobile Dexterous Manipulation
robopapers.substack.com
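The closed-loop PnP alignment mentioned in the abstract can be illustrated with a small OpenCV example: given known 3D keypoints on a visual goal and their detected 2D image locations, solvePnP recovers the goal's pose in the camera frame, which a navigation stack could use as a final-approach correction. The keypoints, intrinsics, and pixel values below are invented for illustration and are not HERMES code.

```python
# Illustrative sketch of PnP-based goal alignment (values are hypothetical):
# estimate the camera-relative pose of a visual goal from known 3D keypoints
# and their detected 2D image locations, then feed the residual back to the
# navigation stack as a correction on the final approach.
import numpy as np
import cv2

# Known 3D keypoints of the goal object in its own frame (meters).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)
# Where a detector found those keypoints in the current image (pixels).
image_points = np.array([[320.0, 240.0],
                         [400.0, 242.0],
                         [398.0, 318.0],
                         [318.0, 316.0]], dtype=np.float64)
# Pinhole camera intrinsics, assuming no lens distortion.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    # tvec is the goal's position in the camera frame; feeding it back to the
    # navigation stack closes the loop on the final approach.
    print("goal offset (m):", tvec.ravel())
```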
-
This article is a writeup of collaborative learning and doing with the brilliant Robotics Engineer Dr. Karthika Balan. For the past year or so, I have been doing this to uplevel myself in the new and niche field of Physical AI + Robotics. Why? Because I like to stretch myself and believe in continuous upleveling. Besides, it is an awesome experience working + learning with Dr. Balan.

The fundamental question I focused on was: How do we design Humanoid Robots that evolve alongside exponential AI breakthroughs rather than becoming obsolete with each advancement? The traditional approach to Robotics—design, build, deploy, replace—creates an inherent disconnect between AI's rapid evolution and hardware's static nature. Organizations are forced into an impossible choice: wait for the "perfect" AI before building, or build now and accept rapid obsolescence. 90% of Today's Humanoid Robots Will Be Obsolete by 2027. The AI revolution is leaving robotics behind. While AI capabilities double every few months, the approach to Humanoid Robots remains stuck in the past. We design, build, deploy, and replace—creating billion-dollar investments that become outdated before they leave the lab.

What if Humanoid Robots could evolve as quickly as the AI that powers them? Dr. Balan and I brainstormed and developed EVOLVE—a framework that transforms robots from static products into living platforms that continuously absorb AI breakthroughs. The results? Organizations implementing this Progressive Systems Design approach could see 80% ROI over five years versus just 30% with traditional methods. We've proven it works. Our fledgling work on Project COMPANION - Humanoid Robot Design Companions to tackle the loneliness epidemic in elders - promises to achieve what was previously impossible: robots that form genuine emotional connections, reducing loneliness by 43% and improving wellbeing by 37% among seniors.

The future belongs not to Humanoid Robots designed as machines, but to those designed as continuously evolving products.