User Interaction with Robotic Technology


Summary

User interaction with robotic technology refers to how people communicate, control, and collaborate with robots—both physically and through software—shaping the way robots operate in daily life and specialized environments. This area combines intuitive interfaces, social behaviors, and assistive features to make robots accessible and beneficial for everyone, including those with different abilities.

  • Prioritize accessibility: Design robot interfaces and control systems so that people of varying backgrounds and physical abilities can easily operate and interact with robots.
  • Encourage collaboration: Integrate features that support teamwork between humans and robots, such as natural language communication, adaptable behaviors, and transparent decision-making.
  • Build trust and empathy: Use friendly designs and clear social cues to help people feel comfortable and safe around robots, fostering positive emotional connections and acceptance.
Summarized by AI based on LinkedIn member posts
  • Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,723 followers

    I love this. This is my real focus: not just Humans + AI as AI augmenting human capabilities in tasks, but designing interactions to give humans greater skills and capabilities that endure and grow over time. The outcome is smarter people.

    "We introduce a novel conceptual framework for human-AI interaction: extraheric AI. We define “extraherics” as a mechanism that fosters users’ higher-order thinking skills during the course of task completion. Extraheric is based on the Latin word “extraho” (to draw forth or pull out), and we use this term to suggest that AI can draw forth people’s higher-order thinking skills and thus promote their cognitive potential. Rather than replacing or augmenting human cognitive abilities, extraheric AI encourages users to engage in higher-order thinking during task completion."

    The interaction strategies to evoke higher-order thinking skills suggested in the paper include:

    💡 Suggesting & Recommending: The AI proposes ideas, viewpoints, or solutions, prompting users to evaluate and choose from multiple suggestions.
    📝 Explaining: The AI provides detailed explanations, focusing on the 'why' and 'how' to help users deepen their understanding of the task, rather than providing direct solutions.
    🎯 Nudging: The AI subtly influences user behavior by presenting additional information or perspectives indirectly, encouraging exploration without overtly recommending a specific path.
    🗣️ Debating & Discussing: Users engage in debates or discussions with AI agents, which present different opinions and arguments, encouraging users to explore diverse perspectives and think critically.
    ❓ Questioning: The AI asks open-ended, thought-provoking questions to stimulate cognitive engagement, expanding users’ thinking by challenging their assumptions or viewpoints.
    🛠️ Scaffolding: The AI offers temporary support or guidance through complex tasks, allowing users to focus on specific aspects while gradually removing the assistance as users become more competent.
    🎮 Simulating: The AI simulates different scenarios or perspectives, helping users practice skills or experience situations from a different point of view, such as role-playing or rehearsing responses.
    👀 Demonstrating: The AI acts as a model, showcasing behavior or task completion, allowing users to observe and learn implicitly through vicarious learning by watching the AI perform.

    I look forward to many others building on this work and integrating these concepts into enterprise software.
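
    To make the taxonomy concrete, here is a minimal sketch of how an "extraheric" layer might sit in front of a chat model: a dispatcher that rewrites the user's task into a strategy-specific prompt. The strategy names follow the paper; the prompt templates and the code itself are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: routing a task through an "extraheric" strategy.
    # Strategy names follow the paper's taxonomy; prompt templates are
    # illustrative assumptions, not the authors' code.
    EXTRAHERIC_TEMPLATES = {
        "questioning": ("Do not solve this task: {task}\n"
                        "Instead, ask three open-ended questions that challenge my assumptions."),
        "debating": "Take a position opposed to mine on this task and argue it: {task}",
        "scaffolding": "Break this task into ordered steps and give a hint for step 1 only: {task}",
        "nudging": ("Offer one piece of extra context or an alternative perspective on this task, "
                    "without recommending a specific path: {task}"),
    }

    def extraheric_prompt(strategy: str, task: str) -> str:
        """Rewrite a raw task into a prompt that draws out the user's own thinking."""
        template = EXTRAHERIC_TEMPLATES.get(strategy)
        if template is None:
            raise ValueError(f"Unknown strategy: {strategy!r}")
        return template.format(task=task)

    # The result replaces the raw task in the call to any chat model:
    print(extraheric_prompt("questioning", "Draft our Q3 marketing plan."))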

  • Shehryar Khattak

    Director of Technology @ FieldAI | Ex-NASA JPL | Ex-ETH Zurich

    6,207 followers

    Happy to share our latest paper, "Enabling Novel Mission Operations and Interactions with ROSA: The Robot Operating System Agent". This work was led by Rob R. in collaboration with Marcel Kaufmann, Jonathan Becktor, Sangwoo Moon, Kalind Carpenter, Kai Pak, Amanda Towler, Rohan Thakker and myself. Please find the #OpenSource code, paper, and video demonstration linked below.

    Operating autonomous robots in the field is often challenging, especially at scale and without the proper support of Subject Matter Experts (SMEs). Traditionally, robotic operations require a team of specialists to monitor diagnostics and troubleshoot specific modules. This dependency can become a bottleneck when an SME is unavailable, making it difficult for operators not only to understand the system's functional state but also to leverage its full capability set. The challenge grows when scaling to 1-to-N operator-to-robot interactions, particularly with a heterogeneous robot fleet (e.g., walking, roving, flying robots).

    To address this, we present the ROSA framework, which can leverage state-of-the-art Vision Language Models (VLMs), both on-device and online, to present the autonomy framework's capabilities to operators in an intuitive and accessible way. By enabling a natural language interface, ROSA helps bridge the gap for operators who are not roboticists, such as geologists or first responders, to effectively interact with robots in real-world missions.

    In our video, we demonstrate ROSA using the NeBula Autonomy framework developed at NASA Jet Propulsion Laboratory to operate in JPL's #MarsYard. Our paper also showcases ROSA's integration with JPL's EELS (Exobiology Extant Life Surveyor) robot and the NVIDIA Carter robot in the IsaacSim environment (stay tuned for ROSA IsaacSim extension updates!). These examples highlight ROSA's ability to facilitate interactions across diverse robotic platforms and autonomy frameworks.

    Paper: https://lnkd.in/g4PRjF4V
    Github: https://lnkd.in/gwWXmmjR
    Video: https://lnkd.in/gxKcum27

    #Robotics #Autonomy #AI #ROS #FieldRobotics #RobotOperations #NaturalLanguageProcessing #LLM #VLM
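
    As a rough illustration of what a natural-language layer over robot middleware involves, here is a toy sketch that routes plain-English questions to ROS 1 introspection commands. It assumes a sourced ROS 1 environment, and the keyword routing stands in for the LLM/VLM reasoning the actual ROSA framework performs; none of this is ROSA's own API.

    # Toy sketch of a natural-language layer over ROS introspection.
    # Assumes a sourced ROS 1 environment; the keyword routing is a
    # placeholder for the LLM/VLM reasoning ROSA actually uses.
    import subprocess

    TOOLS = {
        "topic": ["rostopic", "list"],  # what data streams exist?
        "node": ["rosnode", "list"],    # what processes are running?
    }

    def ask_robot(question: str) -> str:
        """Route a plain-English question to a ROS introspection command."""
        q = question.lower()
        for keyword, cmd in TOOLS.items():
            if keyword in q:
                result = subprocess.run(cmd, capture_output=True, text=True)
                return result.stdout or result.stderr
        return "I don't know how to answer that yet."

    # An operator who is not a roboticist can ask:
    print(ask_robot("Which topics is the robot publishing?"))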

  • Rajan Kumar

    ReNew || Manufacturing & Operations || Solar-Cell || Medical Devices || Lean Six Sigma || Ex-NSV || Ex-ESP SAFETY || Ex-UKB Electronics || Electronics Manufacturing || Information Technology

    17,206 followers

    Harnessing Technology for Inclusivity: A Lesson from Japan

    Innovation is most impactful when it uplifts lives, and Japan continues to set remarkable examples. This café employs individuals with physical disabilities to remotely control robot servers, ensuring they have financial independence despite mobility limitations.

    This is not just about robotics; it's about human-centered technology and assistive innovation. By leveraging telepresence robotics, individuals with severe disabilities can interact with the world, contribute to society, and maintain their dignity.

    Why do we often see such developments in Japan?
    1️⃣ Human-first Innovation – Japan’s technological advancements align with societal needs, ensuring accessibility and inclusivity.
    2️⃣ Collective Responsibility – Japanese culture emphasizes community well-being, where businesses, governments, and individuals support each other.
    3️⃣ Aging Society Adaptation – With an aging population, automation and assistive technologies are prioritized to enhance quality of life.

    Technical Insight: How It Works
    Telepresence Robots: These robots act as digital avatars, controlled remotely by individuals who cannot be physically present.
    AI & IoT Integration: They use machine learning and remote-control interfaces to navigate spaces and interact with customers.
    Haptic Feedback & Assistive Tech: Some setups allow for limited physical control via eye movements or brainwave sensors, enhancing accessibility.

    This is technology with empathy, ensuring that no one is left behind. Let’s take inspiration from such initiatives and explore how tech-driven inclusivity can be implemented worldwide.

    What are your thoughts on this approach? Could similar initiatives thrive in our country 🇮🇳? Let's discuss!
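
    For readers curious what the remote-control interfaces above reduce to, here is a minimal sketch of a telepresence relay: the operator's client streams velocity commands over a WebSocket and the robot end applies them. It assumes the Python `websockets` package, and the message format is an illustrative guess, not the actual protocol these cafés use.

    # Minimal telepresence relay sketch: operator commands arrive over a
    # WebSocket and the robot end applies them. Message format is an
    # illustrative assumption. Requires: pip install websockets
    import asyncio
    import json

    import websockets

    async def robot_endpoint(connection):
        """Runs on the robot: receive operator commands and act on them."""
        async for message in connection:
            cmd = json.loads(message)
            # A real robot would drive its motors here; we just log.
            print(f"drive: linear={cmd['linear']} angular={cmd['angular']}")

    async def main():
        # An operator-side UI (e.g. eye tracking mapped to velocities)
        # connects here and streams {"linear": ..., "angular": ...}.
        async with websockets.serve(robot_endpoint, "0.0.0.0", 8765):
            await asyncio.Future()  # serve forever

    asyncio.run(main())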

  • Andrea L. Thomaz

    Founder CEO Diligent Robotics Inc.

    3,919 followers

    Robotics is certainly having its moment. Reading this Financial Times piece, I couldn’t help but reflect on how far we’ve come—and the exciting challenges ahead. The rapid advancements in physical AI highlighted resonate deeply with both my work at Diligent and my academic research into human-robot interaction.

    There are many inspiring demonstrations and examples of robot dexterity in the article (presented beautifully, by the way). And while it’s fun to see robots flipping pancakes or tying shoelaces as demonstrations of how functional these new capabilities are, the real test will come when the rubber hits the road in a real environment. Speaking from experience in bringing research to product, the truly complex things that mobile manipulation robots need to handle are in their ability to navigate dynamic, unpredictable environments and work with humans, not just for them. This is where some of the most interesting aspects of embodied AI come into play.

    At Diligent, we’ve spent years honing Moxi’s ability to operate seamlessly alongside healthcare teams in busy hospitals—managing lab and pharmacy workflows, badging into secure areas, and even riding elevators with people autonomously. These environments aren’t static. They require robots that can adapt in real time, respond to people in the environment, and handle the complexities of the physical world. At Diligent we focus not on building robots that imitate humans, but on designing general-purpose robots that complement and enhance what humans do best. This is where I see the future of robotics being most exciting -- the future is People + Robots.

    The article’s discussion of advances in teaching robots dexterity really highlights the importance of robust datasets in shaping these capabilities. At Diligent, every delivery our fleet of Moxi robots makes contributes to a growing knowledge base that informs smarter, more adaptable systems. This ability to leverage a fleet of robots in the world is going to be a key factor in putting dexterous robots to work in the real world.

    Having been in the field of robot learning for years, it’s incredibly exciting to see the surge of advances in dexterity and robot capability today. For companies and founders working to productize these advances, the focus needs to remain clear: creating technology that serves human needs with precision, empathy, and practicality.

    https://lnkd.in/gC6nvf3T

  • Aaron Prather

    Director, Robotics & Autonomous Systems Program at ASTM International

    84,970 followers

    Last week, during heavy flooding in Los Angeles, a delivery robot struggling through water unexpectedly became the subject of sympathy online. People weren’t just watching — they were rooting for it. (See my post about the little guy: https://lnkd.in/e8a6VAUa)

    And that moment says a lot about where robotics is heading. For years, robots lived mostly in industrial spaces — separated from daily human interaction. But as robots move into sidewalks, homes, hospitals, and public environments, the real challenge is no longer just autonomy or hardware performance. It’s human-robot interaction (HRI).

    Developers are now intentionally designing robots to feel approachable — rounded shapes, expressive eyes, movement cues, even personality traits. Delivery robots make eye contact to signal intent. Companion devices behave like pets or characters. Designers are learning what psychologists have known for years: humans instinctively assign meaning, emotion, and agency to machines. And that changes everything.

    Because deployment success isn’t just about navigation accuracy or battery life anymore — it’s about:
    👉 Trust and perceived safety
    👉 Social signaling and intent communication
    👉 Emotional response and attachment
    👉 Transparency about what the robot is (and isn’t)

    As we enter what many are calling the “Physical AI” decade, HRI may become one of the most critical engineering and governance challenges facing robotics. Cute design can increase acceptance, but it also raises new questions around dependency, expectations, and responsibility.

    In other words, robotics is no longer just about machines operating in human spaces. It’s about machines operating within human psychology. And that means HRI isn’t a design afterthought... it’s becoming a core engineering discipline.

    Read more: https://lnkd.in/eiSkt4xv

  • Jiafei Duan

    Robotics & AI PhD student at University of Washington, Seattle

    6,901 followers

    🚀 RoboCade: Gamifying Robot Data Collection is out on arXiv — and I’m thrilled to share this collaborative work with the community!

    One of the biggest bottlenecks in robotics today is scaling human demonstration data for imitation learning. Traditional collection is costly, tedious, and limited to experts with access to hardware. So we asked:
    👉 Can we make robot data collection accessible, engaging, and scalable — even for non-experts?

    That’s where RoboCade comes in:
    🎮 A gamified remote teleoperation platform that transforms robot demo collection into an interactive game-like experience.
    👥 Designed to engage general users — with visual feedback, progress bars, badges, leaderboards, and more — while still generating useful data for downstream policy training.

    Key results:
    ✔️ Remote players collected data that, when co-trained with traditional demos, boosted policy success on real tasks (+16–56%).
    ✔️ In user studies, beginners found RoboCade significantly more enjoyable and motivating than standard interfaces (+24%).
    ✔️ We also propose principles for gamified task design so the collected data actually helps with real manipulation challenges.

    Why this matters:
    🔹 Broadening participation in robotics research beyond labs and experts
    🔹 Intrinsic motivation rather than paying for data labeling
    🔹 A scalable crowd-sourced pipeline for future robot learning systems

    Huge thanks to Suvir Mirchandani, Mia Tang, Jubayer Ibn Hamid, Michael Cho, and Dorsa Sadigh for the collaboration. 🔧🤝

    Read the full paper on arXiv — and check out our demo videos at https://lnkd.in/gjyE6A5S

    #Robotics #ImitationLearning #HumanAI #Crowdsourcing #Gamification #MachineLearning
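
    For a sense of what "co-trained with traditional demos" can look like in practice, here is a generic sketch of batch mixing from two demonstration sources at a fixed sampling ratio. The 50/50 split and the names are illustrative assumptions, not the recipe from the paper.

    # Generic co-training sketch: each batch draws a fixed fraction from
    # expert demos and from gamified crowd demos. Ratio and names are
    # illustrative, not the paper's actual setup.
    import random

    def mixed_batches(expert_demos, crowd_demos, batch_size=32, crowd_fraction=0.5):
        """Yield training batches mixing the two demonstration sources."""
        n_crowd = int(batch_size * crowd_fraction)
        n_expert = batch_size - n_crowd
        while True:
            batch = random.sample(expert_demos, n_expert) + random.sample(crowd_demos, n_crowd)
            random.shuffle(batch)
            yield batch

    # Usage with placeholder trajectories:
    expert = [f"expert_traj_{i}" for i in range(100)]
    crowd = [f"robocade_traj_{i}" for i in range(500)]
    first_batch = next(mixed_batches(expert, crowd))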

  • Paul Schmitt

    I realize new robotics technology. Generating value by leading cross-functional teams and architecting software, mechanical, and electrical solutions to exceed customer and safety needs.

    3,141 followers

    Special thanks to Prof. Ahmed H. Qureshi for the personalized tour of his CORAL Lab at Purdue University. CORAL’s research spans machine learning, robot planning and control, and my personal favorite area, safe human–robot collaboration. Together, these threads tackle one of the hardest problems in robotics today: how autonomous systems can learn, plan, and act effectively while operating alongside people in real, unstructured environments.

    What stood out to me is how the lab integrates learning and decision-making with explicit attention to safety, interaction, and shared spaces. CORAL explores how robots can reason about uncertainty, model human behavior, and adapt their plans in ways that support collaboration rather than conflict. This includes work on risk-aware planning, learning-based control, and interaction-aware decision-making that directly addresses how robots should behave around and with humans.

    Several of my favorite papers from the lab dive deeply into these themes, including work on safe and interactive planning, human-aware risk representations, and learning frameworks that support trustworthy collaboration between humans and robots. I’ve shared links to a few of these papers below for anyone who wants to explore further.

    As robots increasingly leave controlled environments and enter factories, hospitals, warehouses, and public spaces, this kind of research becomes foundational. Autonomy that ignores the human context will struggle to scale. Autonomy that understands and respects it has the potential to truly transform how we work and live.

    Many thanks again to Ahmed and the CORAL team for the warm welcome and the great conversations (remember: replace the banana with a beer bottle for social impact! 😉). It was energizing to see research that so clearly connects theory, algorithms, and real-world human impact. Purdue Computer Science

    ----

    For those interested in going deeper, here are a few of my favorite papers from the CORAL Lab that really capture the breadth and impact of their work:

    🔹 Safe and interactive planning for human–robot collaboration
    https://lnkd.in/eMMqED3r
    https://lnkd.in/eQJtSinY
    🔹 Risk-aware representations and decision-making around humans
    https://lnkd.in/ep4dRPHM
    🔹 Learning and control frameworks that enable safe, trustworthy interaction
    https://lnkd.in/eVjcQBpa
    https://lnkd.in/eSH-_CrV

    These papers do a great job of connecting learning, planning, and control with the realities of shared human–robot environments. Highly recommend a read if you’re working in robotics, autonomy, or human–robot interaction.
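
    To make the risk-aware planning theme above concrete, here is a textbook-style sketch (my own illustration, not code from the CORAL Lab): a planner's traversal cost that inflates near predicted human positions, so paths naturally bend away from people.

    # Toy risk-aware cost: traversal cost rises near predicted humans.
    # Generic textbook-style cost shaping, not the CORAL Lab's method.
    import math

    def risk_aware_cost(cell, humans, base_cost=1.0, weight=5.0, radius=2.0):
        """Cost of traversing `cell`, inflated near any predicted human."""
        x, y = cell
        risk = 0.0
        for hx, hy in humans:
            d = math.hypot(x - hx, y - hy)
            if d < radius:
                # Quadratic falloff: risk is highest right next to a person.
                risk += weight * (1.0 - d / radius) ** 2
        return base_cost + risk

    # A grid planner (A*, D*, etc.) would use this as its step cost:
    print(risk_aware_cost((3, 4), humans=[(3.5, 4.0), (10.0, 10.0)]))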

  • Omar M. Khateeb

    Helping Medtech Attract Investors & Craft Markets | 🎙️ Host of MedTech’s #1 Podcast | Proud Husband & Father | Avid Reader | Jiu Jitsu @Carlson Gracie | Mentor | Coach

    48,286 followers

    Here's something most people don't know about surgical robotics marketing 👇️

    The way you coach a surgeon to interact with your technology on camera makes a huge difference in persuasion.

    In my tenure as Mazor Robotics (acq. by Medtronic for $1.3B) US marketing manager, I discovered that video testimonials are indispensable in bridging the gap between advanced technology and human connection. In 2014, competition was pretty pathetic and everyone had the same style of video. It's 2025 and I still feel this way, but that's a post for another day.

    One thing every medical device company missed was humanization in their presentations; often, testimonials feature devices in isolation, devoid of human interaction. This approach can inadvertently alienate potential users who seek a tangible connection to the technology.

    So I took a different approach. When I worked with surgeons I had them physically engage with the robotic systems—touching, holding, and demonstrating their use. This subtle yet powerful shift transforms the narrative: the robot transitions from an impersonal machine to an extension of the surgeon’s expertise and care. This human-robot interaction not only showcases the technology’s capabilities but subconsciously builds trust and relatability among prospective users.

    Recent studies underscore the significance of humanizing technology. For instance, research indicates that when service robots utilize social-oriented language, consumers respond more favorably, especially during stressful times. This finding aligns with the psychological principle that personalizing technology enhances user acceptance and satisfaction.

    Moreover, in surgical robotics, the integration of human elements has been shown to improve outcomes. Surgeons perceive robotic assistants as extensions of themselves, providing unwavering precision and support during procedures. This synergy between human skill and robotic assistance exemplifies the potential of human-robot collaboration in healthcare.

    So next time you do a testimonial video, try having your surgeon hold and interact with the technology.

    𝑨𝒓𝒆 𝒚𝒐𝒖 𝒂 𝒎𝒂𝒓𝒌𝒆𝒕𝒆𝒓 𝒐𝒓 𝒇𝒐𝒖𝒏𝒅𝒆𝒓 𝒍𝒐𝒐𝒌𝒊𝒏𝒈 𝒕𝒐 𝒍𝒆𝒗𝒆𝒍 𝒖𝒑 𝒚𝒐𝒖𝒓 𝒎𝒂𝒓𝒌𝒆𝒕𝒊𝒏𝒈? 𝑰 𝒉𝒐𝒔𝒕 𝒂 𝒇𝒓𝒆𝒆 𝒘𝒆𝒆𝒌𝒍𝒚 𝒛𝒐𝒐𝒎 𝒄𝒂𝒍𝒍 𝒄𝒐𝒗𝒆𝒓𝒊𝒏𝒈 𝒕𝒉𝒆 𝒍𝒂𝒕𝒆𝒔𝒕 𝒎𝒂𝒓𝒌𝒆𝒕𝒊𝒏𝒈 𝒔𝒕𝒓𝒂𝒕𝒆𝒈𝒊𝒆𝒔 𝒘𝒊𝒕𝒉 𝒍𝒊𝒗𝒆 𝑸&𝑨. 𝑰𝒇 𝒚𝒐𝒖 𝒘𝒂𝒏𝒕 𝒕𝒐 𝒋𝒐𝒊𝒏, 𝒄𝒐𝒎𝒎𝒆𝒏𝒕 "𝑱𝑶𝑰𝑵" 𝒂𝒏𝒅 𝑰'𝒍𝒍 𝒔𝒆𝒏𝒅 𝒚𝒐𝒖 𝒂𝒏 𝒊𝒏𝒗𝒊𝒕𝒆.

    #medtech #medicaldevices #medicaldevice #medicaldevicesales #medicalsales #digitalhealth

  • Robert Smak

    Automate Advocate | Industry AI

    42,834 followers

    When Touch Becomes Control ☝

    Once, robots were cold, untouchable machines. Today—they can “feel.” Thanks to haptics from Force Dimension and STÄUBLI Robotics, operators can now sense every movement and resistance of the robot as if it were an extension of themselves.

    Here’s how the omega.x family is redefining the way we interact with robots:
    🔹 With devices like the omega.3, operators feel every nuance: resistance, texture, and fine detail. It’s a powerful sensory experience, as though you’re physically guiding the robot’s movements with precision and ease.
    🔹 For complex tasks, omega.6 and omega.7 offer unparalleled fluidity. They mimic the motion of your hand, making even the most intricate 2D and 3D tasks feel intuitive, transforming advanced controls into instinctive movements.
    🔹 From medical procedures to hazardous environments and intricate assembly tasks, this haptic technology is a game-changer, delivering levels of accuracy and tactile feedback essential in high-stakes industries.
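
    The standard way to convey what force feedback like this does in code is the "virtual wall": render a spring force whenever the device tip penetrates a virtual surface. The sketch below is that generic textbook example, not Force Dimension's SDK (the omega.x devices ship with their own API).

    # Classic "virtual wall" haptic rendering: a ~1 kHz loop reads the
    # probe position and commands a spring force back out of the wall.
    # Generic textbook example, not Force Dimension's SDK.
    STIFFNESS = 2000.0  # N/m, stiffness of the virtual wall
    WALL_X = 0.0        # wall surface position along one axis, meters

    def wall_force(probe_x: float) -> float:
        """Hooke's-law force pushing the probe back out of the wall."""
        penetration = WALL_X - probe_x
        if penetration > 0:   # probe is inside the wall (x < 0)
            return STIFFNESS * penetration
        return 0.0            # free space: no resistance

    # Each control cycle: read position, command force.
    for x in (0.01, 0.0, -0.002, -0.005):
        print(f"x={x:+.3f} m -> F={wall_force(x):.1f} N")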

  • Ivan Poupyrev, Dr.

    Founder and CEO, Archetype AI | Scientist and Engineer | Technology and Innovation Executive | National Design Award Winner

    10,061 followers

    Text is just one way to communicate with an AI system. But Physical AI should understand the world the way humans do; for example, through spatial, non-verbal interaction in real environments.

    When we communicate about the physical world, we naturally use spatial cues like pointing and framing areas of interest. In this experiment, we explore how gestures can become a spatial “prompt” that is interpreted by Newton, our Physical AI foundation model. That way, Newton can understand what the user is interested in.

    Using only a standard camera, our Head of Product Design, Lauren Bedal, directs Newton’s attention in real time by pointing at herself, framing objects in the scene, and moving in and out of focus. Newton applies reasoning only to the highlighted areas.

    The key idea: interaction itself is a modality for AI. Prompts don’t have to be static or textual; they can be spatial, continuous, and interactive. No special hardware required. Just Newton, a camera, and the way people already show things to each other.

    #PhysicalAI #FoundationModel
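
    As a rough sketch of how pointing can become a spatial prompt with nothing but a camera, here is a toy pipeline that uses an off-the-shelf hand tracker to crop a region of interest around the index fingertip; a vision-language model would then reason only over that crop. This is a generic illustration, not how Newton actually works. It assumes the `mediapipe` and `opencv-python` packages.

    # Toy gesture-as-prompt pipeline: locate the index fingertip and crop
    # a region of interest for a vision model to reason over. Generic
    # illustration only; not Archetype AI's implementation.
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

    def pointed_region(bgr_image, box=200):
        """Return the crop around the index fingertip, or None if no hand."""
        rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
        result = hands.process(rgb)
        if not result.multi_hand_landmarks:
            return None
        tip = result.multi_hand_landmarks[0].landmark[8]  # index fingertip
        h, w = bgr_image.shape[:2]
        cx, cy = int(tip.x * w), int(tip.y * h)
        x0, y0 = max(cx - box // 2, 0), max(cy - box // 2, 0)
        return bgr_image[y0:y0 + box, x0:x0 + box]

    # roi = pointed_region(cv2.imread("frame.jpg"))
    # The roi crop becomes the spatial "prompt" passed to the vision model.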
