Mobile Robotics Challenges


Summary

Mobile robotics challenges refer to the technical, operational, and safety obstacles faced when designing and deploying robots that can move and operate autonomously in dynamic or complex environments. These hurdles include issues around safe human-robot interaction, reliable communication, adapting to real-world unpredictability, and building robots that can both learn and function independently.

  • Address safety risks: Prioritize autonomous safety features and keep up with evolving safety standards to ensure mobile robots can operate securely alongside humans.
  • Build robust connectivity: Invest in communication networks and edge computing solutions so your robots can stay connected and perform their tasks even in remote or challenging locations.
  • Focus on adaptable intelligence: Develop robots that can learn from experience, handle new or changing environments, and recover from unexpected situations without heavy human oversight.
Summarized by AI based on LinkedIn member posts
  • Cam Stevens (LinkedIn Influencer)

    Safety Technologist & Chartered Safety Professional | AI, Critical Risk & Digital Transformation Strategist | Founder & CEO | LinkedIn Top Voice & Keynote Speaker on AI, SafetyTech, Work Design & the Future of Work

    13,308 followers

    I'm continuously fascinated by the evolving landscape of automation and robotics; it's why I work part-time as the Safety Innovation Lead at the Australian Automation and Robotics Precinct. With the rapid advancements in automation and robotics technology, the shift towards highly automated systems is inevitable, particularly in mining, but it also brings significant challenges and opportunities in managing health and safety.

    One of the biggest challenges of safely integrating mobile machine automation into high-risk industries is the inherent limitation of relying solely on human oversight as a risk control for autonomous systems. The resulting human work carries risks of boredom, confusion, cognitive limitations, loss of situational awareness, and automation bias, all of which contribute to degradation in human and organisational performance. These psychosocial risk factors highlight the urgent need for machines that can manage safety autonomously.

    At the Australian Automation & Robotics Precinct, we provide a unique sandbox for testing automation technologies. This environment allows us to push regulatory boundaries and innovate safely, ensuring that our advancements in automation are both effective and aligned with global safety standards.

    I've spent some time exploring robotics and automation in Europe over the past couple of years and will be visiting automation centres in the UK this week. Europe has consistently been at the forefront of machinery safety regulation. The recently updated EU Machinery Regulation 2023/1230, which becomes legally binding on January 20, 2027, is designed to ensure safe interaction between humans and machines, adapting continuously to technical developments (especially modern AI technologies). It sets a high standard that greatly influences global safety practices. Meanwhile, in Australia, we still rely on the AS/NZS 4024 series, first published in the mid-1990s, and there's a growing need to update our standards to reflect the current technological landscape.

    If you're interested in learning more about the safety of mobile autonomous systems, check out the paper titled "A comprehensive approach to safety for highly automated off-road machinery under Regulation 2023/1230" in the latest issue of Safety Science. And stay tuned for the official opening of the Australian Automation & Robotics Precinct HQ later in the year.

    #Automation #Robotics #MachineSafety #AI #SafetyInnovation #SafetyTechNews #SafetyTech

  • Muhammad M.

    Tech content creator | Mechatronics engineer | open for brand collaboration

    15,692 followers

    Nav2Bot: ROS 2 Autonomous Navigation in Ignition Gazebo

    ➡ Differential drive robot simulation using ROS 2 Humble
    ➡ Autonomous navigation using the Nav2 stack
    ➡ LiDAR-based obstacle detection and environment perception
    ➡ AMCL-based localization for accurate robot positioning
    ➡ Global and local path planning with real-time execution
    ➡ Complete TF tree (map → odom → base_link → lidar_link)
    ➡ RViz visualization for costmaps, paths, and robot pose
    ➡ Keyboard teleoperation support for manual control

    ✨ Why this matters: Autonomous navigation is one of the core challenges in robotics: a robot must perceive its environment, determine its position, and plan a safe path to a goal without human intervention. This project demonstrates a complete ROS 2 Nav2 pipeline that integrates localization, planning, and control into a unified system. By combining LiDAR data, odometry, and costmaps, the robot can navigate through unknown environments while avoiding obstacles in real time. These principles are widely used in real-world robotics applications such as autonomous vehicles, warehouse automation systems, delivery robots, and service robotics.

    📊 Key Highlights:
    ✔ Full ROS 2 Navigation Stack (Nav2) integration
    ✔ LiDAR-based perception and obstacle avoidance
    ✔ AMCL localization for accurate positioning
    ✔ Global and local path planning
    ✔ Real-time costmap generation
    ✔ Gazebo simulation with realistic robot behavior
    ✔ RViz-based monitoring and debugging

    💡 Future Potential: This framework can be extended to:
    ➡ Multi-robot navigation systems
    ➡ SLAM + Nav2 integration for unknown environments
    ➡ AI-based dynamic obstacle detection
    ➡ Reinforcement learning for path optimization
    ➡ Real-world deployment on mobile robots

    🔗 For students, engineers & robotics enthusiasts: this project provides a complete hands-on implementation of autonomous navigation using ROS 2, making it ideal for understanding how intelligent robots perceive, plan, and act in real environments.

    🔁 Repost to support robotics research & engineering education!

    #ROS2 #Nav2 #Robotics #AutonomousSystems #Gazebo #Mechatronics #EngineeringProjects #Lidar #RViz #Automation #Navigation #AI #STEM #EngineeringEducation #RobotSimulation
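
    To make the pipeline concrete, here is a minimal sketch of driving such a stack from Python using the nav2_simple_commander package that ships with ROS 2 Humble. The pose values are placeholders, and this assumes a simulation like the one above is already running:

    ```python
    # Minimal Nav2 goal-sending sketch (nav2_simple_commander, ROS 2 Humble).
    # Pose values are illustrative; frames follow the map -> odom -> base_link tree.
    import rclpy
    from geometry_msgs.msg import PoseStamped
    from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

    rclpy.init()
    navigator = BasicNavigator()

    # Seed AMCL with an initial pose estimate in the map frame.
    initial = PoseStamped()
    initial.header.frame_id = 'map'
    initial.header.stamp = navigator.get_clock().now().to_msg()
    initial.pose.orientation.w = 1.0
    navigator.setInitialPose(initial)
    navigator.waitUntilNav2Active()  # blocks until the Nav2 lifecycle nodes are up

    # Send a navigation goal; the global and local planners take over from here.
    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 2.0
    goal.pose.position.y = 1.0
    goal.pose.orientation.w = 1.0
    navigator.goToPose(goal)

    while not navigator.isTaskComplete():
        pass  # getFeedback() exposes distance remaining, recovery status, etc.

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Goal reached')
    ```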

  • Ashish Kapoor

    Co-Founder & CEO at General Robotics | Building Intelligence GRID for Physical AI

    11,346 followers

    7 lessons from AirSim: I ran the autonomous systems and robotics research effort at Microsoft for nearly a decade, and here are my biggest learnings. Complete blog: https://sca.fo/AAeoC

    1. The "PyTorch moment" for robotics needs to come before the "ChatGPT moment". While there is anticipation around foundation models for robots, the scarcity of technical folks well versed in both deep ML and robotics, and a lack of resources for rapid iteration, present significant barriers. We need more experts working on robot and physical intelligence.

    2. Most AI workloads on robots can primarily be solved by deep learning. Building robot intelligence requires simultaneously solving a multitude of AI problems, such as perception, state estimation, mapping, planning, and control. We are increasingly seeing successes of deep ML across the entire robotics stack.

    3. Existing robotic tools are suboptimal for deep ML. Most of the tools originated before the advent of deep ML and the cloud and were not designed for AI. Legacy tools are hard to parallelize on GPU clusters. Infrastructure that is data-first, parallelizable, and integrates the cloud deeply throughout the robot's lifecycle is a must.

    4. Robotic foundation mosaics plus agentic architectures are more likely to deliver than monolithic robot foundation models. The ability to program robots efficiently is one of the most requested use cases and a research area in itself. It currently takes a technical team weeks to program robot behavior. It is clear that foundation mosaics and agentic architectures can deliver huge value now.

    5. Cloud + connectivity trumps compute on the edge – yes, even for robotics! Most operator-based robot enterprises either discard or minimally catalog their data due to a lack of data management pipelines and connectivity. Robotics is truly a multitasking domain – a robot needs to solve multiple tasks at once. Connection to the cloud for data management, model refinement, and the ability to make several inference calls simultaneously would be a game changer.

    6. Current approaches to robot AI safety are inadequate. Safety research for robotics is at an interesting crossroads. Neurosymbolic representation and analysis is likely an important technique that will enable the application of safety frameworks to robotics.

    7. Open source can add to the overhead. As a strong advocate for open source, much of my work has been shared. While open source offers many benefits, there are a few challenges, especially for robotics, that are less frequently discussed: robotics is a fragmented and siloed field, so initially there will likely be more users than contributors, and within large orgs the scope of open-source initiatives may also face limits.

    AirSim pushed the boundaries of the technology and provided deep insight into R&D processes. The future of robotics will be built on the principle of being open. Stay tuned as we continue to build @Scafoai

  • Jan Zizka

    Founder and CEO @ Brightpick | Founder @ Photoneo (acquired by Zebra Technologies) | Multi-purpose AI robots for warehouses 🤖

    10,299 followers

    Many have tried mobile robotic picking before and failed. Remember the Fetch Mobile Manipulator or IAM Robotics' Swift? None of them succeeded commercially. Why? They all tried to pick items directly from shelves – just like humans. But here's why that approach doesn't work:

    1. Unreliable handling: items on shelves can easily fall, making the process unpredictable and error-prone.

    2. Complex manipulation: picking freestanding objects requires advanced 6-axis robots, driving up costs.

    3. Inefficient replenishment: inventory placement becomes a logistical nightmare, with items on shelves needing precise positioning for #robots to pick them.

    That's why we decided to take a radically different approach at Brightpick. We went back to first principles and designed a mobile robotic picker that picks vertically from totes instead of horizontally from shelves, using proven bin-picking technology and AI from our sister company Photoneo.

    The result? Brightpick Autopicker is today the only commercially viable mobile manipulator on the market, with almost 100 already deployed with customers on long-term contracts.

    #technology #innovation

  • Brian Baumgartner

    Product | Programs | Systems | Robotics | Autonomy | Physical AI | Applied AI

    5,849 followers

    I spent the last year and a half building autonomous systems for orchards at Bonsai Robotics. The biggest surprise? Connectivity is the infrastructure problem nobody talks about.

    Everyone focuses on the robotics—the perception systems, the path planning, the manipulation. But when you're operating in a 500-acre almond orchard in Australia or the Central Valley, you're dealing with spotty cellular coverage, dust that degrades signal quality, and distances that make WiFi impractical.

    The robots can see. They can navigate. They can make decisions. But if they can't reliably communicate with fleet management systems or push telemetry data for analysis, you're running blind.

    This isn't just an ag problem. I've seen similar challenges in all off-road and remote applications, including marine robotics with Wave Gliders operating thousands of miles offshore, army tanks on the frontlines, and rail vehicles and trucks in rural ODDs.

    The solution isn't just "add more cellular towers." It requires edge computing architectures that let vehicles operate autonomously when connectivity drops, smart data prioritization that pushes critical telemetry first, and mesh networking between vehicles to create resilient communication networks.

    Connectivity infrastructure is as important as the autonomy stack itself. You can't deploy at scale without solving both.

    What connectivity challenges have you seen in deploying hardware in remote environments?
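
    The "smart data prioritization" point can be made concrete with a small sketch: buffer telemetry locally and, whenever a link window opens, send the highest-priority messages first. This illustrates the general pattern only, not Bonsai Robotics' actual stack; the priority levels and payloads are made-up examples:

    ```python
    # Sketch of priority-first telemetry buffering for intermittent links.
    # Priority levels and message contents are illustrative assumptions.
    import heapq
    import itertools

    CRITICAL, OPERATIONAL, BULK = 0, 1, 2  # lower number = sent first

    class TelemetryQueue:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()  # tie-breaker preserves FIFO order

        def publish(self, priority, payload):
            heapq.heappush(self._heap, (priority, next(self._seq), payload))

        def drain(self, link_up, budget_bytes):
            """Send what fits in this connectivity window, critical first."""
            sent = []
            while link_up and self._heap and budget_bytes > 0:
                priority, _, payload = heapq.heappop(self._heap)
                budget_bytes -= len(payload)
                sent.append(payload)  # in practice: hand off to the radio/modem
            return sent

    q = TelemetryQueue()
    q.publish(BULK, b"full-resolution point cloud chunk")
    q.publish(CRITICAL, b"e-stop event at 14:02:11")
    q.publish(OPERATIONAL, b"battery 41%, gps fix ok")
    print(q.drain(link_up=True, budget_bytes=64))  # e-stop goes out first
    ```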

  • Aaron Lax

    Founder of Singularity Systems Defense and Cybersecurity Insiders. Strategist, DOW SME [CSIAC/DSIAC/HDIAC], Multiple Thinkers360 Thought Leader and CSI Group Founder. Manage The Intelligence Community and The DHS Threat

    23,823 followers

    Reinforcement Learning and the Next Era of Robotics

    Reinforcement learning has become the intelligence engine behind the next generation of autonomous machines. It allows robots to learn through experience, adapt to complex environments, and make decisions in real time. Researchers across the world are pushing this field forward, and the progress made between 2023 and 2025 has transformed what we thought robots could do.

    Modern systems now learn from high-dimensional sensory data like vision, tactile signals, and proprioception. They no longer rely on brittle rules or hand-designed controllers. Instead, they build internal models of the world and use them to plan, predict, and act with remarkable precision. Transformative breakthroughs like Dreamer world models, transformer-driven action policies, diffusion-based decision systems, and hybrid model-based control have allowed robots to move, grasp, manipulate, and navigate with a sophistication that simply didn't exist a few years ago.

    Robots today learn faster, require fewer human demonstrations, and succeed in dynamic, contact-rich tasks that were once thought impossible. They can adapt their strategies on the fly when the environment changes. They can infer hidden states, anticipate future outcomes, and recover from failures with very little supervision. High-resolution tactile sensing, latent-space world models, and large-scale datasets of real robot behavior have made this evolution inevitable.

    Yet even with all this progress, several challenges still define the frontier. Robots must close the gap between simulation and the real world, learn to operate safely around people, build long-horizon memory, and coordinate with swarms of peers under partial observability. These problems are the heart of the next leap in autonomy. They will define which systems are capable of real mission-scale reasoning instead of short-horizon actions.

    The coming years will belong to hybrid systems that combine world models, foundation models, and real-time control. They will continuously update their understanding of the world as sensors age, as hardware wears, and as environments become unpredictable. They will rely on new forms of tactile intelligence, more efficient learning pipelines, and architectures that blend imagination with grounded physics.

    Every major advance in robotics over the past decade has moved toward one goal. Autonomy that is resilient. Autonomy that adapts. Autonomy that learns at the speed of the world itself. Singularity Systems is moving this space.
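
    The simulation-to-real gap mentioned above is commonly attacked with domain randomization: vary the simulator's physics and sensing every episode so the learned policy cannot overfit to one idealized world. Below is a minimal sketch of the idea, assuming a generic simulator interface (the sim.set_param hook, the parameter names, and the ranges are all hypothetical):

    ```python
    # Domain randomization sketch: a new "world" every training episode.
    # The simulator and policy interfaces here are hypothetical stand-ins.
    import random

    def randomize_physics(sim):
        sim.set_param("ground_friction", random.uniform(0.4, 1.2))
        sim.set_param("payload_mass_kg", random.uniform(0.0, 5.0))
        sim.set_param("motor_torque_scale", random.uniform(0.85, 1.15))
        sim.set_param("lidar_noise_std_m", random.uniform(0.0, 0.05))

    def train(sim, policy, episodes=10_000):
        for _ in range(episodes):
            randomize_physics(sim)          # resample the world each episode
            obs = sim.reset()
            done = False
            while not done:
                action = policy.act(obs)
                obs, reward, done = sim.step(action)
                policy.update(obs, reward)  # stand-in for the actual RL update
    ```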

  • Adam Sadilek

    CEO @ AIM | Terraforming powered by Physical AI | Former Google[x]

    4,218 followers

    Most of us are impressed when we see a Waymo car navigate city streets on its own, understanding the traffic around it. But what if you asked that same car to plow snow — to move and control the snow along with the vehicle, not just the vehicle itself? That's the leap from navigation to earthmoving autonomy.

    In city driving, the principal source of complexity is the long tail of events that may happen around the robot: Spidermen and Princess Leia crossing the road on Halloween, tricky interactions of many vehicles and people at a busy intersection. The roads themselves, however, are relatively static. You can map that world and assume the asphalt surface doesn't change much minute to minute.

    But on a minesite or a jobsite, the environment changes constantly. Every pass of a dozer or excavator reshapes the terrain — indeed, that's the sole purpose of earthmoving machines! The ground itself is the challenge: amorphous, hard to predict, always in flux, dynamic with weather, and never quite the same from one moment to the next. Dry soil is quite different from frozen terrain, and that is different from wet material…

    Therein lies a key difference between true autonomy in construction and mining and autonomy on the road. In earthmoving, there's no road map to follow; the robots have to build it, keep it up to date, and safely localize themselves in it as they go. That's an example of the challenges we solve at AIM: building systems that learn to shape earth — terraform.
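
    One concrete way to handle ground that changes with every machine pass is to maintain a live 2.5D elevation grid and keep blending fresh LiDAR returns into it rather than trusting a prebuilt map. The sketch below illustrates the general idea only (it is not AIM's implementation); the grid size, resolution, and blending factor are assumptions:

    ```python
    # Live 2.5D heightmap sketch: old terrain fades as machines reshape it.
    import numpy as np

    class ElevationGrid:
        def __init__(self, size_m=100.0, cell_m=0.25, alpha=0.3):
            n = int(size_m / cell_m)
            self.cell_m = cell_m
            self.alpha = alpha              # weight given to new measurements
            self.height = np.zeros((n, n))
            self.observed = np.zeros((n, n), dtype=bool)

        def integrate_scan(self, points_xyz):
            """points_xyz: (N, 3) LiDAR hits, already in the grid frame."""
            ij = (points_xyz[:, :2] / self.cell_m).astype(int)
            inside = np.all((ij >= 0) & (ij < self.height.shape[0]), axis=1)
            for (i, j), z in zip(ij[inside], points_xyz[inside, 2]):
                if self.observed[i, j]:
                    # Exponential blend: recent passes dominate older terrain.
                    self.height[i, j] = (1 - self.alpha) * self.height[i, j] + self.alpha * z
                else:
                    self.height[i, j] = z
                    self.observed[i, j] = True
    ```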

  • Srinivasan Vijayarangan

    Scientist (CMU) | Roboticist | Coach

    6,518 followers

    This is how a robot sees the world! Let me deconstruct what is going on in the background. It might look obvious, even naive, at first glance, but that simplicity is the result of a lot of complex machinery working behind the scenes.

    Mapping the Environment: the robot uses LiDAR to create a 3D map of its surroundings, represented as a cloud of dots. One challenge is aligning multiple scans to form a clear, accurate structure without errors like ghosting (misaligned overlapping scans).

    Finding Safe Footing: it then detects flat surfaces where it can step. These appear as white polygons. The robot also checks the slope of each surface to ensure it won't slip or lose balance.

    Planning the Path: if a surface isn't flat, the robot treats it as a temporary stepping point rather than a stable landing area. Flat, walkable zones are highlighted in green.

    Placing Its Feet: a humanoid robot must carefully plan where its feet will land. The darker green footprints you see are the calculated landing spots.

    Tracking Its Own Movement: to execute these steps accurately, the robot continuously estimates its position in 3D space, ensuring it knows exactly where it is at all times. This also allows it to be visualized in the virtual model.

    Despite all this, you can see that sometimes the point cloud is a little jittery and the robot's feet do not land exactly where it planned. These are errors in sensing and mapping; the planner has to compensate for them to produce the smooth, natural motion we see from the outside.

    This is just a glimpse into the complexity behind seemingly effortless robotic motion!
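
    The slope check described above can be illustrated with a small geometric test: fit a plane to the candidate surface points and accept the patch only if the plane's normal stays close to vertical. This is a minimal sketch, not the actual stack shown in the video; the 15-degree walkability threshold is an assumption:

    ```python
    # Walkability test sketch: plane fit via SVD, then slope vs. a threshold.
    import numpy as np

    def plane_normal(points):
        """Least-squares plane normal for an (N, 3) patch of LiDAR points."""
        centered = points - points.mean(axis=0)
        # The right singular vector with the smallest singular value is the normal.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]
        return normal if normal[2] >= 0 else -normal  # orient it upward

    def is_walkable(points, max_slope_deg=15.0):
        """Accept the patch as a footstep candidate if it is nearly level."""
        n = plane_normal(points)
        slope_deg = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
        return slope_deg <= max_slope_deg

    # A gently sloped patch of synthetic ground points (~5.7 degree incline):
    xy = np.random.rand(200, 2)
    patch = np.column_stack([xy, 0.1 * xy[:, 0]])
    print(is_walkable(patch))  # True: slope is under the 15-degree threshold
    ```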

  • Nina Lu

    Investor & Builder

    5,921 followers

    🤖 The robotics space is heating up with huge fundraising rounds from humanoid robot companies Figure and 1X and robotics foundation model companies Physical Intelligence and Skild AI! What will it take to get to a GPT-3 moment in robotics? How far are we from living in the Jetsons?

    🤖 In this week's episode of First Commit, I talk to Kyle Morgenstein, a roboticist and PhD student at UT Austin, about his journey from studying rovers on Mars to humanoid robots in our homes, and the surprising challenges that come with making robots safe for human spaces.

    Key takeaways:

    🏠 The home is the next extreme terrain: Kyle's transition from studying extraterrestrial robotics at NASA JPL to home robotics isn't as crazy as it sounds. While Mars has cliff faces and craters, homes have stairs, clutter, and something even more challenging — humans. The safety requirement shifts from "protect the $3 billion rover" to "protect the irreplaceable child."

    🦾 Do humanoids even make sense? Yes! Beyond the anthropomorphic appeal, Kyle emphasizes the practical advantages: two arms unlock exponentially more tasks, "the world is built for us," and maybe we can repurpose egocentric human data for training.

    🔩 Hardware design needs to align with the use case: most humanoid companies (Figure, Tesla) build heavy metal factory robots while claiming home deployment. Only 1X designs with soft materials for human safety. People won't feel comfortable with Terminator-esque machines in their living rooms.

    📊 Robotics' data challenge is different from LLMs': unlike LLMs, which had the entire internet to pretrain on, robotics doesn't have that luxury. The GPT-3 moment in robotics might require a step-function improvement in sim2real, large-scale teleoperation data, cross-embodiment learning from egocentric YouTube videos, and/or world models generating synthetic training scenarios.

    ⚡ Bridging the AI and robotics research communities: there's a huge divide right now between "Bitter Lesson"-style AI researchers, who approach robotics through the lens of large foundation models (more data! more compute!) optimized for complex, long-horizon task planning but painfully slow at runtime (demos are often sped up 10-15x), and robotics engineers focused on smooth, fast actuation and point-A-to-B trajectory control. There's huge promise in these two schools of thought meeting somewhere in the middle!

    🤗 Robots need to learn subtlety to come across as "polite": humans communicate intent through tiny gestures — a step back at a doorway, eye contact that says "you first." Robots need to master these micro-interactions, not just avoid all human contact. Beyond teaching robots the rote actions of folding laundry, there's an equally large challenge in making humans feel safe around them. Technical safety and perceived safety are interlinked but not the same thing.

    Kyle's timeline: ~5 years for real home deployment (but expect less threatening, miniature Disney-character robots for kids first!)

  • Chris Paxton

    AI + Robotics Research Scientist

    8,921 followers

    Just collecting manipulation data isn't enough for robots — they need to be able to move around in the world, which brings a whole different set of challenges from pure manipulation. And bringing navigation and manipulation together in a single framework is more challenging still.

    Enter HERMES, from Zhecheng Yuan and Tianming Wei: a four-stage process in which human videos are used to set up an RL sim-to-real training pipeline that overcomes differences between robot and human kinematics, used together with a navigation foundation model to move around in a variety of environments.

    To learn more, join us as Zhecheng Yuan and Tianming Wei tell us how they built their system to perform mobile dexterous manipulation from human videos in a variety of environments. Watch Episode #45 of RoboPapers today, hosted by Michael Cho and Chris Paxton!

    Abstract: Leveraging human motion data to impart robots with versatile manipulation skills has emerged as a promising paradigm in robotic manipulation. Nevertheless, translating multi-source human hand motions into feasible robot behaviors remains challenging, particularly for robots equipped with multi-fingered dexterous hands characterized by complex, high-dimensional action spaces. Moreover, existing approaches often struggle to produce policies capable of adapting to diverse environmental conditions. In this paper, we introduce HERMES, a human-to-robot learning framework for mobile bimanual dexterous manipulation. First, HERMES formulates a unified reinforcement learning approach capable of seamlessly transforming heterogeneous human hand motions from multiple sources into physically plausible robotic behaviors. Subsequently, to mitigate the sim2real gap, we devise an end-to-end, depth image-based sim2real transfer method for improved generalization to real-world scenarios. Furthermore, to enable autonomous operation in varied and unstructured environments, we augment the navigation foundation model with a closed-loop Perspective-n-Point (PnP) localization mechanism, ensuring precise alignment of visual goals and effectively bridging autonomous navigation and dexterous manipulation. Extensive experimental results demonstrate that HERMES consistently exhibits generalizable behaviors across diverse, in-the-wild scenarios, successfully performing numerous complex mobile bimanual dexterous manipulation tasks.

    Project Page: https://lnkd.in/e-aEbQzn
    arXiv: https://lnkd.in/eemU6Pwa
    Watch/listen:
    YouTube: https://lnkd.in/erzbkYjz
    Substack: https://lnkd.in/e3ea76Q8

    Ep#45: HERMES: Human-to-Robot Embodied Learning From Multi-Source Motion Data for Mobile Dexterous Manipulation (robopapers.substack.com)
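
    The closed-loop Perspective-n-Point localization described in the abstract can be sketched with OpenCV's standard solver: given known 3D landmark positions around a visual goal and their detected 2D pixels in the current camera image, solvePnP recovers the camera's pose relative to the goal. This is a generic illustration, not the authors' code; the intrinsics and correspondences are made up:

    ```python
    # PnP pose-recovery sketch with OpenCV; all numbers are illustrative.
    import numpy as np
    import cv2

    # Assumed pinhole intrinsics (fx, fy, cx, cy) for the illustration.
    K = np.array([[600.0,   0.0, 320.0],
                  [  0.0, 600.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # 3D landmarks in the goal frame (meters) and their detected 2D pixels.
    object_points = np.array([[0.0, 0.0, 0.0],
                              [0.2, 0.0, 0.0],
                              [0.2, 0.2, 0.0],
                              [0.0, 0.2, 0.0]], dtype=np.float64)
    image_points = np.array([[310.0, 250.0],
                             [420.0, 248.0],
                             [418.0, 140.0],
                             [308.0, 142.0]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)   # rotation: goal frame -> camera frame
        cam_in_goal = -R.T @ tvec    # camera position in the goal frame
        print("camera position relative to goal:", cam_in_goal.ravel())
        # In a closed loop, this pose error would drive the navigation
        # controller until the visual goal is aligned, then hand off to
        # the manipulation policy.
    ```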
