Advancements in Third-Generation Robotics

Explore top LinkedIn content from expert professionals.

Summary

Advancements in third-generation robotics describe the latest innovations in intelligent robots that can learn, adapt, and operate in complex real-world environments, often using powerful AI models and sophisticated physical designs. Unlike earlier robots, these systems combine advanced reasoning, real-time learning, and versatile abilities to tackle tasks that demand both precision and flexibility.

  • Embrace task-focused robots: Consider deploying specialized robots for specific jobs, as they deliver immediate results and are easier to justify than costly humanoid prototypes.
  • Utilize adaptive AI: Integrate robots equipped with AI that can learn on the fly, so they can handle unexpected situations and improve performance in dynamic environments.
  • Explore multi-modal mobility: Look into robots that can walk, jump, and fly, which are especially useful for search-and-rescue missions and navigating challenging terrains where traditional machines struggle.
Summarized by AI based on LinkedIn member posts
  • View profile for Bernard Marr

    📖 Internationally Best-selling #Author 🎤 #KeynoteSpeaker 🤖 #Futurist 💻 #Business, #Tech & #Strategy Advisor

    1,560,853 followers

    For me, three major advancements defined the AI landscape in 2025.

    First, the rise of agentic AI. We have moved well beyond chat interfaces to AI systems that can retrieve information, reason through options and take action across enterprise tools. Agent designers, orchestration layers and enterprise-grade frameworks mean organisations can now deploy AI that assists with real work, from sales preparation to financial analysis to HR case resolution. This shift has lowered the barrier to meaningful adoption and is pushing companies to rethink workflows, skills and operating models.

    Second, the emergence of world models. These models give AI a richer understanding of context, space, time and causality. They can simulate how the world works rather than just predict the next token. This unlocks more reliable planning, better judgment and far safer autonomy. It also lays the foundation for AI that can coordinate tasks, operate machinery and reason about complex multi-step processes. In many ways, world models are the missing link between today’s narrow AI systems and the more general capabilities we expect in the future.

    Third, the acceleration of physical AI, especially humanoid robots. We have seen huge progress in locomotion, manipulation and cost efficiency. Several prototypes are already being tested in factories, logistics centres and retail environments. What is changing is not just the hardware, but the intelligence that drives it. Combining robotics with advanced foundation models and world models brings us much closer to general-purpose robots that can adapt, learn and operate safely alongside humans.

    Taken together, these developments show how rapidly AI is moving from generating content to understanding the world, taking action and working in physical space. It feels like a genuine step toward AI that is more capable, more useful and more aligned with real-world needs. What has been the key AI advancement for you in 2025? #LinkedInNewsEurope
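
    To make the world-model idea concrete (simulating how the world works rather than just predicting the next token), here is a minimal, hypothetical sketch of model-based planning: a learned dynamics model imagines the outcomes of candidate action sequences, and the planner executes the first action of the best imagined trajectory. The toy dynamics, cost function, and parameters below are illustrative assumptions, not any particular system's implementation.

```python
import numpy as np

# Minimal sketch of planning with a learned world model (illustrative only).
# Here the "world model" is a toy function; in practice it would be a learned
# network that predicts the next state from (state, action).

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Hypothetical one-step dynamics: next_state = f(state, action)."""
    return state + 0.1 * action  # placeholder dynamics

def cost(state: np.ndarray, goal: np.ndarray) -> float:
    """Distance-to-goal cost used to score simulated futures."""
    return float(np.linalg.norm(state - goal))

def plan(state, goal, horizon=10, n_candidates=256, rng=np.random.default_rng(0)):
    """Random-shooting planner: simulate many action sequences in the model,
    then return the first action of the cheapest imagined trajectory."""
    best_cost, best_first_action = np.inf, None
    for _ in range(n_candidates):
        s = state.copy()
        actions = rng.uniform(-1, 1, size=(horizon, state.shape[0]))
        for a in actions:
            s = world_model(s, a)  # imagine, don't execute
        c = cost(s, goal)
        if c < best_cost:
            best_cost, best_first_action = c, actions[0]
    return best_first_action

first_action = plan(np.zeros(3), np.ones(3))
print("planned first action:", first_action)
```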

  • View profile for Aaron Prather

    Director, Robotics & Autonomous Systems Program at ASTM International

    84,964 followers

    Humanoids may dominate the headlines—pouring coffee, doing backflips, and walking across factory floors—but they’re not where the real money is. The quiet winners? Specialized, task-focused robots. From Unbox Robotics boosting warehouse efficiency by 25% to Zipline delivering life-saving medical supplies, these single-task machines are quietly transforming industries. Okibo and Canvas are tackling drywall finishing, while Moxi roams hospital halls delivering supplies so nurses can spend more time with patients. Investors love them for one simple reason: ROI is clear and immediate. They’re cheaper to build, faster to deploy, and easier to justify on a balance sheet than flashy humanoids still stuck in pilot programs. The comparison is simple: forklifts changed the world, not Iron Man suits. The next wave of robotics won’t be about building machines that look like us—it will be about building machines that do the job better than us.

  • View profile for Ravi Samrat Mishra

    Empowering Leaders, Entrepreneurs & Brands to Thrive on LinkedIn | Helping Founders Build Authority & Audience Growth | Spreading Positivity 🌟

    552,670 followers

    Researchers at EPFL have unveiled an innovative robot bird that blends terrestrial and aerial locomotion through advanced physics and engineering principles. Inspired by the biomechanics of avian species, it features lightweight, robust materials and multifunctional legs that store and release energy efficiently, enabling powerful jumps for rapid takeoffs. These legs are modeled to mimic the spring-like motion of tendons and muscles, leveraging principles of elastic potential energy to convert stored energy into kinetic energy during liftoff. This allows for faster, more energy-efficient flight initiation compared to traditional propeller-driven systems, which rely on continuous motor operation to achieve lift.

    The robot also integrates advanced aerodynamics for stable flight, utilizing biomimetic wing designs that optimize lift-to-drag ratios. Its ability to walk and hop over obstacles stems from precision actuators and sensors that calculate optimal force and trajectory, ensuring smooth transitions between ground and air mobility. These features make it highly adaptive to complex terrains, from rocky landscapes to dense forests, where conventional drones and robots would struggle.

    Future prospects for this #technology are promising. Its multi-modal capabilities could be applied in search-and-rescue missions, where navigating through collapsed structures or dense vegetation requires both ground movement and aerial maneuverability. In planetary exploration, it could traverse rugged terrains on Mars or the Moon, combining the efficiency of walking with the flexibility of flight. Further advancements may include incorporating solar-powered systems for extended autonomy, swarm robotics for collaborative tasks, and machine learning algorithms to enhance decision-making and obstacle avoidance.

    This groundbreaking #design not only bridges the gap between terrestrial and aerial robotics but also sets the stage for a new era of versatile, energy-efficient robotic systems capable of tackling a wide range of environmental and industrial challenges.

    🎥 @EPFL Video rights are reserved for the respective owner. #innovation #whatinspiresme
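
    The energy argument in the post can be made concrete with a rough calculation: elastic energy stored in the compressed leg spring becomes kinetic energy at liftoff, which sets the takeoff velocity and jump height. The numbers below are illustrative assumptions, not EPFL's actual specifications.

```python
import math

# Back-of-the-envelope: spring energy -> takeoff velocity -> jump height.
# All parameter values are assumed for illustration, not the real robot's.
mass = 0.6          # kg, robot mass (assumed)
spring_k = 2000.0   # N/m, effective leg stiffness (assumed)
compression = 0.06  # m, leg compression before release (assumed)

elastic_energy = 0.5 * spring_k * compression**2          # E = 1/2 k x^2
takeoff_velocity = math.sqrt(2 * elastic_energy / mass)   # E = 1/2 m v^2
jump_height = takeoff_velocity**2 / (2 * 9.81)            # v^2 = 2 g h

print(f"stored energy:    {elastic_energy:.2f} J")
print(f"takeoff velocity: {takeoff_velocity:.2f} m/s")
print(f"jump height:      {jump_height:.2f} m (ignoring drag and losses)")
```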

  • View profile for Joanna Lichter

    Early-Stage Investor @ Emerson Collective

    5,271 followers

    The pace of progress in robotics right now is genuinely hard to keep up with. New models, company launches, real-world deployments, and funding announcements are emerging almost daily. Starting this week I'm sharing what actually mattered 👇

    Issue #1 (Mar 23-29): The story this week isn't any single headline - it's divergence. VLAs vs. world models. Teleop vs. egocentric data. Sim vs. real-world data. Teams are making genuinely different bets. The field hasn't converged, and that's exactly when investing gets interesting.

    💰 Funding & Deals
    Unitree Robotics filed for a ~$610M IPO. Revenue >10x'd in two years, 60% gross margins - the first real public market test for humanoids.
    Amazon made two acquisitions in the past week: RIVR (last-mile delivery) and Fauna Robotics (consumer humanoids), pushing automation beyond the warehouse and onto the customer’s doorstep.
    Physical Intelligence is in talks to raise $1B at an ~$11B valuation, doubling the company’s $5.6B valuation in just 4 months.

    🔍 Research & Demos
    LeWorldModel - Yann LeCun et al. show JEPA world models can be trained E2E from raw pixels, without representation collapse. Not robotics-specific, but another signal that world models could be a viable path toward general-purpose robotics.
    Fast-WAM - reframes the advantage of world models: keep video prediction as a training objective, skip future generation at inference. Result: ~4x faster control with similar success rates. World models may matter more for learning good representations vs. simulating the future at test time.
    NVIDIA EgoVerse - a new ecosystem for capturing and learning from egocentric human data using glasses + an iPhone app. Key insight: small amounts of aligned human–robot data can bridge large-scale human datasets to real robot performance.
    OmniReset - uses large-scale RL and diverse resets to learn contact-rich manipulation. No demos, no reward engineering. Zero-shot transfer to the real world from RGB. Reset broadly -> scale environments -> let RL do the work.
    AirVLA - really fun work by Stanford MSL that fine-tunes Pi's π0 to fly a drone. Vision-language priors transfer; low-level control still needs adaptation. A useful probe of how far foundation model representations actually generalize.

    📖 Deep Dives
    "What Are World Models?" by Chris Paxton - the clearest explanation I've found of what world models actually mean in robotics, and where they're likely to matter and where they won't.
    "Robotics Needs Fewer Roboticists" by Jacob Zietek - less academic caution, more SpaceX chops. Ship early, give junior engineers real responsibility, learn from deployment.
    Robotics data infra market map by Emily Yu. Core thesis: the winners in data infra will align modalities with model architectures, and evolve alongside both.

    If this week is any indication, the biggest question isn’t who’s ahead today, but which paradigm actually scales. What are you paying attention to that I missed? Links in comments 👇
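
    As a rough illustration of the Fast-WAM idea described above (video prediction kept as an auxiliary training objective, future generation skipped at inference), the sketch below shows one generic way to structure that trade-off. It is a toy in PyTorch built on assumptions, not the actual Fast-WAM code; all module names and layer sizes are made up.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the actual Fast-WAM code): video/observation
# prediction is used as an auxiliary *training* objective, but the decoder is
# skipped at inference, so the control loop only pays for encoder + policy head.

class WorldModelPolicy(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=32, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
        self.policy_head = nn.Linear(latent_dim, action_dim)
        self.future_decoder = nn.Linear(latent_dim, obs_dim)  # training-time only

    def forward(self, obs, predict_future=False):
        z = self.encoder(obs)
        action = self.policy_head(z)
        if predict_future:               # used only during training
            return action, self.future_decoder(z)
        return action                    # fast path at inference

model = WorldModelPolicy()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
obs, next_obs = torch.randn(8, 64), torch.randn(8, 64)
expert_action = torch.randn(8, 7)

# Training step: action imitation loss + auxiliary future-prediction loss.
optimizer.zero_grad()
action, predicted_next = model(obs, predict_future=True)
loss = nn.functional.mse_loss(action, expert_action) + \
       nn.functional.mse_loss(predicted_next, next_obs)
loss.backward()
optimizer.step()

# Inference: the decoder is skipped entirely, giving a cheaper control loop.
with torch.no_grad():
    action = model(obs[:1])
```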

  • View profile for Vedant Nair

    Co-Founder @ Miru (YC S24) | RobotOps Software Infra

    14,551 followers

    After building general base models, real-world RL is the endgame. Robots need to be able to quickly adapt to new situations and fix their mistakes on the fly. A base model that can pick up a screwdriver is great, but it's only valuable in production if it can consistently align with a tiny screw at submillimeter precision. Today's models can't do that. Physical Intelligence introduced RL Tokens (RLT), a method that lets a small RL policy sit on top of their base VLA model and refine just the precise, critical phase of a task. No need to fine-tune; instead, the robot can learn from hours (or even minutes) of real-world practice directly on board. The results showed that the RL policy actually executed faster than human teleoperation on half the trials. Across all four tasks they tested, RLT sped up the hardest phases by up to 3x. This is exciting because it provides a pathway for foundation models to achieve production-grade reliability. A robot that can learn in real time can adapt to dynamic conditions in the real world. Interested to see who's first to ship something like this in a real production line.
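
    One way to picture a small RL policy sitting on top of a frozen base model and refining only the precision-critical phase is a residual correction added to the base action during that phase. The sketch below is a generic residual-policy illustration under that assumption, not Physical Intelligence's actual RLT method; all names and dimensions are hypothetical.

```python
import numpy as np

# Generic residual-policy sketch: a frozen base policy proposes actions, and a
# small learned correction is added only during the precision-critical phase.
# Purely illustrative; not the RLT implementation described in the post.

def base_policy(observation: np.ndarray) -> np.ndarray:
    """Stand-in for a large pretrained base model (frozen)."""
    return np.tanh(observation[:7])  # placeholder 7-DoF action

class ResidualPolicy:
    """Tiny linear policy, trained with RL, that outputs small corrections."""
    def __init__(self, obs_dim: int, action_dim: int = 7, scale: float = 0.05):
        self.weights = np.zeros((action_dim, obs_dim))
        self.scale = scale  # keeps corrections small relative to the base action

    def __call__(self, observation: np.ndarray) -> np.ndarray:
        return self.scale * np.tanh(self.weights @ observation)

def act(observation: np.ndarray, in_precision_phase: bool, residual: ResidualPolicy):
    action = base_policy(observation)
    if in_precision_phase:  # e.g. final alignment with the screw head
        action = action + residual(observation)
    return np.clip(action, -1.0, 1.0)

residual = ResidualPolicy(obs_dim=16)
obs = np.random.randn(16)
print(act(obs, in_precision_phase=True, residual=residual))
```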

  • View profile for Franz Gilbert

    Global Growth Leader for Human Capital Strategy and Innovation responsible for our Ecosystems and Alliances, Emerging Businesses, and Inorganic activity.

    17,878 followers

    Robots are leaving the lab. In our Tech Trends 2026 report, I was privileged to be one of the co-authors of the Physical AI chapter (with Jim Rowan and Tim Gaus), looking at how vision‑language‑action models, onboard NPUs, and modern robotics are pushing autonomous systems from pilots into production.

    What’s changing:
    • Physical AI turns robots into adaptive machines that perceive, reason, and act in real time—far beyond preprogrammed automation.
    • Onboard compute allows split‑second decisions without cloud dependency, which is critical for safety‑critical environments.
    • Economics are improving fast: component commoditization and advanced manufacturing are bringing reliability and scale.

    Where it’s real:
    • Amazon’s millionth robot—coordinated by DeepFleet AI—improved fleet travel efficiency ~10%.
    • BMW plants have vehicles driving themselves through testing and finishing routes.
    • Waymo has passed 10 million paid robotaxi rides; Aurora is hauling freight driverlessly between Dallas and Houston.
    • Cities are using AI‑powered drones for bridge inspections; Detroit launched an accessible autonomous shuttle service.

    Humanoids on the horizon: UBS estimates ~2 million humanoids in workplaces by 2035 and a US$30–50B TAM—driven first by logistics and health care use cases, then consumer scenarios as cost curves fall.

    What still needs work: Sim‑to‑real training gaps, comprehensive safety governance, cybersecurity for connected fleets, and orchestration across heterogeneous robots. The next 18–24 months will be defined by organizations that tackle these fundamentals. https://lnkd.in/esiAtMN6

    Firms like Agility Robotics, Apptronik, Figure, Sanctuary AI, 1X, Cobot, Tesla Optimus, Boston Dynamics, Diligent Robotics, and NVIDIA are paving the way to the future. #PhysicalAI #Robotics #Humanoids #Logistics #Manufacturing #Healthcare #SmartCities

  • View profile for Daniel Seo

    Researcher @ UT Robotics | MechE @ UT Austin

    1,650 followers

    Teaching robots to build simulations of themselves allows the robot to detect abnormalities and recover from damage. We naturally visualize and simulate our own movements internally, enhancing mobility, adaptability, and awareness of our environment. Robots have historically been unable to replicate this visualization, relying instead on predefined CAD models and kinematic equations.

    Free Form Kinematic Self-Model (FFKSM) allows the 𝗿𝗼𝗯𝗼𝘁 𝘁𝗼 𝘀𝗶𝗺𝘂𝗹𝗮𝘁𝗲 𝗶𝘁𝘀𝗲𝗹𝗳:
    1) Robots autonomously learn their morphology, kinematics, and motor control directly from 𝗯𝗿𝗶𝗲𝗳 𝗿𝗮𝘄 𝘃𝗶𝗱𝗲𝗼 𝗱𝗮𝘁𝗮 -> like humans observing their reflection in a mirror
    2) Robots perform precise 3D motion planning tasks 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗽𝗿𝗲𝗱𝗲𝗳𝗶𝗻𝗲𝗱 𝗸𝗶𝗻𝗲𝗺𝗮𝘁𝗶𝗰 𝗲𝗾𝘂𝗮𝘁𝗶𝗼𝗻𝘀 -> simplifies complex manipulation and navigation tasks
    3) Robots 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀𝗹𝘆 𝗱𝗲𝘁𝗲𝗰𝘁 morphological changes or damage and rapidly recover by retraining with new visual feedback -> significantly enhances resilience

    The model is also 𝗵𝗶𝗴𝗵𝗹𝘆 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁, requiring just 333 kB of memory, which makes it broadly applicable to resource-constrained robotic systems. 𝗧𝗵𝗶𝘀 𝗶𝘀 𝗮𝗹𝘀𝗼 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗺𝗼𝗱𝗲𝗹 𝘁𝗼 𝗮𝗰𝗵𝗶𝗲𝘃𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗺𝗽𝗿𝗲𝗵𝗲𝗻𝘀𝗶𝘃𝗲 𝘀𝗲𝗹𝗳-𝘀𝗶𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝘂𝘀𝗶𝗻𝗴 𝗼𝗻𝗹𝘆 𝟮𝗗 𝗥𝗚𝗕 𝗶𝗺𝗮𝗴𝗲𝘀, 𝗲𝗹𝗶𝗺𝗶𝗻𝗮𝘁𝗶𝗻𝗴 𝗰𝗼𝗺𝗽𝗹𝗲𝘅 𝗱𝗲𝗽𝘁𝗵-𝗰𝗮𝗺𝗲𝗿𝗮 𝘀𝗲𝘁𝘂𝗽𝘀 𝗮𝗻𝗱 𝗶𝗻𝘁𝗿𝗶𝗰𝗮𝘁𝗲 𝗰𝗮𝗹𝗶𝗯𝗿𝗮𝘁𝗶𝗼𝗻𝘀.

    I believe the next phase of robotic automation inevitably comes with robot self-awareness. Self-reflection is a major part of how we as humans improve upon ourselves; as 'general purpose robots' emerge, so will their self-reflection. This enables robots to continuously monitor and update their internal models, thereby refining their performance in real time. This is a huge step towards robot self-awareness!

    Congratulations to Yuhang Hu, Jiong Lin, and Hod Lipson on this impressive advancement! Paper link: https://lnkd.in/gJ-bkU8N

    I post the latest and most interesting developments in robotics—𝗳𝗼𝗹𝗹𝗼𝘄 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱!
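
    A minimal way to picture a learned kinematic self-model: a small network maps joint angles to predicted keypoint positions, trained on data the robot collects while watching itself move; planning then queries the learned model instead of hand-written kinematic equations, and a sustained jump in prediction error can flag damage. The sketch below is an illustrative toy under those assumptions, not the FFKSM architecture from the paper.

```python
import torch
import torch.nn as nn

# Illustrative self-model sketch (not the FFKSM architecture): a small network
# learns to predict observed keypoint positions from joint angles, standing in
# for hand-written forward kinematics. Rising prediction error can flag damage.

class SelfModel(nn.Module):
    def __init__(self, n_joints=6, n_keypoints=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints, 64), nn.ReLU(),
            nn.Linear(64, n_keypoints * 3),  # 3D position per keypoint
        )

    def forward(self, joint_angles):
        return self.net(joint_angles)

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training data would come from the robot watching itself move (joint commands
# paired with keypoints extracted from video); random tensors stand in here.
joint_angles = torch.randn(256, 6)
observed_keypoints = torch.randn(256, 12)

for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(joint_angles), observed_keypoints)
    loss.backward()
    optimizer.step()

# At run time, a sustained spike in this error suggests a morphology change or
# damage, triggering retraining on fresh video of the robot's own motion.
prediction_error = nn.functional.mse_loss(model(joint_angles), observed_keypoints)
print(float(prediction_error))
```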

  • View profile for Thomas Wolf

    Co-founder at 🤗 Hugging Face – Angel

    183,169 followers

    Impressive work by the new Amazon Frontier AI & Robotics team (from the Covariant acquisition) and collaborators! This research enables mapping long sequences of human motion (>30 sec) onto robots with various shapes, as well as robots interacting with objects (box, table, etc.) of different sizes, in particular sizes different from those in the training data. This enables easier in-simulation data augmentation and zero-shot transfer. This is impressive and a huge potential step toward reducing the need for human teleoperation data (which is hard to gather for humanoids). The dataset trajectories are available on Hugging Face at: https://lnkd.in/eygXVVHx The full code framework is coming soon. Check out the project page, which has some pretty nice three.js interactive demos: https://lnkd.in/e2S-6K2T And kudos to the authors for open-sourcing the data, releasing the paper and (hopefully soon) the code. Open-science projects like this are game changers in robotics.
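
    A very rough intuition for retargeting human motion onto robots of different sizes: express each keypoint trajectory relative to the body, rescale it by the robot-to-human limb-length ratio, and track the rescaled targets. The sketch below is a deliberately simplified, hypothetical version of that idea; the actual method in this work handles object interaction and far more varied morphologies.

```python
import numpy as np

# Simplified retargeting sketch: scale human keypoint trajectories (expressed
# relative to the pelvis) by the ratio of robot to human limb lengths, then
# feed the scaled targets to the robot's tracking controller. Purely
# illustrative; limb lengths and keypoint counts here are assumed values.

def retarget(human_keypoints: np.ndarray,
             human_limb_length: float,
             robot_limb_length: float,
             pelvis_index: int = 0) -> np.ndarray:
    """human_keypoints: (T, K, 3) trajectory of K keypoints over T frames."""
    pelvis = human_keypoints[:, pelvis_index:pelvis_index + 1, :]
    relative = human_keypoints - pelvis            # body-centric coordinates
    scale = robot_limb_length / human_limb_length  # per-robot size ratio
    return pelvis + scale * relative               # rescaled motion targets

human_motion = np.random.randn(300, 17, 3)         # ~10 s of motion at 30 fps
robot_targets = retarget(human_motion, human_limb_length=0.45, robot_limb_length=0.30)
print(robot_targets.shape)
```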

  • View profile for Nethra Sambamoorthi, M.A, M.Sc., PhD

    Institute of Analytics. NW Univ- IL (Data Sci) and UNT Health(PharmacoTherapy)-Develop AI/ML Automation and SaaS Products - LLMs, Vision, NLP Agents, and Cloud for Health, Education, and Financial Services, ... !

    13,593 followers

    Robots are now capable of changing their own batteries, marking a significant step forward in autonomous systems. This advancement reduces downtime, minimizes human intervention, and allows robots to operate for longer periods without disruption. In industrial environments like manufacturing floors, warehouses, and logistics hubs, self-managed power systems can directly improve efficiency, reliability, and scalability. As robotics continues to evolve, innovations like autonomous battery replacement highlight the shift from simple automation to intelligent, self-sustaining operations. These developments are setting the foundation for more resilient and continuously running systems across industries.

  • View profile for Lalit Wadhwa

    EVP & CTO, Encora | Architecting AI-Driven Growth for Global Enterprises | Enterprise AI, Cloud Ecosystems, Data Modernization | Transforming Tech Roadmaps into Measurable Business Value

    6,955 followers

    While industrial robots have revolutionized manufacturing, from auto plants to semiconductor fabs, we're approaching a far more profound transformation: the emergence of true general-purpose robots. The distinction is crucial. Today's industrial robots excel at specific, repetitive tasks in controlled environments. They can weld car frames or place semiconductor chips with incredible precision. But they're fundamentally limited. Each robot does one thing extremely well. General-purpose robots represent something entirely different: machines that can adapt to various tasks and environments, much like humans do. The implications are staggering: Instead of programming robots for specific tasks, we'll “teach” them general principles of interaction with the physical world. Rather than being confined to factory floors, these robots will operate in dynamic, unpredictable environments. The focus is already shifting from repetitive task execution to adaptive problem-solving. Future robots may be able to drive your car, chop vegetables in your kitchen, or teach your child the principles of trigonometry, all with equal ease. The transition from specific to general-purpose robotics mirrors the evolution we've seen in AI: from narrow, task-specific systems to more flexible, adaptive ones. But the physical nature of robotics adds layers of complexity that make this challenge fascinating. #Robotics #Innovation #FutureOfTech #ThoughtLeadershipFromEncora
