Humanoid Robot Development


  • View profile for Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona

    778,961 followers

    In just ONE year, humanoid robots at the CCTV Spring Festival Gala went from “cool machines” to something that felt… human. What do you think? 2025 → 2026. The difference? Not incremental. Exponential.

    What changed in 12 months?

    📊 The Data Behind the Leap:
    • AI model capability has been doubling at unprecedented rates (training compute for frontier models has grown >10x in short cycles).
    • Latency in edge AI systems is now measured in single-digit milliseconds — enabling real-time motion response.
    • Actuator precision and torque density in humanoid robotics improved significantly, enabling smoother micro-movements.
    • Multimodal AI (vision + audio + spatial awareness) accuracy has crossed 90%+ benchmarks in controlled environments.
    • Reinforcement learning in simulation can now compress “years” of physical training into weeks.

    Result? 2025: Pre-programmed choreography. 2026: Real-time adaptive interaction.

    We are witnessing the shift from:
    🔹 Robots as automation to
    🔹 Robots as embodied AI platforms

    And here’s the bigger implication: when physical AI converges with high-performance edge compute, robotics stops being hardware-centric… and becomes software-defined. The real revolution isn’t the robot you saw on stage. It’s the AI stack running inside it.

    If this is the progress visible in public within 12 months, imagine what’s happening inside R&D labs right now. Humanoids are no longer a science experiment. They are becoming infrastructure.

    2026 is the year robotics started to feel personal.

    #AI #Robotics #PhysicalAI #Humanoids #DeepLearning #EdgeAI #Innovation

  • View profile for Shivam Gupta

    Helping founders win with AI, social media marketing, and personal branding | Favikon Top 30 Creator in India | Trusted by 800+ brands

    62,668 followers

    A truth the robotics world had to face this week: even the most advanced machines can act unpredictably.

    At the 2025 World Humanoid Robot Sports Games in Beijing, a robot reportedly “sneak attacked” a referee during a freestyle match. What was meant to showcase progress in humanoid design instead sparked global debate on control, safety, and responsibility.

    Here are lessons leaders in robotics (and beyond) can take away:

    Design for Safety First. Innovation is exciting, but reliability and safety must always lead. A single incident can overshadow years of progress.

    Build Transparent Systems. AI decisions should be explainable. Without clarity, trust collapses the moment something goes wrong.

    Test in Real Environments. Lab success ≠ field readiness. Simulations can’t replace stress tests under real-world unpredictability.

    Create Clear Human Overrides. Humans must remain in control. Build instant shutdown protocols that are easy and universal (see the sketch after this post).

    Anticipate the “What Ifs”. Scenario planning is non-negotiable. Ask: “What happens if the system fails?” → then design for that failure.

    Balance Progress with Responsibility. Pushing boundaries matters, but so does earning public trust. Tech leaders must communicate risks as clearly as benefits.

    Rule of thumb the industry now faces: if it can surprise you in the lab → it will embarrass you on the stage.

    The takeaway? Innovation is not just about how far machines can go, but how safely humans can stand beside them. If you’re building in robotics, AI, or any frontier tech, ask yourself: are you designing for performance alone, or for trust as well?

    ♻️ Repost to spark more conversations on responsible innovation.
    🔔 Follow for insights on AI, robotics, and the future of technology.
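
    One way to make the "clear human overrides" point concrete: a minimal sketch, assuming a heartbeat-style software watchdog that gates every actuator command. Class names, timeouts, and torque values are illustrative assumptions, and real systems put this logic in certified safety hardware rather than application code.

    ```python
    # Hypothetical watchdog sketch: motor output is allowed only while a human
    # "heartbeat" stays fresh and no latching emergency stop has been tripped.
    import time
    import threading

    class SafetyWatchdog:
        def __init__(self, timeout_s: float = 0.2):
            self.timeout_s = timeout_s
            self._last_beat = time.monotonic()
            self._estop = threading.Event()

        def heartbeat(self) -> None:
            """Called by the operator station; proves a human is in the loop."""
            self._last_beat = time.monotonic()

        def emergency_stop(self) -> None:
            """Latching stop: once tripped, it stays tripped until reset."""
            self._estop.set()

        def commands_allowed(self) -> bool:
            stale = time.monotonic() - self._last_beat > self.timeout_s
            return not (stale or self._estop.is_set())

    def gated_torque(watchdog: SafetyWatchdog, desired_torque: float) -> float:
        # Every actuator command passes through the gate; the default is zero.
        return desired_torque if watchdog.commands_allowed() else 0.0

    wd = SafetyWatchdog(timeout_s=0.2)
    wd.heartbeat()
    print(gated_torque(wd, 5.0))   # 5.0 -> heartbeat fresh, command passes
    time.sleep(0.3)                # operator link goes silent...
    print(gated_torque(wd, 5.0))   # 0.0 -> stale heartbeat, output cut
    ```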

  • View profile for Andreas Sjostrom

    LinkedIn Top Voice | AI Agents | Robotics | Vice President at Capgemini’s Applied Innovation Exchange | Author | Speaker | San Francisco | Palo Alto

    14,546 followers

    Yesterday, we explored Synthetic Interoception and how robots might gain self-awareness. Today, we shift focus to physical intelligence: how robots can achieve the touch and finesse of human hands.

    Rigid machines are precise but lack delicacy. Humans, on the other hand, easily manipulate fragile objects, thanks to our bodies' softness and sensitivity. Soft-body Tactile Dexterity Systems integrate soft, flexible materials with advanced tactile sensing, granting robots the ability to:

    ⭐ Adapt to Object Shapes: Conform to and securely grasp items of diverse forms.
    ⭐ Handle Fragile Items: Apply appropriate force to prevent damage (see the sketch after this post).
    ⭐ Perform Complex Manipulations: Execute tasks requiring nuanced movements and adjustments.

    Robots can achieve a new level of dexterity by emulating the compliance and sensory feedback of human skin and muscles.

    🤖 Caregiver: A soft-handed robot supports elderly individuals and handles personal items with gentle precision.
    🤖 Harvester: A robot picks ripe tomatoes without bruising them in a greenhouse, using tactile sensing to gauge ripeness.
    🤖 Surgical Assistant: In the OR, a robot holds tissues delicately with soft instruments, improving access and reducing trauma.

    These are some recent relevant research papers on the topic:

    📚 Soft Robotic Hand with Tactile Palm-Finger Coordination (Nature Communications, 2025): https://lnkd.in/g_XRnGGa
    📚 Bi-Touch: Bimanual Tactile Manipulation (arXiv, 2023): https://lnkd.in/gbJSpSDu
    📚 GelSight EndoFlex Hand (arXiv, 2023): https://lnkd.in/g-JTUd2b

    These are some examples of translating research into real-world applications:

    🚀 Figure AI: Their Helix system enables humanoid robots to perform complex tasks using natural language commands and real-time visual processing. https://lnkd.in/gj6_N3MN
    🚀 Shadow Robot Company: Developers of the Shadow Dexterous Hand, a robotic hand that mimics the human hand's size and movement, featuring advanced tactile sensing for precise manipulation. https://lnkd.in/gbpmdMG4
    🚀 Toyota Research Institute: Introduced 'Punyo,' a soft robot with air-filled 'bubbles' providing compliance and tactile sensing, combining traditional robotic precision with soft robotics' adaptability. https://lnkd.in/gyedaK65

    The journey toward widespread adoption is progressing:

    1–3 years: Implementation in controlled environments like manufacturing and assembly lines, where repetitive tasks are structured.
    4–6 years: Expansion into dynamic healthcare and domestic assistance settings requiring advanced adaptability and safety measures.

    Robots are poised to perform tasks with unprecedented dexterity and sensitivity by integrating soft materials and tactile sensing, bringing us closer to seamless human-robot collaboration.

    Next up: Cognitive World Modeling for Autonomous Agents.
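
    A minimal sketch of the "apply appropriate force" idea, assuming a binary slip signal from a tactile array and a simple tighten-or-relax rule. The gains and limits are illustrative and not taken from any of the cited systems.

    ```python
    # Hypothetical slip-aware grip controller: tighten while the tactile array
    # reports slip, relax gently toward a minimal holding force when stable.
    import numpy as np

    def grip_controller(force_cmd, slip_detected, f_min=0.5, f_max=10.0, step=0.3):
        if slip_detected:
            force_cmd += step          # slip -> increase grip force
        else:
            force_cmd -= 0.1 * step    # stable -> slowly reduce squeeze
        return float(np.clip(force_cmd, f_min, f_max))

    # Simulated grasp: the object slips for the first four control ticks.
    force = 1.0
    for t in range(10):
        force = grip_controller(force, slip_detected=(t < 4))
        print(f"tick {t}: grip force = {force:.2f} N")
    ```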

  • View profile for Tom Emrich 🏳️‍🌈

    Building the platform for physical AI at Springcraft | Hiring founding engineers | 17+ years in spatial computing | Ex-Meta, Niantic

    72,945 followers

    Lately, my TikTok feed has been filled with videos of food delivery robots in distress and humans swooping in to save them. This role switch is the complete opposite of the robot story we are used to seeing in movies.

    In watching scenes of robots tipped over on sidewalks or taking the wrong turn and heading into traffic, what strikes me the most is not the failure of the tech, but the reaction of the people nearby. Almost every clip shows strangers rushing in to help. They lift the robot back onto its wheels, guide it across the street, or stop cars to let it pass.

    It got me thinking about how quickly we can form attachments to machines. Is it because they have a lifelike design? Delivery bots with eyes, like those from Serve Robotics, or legs, like those from RIVR, can feel more like pets than vehicles or appliances. And their clear intent on completing a task gives the impression that the machine is somehow intelligent, even sentient. Add a touch of personality, like cute noises that react to your voice or actions, and our instinct to help is easily triggered.

    These videos are not just about saving someone's order of burritos. They are about the social contracts we’re beginning to build with machines in public spaces. As robots become part of our daily lives, we are finding out that trust and care flow both ways. We expect them to work safely and reliably, and when they stumble, it turns out we’re surprisingly willing to lend a hand.

    #robotics #physicalAI #empathy #spatialcomputing

  • View profile for Sid Gore

    AI & Robotics Systems Architect | Staff Engineer & Project Manager, Lockheed Martin | Leading complex system integration & test | Writing on robotics, simulation, and AI fluency

    3,835 followers

    A humanoid robot costs $90K to break once. AI lets you break thousands... and learn from every fall.

    My background is mechanical engineering, robotics, and integration & test. But this field is moving so fast with AI that reading articles wasn't cutting it anymore. I felt out of the loop, so I recently upgraded my personal setup to support AI training workloads and ran my first experiment: teaching a bipedal (two-legged) humanoid robot to navigate a custom parkour course using reinforcement learning in NVIDIA Isaac Lab 5.1.

    But before I share what I learned, let me explain what's actually happening under the hood. A GPU-accelerated AI agent runs thousands of virtual robots in parallel. Each one learns from its own falls and successes simultaneously. The AI develops a "control policy," which is the brain that tells a robot how to move through the physical world.

    Why does this matter? Because what once required million-dollar labs and months of physical testing can now run on a single AI-capable GPU in hours. Robotics R&D is becoming software-first.

    Here's what that looked like for this experiment: 76 minutes of CUDA-accelerated training time. 393 million training steps. 4,096 robots learning in parallel on my RTX 5080. (A toy sketch of the parallel-training idea follows this post.)

    So what did I learn so far? Three things stood out to me:

    》The setup before you can hit "Run" is a challenge. It took me seven hours to troubleshoot versioning, packages, and dependencies before I could run anything. I forced myself to do it manually because I wanted to understand what's under the hood. YouTube tutorials hit their limit quickly, but thankfully the NVIDIA developer forums saved me.

    》The cost case is undeniable. A Unitree H1 costs around $90K. I *virtually* crashed thousands of them. My damage bill? $0. Simulation lets you fail forward at scale. This gets you to a solid starting point for physical testing, but...

    》The Sim-to-Real gap is real. This policy works well in simulation, but I couldn't get a feel for stress points, sensor behavior, or true stability. Failure is not predictable and happens at the edges. The next step would be to transfer this policy to a physical robot, gather real-world data, and continuously align the simulation to close that gap.

    The key thing here is: testing real hardware is expensive. Simulation in software is cheap. How can you leverage both, intelligently? The benefit isn't limited to cost savings. This workflow also compresses development cycles and allows you to field systems faster.

    Do you think virtual simulation is a game-changer that is here to stay, or a fad? How would you build confidence in a robotic control policy that is trained in a virtual world?

    #robotics #ai #nvidia #omniverse #isaaclab

    Citations:
    NVIDIA Isaac Lab -> https://lnkd.in/ekVMDnDc
    RSL-RL -> https://lnkd.in/eJye3XTW
    Unitree H1 -> unitree.com/h1/

    Note: this is an educational personal project. Opinions are my own, no affiliation or endorsement.
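
    To make "thousands of robots learning in parallel" tangible without the Isaac Lab stack: a toy sketch, assuming a vectorized cross-entropy method that trains a linear balance policy on 4,096 simplified inverted pendulums at once. Plain numpy on CPU stands in for GPU-parallel simulation, and the dynamics, gains, and ranges are invented for illustration.

    ```python
    # Toy parallel training: every row of `gains` is one "robot" trying its own
    # policy; survivors of each generation set the sampling distribution for
    # the next one. Same pattern as RL-at-scale, vastly simplified physics.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt, g = 4096, 200, 0.02, 9.81   # robots, steps/episode, timestep, gravity

    def rollout(gains):
        """Simulate N toy inverted pendulums at once; return steps survived."""
        theta = rng.uniform(-0.1, 0.1, size=N)   # initial lean angle (rad)
        omega = np.zeros(N)
        alive = np.zeros(N)
        up = np.ones(N, dtype=bool)
        for _ in range(T):
            u = -(gains[:, 0] * theta + gains[:, 1] * omega)   # PD-style policy
            omega += (g * np.sin(theta) + u) * dt
            theta += omega * dt
            up &= np.abs(theta) < 0.5            # counted as fallen past ~30 deg
            alive += up
        return alive

    mean, std = np.zeros(2), np.ones(2) * 5.0
    for it in range(15):
        gains = mean + std * rng.standard_normal((N, 2))   # one policy per robot
        score = rollout(gains)
        elite = gains[np.argsort(score)[-N // 50:]]        # keep the top 2%
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
        print(f"iteration {it}: mean survival {score.mean():.0f}/{T} steps")
    ```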

  • View profile for Jack Pearson

    Investing in robotics and physical AI

    12,080 followers

    The Ball-and-Socket Challenge 🤖

    Why do humanoid robots still move like... robots? One major reason: we haven't cracked the ball-and-socket joint. Human shoulders and hips are engineering marvels that provide 3-degree-of-freedom motion in incredibly compact packages. Replicating these would unlock human-like arm manipulation and true bipedal walking.

    The Challenge:
    - 3 independent actuators in minimal space
    - Handle massive loads without backlash
    - Precise coordination across all axes (see the sketch after this post)

    Current Approaches:
    🔧 Spherical Gears - Soccer ball with gear teeth controlled by 3 motors. Precise but complex manufacturing.
    🚀 NASA Ultrasonic - Piezoelectric waves drive the joint at kilohertz frequencies. Ultra-compact but requires sophisticated control.
    💨 Variable Stiffness - 3D-printed joints that switch from flexible to rigid via air pressure. Great for medical robots.
    💪 Artificial Muscles - Heated polymer fibers contract like real muscle. Bio-inspired but slow response times.

    The Reality: No clear winner yet. Each trades off precision vs simplicity, power vs size, speed vs bio-mimicry.

    The race to solve ball-and-socket joints could be THE breakthrough that makes humanoids truly human-like in their movement. When will we crack this engineering puzzle? 🤔
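
    To show what "precise coordination across all axes" means in control terms: a minimal sketch, assuming a serial three-actuator stack (a ZYX Euler decomposition) standing in for a true spherical joint. The math also exposes one reason compact ball-and-socket actuation is attractive: serial decompositions degenerate at gimbal lock.

    ```python
    # Decompose a target shoulder orientation into three coordinated actuator
    # angles (yaw-pitch-roll), then verify the stack reproduces the target.
    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def actuator_angles(R):
        """ZYX Euler extraction; degenerates near pitch = +/-90 deg (gimbal lock)."""
        yaw = np.arctan2(R[1, 0], R[0, 0])
        pitch = np.arcsin(-R[2, 0])
        roll = np.arctan2(R[2, 1], R[2, 2])
        return yaw, pitch, roll

    # Command a reach pose, recover the three axis commands, check the round trip.
    target = rot_z(0.4) @ rot_y(-0.9) @ rot_x(0.2)
    yaw, pitch, roll = actuator_angles(target)
    print(np.allclose(rot_z(yaw) @ rot_y(pitch) @ rot_x(roll), target))  # True
    ```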

  • View profile for Ronald van Loon

    CEO & Principal Analyst, Intelligent World | Global Top10 AI Influencer | Helping Leaders Navigate GenAI & Agentic AI Decisions

    106,756 followers

    Boston Dynamics unveiled its new Atlas humanoid in Las Vegas, and this is not a flashy demo; it is a strategic signal.

    Atlas is now fully electric, with no hydraulic systems. That alone changes reliability, maintenance, and scalability for enterprise use. More importantly, it runs on a four-hour battery that the robot can replace on its own. That unlocks something leaders care deeply about: continuous operation without human intervention.

    Atlas is designed for industrial tasks today, but its agility and form factor make household, logistics, and infrastructure use cases feel inevitable. When machines can move like humans and operate non-stop, workforce models start to change.

    Any system capable of this level of autonomy and physical capability can be adapted beyond commercial use. Governance, ethics, and policy will not be optional add-ons; they will be core strategy.

    For executives, the takeaway is simple. Humanoid robots are no longer a distant R&D bet. They are moving into an execution phase. The organizations that win will be the ones preparing now, thinking about integration, safety, skills, and regulation before these systems arrive at scale.

    What opportunities, or risks, stand out to you?

    #Artificialintelligence #AI #Robotics #Automation #Innovation #Technology

  • View profile for Ilir Aliu

    AI & Robotics | 150k+ | 22Astronauts

    106,371 followers

    Humans don’t look at the ground every step. They rely on balance, reflexes, and a sense of their own body. This walking test from Foundation explores whether a humanoid robot can do something similar.

    Their robot, Phantom, is tested without cameras. Instead of vision, it relies on a reinforcement learning controller using internal sensors: IMUs across the body and torque sensors in the feet. The team then runs it through a series of intentionally messy obstacle courses. Legos. Marbles. Mouse traps. Fly paper. Even banana peels.

    The robot is guided forward with a PlayStation controller, but the controller only sets direction. The hard part, staying upright on unpredictable terrain, is handled entirely by the learned balance policy.

    What makes this interesting is the focus on proprioception. In robotics, vision often gets the spotlight. But before a robot can reason about the world, it needs a stable sense of its own body. Phantom estimates its center of mass and gravity vector in real time using its internal sensors, allowing it to react to slipping or shifting surfaces without seeing them first. (A minimal sketch of this kind of estimate follows this post.)

    There’s also a hardware constraint here. Humans have more than twenty muscles in each leg to maintain balance. Phantom achieves comparable stabilization with just six motors per leg. That puts much more pressure on the control algorithm.

    The broader challenge behind experiments like this is the sim-to-real gap. Policies are trained in simulation through millions of reinforcement learning trials. The real test is whether those policies hold up when the world becomes messy, noisy, and unpredictable. By deliberately pushing the robot into failure cases, the team is mapping where today’s humanoid control systems still break and where they’re starting to hold.

    For humanoid robotics, that boundary is exactly where the next breakthroughs usually happen. Great to see what you accomplished, Sankaet, Patrick and the entire team!!!
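
    A minimal sketch of that kind of gravity-vector estimate, assuming a generic 6-axis IMU fused with a complementary filter. The sensor readings are synthetic and the sign conventions simplified; this is not Foundation's pipeline, just the textbook core of proprioceptive state estimation.

    ```python
    # Complementary filter: integrate gyro rates for fast response, lean on the
    # accelerometer's (noisy but drift-free) gravity direction for correction.
    import numpy as np

    rng = np.random.default_rng(0)

    def skew(w):
        return np.array([[0, -w[2], w[1]],
                         [w[2], 0, -w[0]],
                         [-w[1], w[0], 0]])

    def complementary_update(g_est, gyro, accel, dt, alpha=0.02):
        g_pred = g_est - dt * skew(gyro) @ g_est     # rotate estimate by body rates
        a_dir = accel / np.linalg.norm(accel)        # accelerometer direction
        g_new = (1 - alpha) * g_pred + alpha * a_dir
        return g_new / np.linalg.norm(g_new)

    dt, w = 0.001, np.array([0.0, 0.3, 0.0])   # 1 kHz loop; body pitching at 0.3 rad/s
    g_true = np.array([0.0, 0.0, -1.0])        # true gravity in the body frame
    g_est = np.array([0.3, 0.0, -0.95])        # deliberately wrong initial guess
    for _ in range(2000):
        g_true = g_true - dt * skew(w) @ g_true      # world vector seen from the body
        g_true /= np.linalg.norm(g_true)
        gyro = w + rng.normal(0, 0.01, 3)            # noisy rate gyro
        accel = g_true + rng.normal(0, 0.05, 3)      # noisy accelerometer
        g_est = complementary_update(g_est, gyro, accel, dt)
    err = np.degrees(np.arccos(np.clip(g_est @ g_true, -1.0, 1.0)))
    print(f"estimate error: {err:.2f} degrees")      # small residual, noise-limited
    ```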

  • View profile for Hisham Dakkak

    Head of AI-Driven Commercial Growth at Likecard | Founder: Toolsworld.ai, Grow50X.ai, Mission50X.ai | AI Entrepreneur & Growth Strategist | Scaling B2B Revenue Through Automation | Creators HQ Premium Member

    16,660 followers

    Just an ordinary day at a robotics company. Progress looks like chaos.

    We watch the viral backflips and perfect precision. We rarely see the thousands of slips, collisions, and face-plants that happen on the lab floor to get there. This isn't clumsy engineering; it's the "Sim-to-Real" gap in action. The difference between code and concrete is the most valuable data a robotics company possesses:

    ✔️ Reinforcement Learning (The Grind): In a simulation, a robot can train for 1,000 years in a single day. But real-world physics is unforgiving. Every one of these falls is a high-fidelity data point that refines the neural network's balance policy.

    ✔️ Resilience over Perfection: The goal isn't to build a robot that never falls. It's to build a system that can recover from failure in milliseconds, autonomously, without human intervention.

    ✔️ Domain Randomization: You see chaos; the algorithm sees variety. Kicking the robot, slippery floors, and random obstacles are features, not bugs. They force the model to generalize beyond its training set (see the sketch after this post).
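
    A minimal sketch of domain randomization, assuming each training episode samples its own physics before reset. The parameter names and ranges are illustrative, not taken from any particular lab's recipe.

    ```python
    # Every episode gets different friction, masses, shoves, and latency, so the
    # learned policy cannot overfit one simulator configuration.
    import numpy as np

    rng = np.random.default_rng(42)

    def sample_episode_params():
        return {
            "friction":   rng.uniform(0.4, 1.2),    # slippery to grippy floors
            "mass_scale": rng.uniform(0.8, 1.2),    # +/-20% link-mass error
            "push_force": rng.uniform(0.0, 50.0),   # random shove, Newtons
            "latency_s":  rng.uniform(0.0, 0.02),   # actuation delay
        }

    for episode in range(3):
        params = sample_episode_params()
        # env.reset(**params)  # hypothetical simulator hook that applies them
        print(f"episode {episode}: {params}")
    ```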

  • View profile for Rakesh Mali

    Chief Industrialization Officer | Technology Leader | Robotics evangelist | Automotive | IMD/IML/IME | Plant Operations | Printed Electronics | Industry 4.0 / 5.0 | Six Sigma Black Belt | Greenfield development

    3,904 followers

    Humanoids just got a human sense of touch — and it’s a game-changer.

    We’ve all seen robots walk, grasp, and even dance. But third-generation humanoids like Optimus, Figure, and XPENG are now moving from “motion” to “feeling.”

    The breakthrough? Ultra-thin tactile electronic skin — flexible fabrics thinner than 0.2 mm that can be tailored like a custom suit over the robot’s body. These aren’t simple pressure sensors. They detect gram-level touches, sense textures, feel objects slipping before they drop (see the sketch after this post), and even map warmth and pressure in real time.

    Watch the demo 👇 A robot gets patted on the shoulder, hugged, and responds with live pressure mapping. The same fabric tech works as a smart mat that instantly visualizes every touch on a laptop screen.

    Why it matters:
    • Industrial dexterity jumps to a new level (no more dropped boxes)
    • Robots become safe enough for homes, hospitals, and eldercare
    • High-density sensor arrays (dozens per cm²) are now the new standard

    Market projection: the global flexible sensor industry is headed toward ~$4 billion by 2030.

    This is the final piece that turns robots from tools into true collaborative partners. The future isn’t just smarter robots — it’s robots that feel.
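
    A minimal sketch of the slip-before-drop idea, assuming a 16x16 tactile pressure array in which incipient slip shows up as rapid frame-to-frame change in the contact image. The array size, units, and threshold logic are illustrative.

    ```python
    # Compare consecutive pressure frames: a stable grasp changes little between
    # frames, while a sliding contact patch produces a large per-taxel change.
    import numpy as np

    rng = np.random.default_rng(7)

    def slip_score(frame_prev, frame_now):
        """Mean absolute per-taxel change between consecutive pressure frames."""
        return float(np.abs(frame_now - frame_prev).mean())

    stable = rng.uniform(0.9, 1.1, size=(16, 16))    # steady grasp, 16x16 taxels
    noisy = stable + rng.normal(0, 0.01, stable.shape)
    sliding = np.roll(stable, shift=2, axis=1)       # contact patch displaced

    print(slip_score(stable, noisy))    # low score: sensor noise only
    print(slip_score(stable, sliding))  # far higher: tighten grip before the drop
    ```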
