At Humanoid Robotics Technology, we love highlighting the latest frameworks coming out of leading research institutes that are pushing humanoid robotics forward. Here are a few that recently caught our eye.

HUSKY introduces physics-aware whole-body control for humanoid skateboarding. By modelling the humanoid–skateboard system as a hybrid dynamical process and linking board tilt directly to steering, it enables informed policy learning and coordinates pushing, balance, and turning through unified reinforcement learning to achieve stable motion. This work was developed by researchers across Shanghai Jiao Tong University, The University of Hong Kong, the University of Electronic Science and Technology of China, and ShanghaiTech University.

GentleHumanoid focuses on safe, compliant whole-body control for robots operating around people. Rather than purely maximising performance, it introduces adjustable force limits and upper-body compliance to preserve natural motion while reducing physical risk in real-world interaction. Developed by researchers at Stanford University, it highlights how human-centered control is becoming foundational for deployment.

Holosoma provides an infrastructure layer for scalable humanoid reinforcement learning. It delivers an end-to-end framework for training, deploying, and retargeting policies across different humanoid platforms, shifting the field away from isolated demos toward reusable learning pipelines. The framework was released by Amazon’s Frontier AI & Robotics (FAR) team.

RPL (Robust Perceptive Locomotion) enables humanoids to walk robustly across complex real-world terrain using onboard depth perception instead of privileged simulation data. By distilling terrain-aware expert policies into a single transformer-based controller, it maintains stability across stairs, slopes, sparse footholds, and partial visual input. Developed by researchers from Amazon FAR, Carnegie Mellon University Robotics Institute, Stanford University, and the University of California, Berkeley, it underscores how perception-driven locomotion is rapidly becoming essential for real-world humanoid deployment.

On a side note, it's hard to ignore how frequently Unitree Robotics' G1 appears across these and other projects, reinforcing its role as one of the most trusted and widely adopted platforms for real-world humanoid learning today. #humanoid #humanoids #humanoidrobot #humanoidrobots
Key Technologies Driving Open Source Humanoids
-
Whole-body control framework that reduces end-to-end latency of teleoperation to 50 ms. [📍it’s open source] Teleoperating a humanoid usually feels like driving through lag. This brings it close to real time. A new control framework called ExtremControl cuts latency to about 50 ms. Older systems sit around ~200 ms, which is enough delay to break balance and fine manipulation. Instead of translating the entire human body into robot joints, it directly maps hands and feet in space. That one change removes most of the processing overhead. They also predict motion velocity, not only position, so the robot starts moving toward where you’re going, not where you were. The result: movements feel continuous instead of delayed. And it works with any policy or teleop stack, because the control layer is independent. If you care about dexterity, data collection, or safe remote operation, latency is often the real bottleneck. Reducing it changes what tasks are even possible. 📍Code: https://lnkd.in/d2nvH-Zx Project: https://lnkd.in/dM9e2k6N —— if it matters in AI or Robotics, you'll read it here first: 22astronauts.com
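The velocity-prediction idea is easy to sketch. Here is a minimal illustration of latency compensation by forward extrapolation; the function name and numbers are mine for illustration, not taken from the ExtremControl codebase:

```python
import numpy as np

def extrapolate_target(position, velocity, latency_s):
    """Predict where the operator's hand will be after `latency_s` seconds,
    assuming roughly constant velocity over that short horizon."""
    return position + velocity * latency_s

# Operator's hand at x = 0.50 m moving at 0.2 m/s; compensate for 50 ms latency.
hand_pos = np.array([0.50, 0.10, 0.30])
hand_vel = np.array([0.20, 0.00, 0.00])
target = extrapolate_target(hand_pos, hand_vel, 0.050)
# target[0] == 0.51: the robot tracks where the hand is heading, not where it was.
```

Over a 50 ms horizon the constant-velocity assumption is usually good enough; the point is that the commanded target leads the measured one by exactly the pipeline delay.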
-
NVIDIA just called yesterday's announcement "the ChatGPT moment for robotics." Jensen Huang wasn't exaggerating. Here's what actually happened at CES 2026 that you need to understand: NVIDIA released open models that let robots reason, plan, and act in the physical world. Not in a lab. In the real world. We've seen robots that can identify objects or follow pre-programmed paths before. But the insane thing is that these machines can now understand context, make decisions, and adapt like humans do. Three things matter here for AI geeks: 1. NVIDIA Cosmos Reason 2 - an open vision language model that gives robots the ability to see, understand, and act. This is the cognitive layer that machines have always lacked. 2. Isaac GR00T N1.6 - purpose-built for humanoid robots with full-body control. This is the foundation model for physical AI. And it's open source (!!) 3. Jetson T4000 - a Blackwell-powered chip delivering 4x performance for $1,999 (a bargain, considering what it's capable of). The economics of robotics just shifted dramatically. But here's the part that actually changes everything: it's all open source on Hugging Face. NVIDIA just democratized robotics development for 2 million robotics developers and 13 million AI builders. The barrier to entry for building reasoning robots just collapsed. The real-world impact is already visible. Boston Dynamics, Caterpillar, and LG Electronics are launching AI-driven robots. NEURA unveiled a Porsche-designed Gen 3 humanoid. Surgical robots are getting autonomous capabilities (I’m waiting to see what Intuitive Surgical could do with this). This is the shift from expensive, single-task robots to reasoning generalist-specialist robots that can learn and adapt across contexts.
-
🚀The world’s first Open Foundation Model for generalist humanoid robots was just launched during NVIDIA’s GTC, and it’s nothing short of exciting! My take: this new model, designed for diverse manipulation tasks, will perform in open-ended environments where new, unseen data comes in on the fly! I’m hoping we’re surmounting the hurdles seen with autonomous vehicles as we fine-tune this foundation model into many sub-versions. Making it open source is a major strength, in my opinion. Researchers around the world will be thinking about ways to fine-tune it using innovative reinforcement learning techniques, given that Omniverse and Cosmos provide a space to explore synthetic data while removing the constraints of human-annotated data. Nonetheless, here are the quick facts about GR00T N1: 🔹Vision-Language-Action (VLA) Architecture: Combines a vision-language model for reasoning (System 2) with a diffusion transformer for real-time motor actions (System 1). 🔹Trained on Heterogeneous Data: Uses a structured data pyramid of human videos, synthetic simulations, and real-robot demonstrations. 🔹Cross-Embodiment Generalization: Supports multiple robot types, from simple arms to full humanoid robots. 🔹High-Frequency Control: Processes perception at 10Hz and generates motor actions at 120Hz on an NVIDIA L40 GPU. 🔹State-of-the-Art Learning: Outperforms imitation learning baselines in both simulation and real-world humanoid benchmarks. 🔹Open-Source Availability: Model weights, datasets, and simulation environments are accessible on GitHub & Hugging Face. Hope you’re as excited as I am about this new frontier, and what’s coming next! #genai #technology #artificialintelligence
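That 10 Hz / 120 Hz split can be pictured as a toy tick loop. This is purely illustrative scheduling logic, not GR00T's actual implementation: the slow System 2 refreshes the plan, and the fast System 1 emits a motor command on every tick in between.

```python
def dual_rate_loop(steps, slow_hz=10, fast_hz=120):
    """Simulate `steps` fast ticks (one tick = one 120 Hz control step).

    System 2 (vision-language reasoning) replans at slow_hz;
    System 1 (diffusion action head) emits an action every fast tick.
    """
    ratio = fast_hz // slow_hz      # fast ticks per slow replan (12 here)
    plans, actions = 0, 0
    for t in range(steps):
        if t % ratio == 0:
            plans += 1              # System 2: refresh the plan / latent goal
        actions += 1                # System 1: emit a motor command
    return plans, actions

plans, actions = dual_rate_loop(120)   # one simulated second
# plans == 10, actions == 120: 12 actions are generated per perception update.
```

The takeaway: the expensive reasoning model only needs to run a tenth as often as the action head, which is what makes real-time control feasible.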
-
Humanoid robots are evolving at a pace we never imagined - just look at Unitree Robotics’ latest model. What’s driving this rapid progress? Let’s break it down: 𝟭/ 𝗥𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 (𝗥𝗟) – Instead of pre-programming every movement, the robot learns through trial and error. Unitree’s USP 👉 High-quality motion capture data. They invest heavily in capturing real human movement, enabling their AI to refine actions based on biomechanics, maximizing efficiency and minimizing instability, just as humans learn through practice. 𝟮/ 𝗦𝗶𝗺𝟮𝗥𝗲𝗮𝗹 𝗧𝗿𝗮𝗻𝘀𝗳𝗲𝗿 – Training a robot directly in the real world is slow and risky. Instead, Unitree first trains the AI in simulated environments, allowing it to perfect movements virtually. Once optimized, these learned behaviors transfer to the physical robot, reducing real-world testing time and hardware wear. 𝟯/ 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗠𝗼𝘁𝗶𝗼𝗻 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 – Unlike traditional humanoids with stiff movements, H1 uses real-time adaptive control to balance, pivot, and adjust dynamically, resulting in fluid, human-like motion. That’s why it can dance with impressive synchronization and agility. 𝟰/ 𝗔𝗜-𝗱𝗿𝗶𝘃𝗲𝗻 𝗿𝗼𝗯𝗼𝘁𝗶𝗰 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 - we’re also seeing rapid advancement in vision-language-action (VLA) models, like Gemini 2.0, which now incorporates physical actions as an output modality. This means robots can not only process images, videos, and sound, but also understand and control their own physical movements using 𝗲𝗺𝗯𝗼𝗱𝗶𝗲𝗱 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 (ER). Imagine it as AI not just seeing and hearing but truly understanding its own body - a major step toward autonomous, 𝘴𝘦𝘭𝘧-𝘢𝘸𝘢𝘳𝘦 𝘳𝘰𝘣𝘰𝘵𝘪𝘤𝘴. 👉 We compare robots to ourselves as a benchmark – but for how long? With Unitree’s latest humanoid H1, we might witness the moment robots surpass that benchmark as early as this year – I can’t even repeat its stunts anymore. #AI #Robotics #ReinforcementLearning #Unitree #Humanoid
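Sim2Real transfer, as in point 2 above, typically leans on domain randomization: the simulator's physics are varied on every training episode so the learned policy cannot overfit to any single world and therefore tolerates the real one. A minimal sketch of the sampling step, with made-up parameter ranges (not Unitree's actual values):

```python
import random

def randomized_episode_params(rng):
    """Sample physics parameters for one simulated training episode.

    The ranges below are illustrative placeholders; real pipelines tune
    them carefully against measurements of the physical robot.
    """
    return {
        "ground_friction": rng.uniform(0.4, 1.2),    # ice-like to grippy
        "payload_kg":      rng.uniform(0.0, 3.0),    # unexpected carried mass
        "motor_strength":  rng.uniform(0.85, 1.15),  # scale on torque limits
        "push_force_n":    rng.uniform(0.0, 50.0),   # random external shoves
    }

rng = random.Random(0)  # seeded for reproducible training runs
params = [randomized_episode_params(rng) for _ in range(1000)]
# Each of the 1000 episodes now runs in a slightly different "world".
```

A policy that stays upright across all 1000 of these variations is far more likely to survive the one variation simulation never captures: reality.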
-
Unitree Robotics just open-sourced key reinforcement learning (RL) frameworks behind its G1 humanoid robot, the same bot that recently went viral for its Bruce Lee-style spinning kicks. Their unitree_rl_gym repository on GitHub is where the magic happens. It includes: 🔵 Mimicry Learning & RL Training – The foundation of G1’s martial arts movements, allowing it to learn complex, human-like actions. 🔵 Simulation Environments – Crucial for training robots in virtual spaces before deploying learned behaviors in the real world. 🔵 Support for Multiple Models – The same RL techniques apply to Unitree’s quadruped Go2 and bipedal H1, making this an adaptable framework for robotic motion. This approach mirrors what DeepSeek and Meta’s LLaMA did for AI—opening up proprietary advancements to fuel community-driven progress. Robotics has traditionally been a closed ecosystem, but Unitree is taking a different path. The question now: Will open-source RL accelerate humanoid robotics the same way it did AI? And what happens when we combine these advanced motion models with next-gen AI agents? Check out the repo in the comments. #AI #Robotics #OpenSource #ReinforcementLearning #Unitree #Humanoids #DeepSeekMoment
-
🚀 Humanoids are here, and they are only going to get better. Figure recently launched #Helix - a Vision-Language-Action (VLA) model driving its humanoid robot, shown sorting objects using computer vision and agentic AI. At first glance, it looks slow—taking over 2 minutes for a task a human can do in 20 seconds. Some are even questioning why we are "complicating" simple tasks. But this is a huge milestone in AI development. 👉 So what really is happening under the hood - Helix is using 2 key technologies to perform its tasks: 1️⃣ Computer Vision – The robot doesn’t "see" like we do. Instead, cameras capture images, and deep learning models process them to identify shapes, textures, and categories. Unlike pre-programmed robots on factory floors, this one is classifying objects it has never seen before and deciding what to do with them in real time. 2️⃣ Agentic AI – This is where things get exciting. Traditional AI models are passive—they analyze data and give outputs when prompted. But agentic AI acts based on goals. It takes in visual data, makes decisions, and plans a sequence of actions without needing human intervention at every step. If you are wondering why this is a significant milestone, well, this is the first step toward blending machines into our physical world. AI is great at processing data in the virtual domain, but bringing intelligence into real-world interactions is a whole different challenge. Jensen Huang calls this "Physical AI"—where machines don’t just compute but interact, adapt, and assist us in real-world tasks. Yes, this is just a prototype. But so were self-driving cars a decade ago. AI evolves fast. Soon, we’ll see humanoids becoming faster, smarter, and more useful—augmenting human work rather than replacing it. 🌟 The future isn’t just digital. It’s physical AI in action. I write about #artificialintelligence | #technology | #startups | #mentoring | #leadership | #financialindependence PS: All views are personal. Vignesh Kumar
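The perceive-decide-act cycle described in point 2️⃣ can be reduced to a toy loop. Everything here is a hypothetical stand-in (the function names and lambda "models" are mine, not Figure's API); the point is only the structure: sense, classify, decide, act, repeat until the goal is met.

```python
def agentic_sort_loop(objects, classify, choose_bin, max_steps=100):
    """Minimal sense-decide-act loop for a sorting task.

    `classify` stands in for the vision model; `choose_bin` stands in
    for the policy. The loop runs until the goal (empty table) is met.
    """
    actions = []
    for _ in range(max_steps):
        if not objects:                 # goal reached: nothing left to sort
            break
        obj = objects.pop(0)            # "perceive" the next object
        label = classify(obj)           # vision: identify what it is
        bin_id = choose_bin(label)      # policy: decide where it goes
        actions.append((obj, bin_id))   # "act" by recording the placement
    return actions

# Toy stand-ins for the learned models:
log = agentic_sort_loop(
    ["apple", "soda_can", "banana"],
    classify=lambda o: "food" if o in ("apple", "banana") else "recycling",
    choose_bin=lambda label: {"food": 0, "recycling": 1}[label],
)
# log == [("apple", 0), ("soda_can", 1), ("banana", 0)]
```

What makes the real system hard is that `classify` must generalize to objects it has never seen, and the decision step plans whole action sequences rather than picking a bin from a lookup table.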