Applications of Robotics

Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let's break it down:

1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human's hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen's keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placements. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

3. Finally, we apply MimicGen, a technique that multiplies the above data even further by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

To sum up: given 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is the way to trade compute for expensive human data via GPU-accelerated simulation.

A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

We are creating tools to enable everyone in the ecosystem to scale up with us:
- RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
- MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
- We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. The open-source libraries from Xiaolong Wang's group laid the foundation: https://lnkd.in/gUYye7yt
- Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

Finally, the GEAR lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
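For intuition, here is a toy sketch of that 1 -> N -> NxM expansion with failure filtering. The function names are hypothetical stand-ins, not the actual RoboCasa or MimicGen APIs:

```python
import random

# Toy sketch of the 1 -> N -> N x M data-multiplication pipeline described
# above. Function names are hypothetical stand-ins, not the actual
# RoboCasa / MimicGen APIs.
def vary_scene(demo, n):
    """RoboCasa-style step: the same trajectory in N randomized kitchens."""
    return [{"traj": demo["traj"], "scene": f"kitchen_{i}"} for i in range(n)]

def vary_motion(demo, m):
    """MimicGen-style step: M perturbed trajectories, keeping only successes."""
    variants = [{**demo, "traj": demo["traj"] + f"+noise{j}"} for j in range(m)]
    return [v for v in variants if random.random() > 0.3]  # drop failed rollouts

human_demo = {"traj": "place_cup", "scene": "gear_lab_kitchen"}
dataset = [v for s in vary_scene(human_demo, n=100) for v in vary_motion(s, m=10)]
print(f"1 teleop demo -> {len(dataset)} simulated demos")
```

The filtering step is what makes this more than data duplication: only rollouts that still succeed under perturbation survive, so the dataset grows without accumulating bad labels.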
-
If you had told farmers 10 years ago that robots powered by sunlight would replace chemicals, it would have sounded ridiculous. These solar-powered rovers use vision AI to identify and remove weeds at the plant level. No herbicides, no operators… not even lasers or anything crazy. Just a return to the original method of dealing with weeds, pulling them out, now done by machines.

The hard part is not building a robot that works in one field. It is building one that works in every field. Different crops, different soil, different weeds, different growth stages, different geographies. Farming has no standard environment.

So Aigen trained their system using NVIDIA Cosmos foundation models and Isaac Sim pipelines to simulate millions of agricultural scenarios before deploying anything in the real world. On the ground, each rover runs inference on NVIDIA Jetson Orin to distinguish crops from weeds while moving.

These systems need to become cheaper and more accessible than traditional methods. Once that happens, adoption becomes obvious, especially as demand for food keeps scaling globally. Farmers spend billions on herbicides. If robots can replace even part of that, you change both the cost structure and the environmental footprint at the same time.

Follow Endrit Restelica for more.
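The sim-side idea, generating endless scene variations before touching a real field, is essentially domain randomization. Aigen's actual Cosmos/Isaac Sim pipeline is not public; the sketch below is a library-free illustration of the sampling step only, with made-up parameter names:

```python
import random

# Illustrative domain-randomization sampler: each draw defines one synthetic
# field scene for training. Parameter names are invented for this sketch;
# real pipelines (e.g. in Isaac Sim) randomize far more, including physics.
def sample_field_scene(rng):
    return {
        "crop": rng.choice(["lettuce", "cotton", "sugar_beet"]),
        "weed_density_per_m2": rng.uniform(0.5, 20.0),
        "soil_color_rgb": [rng.uniform(0.2, 0.6) for _ in range(3)],
        "sun_angle_deg": rng.uniform(10, 90),
        "growth_stage": rng.randint(1, 5),
    }

rng = random.Random(0)  # seeded for reproducible scene sets
scenes = [sample_field_scene(rng) for _ in range(1_000)]
print(scenes[0])
```

A detector trained across thousands of such draws is far less likely to latch onto one field's soil color or lighting, which is exactly the "works in every field" problem the post describes.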
-
🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune, and deploy on their robots!

π₀.₅ is a Vision-Language-Action model that represents a significant evolution from π₀, addressing a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

Generalization must occur at multiple levels:
- Physical level: understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
- Semantic level: understanding task semantics, such as where clothes and shoes belong (the laundry hamper, not the bed) and which tools are appropriate for cleaning spills
- Environmental level: adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
- Multimodal web data: image captioning, visual question answering, object detection
- Verbal instructions: humans coaching robots through complex tasks step by step
- Subtask commands: high-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
- Cross-embodiment robot data: data from various robot platforms with different capabilities
- Multi-environment data: static robots deployed across many different homes
- Mobile manipulation data: ~400 hours of mobile robot demonstrations

This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.

Huge thanks to the Physical Intelligence team & contributors.
Model: https://lnkd.in/eAEr7Yk6
LeRobot: https://lnkd.in/ehzQ3Mqy
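For anyone who wants to try the port, loading a checkpoint follows LeRobot's standard policy API. A minimal sketch, assuming the PI0Policy class name, the lerobot/pi0 checkpoint id, and placeholder observation keys; module paths and feature names vary across LeRobot versions and robot configs, so verify against the current docs:

```python
# Sketch only: class path, checkpoint id, and batch keys are assumptions
# drawn from LeRobot conventions, not a verified recipe.
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy

policy = PI0Policy.from_pretrained("lerobot/pi0")
policy.eval()

# A VLA policy maps observations (camera frames, proprioceptive state,
# a language instruction) to an action. Shapes and key names below are
# dummies; they must match the dataset/robot the checkpoint expects.
batch = {
    "observation.images.top": torch.zeros(1, 3, 224, 224),
    "observation.state": torch.zeros(1, 14),
    "task": ["pick up the spoon by the handle"],
}
with torch.no_grad():
    action = policy.select_action(batch)
print(action.shape)
```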
-
Through collaborations with the Georgia Institute of Technology, Ground Control Robotics is developing a new locomotion paradigm for cluttered and confined environments. Wheels and long legs can't meet the demands of tough terrain, but these elongated multilegged robots can. The robots have applications in agriculture, defense, search and rescue, and pest control. The company is focusing first on the multi-billion-dollar specialty agriculture market, where unstructured terrain makes traditional robots unusable.
-
What happens when a robot loses a leg mid-mission? Most robots would fail immediately. But watch this one figure out how to walk again in just a few tries.

The researcher deliberately damages the robot. Cuts off a leg. Adds weights. Attaches wheels to limbs. Each time, the robot experiments with different gaits until it finds one that works. This is omni-bodied intelligence: the software doesn't panic when the hardware changes. It adapts.

Here's why this matters: we talk about robots in homes and factories, but we rarely talk about what happens after six months of use. Parts break. Joints wear out. Sensors fail. If robots can't handle imperfection, they'll never leave the lab. This approach treats adaptability as a core feature, not an edge case. That's the difference between a demo and a tool you can actually rely on.

Video credits: SkildAI

---

Interested in starting your robotics career? Check out our free robotics career guide to get you started: https://lnkd.in/gpPVTPKE
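The post doesn't describe SkildAI's method, but the behavior in the video, trying gaits until one works, can be sketched as a simple evaluate-and-select loop. Everything below is illustrative; `rollout` is a stub standing in for running a gait on the damaged robot:

```python
import random

# Purely illustrative adapt-by-trial gait recovery, not SkildAI's method.
def rollout(gait_params):
    """Stub: in reality, command the robot (or a simulator) with these
    gait parameters and measure distance traveled before falling."""
    return random.random() * sum(gait_params.values())

def adapt_gait(candidates, trials=10):
    """Try candidate gaits on the current (possibly damaged) body;
    keep whichever performs best within the trial budget."""
    best, best_score = None, float("-inf")
    for gait in candidates[:trials]:
        score = rollout(gait)
        if score > best_score:
            best, best_score = gait, score
    return best, best_score

candidates = [
    {"frequency": f, "amplitude": a}
    for f in (0.5, 1.0, 2.0)
    for a in (0.2, 0.5, 0.8)
]
gait, score = adapt_gait(candidates)
print(f"selected gait {gait} with score {score:.2f}")
```

The key design point is that nothing in the loop assumes an intact body: the same search runs whether a leg is missing or a wheel has been bolted on, which is why adaptation generalizes across damage types.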
-
AUTONOMOUS ROBOTS ARE COMING

I've been deeply involved in the field of autonomous robotics, and it's truly inspiring to witness how we're moving from theoretical concepts to real, tangible solutions. Take, for example, the valet robots now being deployed: they're not just a tech novelty; they're actually boosting parking efficiency by up to 60% in crowded urban environments. It's incredible to see how AI and LiDAR technology are coming together to tackle the everyday challenges we face in our cities.

As someone who believes in the power of innovation, I can tell you that it's the practical applications that really resonate. The hype can be exciting, sure, but it's the real-world impacts that capture my attention and spark my enthusiasm for the future of this field. This is where the true potential of technology shines! #innovation
-
Will robots replace humans at Amazon? It's a question I'm often asked, and one I discussed with Jimmy McLoughlin OBE on Jimmy's Jobs of the Future Podcast.

There's this perception that robots will take jobs away, but the reality is much more nuanced, and in many ways far more promising for Amazon and beyond. When we first introduced robots into warehouses, some outside the company raised questions about job losses. But over the years, we've found that robotics has unlocked new opportunities. Not only have robots doubled our capacity and boosted productivity at Amazon, they have also increased employment. In fact, our warehouses (or what we call fulfilment centres) with robotics employ 50% more people than traditional sites.

The reason? Robotics has created demand for new roles that didn't exist before. Yes, we still need people for picking, packing, and shipping, but now we also need robotics engineers, technicians, and specialists who can operate, maintain, and improve these systems.

As technology advances, the future of fulfilment isn't about replacing people. It's about expanding possibilities and creating more varied and specialised roles than ever before while delivering more for customers.

#Amazon #Innovation #Robotics
-
These students were challenged to build a robot capable of scaling a vertical wall in record time, a task that mirrors real engineering problems faced by aerospace, manufacturing, and autonomous robotics teams worldwide. Would you be able to win?

To succeed, each group had to master a full engineering cycle:
🔹 Mechanical design: calculating torque, motor ratios, surface grip, and center of gravity
🔹 Material selection: optimizing weight-to-strength ratios (aluminum, carbon fiber, 3D-printed composites)
🔹 Control algorithms: PID tuning, sensor feedback loops, and stability control (a minimal PID sketch follows at the end of this post)
🔹 Energy efficiency: maximizing battery output and motor load under vertical stress
🔹 Failure analysis: testing, measuring, iterating, and rebuilding

And this isn't just academic. Challenges like this reflect real-world robotics breakthroughs:
📌 NASA's Valkyrie robot uses similar balance and grip logic for climbing unstable surfaces in disaster response missions.
📌 Boston Dynamics spent over 10 years perfecting the control systems students experiment with on a smaller scale.
📌 Industrial robots used in warehouses face the same physics constraints: friction, payload, torque, and trajectory planning.
📌 Spacecraft design teams use the same modeling principles to ensure robots can maneuver on asteroids with extremely low gravity.

And student innovation is accelerating fast:
🚀 University robotics teams report up to 40% faster prototype cycles thanks to rapid 3D printing.
🚀 High-school robotics programs now routinely use LIDAR, machine vision, and ROS, tools once limited to major research labs.
🚀 Over 90% of global robotics firms hire from hands-on competition pipelines like FIRST, VEX, and Eurobot.
🚀 The educational robotics market is growing 17% annually, driven by demand for engineers who can build, code, and troubleshoot under real conditions.

Competitions like this create the mindset industry needs: not memorization, but building, breaking, fixing, and optimizing, the same loop that drives innovation at the world's leading tech companies.

One student prototype at a time, the future of automation, AI, and robotics is already climbing upward. 🚀🤝

#Engineering #Robotics #STEM #Innovation #Education #AI #Automation #FutureOfWork #NextGenTech
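Since the post calls out PID tuning, here is the textbook controller in its smallest form, a generic illustration rather than any team's actual code; the gains and the pitch-hold example are made up:

```python
# Minimal textbook PID controller. Gains and setpoint below are arbitrary
# illustration values, not tuned for any real robot.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def step(self, measurement, dt):
        """One control update: error -> proportional + integral + derivative."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. holding a climbing robot's pitch at 0 degrees from a gyro reading,
# updated at 50 Hz (dt = 0.02 s)
pid = PID(kp=1.2, ki=0.1, kd=0.05, setpoint=0.0)
correction = pid.step(measurement=3.0, dt=0.02)
print(f"motor correction: {correction:.3f}")
```

"PID tuning" in the bullet above is exactly the process of picking kp, ki, and kd so the robot tracks the setpoint quickly without oscillating, which students typically do by iterating on runs like this.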
-
AI that counts sheep. Not the kind that helps you sleep.

This footage shows AI models counting and tracking sheep with accuracy that would take humans hours to achieve manually. Agriculture is being transformed by computer vision that can detect, count, and monitor livestock at scale. Farmers managing thousands of animals can now get precise counts instantly instead of manual tallies that are always approximate.

But the applications extend far beyond counting. The same technology detects health issues by identifying animals moving differently.
→ Tracks growth rates.
→ Monitors feeding patterns.
→ Identifies animals that need veterinary attention before visible symptoms appear.

This is precision agriculture enabled by AI that can process visual information faster and more consistently than human observation. The technology applies to crops as well.
→ Detecting disease in plants.
→ Identifying optimal harvest timing.
→ Monitoring soil conditions.
→ Tracking equipment across vast properties.

Agriculture has always been about managing biological systems at scale. AI gives farmers tools to observe and respond to those systems with a precision that was never possible before. The revolution is in giving farmers capabilities to manage complexity that once overwhelmed manual observation.

What other industries have observation problems that computer vision could solve at scale?
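The detect-track-count loop is simple to sketch with off-the-shelf tools. This is not the system in the footage; it uses a generic COCO-pretrained Ultralytics YOLO model (which happens to include a "sheep" class) and a hypothetical video file, where a production system would fine-tune on livestock imagery:

```python
# Generic detect-and-count sketch, not the system shown in the footage.
# "flock.mp4" is a placeholder input; a real deployment would fine-tune
# the detector on livestock imagery from its own cameras.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic COCO-pretrained weights

unique_ids = set()
# track() runs detection plus multi-object tracking over a video stream,
# assigning a persistent id to each animal so it isn't counted twice
for result in model.track(source="flock.mp4", stream=True, persist=True):
    if result.boxes.id is None:
        continue
    for obj_id, cls in zip(result.boxes.id.int().tolist(),
                           result.boxes.cls.int().tolist()):
        if model.names[cls] == "sheep":
            unique_ids.add(obj_id)

print(f"distinct sheep tracked: {len(unique_ids)}")
```

The tracking ids are what turn per-frame detections into a count: the same sheep reappearing across hundreds of frames is still one id, which is the part manual tallies struggle to replicate.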
-
I've been following Vitestro for years. They have been developing robotic devices to collect patients' blood samples. They have big news now: the results of the multicenter ADOPT clinical trial have been published in Clinical Chemistry. It is the first peer-reviewed multicenter study of a fully autonomous robotic system performing diagnostic venous blood draws.

Key takeaways:
📌 94.5% overall first-stick success rate, with strong results across key subgroups including elderly patients, patients with obesity, and those with self-reported difficult venous access
📌 0.3% hemolysis rate and 0.6% adverse event rate (all mild), both below published rates for manual phlebotomy
📌 90% of patients reported less or similar pain vs. a manual draw, and 82% would prefer Aletta® or had no preference for their next visit

The company said this could represent an important step toward establishing autonomous phlebotomy as the new standard for diagnostic blood collection. At the very least, it is a crucial step in making such an advanced technology part of evidence-based medicine.

Read the full paper: https://lnkd.in/dDjVmRHc