Advancing Robotics Technology

Explore top LinkedIn content from expert professionals.

  • Jim Fan

    NVIDIA Director of AI & Distinguished Scientist. Co-Lead of Project GR00T (Humanoid Robotics) & GEAR Lab. Stanford Ph.D. OpenAI's first intern. Solving Physical AGI, one motor at a time.

    238,080 followers

    Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful bottleneck in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1,000x or more in simulation. Let’s break it down:

    1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human’s hand pose and retargets the motion to the robot hand, all in real time. From the human’s point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

    2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen’s keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placements. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

    3. Finally, we apply MimicGen, a technique that multiplies the above data even more by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

    To sum up: 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is how we trade compute for expensive human data via GPU-accelerated simulation. A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited to 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!
    We are creating tools to enable everyone in the ecosystem to scale up with us:
    - RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
    - MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
    - We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang's group’s open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
    - Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

    Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
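The 1 -> N -> NxM multiplication described above can be sketched in a few lines. This is a toy illustration under assumed names, not the actual GR00T/RoboCasa/MimicGen code: `simulate_success` stands in for a real physics rollout, and trajectories are just 1-D waypoint lists.

```python
import random

def simulate_success(traj, nominal, limit=0.1):
    """Stand-in for a physics rollout. A real pipeline replays the
    trajectory in simulation and checks the task outcome (e.g. the cup
    was placed); here we just reject large deviations from the demo."""
    return all(abs(a - b) < limit for a, b in zip(traj, nominal))

def augment_demo(demo, n_visual=4, m_motion=3, seed=0):
    """One teleoperated demo -> up to n_visual * m_motion variants:
    vary the scene appearance (RoboCasa-style), then perturb the motion
    and filter out failed rollouts (MimicGen-style)."""
    rng = random.Random(seed)
    variants = []
    for scene_id in range(n_visual):          # N visual variations
        for _ in range(m_motion):             # x M motion variations
            traj = [wp + rng.gauss(0, 0.03) for wp in demo]
            if simulate_success(traj, demo):  # drop failed rollouts
                variants.append({"scene_id": scene_id, "trajectory": traj})
    return variants

demo = [0.0, 0.2, 0.5, 0.9]   # one human trajectory (toy 1-D waypoints)
data = augment_demo(demo)     # up to 12 simulated variants from 1 demo
```

The key design point the post makes is the filter in step 3: motion augmentation is cheap precisely because failed rollouts can be discarded automatically in simulation instead of by a human.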

  • Dr. Martha Boeckenfeld

    Human-Centric AI & Future Tech | Keynote Speaker & Board Advisor | Healthcare + Fintech | Generali Ch Board Director · Ex-UBS · AXA

    150,889 followers

    Surgical robots cost $2 million. Beijing just built one for $200,000.

    Watch it peel a quail egg: shell removed, inner membrane intact. Submillimeter accuracy that matches da Vinci at 90% less cost.

    Think about that. Most hospitals can't afford surgical robots. Rural clinics? Forget it. Patients travel hundreds of miles for robotic surgery or settle for traditional operations with higher risks. Beijing's Surgerii Robotics just broke that equation.

    Traditional surgical robotics:
    ↳ $2 million purchase price
    ↳ $200,000 annual maintenance
    ↳ Only major hospitals qualify
    ↳ Patients travel or wait

    Chinese innovation reality:
    ↳ $200,000 total cost
    ↳ Same precision standards
    ↳ Reaches district hospitals
    ↳ Surgery comes to patients

    But here's what stopped me cold: Professor Samuel Au left da Vinci to build a network of surgical robots. Engineers from Medtronic and GE walked away from Silicon Valley salaries to build this. They're not chasing profit margins. They're chasing one vision: "Every hospital should have one."

    The egg demonstration proves what matters: precision doesn't require premium pricing. The robot's multi-backbone continuum mechanisms deliver the same submillimeter accuracy whether peeling eggs or operating on hearts.

    What this enables:
    ↳ Thoracic surgery in rural hospitals
    ↳ Urological procedures performed locally
    ↳ Reduced surgical trauma everywhere
    ↳ Relief for the surgeon shortage

    The multiplication effect:
    1 affordable robot = 10 hospitals equipped
    100 deployed = provincial healthcare transformed
    1,000 units = surgical access democratized
    At scale = geography stops determining survival

    Traditional robotics kept precision exclusive. Surgerii makes it accessible. We're not watching price competition. We're watching healthcare democratisation. Because that farmer needing heart surgery shouldn't die waiting for a $2 million robot his hospital will never afford.

    Follow me, Dr. Martha Boeckenfeld, for innovations that put patients before profit margins.
♻️ Share if surgical precision should be accessible, not exclusive. #healthcare #innovation #precisionmedicine

  • Matija Kopić

    Tech Founder → Regenerative Farmer | Founder & Chief Ranching Officer at Borovača

    11,208 followers

    The world of robotics just changed overnight. And it’s been in the works for years.

    I still remember how excited we were when NVIDIA’s Jensen Huang gave a unique shout-out to Gideon at #GTC21 (check out the video below), hinting at what would eventually happen when the worlds of AI and robotics collide.

    Jensen back then: "The signs are clear: accelerated computing doing AI at data center scale will give a giant boost in simulation performance."

    Jensen today: "Everything that moves in the future will be robotic."

    NVIDIA Robotics just announced a series of robotics breakthroughs at NVIDIA GTC, with a clear aim of democratizing the building of AI robots with game-changing foundational components and tools:
    • Isaac Manipulator, a collection of state-of-the-art motion generation and modular AI capabilities for robotic arms,
    • Isaac Perceptor, visual AI for autonomous mobile robots (watch out if you’re building smart AMRs!),
    • GR00T, a general-purpose foundation model for humanoid robot learning,
    • a new Jetson Thor-based computer for humanoid robots, built on the NVIDIA Thor SoC,
    • Isaac Lab for robot learning,
    • Isaac OSMO for hybrid-cloud workflow orchestration.

    Mindblowing. 😮 It validates what we at Gideon have believed in for the past 7 years: the future of flexible robots will be powered by advanced visual perception and AI. If you want to build meaningful robotics companies, there’s never been a better time. And it’s never been more important to:

    1. Listen to your early customers and focus on adding value to them from day one. Build long-term relationships with their people and help them solve their top problems.
    2. Specialize! Focus on solving one specific problem at a time. Do not build universal platforms trying to tackle many problems at once. When customers hear about your company, they should immediately know you’re the best in the world at solving a specific problem they have.
    3. Do not reinvent the wheel; use off-the-shelf components whenever possible.
    4. Data to train your robots is key. Generalized components and platforms will always miss the industry-specific data and customer insights you should have access to, so build with them. They are your secret superpower and a future growth flywheel.
    5. Make sure your robots talk to and cooperate well with other systems.
    6. Do not underestimate the complexities of deploying AI robots in the real world, especially in commercial environments. Invest in people, processes, and tools to handle this properly early on. This will make or break you. The real world is nothing like your simulation environment.
    7. Partner with key industry players to accelerate your growth (like we did with Toyota Material Handling Europe).

    All the building blocks are finally coming together. What is the robot you’ll start working on today?

    #NVIDIA #JensenHuang #Robotics #AI #AIRobotics #VisualAI #VisualPerception #ComputerVision #GTC24 #AMR #AGV #MobileRobots #HumanoidRobots

  • Today, Science Robotics has published our work on the first drone performing fully #neuromorphic vision and control for autonomous flight! 🥳

    Deep neural networks have led to amazing progress in Artificial Intelligence and promise to be a game-changer for autonomous robots 🤖 as well. A major challenge is that the computing hardware for running deep neural networks can still be quite heavy and power-consuming. This is particularly problematic for small robots like lightweight drones, for which most deep nets are currently out of reach.

    A new type of neuromorphic hardware draws inspiration from the efficiency of animal eyes 👁 and brains 🧠. Neuromorphic cameras do not record images at a fixed frame rate; instead, each pixel tracks brightness over time and sends a signal only when the brightness changes. These signals can then be sent to a neuromorphic processor, in which the neurons communicate with each other via binary spikes, simplifying calculations. The resulting asynchronous, sparse sensing and processing promises to be both quick and energy efficient! 🔋

    In our article, we investigated how a spiking neural network (#SNN) can be trained and deployed on a neuromorphic processor for perceiving and controlling drone flight 🚁. Specifically, we split the network in two. First, we trained an SNN to transform the signals from a downward-looking neuromorphic camera into estimates of the drone’s own motion. This network was trained on data from our drone itself, with self-supervised learning. Second, we used artificial evolution 🦠🐒🚶♂️ to train another SNN for controlling a simulated drone. This network transformed the simulated drone’s motion into motor commands, such as the drone’s desired orientation. We then merged the two SNNs 👩🏻🤝👩🏻 and deployed the resulting network on Intel Labs’ neuromorphic research chip "Loihi". The merged network immediately worked on the drone, successfully bridging the reality gap.

    Moreover, the results highlight the promise of neuromorphic sensing and processing: the network ran 10-64x faster 🏎💨 than a comparable network on a traditional embedded GPU and used 3x less energy.

    I want to first congratulate all co-authors at TU Delft | Aerospace Engineering: Federico Paredes Vallés, Jesse Hagenaars, Julien Dupeyroux, Stein Stroobants, and Yingfu Xu 🎉 Moreover, I would like to thank the Intel Labs Neuromorphic Computing Lab and the Intel Neuromorphic Research Community (#INRC) for their support with Loihi (among others, Mike Davies and Yulia Sandamirskaya). Finally, I would like to thank NWO (Dutch Research Council), the Air Force Office of Scientific Research (AFOSR), and the Office of Naval Research Global (ONR Global) for funding this project. All relevant links can be found below.

    Delft University of Technology, Science Magazine

    #neuromorphic #spiking #SNN #spikingneuralnetworks #drones #AI #robotics #robot #opticalflow #control #realitygap
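The binary-spike communication described above can be illustrated with a minimal leaky integrate-and-fire layer. This is a generic textbook sketch, not the authors' Loihi implementation; `lif_layer`, the leak factor, and the threshold are illustrative choices.

```python
import numpy as np

def lif_layer(spikes_in, w, leak=0.8, v_thresh=1.0):
    """Leaky integrate-and-fire layer: each timestep the membrane
    potential leaks, integrates weighted input spikes, and emits a
    binary spike (then resets) when it crosses threshold."""
    steps = spikes_in.shape[0]
    v = np.zeros(w.shape[0])               # membrane potentials
    out = np.zeros((steps, w.shape[0]))
    for t in range(steps):
        v = leak * v + w @ spikes_in[t]    # leak + integrate
        fired = v >= v_thresh
        out[t] = fired                     # binary spikes, not floats
        v[fired] = 0.0                     # reset after spiking
    return out

# A constant event stream drives periodic output spikes; sparse input
# would drive sparse output, which is where the energy savings come from.
events = np.ones((20, 4))                  # 4 input channels, 20 timesteps
w = np.full((2, 4), 0.1)                   # 2 output neurons
spikes = lif_layer(events, w)
```

Because neurons only communicate when they fire, computation is event-driven: no input change, no spikes, (almost) no energy spent.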

  • Khang NGUYEN TRIEU

    Group Head of Digital and Technology at Banyan Group | Board member | Tech Leadership Mentor and Sparring Partner

    4,669 followers

    How did an iconic hotel in Singapore, Marina Bay Sands, cut labor dependency by 30% with AI and robots while simultaneously redirecting 162,000 manhours of its staff's time to higher-value work? The secret lies in treating AI and robotics as a partner for your people, not a replacement.

    Marina Bay Sands (MBS), a large-scale integrated hotel + casino + mall in Singapore, is demonstrating that AI and robotics are now fully viable for complex, large-scale hospitality operations. MBS became the first in Singapore’s hospitality industry to deploy a fleet of 12 Autonomous Mobile Robots (AMRs) for back-of-house deliveries across its hotel and convention center. Facing a 35 percent surge in delivery volumes between 2019 and 2023, the resort turned to automation to manage growing demands. The AMRs, which handle manpower-heavy tasks, carrying up to 300 kg and moving at 84 meters per minute, cut labor dependency by 30 percent.

    The crucial insight for long-term value and staff adoption, however, is the strategic focus on the workforce: repurposing talent for sustainable value. MBS's comprehensive automation efforts, which include over 200 automated work processes across various functions (like 'The Wardrobe' system managing over 200,000 uniforms via ultra-high-frequency chips and automated stocktaking, or the automated upcycling of 100% of food waste by end of 2025), have repurposed over 162,000 manhours annually toward greater value-added tasks. For example, instead of being eliminated, members of the procurement and supply chain teams who previously handled manual deliveries are now trained for new, higher-value roles such as inventory management and robot dispatching.

    By investing in innovation and fostering a culture of productivity, MBS leadership proves that successful integration must be people-driven just as much as AI-driven. Repurposing staff builds motivation and long-term value and ensures technology adoption, making automation a key driver of human capital enhancement.

    Full article here: https://lnkd.in/g_M3bpPs

    #HospitalityInnovation #AIinHospitality #Robotics #WorkforceDevelopment #FutureofWork #MarinaBaySands #GenAI #Leadership #Singapore #TheWayForward

  • Hassan Tetteh MD MBA FAMIA

    Global Voice in AI & Health Innovation🔹Surgeon 🔹Johns Hopkins Faculty🔹Author🔹IRONMAN 🔹CEO🔹Investor🔹Founder🔹Ret. U.S Navy Captain

    5,389 followers

    The future of elder care hinges on innovation. I know this firsthand: I lost my mother over a year ago, and through my experience caring for her, I saw how AI can transform how we support our aging population.

    Here’s how AI can revolutionize care for the elderly:

    🤖 Personalized care at scale: AI analyzes health data to create customized care plans, which means better health outcomes tailored to each individual’s unique needs.

    🏡 Promoting independence: Smart home technologies powered by AI, from fall detection to medication reminders, help seniors live independently longer and facilitate daily living.

    👥 Reducing caregiver burden: AI tools can take over routine tasks, freeing up caregivers to focus on what matters most: human connection and emotional support.

    🩺 Proactive health monitoring: AI tracks vital signs in real time, predicting potential health issues before they become serious. Early intervention keeps seniors safer and healthier.

    🚶♀️ Empowering aging in place: AI-enabled devices assist with mobility, home safety, and social engagement, helping seniors remain in their homes, surrounded by familiarity and comfort.

    Here’s how you can leverage AI to transform elder care:

    🔍 Adopt AI-powered tools: Explore AI solutions that offer real-time health monitoring, personalized care plans, and smart home integrations.

    🤝 Collaborate with tech providers: Work closely with AI developers to ensure the tools meet the specific needs of the elderly population.

    🌐 Educate and empower: Provide training and resources for caregivers and seniors to integrate AI into their daily routines seamlessly.

    💡 Focus on human-AI collaboration: For the best outcomes, combine AI's strengths with human caregivers' empathy.

    Did you know that by 2050, the global population aged 60 and over is projected to double? AI isn’t just an option; it’s essential for future care.

    Empower independence. Transform care. Embrace AI.

  • Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,529,841 followers

    What a self-driving bike just revealed about the future of AI.

    A team at the Robotics and AI Institute (RAI) just built a bike that rides itself. No joystick. No remote. No pre-programmed routes. Just reinforcement learning in motion. It learns balance through trial and error, the same way humans do. Every wobble becomes feedback, every near-fall becomes data, every correction becomes memory.

    Why it matters: most AI systems fail when reality gets messy. This one doesn’t. It adapts. It treats unpredictability not as a bug to fix but as a teacher to learn from. That’s a quiet but radical shift in how intelligence forms.

    What this enables:
    → Delivery robots that stay upright in crowded streets
    → Mobility aids that self-stabilize for elderly or disabled users
    → Rescue robots that recover in rough terrain
    → Industrial systems that keep moving safely under pressure

    The deeper insight: we’ve spent years training AI for perfect control. But real intelligence, human or artificial, isn’t about control. It’s about correction: the ability to recover when the world stops behaving as expected. Maybe the next era of AI won’t be about prediction at all. Maybe it will be about recovery.

    So here’s my question: should the next generation of AI be trained for resilience before accuracy?

    #AI #Robotics #MachineLearning #Resilience #Innovation #FutureOfWork
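The trial-and-error loop described above ("every wobble becomes feedback") is classic reinforcement learning. Here is a minimal tabular Q-learning sketch on a made-up 1-D balance toy, not RAI's actual system; the environment, discretization, and hyperparameters are all illustrative assumptions.

```python
import random

class BalanceToy:
    """Toy balance task: the lean angle drifts randomly each step;
    the action pushes with or against the lean."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
    def reset(self):
        self.angle = 0.0
        return self._state()
    def _state(self):
        return max(-3, min(3, round(self.angle)))   # coarse discretization
    def step(self, action):                         # action in {-1, 0, +1}
        self.angle += self.rng.uniform(-0.6, 0.6) + 0.8 * action
        fell = abs(self.angle) > 3.5
        return self._state(), (0.0 if fell else 1.0), fell

def train(episodes=500, seed=0):
    env, rng, q = BalanceToy(seed), random.Random(seed + 1), {}
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 200:
            # epsilon-greedy: mostly exploit the table, sometimes explore
            a = (rng.choice((-1, 0, 1)) if rng.random() < 0.1
                 else max((-1, 0, 1), key=lambda b: q.get((s, b), 0.0)))
            s2, r, done = env.step(a)
            # every wobble becomes data: bootstrap toward reward + future value
            target = r + 0.95 * max(q.get((s2, b), 0.0) for b in (-1, 0, 1))
            q[(s, a)] = q.get((s, a), 0.0) + 0.2 * (target - q.get((s, a), 0.0))
            s, steps = s2, steps + 1
    return q

q = train()
policy = {s: max((-1, 0, 1), key=lambda b: q.get((s, b), 0.0))
          for s in range(-3, 4)}   # learned corrective action per lean state
```

No model of the bike's dynamics appears anywhere: the table is built purely from experienced wobbles and falls, which is the "correction over control" point the post makes.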

  • Andriy Burkov

    PhD in AI, author of 📖 The Hundred-Page Language Models Book and 📖 The Hundred-Page Machine Learning Book

    486,885 followers

    A major breakthrough in reinforcement learning for robot training, and the NeurIPS 2025 Best Paper.

    When training robots to walk, navigate, or manipulate objects, RL researchers have usually used relatively shallow networks: typically 2-5-layer MLPs mapping sensor readings to motor commands. Attempts to go deeper have failed because training becomes unstable and performance degrades. Prior work attributed these failures to RL's sparse feedback: you might get one bit of information after thousands of decisions, so the ratio of signal to parameters is tiny.

    In this paper, the authors show that the problem was architectural rather than fundamental. With residual connections, layer normalization, and Swish activation (standard elsewhere but surprisingly absent from control RL), you can train networks with 1000+ layers.

    The paper demonstrates that gains from adding layers aren't gradual: at certain depth thresholds, agents acquire new behaviors. A simulated humanoid learns to walk upright only at 16 layers; at 256 layers, it learns to vault over walls.

    Read online and ask questions when blocked: https://lnkd.in/e7jzcc5G
    Download the PDF: https://lnkd.in/e2Bk8pdU
    The full list of the most important AI papers of 2025: https://lnkd.in/ekfaXgwJ
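The architectural recipe named above (residual connections, layer normalization, Swish) can be sketched as a single block. This is a minimal NumPy forward pass under assumed dimensions, not the paper's code; the point it illustrates is that stacking hundreds of such blocks keeps activations finite, because each block only adds a normalized correction on top of the identity path.

```python
import numpy as np

def swish(x):
    return x * (1.0 / (1.0 + np.exp(-x)))   # x * sigmoid(x), a.k.a. SiLU

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

class ResidualBlock:
    """Pre-norm residual MLP block: x + W2 @ swish(W1 @ layer_norm(x))."""
    def __init__(self, dim, rng):
        scale = 1.0 / np.sqrt(dim)
        self.w1 = rng.normal(0.0, scale, (dim, dim))
        self.w2 = rng.normal(0.0, scale, (dim, dim))
    def __call__(self, x):
        return x + self.w2 @ swish(self.w1 @ layer_norm(x))

# Stack 256 blocks at random init: the forward pass stays well-behaved,
# whereas a plain 256-layer MLP would saturate or explode.
rng = np.random.default_rng(0)
net = [ResidualBlock(8, rng) for _ in range(256)]
x = rng.normal(size=8)
for block in net:
    x = block(x)
```

The skip connection gives gradients a direct path through all 256 blocks, and the pre-norm keeps each block's contribution on a fixed scale regardless of depth; that combination is what makes very deep control networks trainable.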

  • Vaibhava Lakshmi Ravideshik

    AI for Science @ GRAIL | Research Lead @ Massachusetts Institute of Technology - Kellis Lab | LinkedIn Learning Instructor | Author - “Charting the Cosmos: AI’s expedition beyond Earth” | TSI Astronaut Candidate

    20,067 followers

    Massachusetts Institute of Technology researchers just dropped something wild: a system that lets robots learn how to control themselves just by watching their own movements with a camera. No fancy sensors. No hand-coded models. Just vision.

    Think about that for a second. Right now, most robots rely on precise digital models to function, like a blueprint telling them exactly how their joints should bend and how much force to apply. But what if the robot could just... figure it out by experimenting, like a baby flailing its arms until it learns to grab things?

    That’s what Neural Jacobian Fields (NJF) does. It lets a robot wiggle around randomly, observe itself through a camera, and build its own internal "sense" of how its body responds to commands.

    The implications?
    1) Cheaper, more adaptable robots: no need for expensive embedded sensors or rigid designs.
    2) Soft robotics gets real: ever tried to model a squishy, deformable robot? It’s a nightmare. Now they can just learn their own physics.
    3) Robots that teach themselves: instead of painstakingly programming every movement, we could just show them what to do and let them work out the "how."

    The demo videos are mind-blowing: a pneumatic hand with zero sensors learning to pinch objects, a 3D-printed arm scribbling with a pencil, all controlled purely by vision.

    But here’s the kicker: what if this is how all robots learn in the future? No more pre-loaded models. Just point a camera, let them experiment, and they’ll develop their own "muscle memory." Sure, there are still limitations (like needing multiple cameras for training), but the direction is huge. This could finally make robotics flexible enough for messy, real-world tasks: agriculture, construction, even disaster response.

    #AI #MachineLearning #Innovation #ArtificialIntelligence #SoftRobotics #ComputerVision #Industry40 #DisruptiveTech #MIT #Engineering #MITCSAIL #RoboticsResearch #DeepLearning
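NJF itself learns a dense neural field from camera images, but the core idea, recovering a command-to-motion Jacobian from random wiggles, can be illustrated with a linear toy version. Everything here is a made-up stand-in: `J_true` plays the role of the robot's hidden physics, and the "camera" just reports noisy keypoint motion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" physics the robot never sees: how 3 actuator commands
# move 2 tracked keypoint coordinates in camera space (made-up numbers).
J_true = np.array([[0.9, -0.2,  0.1],
                   [0.3,  0.5, -0.4]])

# 1) Wiggle randomly and watch the result through the camera.
du = rng.normal(size=(200, 3))                          # command deltas
dx = du @ J_true.T + 0.01 * rng.normal(size=(200, 2))   # observed motion

# 2) Fit the command -> motion Jacobian from observations alone,
#    by least squares over the (wiggle, observed motion) pairs.
sol, *_ = np.linalg.lstsq(du, dx, rcond=None)
J_hat = sol.T

# 3) Control with the learned model: choose the command that moves the
#    keypoint toward a target, via the pseudoinverse of the Jacobian.
target = np.array([0.5, -0.3])
u = np.linalg.pinv(J_hat) @ target
```

The real system replaces the single linear map with a neural network conditioned on the robot's visual state, so the learned "Jacobian" can vary across configurations and handle deformable bodies; the wiggle-observe-fit loop is the same.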

  • Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    778,861 followers

    Not long ago, solving a Rubik’s Cube was considered a mark of human intelligence and spatial reasoning. Today, AI-powered robots can do it in 0.103 seconds, thanks to ultra-fast cameras capturing 4,500 frames per second and motors executing rotations in under 10 milliseconds. Can you solve the Cube that fast? It’s more than a party trick; it’s a signal of how far robotics and AI have come.

    📈 Processing power: since 2010, compute performance for AI workloads has grown by over 1 million×.
    ⚙️ Robotics precision: modern servomotors can reach accuracy levels below 5 microns, enabling surgical precision.
    🧠 Learning efficiency: reinforcement learning models can now train 10× faster using GPU and accelerator platforms like AMD Instinct and ROCm.
    🌐 Adoption rate: over 70% of manufacturers are investing in autonomous robotics or cobots to boost productivity and safety.

    The Rubik’s Cube isn’t the story; it’s the metaphor. Machines have evolved from replicating human logic to outpacing it, not through brute force but through speed, adaptability, and self-optimization.

    🔹 Robots that invent their own challenges to learn faster.
    🔹 AI systems that design and test hardware in simulation before humans even prototype it.
    🔹 Collaborative robotics that co-create with humans, blending creativity, empathy, and logic.

    AI and robotics are no longer about automation; they’re about amplifying imagination.

    #AI #Robotics #Innovation via @cuberx5w #MachineLearning #FutureTech #Automation #ReinforcementLearning
