For developers working in robotics, Google has made its first Gemini robotics model, Robotics-ER 1.5, available to everyone in preview. This model is designed to be the high-level reasoning layer for an agent. It’s built to tackle complex, long-horizon tasks by breaking them down into an executable plan. Think "clean up the table" or "sort these objects into the correct bins according to local rules."

A few of the technical capabilities developers can use:
✅ Tool Calling: It can natively call other functions, like Google Search (to find those "local rules") or, importantly, your own vision-language-action (VLA) models to execute the physical steps.
✅ Spatial & Temporal Reasoning: The model is tuned for fast, precise 2D spatial understanding (e.g., "point to all objects you can pick up") and can process video to understand the order of events.
✅ Flexible Thinking Budget: You can control the latency-vs-accuracy tradeoff, demanding a fast, reactive response for simple tasks or letting the model "think longer" to plan a more complex, multi-step action.
✅ Improved Safety Filters: Google has improved the model's ability to recognize and refuse plans that violate defined physical constraints (like a robot's payload capacity).

This is available now in Google AI Studio and via the Gemini API. Getting started here:
✦ Paper: https://lnkd.in/eDqMHT2F
✦ Code: https://lnkd.in/eu-bgWky
✦ Docs: https://lnkd.in/eXNYRbrF
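The 2D pointing capability mentioned above returns points in normalized image coordinates, so client code typically has to map them back to pixels. A minimal sketch of that post-processing step, assuming the model replies with JSON of `[y, x]` points on a 0-1000 scale (check the docs for the exact response schema):

```python
import json

def parse_points(model_json: str, img_w: int, img_h: int):
    """Convert normalized [y, x] points (assumed 0-1000 scale) from a
    pointing query into pixel coordinates for the source image."""
    points = json.loads(model_json)
    return [
        {
            "label": p["label"],
            "pixel": (round(p["point"][1] / 1000 * img_w),   # x
                      round(p["point"][0] / 1000 * img_h)),  # y
        }
        for p in points
    ]

# Hypothetical reply to "point to all objects you can pick up":
reply = '[{"point": [500, 250], "label": "mug"}]'
print(parse_points(reply, img_w=640, img_h=480))
# → [{'label': 'mug', 'pixel': (160, 240)}]
```

The `[y, x]` ordering and the 0-1000 normalization are assumptions to verify against the API docs before relying on them.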
Advanced Robotics Platforms for Developers
Summary
Advanced robotics platforms for developers are specialized tools and frameworks designed to make building, testing, and deploying intelligent robots faster and more accessible. These platforms combine hardware and software, often supporting AI, sensors, and simulation, so developers can focus on developing smart behaviors instead of spending months on infrastructure.
- Accelerate experimentation: Take advantage of no-code and open-source robotics stacks to quickly simulate, train, and iterate on new robotic applications without deep technical barriers.
- Tap into AI tools: Use integrated AI models and perception systems to enable robots to understand their environment and perform complex tasks, even with limited resources or expertise.
- Collaborate and customize: Modify robot control logic, sensors, or simulation environments to suit your unique project needs, while joining a growing community of developers sharing open frameworks and ready-to-use robots.
Big shift in robotics: NVIDIA just open-sourced Isaac Sim and Isaac Lab.

Isaac Sim has already been a cornerstone for high-fidelity robotics simulation—RTX-accelerated physics, realistic lidar/camera simulation, domain randomization, ROS/URDF support, and synthetic data pipelines. Now it’s all on GitHub with full source access.

But the real multiplier? The release of Isaac Lab—a modular, open reinforcement learning and robot control framework built directly on top of Isaac Sim. It comes with ready-to-use robots (Franka, UR5, ANYmal), training loops, and environments for manipulation, locomotion, and more.

What’s different now:
- You’re no longer limited to APIs—developers can modify physics, sensors, and control logic at the source level.
- Isaac Lab provides a training-ready foundation for sim-to-real robotics, speeding up learning pipelines dramatically.
- Debugging, benchmarking, and custom integrations are now transparent, flexible, and community-driven.
- Collaboration across research and industry just got easier—with reproducible environments, tasks, and results.

We’ve used Isaac Sim extensively, and this open-source release is going to accelerate innovation across the robotics community.

GitHub: https://lnkd.in/gcyP9F4H
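Domain randomization, one of the Isaac Sim features listed above, comes down to jittering simulator parameters on every training episode so a policy doesn't overfit to one physics configuration. A framework-free sketch of the idea (the parameter names are hypothetical, not Isaac Lab's API):

```python
import random

# Hypothetical nominal physics parameters for a manipulation task.
NOMINAL = {"friction": 0.8, "mass_kg": 1.2, "motor_gain": 1.0}

def randomize(nominal, spread=0.2, rng=None):
    """Domain randomization: jitter each parameter by up to ±spread of
    its nominal value. Training across many randomized instances helps
    the learned policy transfer from simulation to the real robot."""
    rng = rng or random.Random()
    return {k: v * (1 + rng.uniform(-spread, spread)) for k, v in nominal.items()}

# One freshly randomized environment instance per episode:
params = randomize(NOMINAL, rng=random.Random(0))
print(params)
```

In Isaac Lab the equivalent hooks live in its environment configuration system; this sketch only shows the underlying technique.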
-
What if building a full robotics stack didn’t take a team of ten PhDs… or two years of development… or $5 million? That’s exactly what the new Advantech Edge AI SDK is changing.

For years, integrating ROS 2 + LiDAR + cameras + AI models has been a heavy lift:
- Custom drivers
- Sensor fusion pipelines (gPTP, anyone?)
- SLAM tuning
- Model deployment
- Endless debugging across disconnected systems

It wasn’t just hard—it was resource prohibitive. Now? You can do it in a no-code / low-code environment.
✔ Pull ROS 2 packages directly from GitHub
✔ Drop in LiDAR drivers and camera inputs
✔ Run SLAM out of the box
✔ Load and train AI models for perception
✔ Deploy everything on industrial-grade edge hardware

All in one unified stack. This is the real shift: we’re moving from building infrastructure to building intelligence. Instead of spending months wiring systems together, you can now simulate, train, iterate, and deploy in a fraction of the time.

My take: systems built in one day with this stack can outperform legacy systems that took ten years to develop. That’s not incremental progress—that’s a step change.

LiDAR users—take note. Sensor fusion, SLAM, and perception are no longer bottlenecks. They’re becoming commoditized (read: free) capabilities. The competitive edge is shifting to:
👉 How fast you can deploy
👉 How fast you can iterate
👉 How intelligently you use the data

The barrier to entry for advanced robotics and physical AI just collapsed. And this is only the beginning.

#Advantech #EdgeAI #ROS2 #Robotics #LiDAR #ComputerVision #AMR #AGV #PhysicalAI #Automation #AI #SLAM #IndustrialAI
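Fusing lidar and camera data is, at its core, a time-alignment problem once the sensor clocks are synchronized (which is what gPTP provides). A minimal sketch of pairing each camera frame with its nearest lidar scan by timestamp (the 10 Hz rate and 50 ms skew threshold are illustrative, not Advantech defaults):

```python
from bisect import bisect_left

def nearest_match(lidar_stamps, camera_stamp, max_skew=0.05):
    """Pair a camera frame with the closest lidar scan by timestamp,
    rejecting pairs whose (clock-synced) stamps differ by more than
    max_skew seconds. lidar_stamps must be sorted ascending."""
    i = bisect_left(lidar_stamps, camera_stamp)
    candidates = lidar_stamps[max(0, i - 1):i + 1]  # neighbors on each side
    best = min(candidates, key=lambda t: abs(t - camera_stamp))
    return best if abs(best - camera_stamp) <= max_skew else None

scans = [0.00, 0.10, 0.20, 0.30]           # lidar scans at 10 Hz
print(nearest_match(scans, 0.12))           # → 0.1
print(nearest_match(scans, 0.47))           # → None (skew > 50 ms)
```

Production stacks (e.g. ROS 2's `message_filters` approximate-time policy) add queueing and multi-topic support on top of this same matching idea.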
-
Founded by two former Meta employees, Fauna Robotics doesn’t expect to be fully sustained by the developer market — but it’s a start. On Tuesday, the Manhattan-based company emerged from stealth to debut its first robot. It is, by all reasonable metrics, a humanoid. Its place in the world, however, is neither moving warehouse totes nor vacuuming floors. Rather, Sprout is aimed at the humans building the robots that might eventually perform those tasks.

The little router-headed robot is a platform, designed to help researchers, schools, developers, startups, and the like build humanoid applications. A close analog might be found in the different systems developed by the Hugging Face-owned Pollen Robotics, in that — unlike many earlier examples — Sprout is built specifically with this market in mind. The market is larger, more mature, and a hell of a lot better funded than it was when PR2 hit the market just over 15 years ago.

“We have a really broad range of customers who are developers,” cofounder and CEO Rob Cochran tells me. “They are businesses and individuals with technical skills who have ideas that they want to express on a robotics platform. We have Disney and Parks and Entertainment looking at accelerating and doing a lot of iteration on character-based experiences in parks. We've got Boston Dynamics interested — obviously, they have a deep history in industrial robotics.”

Universities including NYU and UCSD have also signed up for early access to Sprout, along with labs building AI world models. Cochran says the team settled on a bipedal humanoid form factor due to the manner of generality it affords researchers in real-world testing and — eventually — deployment. Boston Dynamics Consulting is a potential example of that deployment, as it explores what additional humanoid robots might look like in real-world spaces, beyond the sorts of factories and warehouses its parent company is targeting with Atlas.
While the industrial world may soon be awash with different humanoids, the Fauna team believes that these systems won’t afford developers the accessibility and flexibility required for building with limited resources. “We're calling it the Creator Edition because it's a full-featured developer platform that, you know, researchers, developers, corporate labs as well as, you know, start-ups can use, you know, many robots,” cofounder and CTO Josh Merel tells me. “There aren't that many robots available in the first place, but those that are available are often quite powerful, quite heavy. You need a deep robotics background to really build on top of it. We’re making this a lot more accessible to people who have machine learning backgrounds and want to play with LLM agents on a robot in the real world in a way that's safer and makes more sense than deploying it on hardware that really isn't well suited to that.”
-
The world of robotics just changed overnight. And it’s been in the works for years.

I still remember how excited we were when NVIDIA’s Jensen Huang gave a unique shout-out to Gideon at #GTC21 (check out the video below), hinting at what would eventually happen when the worlds of AI and robotics collide.

Jensen back then: "The signs are clear: accelerated computing doing AI at data center scale will give a giant boost in simulation performance."
Jensen today: "Everything that moves in the future will be robotic."

NVIDIA Robotics just announced a series of robotics breakthroughs at NVIDIA GTC, with a clear aim of democratizing the building of AI robots with game-changing foundational components and tools:
• Isaac Manipulator, a collection of state-of-the-art motion generation and modular AI capabilities for robotic arms,
• Isaac Perceptor, visual AI for autonomous mobile robots (watch out if you’re building smart AMRs!),
• GR00T, a general-purpose foundation model for humanoid robot learning,
• a new Jetson Thor-based computer for humanoid robots, built on the NVIDIA Thor SoC,
• Isaac Lab for robot learning,
• Isaac OSMO for hybrid-cloud workflow orchestration.

Mindblowing. 😮 It validates what we at Gideon have believed in for the past 7 years: the future of flexible robots will be powered by advanced visual perception and AI.

If you want to build meaningful robotics companies, there’s never been a better time. And it’s never been more important to:
1. Listen to your early customers and focus on adding value to them from day one. Build long-term relationships with their people and help them solve their top problems.
2. Specialize! Focus on solving one specific problem at a time. Do not build universal platforms trying to tackle many problems at once. When customers hear about your company, they should immediately know you’re the best in the world at solving a specific problem they have.
3. Do not reinvent the wheel; use off-the-shelf components whenever possible.
4. Data to train your robots is key. Generalized components and platforms will always miss the industry-specific data and customer insights you should have access to, so use them to build. It’s your secret superpower and a future growth flywheel.
5. Make sure your robots talk to and cooperate well with other systems.
6. Do not underestimate the complexities of deploying AI robots in the real world, especially in commercial environments. Invest in people, processes, and tools to handle this properly early on. This will make or break you. The real world is nothing like your simulation environment.
7. Partner with key industry players to accelerate your growth (like we did with Toyota Material Handling Europe).

All the building blocks are finally coming together. What is the robot you’ll start working on today?

#NVIDIA #JensenHuang #Robotics #AI #AIRobotics #VisualAI #VisualPerception #ComputerVision #GTC24 #AMR #AGV #MobileRobots #HumanoidRobots
-
NVIDIA’s $3,500 robot brain isn’t just hardware, it’s a Trojan horse.

The new Jetson AGX Thor packs 128GB of memory and ships with NVIDIA’s full Isaac robotics software stack. On paper, it’s a developer kit. In practice, it’s a platform play. Amazon Robotics, Meta, Boston Dynamics, Caterpillar, and labs at Stanford and Carnegie Mellon are already building with it.

The key distinction here is that every robot trained on Thor doesn’t just use NVIDIA chips, it learns inside NVIDIA’s ecosystem. Switching later isn’t impossible, but it’s expensive and messy.

Jensen Huang calls robotics the company’s biggest opportunity beyond AI. It’s pretty clear why: hook developers early, lock in the software, and when humanoid robots finally scale, NVIDIA owns the operating system of the physical world.

Not everyone is playing along. Tesla and a few others are rolling their own silicon, trading speed for control. Startups face the harder choice: go with NVIDIA and get to market fast, or go alone and risk falling behind.

At $3,500, it’s the kind of bargain that feels cheap today and proves costly forever.
-
Amazon just acquired Fauna Robotics, the team behind Sprout, a 3.5-foot, $50K bipedal humanoid built as a developer platform for home robotics. Sprout launched in January and Amazon bought the company two months later, making it their second robotics acquisition in a month after picking up RIVR for delivery robots.

Amazon already runs over a million robots in its warehouses, but those operate in controlled, repeatable environments. Home robotics is a different validation problem entirely because the scenario space is so much larger and so much messier: furniture that moves between visits, pets and kids crossing the robot's path, lighting that shifts room to room and hour to hour. You can't cover that kind of variety by testing in a lab.

The interesting part of the Fauna deal isn't the hardware, it's the developer platform. Amazon isn't just building their own home robot; they're creating the foundation for third-party developers to build on top of humanoid robotics. That means the testing and validation challenge doesn't scale with one engineering team, it scales with every developer on the platform.

This is the problem we're working on at Antioch. Simulation is the only practical way to cover that breadth of deployment environments without physically rebuilding every edge case, and the need for it grows in direct proportion to the number of teams building on a platform like this.

Curious whether Amazon's cloud infrastructure becomes part of the developer story here. A humanoid hardware platform with built-in sim and testing tooling would give third-party developers something that, right now, only the biggest autonomy programs can afford to build in-house.

https://lnkd.in/eVncGStq
-
The release of Genesis represents something extraordinary. After diving deep into the research paper, I want to share why this isn't just another AI tool - it's potentially the bridge to making personal robots a reality.

What is Genesis? Imagine having a "virtual universe" where robots can practice tasks millions of times in minutes, learning from each experience, all before attempting anything in the real world. That's Genesis - but it's even more fascinating than that.

🔄 The Traditional vs Genesis Approach
Let me share a simple example that blew my mind.

Teaching a robot to pour water traditionally:
- Program every movement manually
- Test with real water (risking robot damage)
- Repeat thousands of times
- Limited to the specific cups and situations learned

With Genesis, simply tell it: "Pour water from a pitcher into a cup without spilling." Genesis automatically:
- Tests different cup sizes and shapes
- Varies water amounts and conditions
- Adjusts for different surfaces
- Completes millions of practice runs in hours

And here's the kicker - it runs 430,000 times faster than real-time! What would take a year to learn traditionally can be learned in 45 seconds. 🤯

🎮 Four Game-Changing Components:
1. Universal Physics Engine - simulates at 43 million frames per second (430,000x faster than real-time operation), with accurate physics for multiple material types in one simulation
2. Ultra-Fast Robotics Platform - processes a year of training in seconds and enables parallel testing of thousands of scenarios
3. Photo-Realistic Rendering - real-time physics-based rendering with accurate material and lighting simulation
4. Natural Language Understanding - converts plain English to robot commands and handles complex multi-step instructions

💡 Why This Matters: Think about how we currently develop robots - it's like teaching someone to swim without water. Genesis changes this by creating a perfect practice environment where:
- Engineers can test wild ideas without physical prototyping
- Robots can learn complex tasks through millions of attempts

🌍 Beyond Robotics - Universal Applications: Genesis isn't just for robotics - it's transforming multiple fields:
- Healthcare: medical robots practicing surgical procedures millions of times before touching a patient
- Architecture: building design and structural analysis
- Entertainment: physics-accurate animations and VR
- Education: interactive learning environments
- Manufacturing: robots reconfiguring for new tasks through simple instructions

🔮 Future Vision: Imagine describing a task to your home robot in plain language, and it understanding exactly what to do because it's already practiced similar scenarios millions of times in simulation. That future just got much closer.

#AI #Robotics #Innovation #TechnologyInnovation #FutureOfWork #ArtificialIntelligence #RoboticAutomation
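The headline figures can be sanity-checked with a few lines of arithmetic. Note the 100 Hz real-time baseline below is an inference from the two quoted numbers, not something the post states:

```python
SIM_FPS = 43_000_000   # claimed simulation throughput (frames/sec)
SPEEDUP = 430_000      # claimed speedup over real time

# The two claims are mutually consistent if the real-time control
# loop runs at 100 Hz (43M / 430k = 100) -- an assumption, not stated:
realtime_fps = SIM_FPS / SPEEDUP
print(realtime_fps)  # → 100.0

# At the stated speedup, one year of real-world experience takes
# roughly 73 seconds of wall-clock time:
seconds_per_year = 365 * 24 * 3600
print(seconds_per_year / SPEEDUP)
```

At face value the stated speedup implies about 73 seconds per simulated year rather than 45; the exact figure depends on which benchmark the post is quoting.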
-
The Robot Operating System (ROS) has become the de facto open-source standard for building complex robotics applications (e.g. mobile robots navigating warehouses, robotic arms in manufacturing). Rather than reinventing the wheel, developers can take advantage of ROS’s vast library of pre-built, community-vetted packages for navigation, perception, and motion planning. Its node-based messaging architecture allows systems to be modular, scalable, and adaptable, while simulation and visualization tools like Gazebo and RViz make it possible to test and debug before touching hardware. ROS also benefits from a global open-source ecosystem, bridging education, research, and industry.

That said, ROS is not without limitations. Its learning curve can be steep for beginners, and its multi-node design introduces complexity that can be resource-intensive on smaller platforms. While ROS 2 has made great strides with real-time performance and robustness, achieving hard real-time guarantees or running on constrained microcontrollers often requires careful consideration. For simple, single-purpose robots, the overhead of ROS may be unnecessary, and a lightweight framework or custom code might be more efficient.

Ultimately, ROS makes the most sense when your project involves multiple sensors, actuators, and intelligent processes that need to work together. In these scenarios, it provides the communication infrastructure and tools to manage that complexity effectively. If you are just getting started with robotics, ROS may be overkill as you work through the basic concepts. When you start working on large, complex robots (or fleets of robots!), having a standardized underlying framework can be crucial.

Check out my full blog post to read more about the advantages and disadvantages of ROS: https://lnkd.in/dBpshT7d

#ROS #robotics #robot #embedded #programming #software #AI
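The node-based messaging architecture described above boils down to publish/subscribe over named topics: nodes never call each other directly, so they can be developed, tested, and swapped independently. A toy illustration in plain Python (deliberately not `rclpy`; real ROS adds discovery, serialization, and QoS on top of this pattern):

```python
from collections import defaultdict

class MiniBus:
    """Toy ROS-style topic bus: publishers and subscribers are
    decoupled, knowing only the topic name, never each other."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Deliver the message to every subscriber on this topic.
        for cb in self._subs[topic]:
            cb(msg)

bus = MiniBus()
# A "planner" node subscribes to lidar scans; a "driver" node publishes them.
bus.subscribe("/scan", lambda msg: print(f"planner got {msg}"))
bus.publish("/scan", {"ranges": [1.2, 0.9, 2.4]})
```

Replacing the planner (or adding a logger on the same topic) requires no change to the publisher, which is the modularity argument the post makes.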
-
Imagine smarter robots for your business. New research from Google puts advanced Gemini AI directly into robots, which can now understand complex instructions, perform intricate physical tasks with dexterity (like assembly), and adapt to new objects or situations in real time.

The paper introduces "Gemini Robotics," a family of AI models based on Google's Gemini 2.0, designed specifically for robotics. It presents Vision-Language-Action (VLA) models capable of direct robot control, performing complex, dexterous manipulation tasks smoothly and reactively. The models demonstrate generalization to unseen objects and environments and can follow open-vocabulary instructions. The paper also introduces "Gemini Robotics-ER" for enhanced embodied reasoning (spatial/temporal understanding, detection, prediction), bridging the gap between large multimodal models and physical robot interaction.

Here's why this matters: at scale, this will unlock more flexible, intelligent automation for the future of manufacturing, logistics, warehousing, and more, potentially boosting efficiency and enabling tasks previously considered too complex for robots. Very, very promising! (Link in the comments.)