Robotics and Digital Imaging Technologies

Explore top LinkedIn content from expert professionals.

Summary

Robotics and digital imaging technologies combine advanced robots with tools that turn images and videos into interactive digital models, making it easier to create, test, and improve systems in fields like manufacturing, healthcare, and training. These technologies use cameras, AI, and 3D modeling to build digital twins or real-time simulations that help automate, analyze, and refine processes without needing physical prototypes.

  • Try accessible tools: Use smartphones or simple cameras to quickly scan and create 3D models, making robot training and digital scene creation open to anyone.
  • Test before deployment: Run robots or surgical systems in virtual environments created from real-world data to spot issues and improve outcomes before using them on-site (a small sanity-check sketch follows below).
  • Enable real-time updates: Set up systems that provide live feedback and precise tracking, so you can monitor movements and make informed decisions instantly.
Summarized by AI based on LinkedIn member posts
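As a concrete companion to the "test before deployment" tip above, here is a minimal sketch of the kind of sanity check a team might run on a smartphone scan before loading it into a simulator. It assumes the trimesh Python library and a placeholder file name (scan.glb); it is an illustration of the idea, not a prescribed workflow from any of the posts below.

```python
# Illustrative sketch only: quick sanity checks on a smartphone-scanned model
# before importing it into a simulator. "scan.glb" is a placeholder filename.
import trimesh

mesh = trimesh.load("scan.glb", force="mesh")  # force a single mesh even if the file is a scene

print("watertight:", mesh.is_watertight)       # holes in the scan often break collision checks
print("extents (m):", mesh.extents)            # catch unit mistakes (mm vs m) before simulation
print("triangles:", len(mesh.faces))           # very dense scans may need decimation first

# A scan whose bounding box is ~1000x too large was probably exported in millimetres.
if mesh.extents.max() > 100:
    mesh.apply_scale(0.001)                    # assumed mm-to-metre conversion for illustration
```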
  • Mukundan Govindaraj

    Global Developer Relations | Physical AI | Digital Twin | Robotics

    18,717 followers

    From Image to 3D: Why Lyra Could Unstick Brownfield Workflows

    Most brownfield digital-twin projects stall on day one: scanning is costly, photogrammetry is fussy, and CAD clean-up eats budgets. NVIDIA Research’s Lyra points to a different on-ramp.

    The workflow:
    > Ingest a single image (or video).
    > Self-distill a pre-trained, camera-controlled video diffusion model into a 3D Gaussian Splat (3DGS) scene—no real multi-view training data required.
    > Feed-forward generate 3D (and even 4D, i.e., dynamic) scenes for real-time rendering.
    > Export to simulation (e.g., Isaac Sim) to test perception, locomotion, and task policies before you touch the real site.

    Why this matters for industry:
    > Instead of waiting weeks for LiDAR + mesh clean-up, teams can bootstrap a plausible 3D context from minimal visuals—good enough to start layout studies, Gemba walk-throughs, and retrofit planning.
    > Physical AI data flywheel: Robots and agents need diverse, realistic worlds. Lyra’s pipeline can go text→image→3D and text→video→4D, then drop those 3DGS worlds into Isaac Sim for policy training and regression tests—exactly the kind of closed loop we want for factory autonomy and site inspection.
    > Faster iteration, lower embodied carbon: Early feasibility and clash checks happen in synthetic scenes, reducing site visits, rework, and wasted materials—small steps that add up on SDG-aligned projects.

    This is not about throwing away scanning or BIM, but rather giving teams a fast start so design, safety, and robotics folks can begin validating assumptions in a few hours instead of weeks. If you are in the business of industrial twins and Physical AI, this is for you.

    Read more about Lyra: https://lnkd.in/gpTjtxEx
    Our model and code: https://lnkd.in/gtVJFhFE

    #NVIDIAResearch #Lyra #3DGaussianSplatting #DigitalTwin #PhysicalAI #BrownfieldEngineering #IndustrialDigitalization #RoboticsTraining #Simulation #AI4Industry #OpenUSD #Omniverse #IndustrialMetaverse #SustainableAI

    University of Toronto Vector Institute Simon Fraser University NVIDIA
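As a rough companion to the post above, here is a minimal sketch of inspecting a generated 3D Gaussian Splat scene before importing it into a simulator. It assumes the scene is exported in the commonly used 3DGS .ply vertex layout (positions plus logit-encoded opacities); the file name and property names are assumptions, not Lyra's documented output format.

```python
# Illustrative sketch: inspect a 3D Gaussian Splat scene exported as a .ply file
# before handing it to a simulator. Assumes the common 3DGS vertex layout
# (x, y, z, opacity, ...); "scene.ply" is a placeholder filename.
import numpy as np
from plyfile import PlyData

ply = PlyData.read("scene.ply")
v = ply["vertex"]

positions = np.column_stack([v["x"], v["y"], v["z"]])
opacity = 1.0 / (1.0 + np.exp(-np.asarray(v["opacity"])))  # stored as a logit in most exporters

print("gaussians:", len(positions))
print("scene bounds (min/max):", positions.min(axis=0), positions.max(axis=0))
print("mostly transparent splats:", int((opacity < 0.05).sum()))
```

Cheap checks like splat count, scene bounds, and opacity distribution help catch broken generations before spending simulator time on them.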

  • Tom Emrich 🏳️‍🌈

    Building the platform for physical AI at Springcraft | Hiring founding engineers | 17+ years in spatial computing | Ex-Meta, Niantic

    72,942 followers

    This week's defining shift for me is that creating 3D data is getting much simpler. New tools are turning everyday inputs like smartphone video, single photos, and text prompts into usable 3D environments and assets. This lowers the barrier to building the scenes, objects, and spaces that robotics, simulation, and immersive content rely on. It also shifts 3D creation from a specialized skill to something any team can do quickly and at the scale modern spatial systems require.

    This week’s news surfaced signals like these:
    🤖 Parallax Worlds raised $4.9 million to turn standard video into digital twins for robotics testing. The platform turns basic walkthrough videos into interactive 3D spaces that teams can use to run their robot software and see how it performs before sending anything into the field.
    🪑 Meta introduced SAM 3D to reconstruct objects and people from single images, producing fully textured meshes even when subjects are partly hidden or shot from difficult angles. The models were trained using real-world data and a staged process to improve accuracy.
    🌏 Meta unveiled WorldGen, a research tool that generates full 3D worlds from text prompts. It produces complete, navigable spaces that can be used in Unity or Unreal and shows how AI can create environments without manual modeling.

    Why this matters: Faster 3D pipelines expand who can build, test, and refine spatial ideas. They turn 3D creation from a bottleneck into a regular part of development, which opens the door to more experimentation and better decisions earlier in the process.

    #robotics #digitaltwins #simulation #VR #AR #virtualreality #spatialcomputing #physicalAI #AI #3D

  • Jack Shuang Hou

    Diagnostics Executive | Microfluidics & Immunoassay Specialist | Led EUA 230055, EUA 240006 & 510(k) K240728 | Biomarker & Assay Innovation

    18,757 followers

    🦴🤖 Medtronic plc Receives FDA Clearance for Stealth AXiS™ — Spine Surgery Moves From Tools to Intelligent Systems

    Surgical robotics is no longer about mechanical assistance — it’s about decision assistance. Medtronic plc just received FDA clearance for the Stealth AXiS™ surgical system, the first platform to natively integrate surgical planning, navigation, and robotics into one unified workflow for spine surgery.

    🗣️ Ron Lehman, MD — Tenured Professor of Orthopaedic Surgery, Columbia University & Spine Medical Director, The Spine Hospital at NewYork-Presbyterian/The Allen Hospital described real-time motion visibility during surgery as “game changing.”

    Here’s why this matters 👇

    🧠 1️⃣ The shift: navigation → intraoperative intelligence
    Traditional workflow:
    • Plan → operate → verify → adjust → repeat imaging
    Stealth AXiS adds LiveAlign™ segmental tracking
    ➡️ Real-time spine motion visualization
    ➡️ Continuous alignment feedback
    ➡️ Fewer workflow interruptions
    The system doesn’t just guide the surgeon — it understands the surgery as it happens.

    ⚙️ 2️⃣ Platform architecture beats standalone robots
    Instead of selling a robot, Medtronic plc is selling a scalable ecosystem:
    • Planning software
    • Navigation
    • Robotics
    • AiBLE™ data layer
    Hospitals adopt once → expand capabilities over time. This mirrors what happened in imaging: CT scanners became data platforms.

    📊 3️⃣ The real competition is workflow predictability
    🗣️ Michael Carter — Senior Vice President & President, Cranial & Spinal Technologies, Medtronic plc emphasized reducing variability.
    In modern surgery, consistency > speed
    Predictability > dexterity
    Robotics now targets outcome reproducibility, not just precision.

    🧩 My takeaway
    The first era of surgical robotics improved surgeon mechanics. The next era improves surgical decisions. When imaging, navigation, robotics, and analytics merge into a continuous data loop — surgery becomes a controlled process rather than a skilled performance. The operating room is quietly becoming a real-time feedback system.

    #MedTech #SurgicalRobotics #SpineSurgery #DigitalSurgery #HealthcareInnovation
    https://lnkd.in/ghncbmHY
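To make the contrast between "plan → operate → verify" and continuous feedback concrete, here is a purely illustrative sketch of what a continuous alignment check could look like in code. This is not Medtronic code and not the Stealth AXiS or LiveAlign API; the segment names, angles, and threshold are all invented for illustration.

```python
# Purely conceptual sketch of "continuous alignment feedback" as described above.
# Every name and value here is hypothetical.
from dataclasses import dataclass

@dataclass
class SegmentPose:
    level: str          # e.g. "L4"
    angle_deg: float    # tracked segmental angle reported by the navigation system

def alignment_deviation(planned: dict[str, float], tracked: list[SegmentPose]) -> dict[str, float]:
    """Difference between planned segmental angles and what tracking currently reports."""
    return {p.level: p.angle_deg - planned[p.level] for p in tracked if p.level in planned}

# Hypothetical planned alignment and one tracked snapshot
plan = {"L3": 8.0, "L4": 12.0, "L5": 15.0}
snapshot = [SegmentPose("L3", 8.4), SegmentPose("L4", 14.1), SegmentPose("L5", 15.2)]

for level, dev in alignment_deviation(plan, snapshot).items():
    flag = "REVIEW" if abs(dev) > 1.5 else "ok"   # 1.5 degrees is an arbitrary illustrative threshold
    print(f"{level}: deviation {dev:+.1f} deg [{flag}]")
```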

  • Amy Webb

    CEO of FTSG • Global Leader in Strategic Foresight • Quantitative Futurist • Prof at NYU Stern • Cyclist

    99,289 followers

    Found an exciting new study on 3D modeling, AI and robotics. I'll explain the tech, but first... a story:

    Imagine pointing a camera at your factory floor or a complex assembly line. Instantly, on your screen, you see a live, interactive 3D model of that entire space – not just the machinery, but also your workers moving within it, all updated continuously in real time. Think of it like having a perfect, living, dynamic dollhouse version of your operations that mirrors reality second by second. Rather than a recording of something that already happened, it's live spatial understanding.

    That's what this new research potentially makes possible. It introduces a framework for simultaneously tracking camera movement, estimating human poses, and reconstructing both the human and the surrounding scene in 3D, all in real time. Using 3D Gaussian Splatting, it efficiently models dynamic elements. This sets a precedent for creating live, detailed digital twins of humans interacting with environments, which will be crucial for advancements in robotics (so they have real-time perception), virtual and/or augmented reality, and human-computer interaction.

    Eventually, this means a lot of positive knock-on effects:
    - Smarter Robots: Robots could use this live 3D view to navigate complex, changing environments and work much more safely and effectively alongside your human workforce.
    - Hyper-Realistic Training: You could drop trainees into virtual or AR simulations that perfectly replicate live operational conditions for unparalleled realism.
    - Remote Expertise: Remote experts could literally "walk through" the live digital twin to troubleshoot issues or guide on-site staff with complete, real-time context.

    This bridges the gap between the physical world and digital systems instantly, enabling much smarter automation, collaboration, and analysis.

    Paper: https://lnkd.in/eH6VmmCg
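Here is a structural sketch of the per-frame loop the post describes (camera tracking, human pose estimation, dynamic scene update), assuming OpenCV only for video capture. The three update functions are hypothetical stand-ins for the paper's components, not its actual interface.

```python
# Structural sketch of the per-frame loop: track the camera, estimate the human pose,
# and update a dynamic 3D scene representation on every frame.
# The three update functions below are hypothetical placeholders, not the paper's API.
import cv2

def track_camera(frame, prev_state):                # hypothetical: visual odometry / SLAM update
    return prev_state

def estimate_human_pose(frame):                     # hypothetical: 2D/3D human pose estimator
    return None

def update_scene(scene, camera, pose, frame):       # hypothetical: refit dynamic 3D Gaussians
    return scene

cap = cv2.VideoCapture(0)                           # live camera feed
camera_state, scene = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    camera_state = track_camera(frame, camera_state)
    human_pose = estimate_human_pose(frame)
    scene = update_scene(scene, camera_state, human_pose, frame)
    # "scene" is now the live digital twin a robot, AR headset, or remote expert could query.
cap.release()
```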

  • Akshet Patel 🤖

    Robotics Engineer | Creator

    53,267 followers

    1. Scan
    2. Demo
    3. Track
    4. Render
    5. Train models
    6. Deploy

    What if robots could learn new tasks from just a smartphone scan and a single human demonstration, without needing physical robots or complex simulations?

    [⚡Join 2400+ Robotics enthusiasts - https://lnkd.in/dYxB9iCh]

    A paper by Justin Yu, Letian (Max) Fu, Huang Huang, Karim El-Refai, Rares Andrei Ambrus, Richard Cheng, Muhammad Zubair Irshad, and Ken Goldberg from the University of California, Berkeley and Toyota Research Institute introduces a scalable approach for generating robot training data without dynamics simulation or robot hardware.

    "Real2Render2Real: Scaling Robot Data Without Dynamics Simulation or Robot Hardware"
    • Utilises a smartphone-captured object scan and a single human demonstration video as inputs
    • Reconstructs detailed 3D object geometry and tracks 6-DoF object motion using 3D Gaussian Splatting
    • Synthesises thousands of high-fidelity, robot-agnostic demonstrations through photorealistic rendering and inverse kinematics
    • Generates data compatible with vision-language-action models and imitation learning policies
    • Demonstrates that models trained on this data can match the performance of those trained on 150 human teleoperation demonstrations
    • Achieves a 27× increase in data generation throughput compared to traditional methods

    This approach enables scalable robot learning by decoupling data generation from physical robot constraints. It opens avenues for democratising robot training data collection, allowing broader participation using accessible tools.

    If robots can be trained effectively without physical hardware or simulations, how will this transform the future of robotics?

    Paper: https://lnkd.in/emjzKAyW
    Project Page: https://lnkd.in/evV6UkxF

    #RobotLearning #DataGeneration #ImitationLearning #RoboticsResearch #ICRA2025
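Here is an illustrative sketch of the rendering-plus-inverse-kinematics idea: turning a tracked 6-DoF object trajectory into per-frame end-effector targets and joint configurations. The tracked poses, grasp offset, and solve_ik() are placeholders, not the paper's code or data.

```python
# Illustrative sketch of the "render + inverse kinematics" step: turn a tracked
# 6-DoF object trajectory into end-effector targets, then solve IK per frame.
import numpy as np

def solve_ik(target_pose_4x4):                 # hypothetical robot-specific IK solver
    return np.zeros(7)                         # stand-in for a 7-DoF joint configuration

# Tracked object poses (one 4x4 homogeneous transform per video frame) - placeholder data
object_poses = [np.eye(4) for _ in range(5)]

# Fixed grasp offset from the object frame to the gripper frame (assumed known from the demo)
grasp_offset = np.eye(4)
grasp_offset[:3, 3] = [0.0, 0.0, 0.10]         # e.g. grasp 10 cm above the object origin

joint_trajectory = []
for T_obj in object_poses:
    T_gripper = T_obj @ grasp_offset           # compose: where the gripper must be this frame
    joint_trajectory.append(solve_ik(T_gripper))

print("frames:", len(joint_trajectory), "dof per frame:", joint_trajectory[0].shape[0])
```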

  • Tom Zerega

    Founder & CEO of Magnetic 3D - Helping brands achieve unparalleled engagement with "Holographic" Glasses-Free 3D Digital Signage and AI-powered XR applications

    24,461 followers

    Ever wondered how robots learn to grab stuff like pros?

    Watch Atlas casually pick up objects like it’s no big deal. But behind that smooth move is years of learning... just not in the way you'd think.

    Training robots in real life is slow, expensive, and full of trial and error. You’re burning hardware, wasting time, and dealing with a ton of physical failures. So how do they do it faster? They don’t. Their digital twins do.

    Companies like Boston Dynamics and NVIDIA are now training robots in virtual worlds. Thousands of simulated attempts. Different objects, lighting, physics, chaos. All without touching a single real screw. Once the robot’s “brain” is perfected in simulation, it’s uploaded into the real robot - ready to act like it’s done it all before.

    Simulation is no longer just for gaming or CGI. It’s a core part of robotics, AI, manufacturing, and yes, 3D visualization too. In fact, our partners are seeing the benefits of visualizing digital twins on our Magnetic 3D displays. No headsets, no '3D' glasses: immediate depth perception and spatial clarity as a result.

    Robots are learning in 3D. Shouldn't humans too?

    🔔 Follow me at Tom Zerega for the latest in tech and innovation.
    ♻️ Repost to share this with your network.
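Here is an illustrative sketch of the domain-randomization idea behind "thousands of simulated attempts with different objects, lighting, and physics." The simulator rollout and the policy update are hypothetical placeholders; only the randomization structure is the point.

```python
# Illustrative sketch of domain randomization: each simulated episode gets different
# objects, lighting, and physics so the learned "brain" does not overfit to one setup.
# run_episode() is a hypothetical placeholder for a real simulator rollout.
import random
from dataclasses import dataclass

@dataclass
class EpisodeConfig:
    object_name: str
    light_intensity: float   # arbitrary relative units
    friction: float          # contact friction coefficient
    mass_scale: float        # multiplier on the object's nominal mass

def sample_episode() -> EpisodeConfig:
    return EpisodeConfig(
        object_name=random.choice(["box", "mug", "wrench", "bottle"]),
        light_intensity=random.uniform(0.3, 1.5),
        friction=random.uniform(0.4, 1.2),
        mass_scale=random.uniform(0.8, 1.2),
    )

def run_episode(cfg: EpisodeConfig) -> float:    # hypothetical: roll out the policy in simulation
    return random.random()                       # stand-in for an episode reward

rewards = []
for _ in range(1000):                            # "thousands of simulated attempts"
    cfg = sample_episode()
    rewards.append(run_episode(cfg))
    # a real pipeline would update the policy here before the weights ship to the physical robot

print("episodes:", len(rewards), "mean reward:", sum(rewards) / len(rewards))
```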
