What if the biggest players in lab automation have been thinking about it backwards?

Traditional lab automation follows a simple logic: you have a big enclosure, robotic arms inside, and samples that get moved from station to station. One application runs, everything else waits. The next process starts when the previous one finishes. It works, but it creates a fundamental bottleneck - sequential processing limits throughput no matter how fast the robotics get.

On a recent trip to the US, I visited a startup called LabSync that inverts this entire architecture. Instead of moving the instruments, they fix the workstations in place and move the samples. Magnetic tiles on the floor shuttle plates and components between fixed stations, rerouting in real time based on what needs to happen next.

The concept is deceptively simple, but the implications are significant. When you move the samples instead of the instruments, you unlock parallelization at a scale that traditional systems cannot offer:
• Multiple processes run simultaneously
• There is no queue - no waiting for one application to finish before the next one begins
• The throughput potential changes dramatically

I have spent years in this space, and I believe they are onto something. Although the concept is not new, this is a rethinking of the fundamental architecture that has defined lab automation for decades. Whether LabSync becomes the company that scales this or not, the question they are asking is the right one: Why are we still moving the tools instead of the work?

It was nice meeting you, great work Steve & Landon.
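To make the parallelization point concrete, here is a purely illustrative back-of-the-envelope simulation - the workflow steps, durations, and station counts are invented for illustration and are not LabSync's numbers - comparing a sequential enclosure with fixed stations that samples are routed between:

```python
# Toy throughput comparison: sequential (one shared enclosure) vs. parallel
# (fixed stations, samples routed to whichever station is free).
# Illustrative sketch only - steps, durations, and station counts are made up.
import heapq

# Each sample needs these steps; durations are arbitrary minutes.
WORKFLOW = [("dispense", 5), ("incubate", 30), ("read", 10)]
NUM_SAMPLES = 8

def sequential_makespan():
    """One sample at a time occupies the whole system."""
    per_sample = sum(d for _, d in WORKFLOW)
    return NUM_SAMPLES * per_sample

def parallel_makespan(stations_per_step=2):
    """Samples move between fixed stations; each step has its own pool."""
    # free_at[step] = min-heap of times at which each station becomes free
    free_at = {step: [0.0] * stations_per_step for step, _ in WORKFLOW}
    finish = 0.0
    for _ in range(NUM_SAMPLES):
        t = 0.0
        for step, dur in WORKFLOW:
            pool = free_at[step]
            start = max(t, heapq.heappop(pool))  # wait for a free station
            t = start + dur
            heapq.heappush(pool, t)
        finish = max(finish, t)
    return finish

print("sequential:", sequential_makespan(), "min")
print("parallel:  ", parallel_makespan(), "min")
```

Even with these toy numbers, the bottleneck step (incubation) dominates the parallel makespan, while the sequential makespan grows linearly with the number of samples - which is exactly the gap that routing samples between fixed stations is meant to close.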
Robotics In Science Projects
-
Very promising! A new open-source platform for research on Human-AI teaming from Duke University uses real-time human physiological and behavioral data - such as eye gaze, EEG, and ECG - across a wide range of test situations to identify how to improve Human-AI collaboration. Selected insights from the CREW project paper (link in comments):

💡 Comprehensive Design for Collaborative Research. CREW is built to unify multidisciplinary research across machine learning, neuroscience, and cognitive science by offering extensible environments, multimodal feedback, and seamless human-agent interactions. Its modular design allows researchers to quickly modify tasks, integrate diverse AI algorithms, and analyze human behavior through physiological data.

🔄 Real-Time Interaction for Dynamic Decision-Making. CREW's real-time feedback channels enable researchers to study dynamic decision-making and adaptive AI responses. Unlike traditional offline feedback systems, CREW supports continuous, instantaneous human guidance - crucial for simulating real-world scenarios - and makes it easier to study how AI can best align with human intentions in rapidly changing environments.

📊 Benchmarking Across Tasks and Populations. CREW enables large-scale benchmarking of human-guided reinforcement learning (RL) algorithms. By conducting 50 parallel experiments across multiple tasks, researchers could test the scalability of state-of-the-art frameworks like Deep TAMER. This ability to scale the study of how human cognitive traits interact with AI training outcomes is a first.

🌟 Cognitive Traits Driving AI Success. The study highlighted key human cognitive traits - spatial reasoning, reflexes, and predictive abilities - as critical factors in enhancing AI performance. Overall, individuals with superior cognitive test scores consistently trained better-performing agents, underscoring the value of understanding and leveraging human strengths in collaborative AI development.

Given that Humans + AI should be at the heart of progress, this platform promises to be a massive enabler of better Human-AI collaboration. In particular, it can help in designing human-AI interfaces that apply specific human cognitive capabilities to improve AI learning and adaptability. Love it!
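For intuition on the human-in-the-loop idea behind TAMER-style frameworks, here is a generic sketch - not CREW's API and not the Deep TAMER implementation - in which an agent fits a model of real-time human feedback and acts greedily on it:

```python
# Minimal TAMER-style loop: learn a model of human feedback H(s, a) and act
# greedily on it. Generic illustration only - not CREW's API or Deep TAMER.
import numpy as np

n_states, n_actions = 10, 4
H = np.zeros((n_states, n_actions))    # estimated human feedback per (s, a)
alpha = 0.2                            # learning rate
rng = np.random.default_rng(0)

def get_human_feedback(state, action):
    # Stand-in for a real-time human signal (e.g., a key press in [-1, 1]).
    # Here the simulated "human" prefers action == state % n_actions.
    return 1.0 if action == state % n_actions else -0.1

state = 0
for step in range(500):
    # Mostly exploit the current feedback model, sometimes explore.
    if rng.random() < 0.1:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(H[state]))
    feedback = get_human_feedback(state, action)
    # Move the estimate toward the observed human signal.
    H[state, action] += alpha * (feedback - H[state, action])
    state = (state + 1) % n_states      # toy environment dynamics

print("Learned preferred action per state:", np.argmax(H, axis=1))
```

The point of the sketch is the interaction pattern CREW is built to study: feedback arrives continuously during behavior rather than as an offline, after-the-fact label.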
-
My boys recently asked a humanoid robot to sing "APT." It couldn't do it. But the next generation won't just learn the song. They will remember who asked for it, why they liked it, and recall that interaction three years from now.

We are witnessing the rise of a new horizontal layer in the AI stack: Memory-as-a-Service.

Hardware is rapidly becoming a commodity. The real competitive moat for robotics won't be dexterity or speed. It will be the cognitive core: the ability to acquire, retain, and reason from experience. Specialised companies are now building this "hippocampus" as a pluggable component:
↳ MemSync.ai: Focuses on decentralisation. It offers a portable, user-owned memory that travels with you across different platforms.
↳ Letta (formerly MemGPT): Enables agents to "self-edit" their memory, deciding autonomously what to keep and what to forget.
↳ Google Vertex AI: The enterprise play. Managed, scalable memory banks designed for commercial robot fleets.

The next hurdle is reasoning. We need robots that don't just look up facts but use episodic memory to plan for new situations. We also need to solve "catastrophic forgetting." A true general-purpose robot must learn new skills in a warehouse or home without overwriting its previous training.

I admit, standing on the edge of this world where machines have a perfect memory of every interaction leaves me feeling thrilled and a little fearful. If you are building in this memory layer, I'd love to chat.

For audio notes and to read the rest of this post, follow the link to my Substack in the first comment.

♻️ Repost and be the first to share with your network.
🔔 Follow Jason Tan for more AI strategies, product development and adoption.
🎤 Now taking speaker bookings
🔗 Work with me – link in bio

#ai #genai #humanoid
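To ground what an "episodic memory layer" does, here is a bare-bones sketch of a store-and-recall component. It is purely illustrative - it is not the API of MemSync.ai, Letta, or Vertex AI, and the toy hashing "embedding" stands in for a real learned model:

```python
# A bare-bones episodic memory store with similarity-based recall.
# Purely illustrative - not the API of MemSync.ai, Letta, or Vertex AI.
import math
from dataclasses import dataclass, field

def embed(text: str, dim: int = 64) -> list:
    # Toy hashing "embedding" so the example stays self-contained;
    # a real system would use a learned embedding model.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

@dataclass
class Episode:
    text: str
    vector: list = field(default_factory=list)

class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def store(self, text: str) -> None:
        """Write an interaction into long-term episodic memory."""
        self.episodes.append(Episode(text, embed(text)))

    def recall(self, query: str, k: int = 2) -> list:
        """Return the k stored episodes most similar to the query."""
        q = embed(query)
        scored = sorted(
            self.episodes,
            key=lambda e: -sum(a * b for a, b in zip(q, e.vector)),
        )
        return [e.text for e in scored[:k]]

memory = EpisodicMemory()
memory.store("The boys asked the robot to sing APT in the living room")
memory.store("Charging dock relocated to the garage on Tuesday")
print(memory.recall("what song did the boys request?"))
```

Retrieval like this is the easy half; the harder problems the post points to - reasoning over recalled episodes and avoiding catastrophic forgetting when new skills are learned - sit on top of a store like this one.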
-
A new paper from the Technical University of Munich and Universitat Politècnica de Catalunya, Barcelona explores the architecture of autonomous LLM agents, emphasizing that these systems are more than just large language models integrated into workflows. Here are the key insights:

1. Agents ≠ Workflows. Most current systems simply chain prompts or call tools. True agents plan, perceive, remember, and act, dynamically re-planning when challenges arise.

2. Perception. Vision-language models (VLMs) and multimodal LLMs (MM-LLMs) act as the 'eyes and ears', merging images, text, and structured data to interpret environments such as GUIs or robotics spaces.

3. Reasoning. Techniques like Chain-of-Thought (CoT), Tree-of-Thought (ToT), ReAct, and Decompose, Plan in Parallel, and Merge (DPPM) allow agents to decompose tasks, reflect, and even engage in self-argumentation before taking action.

4. Memory. Retrieval-Augmented Generation (RAG) supports long-term recall, while context-aware short-term memory maintains task coherence - akin to cognitive persistence, and essential for genuine autonomy.

5. Execution. This final step connects thought to action through multimodal control of tools, APIs, GUIs, and robotic interfaces.

The takeaway? LLM agents represent cognitive architectures rather than mere chatbots. Each subsystem - perception, reasoning, memory, and action - must function together to achieve closed-loop autonomy.

For those working in this field, the paper, 'Fundamentals of Building Autonomous LLM Agents', is worth reading: https://lnkd.in/dmBaXz9u

#AI #AgenticAI #LLMAgents #CognitiveArchitecture #GenerativeAI #ArtificialIntelligence
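As a rough illustration of the closed perception-reasoning-memory-execution loop described above - a generic sketch with stubbed-out model and tool calls, not the paper's code - the four subsystems can be wired together like this:

```python
# Skeletal closed-loop agent with the four subsystems named in the paper:
# perception, reasoning, memory, execution. Generic sketch only - the LLM,
# VLM, and tool calls are stand-in stubs, not the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)     # short-term task memory

    def perceive(self, environment: dict) -> str:
        # Stand-in for a VLM / MM-LLM turning raw observations into text.
        return f"observation: {environment}"

    def reason(self, observation: str) -> str:
        # Stand-in for CoT/ReAct-style planning over goal + memory + observation.
        history = " | ".join(self.memory[-3:])
        return f"plan toward '{self.goal}' given {observation} (history: {history})"

    def act(self, plan: str) -> str:
        # Stand-in for tool / API / GUI / robot execution.
        return f"executed: {plan}"

    def step(self, environment: dict) -> str:
        obs = self.perceive(environment)
        plan = self.reason(obs)
        result = self.act(plan)
        self.memory.append(result)        # write back so the loop can re-plan
        return result

agent = Agent(goal="book a meeting room")
for t in range(3):
    print(agent.step({"time_step": t, "calendar": "partially free"}))
```

The structural point is the write-back at the end of each step: without memory feeding the next round of reasoning, the loop degenerates into the prompt-chaining "workflow" the paper distinguishes agents from.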
-
From Image to 3D: Why Lyra Could Unstick Brownfield Workflows

Most brownfield digital-twin projects stall on day one: scanning is costly, photogrammetry is fussy, and CAD clean-up eats budgets. NVIDIA Research's Lyra points to a different on-ramp.

The workflow:
> Ingest a single image (or video).
> Self-distill a pre-trained, camera-controlled video diffusion model into a 3D Gaussian Splat (3DGS) scene - no real multi-view training data required.
> Feed-forward generate 3D (and even 4D, i.e., dynamic) scenes for real-time rendering.
> Export to simulation (e.g., Isaac Sim) to test perception, locomotion, and task policies before you touch the real site.

Why this matters for industry:
> Instead of waiting weeks for LiDAR + mesh clean-up, teams can bootstrap a plausible 3D context from minimal visuals - good enough to start layout studies, Gemba walk-throughs, and retrofit planning.
> Physical AI data flywheel: Robots and agents need diverse, realistic worlds. Lyra's pipeline can go text→image→3D and text→video→4D, then drop those 3DGS worlds into Isaac Sim for policy training and regression tests - exactly the kind of closed loop we want for factory autonomy and site inspection.
> Faster iteration, lower embodied carbon: Early feasibility and clash checks happen in synthetic scenes, reducing site visits, rework, and wasted materials - small steps that add up on SDG-aligned projects.

This is not about throwing away scanning or BIM, but rather about giving teams a fast start so design, safety, and robotics folks can begin validating assumptions in a few hours instead of weeks. If you are in the business of industrial twins and Physical AI, this is for you.

Read more about Lyra: https://lnkd.in/gpTjtxEx
Our model and code: https://lnkd.in/gtVJFhFE

#NVIDIAResearch #Lyra #3DGaussianSplatting #DigitalTwin #PhysicalAI #BrownfieldEngineering #IndustrialDigitalization #RoboticsTraining #Simulation #AI4Industry #OpenUSD #Omniverse #IndustrialMetaverse #SustainableAI

University of Toronto Vector Institute Simon Fraser University NVIDIA
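As a mental model of how those workflow stages hand data to each other, here is a rough pipeline skeleton. Every function name is a hypothetical placeholder - this is not Lyra's or Isaac Sim's actual API, only the shape of the image → 3DGS → simulation chain:

```python
# Rough shape of the image -> 3DGS -> simulation pipeline described above.
# Every function is a hypothetical placeholder, NOT Lyra's or Isaac Sim's
# real API; it only illustrates how the stages hand data to each other.

def generate_multiview_video(image_path: str, camera_path: list) -> list:
    """Placeholder: camera-controlled video diffusion from a single image."""
    return [f"frame@{pose}" for pose in camera_path]

def distill_to_3dgs(frames: list) -> dict:
    """Placeholder: self-distill the generated views into a 3DGS scene."""
    return {"gaussians": len(frames) * 10_000, "source_frames": frames}

def export_scene(scene: dict, path: str) -> str:
    """Placeholder: write the splat scene to a file a simulator can load."""
    return f"{path} ({scene['gaussians']} gaussians)"

def run_policy_test(scene_file: str, policy: str) -> str:
    """Placeholder: load the scene in a simulator and evaluate a robot policy."""
    return f"policy '{policy}' evaluated in {scene_file}"

if __name__ == "__main__":
    frames = generate_multiview_video("site_photo.jpg", ["front", "left", "top"])
    scene = distill_to_3dgs(frames)
    scene_file = export_scene(scene, "brownfield_site.usd")
    print(run_policy_test(scene_file, "forklift_navigation"))
```

The design point is that everything downstream of the first stage is synthetic: the multi-view evidence is generated rather than captured, which is why no LiDAR pass or photogrammetry rig is needed to get a first usable scene.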
-
This week's defining shift for me is that creating 3D data is getting much simpler. New tools are turning everyday inputs like smartphone video, single photos, and text prompts into usable 3D environments and assets. This lowers the barrier to building the scenes, objects, and spaces that robotics, simulation, and immersive content rely on. It also shifts 3D creation from a specialized skill to something all teams can generate quickly and at the scale modern spatial systems require.

This week's news surfaced signals like these:

🤖 Parallax Worlds raised $4.9 million to turn standard video into digital twins for robotics testing. The platform turns basic walkthrough videos into interactive 3D spaces that teams can use to run their robot software and see how it performs before sending anything into the field.

🪑 Meta introduced SAM 3D to reconstruct objects and people from single images, producing fully textured meshes even when subjects are partly hidden or shot from difficult angles. The models were trained using real-world data and a staged process to improve accuracy.

🌏 Meta unveiled WorldGen, a research tool that generates full 3D worlds from text prompts. It produces complete, navigable spaces that can be used in Unity or Unreal and shows how AI can create environments without manual modeling.

Why this matters: Faster 3D pipelines expand who can build, test, and refine spatial ideas. They turn 3D creation from a bottleneck into a regular part of development, which opens the door to more experimentation and better decisions earlier in the process.

#robotics #digitaltwins #simulation #VR #AR #virtualreality #spatialcomputing #physicalAI #AI #3D
-
Found an exciting new study on 3D modeling, AI and robotics. I'll explain the tech, but first... a story:

Imagine pointing a camera at your factory floor or a complex assembly line. Instantly, on your screen, you see a live, interactive 3D model of that entire space - not just the machinery, but also your workers moving within it, all updated continuously in real-time. Think of it like having a perfect, living, dynamic dollhouse version of your operations that mirrors reality second-by-second. Rather than a recording of something that already happened, it's live spatial understanding.

That's what this new research potentially makes possible. It introduces a framework for simultaneously tracking camera movement, estimating human poses, and reconstructing both the human and the surrounding scene in 3D, all in real-time. Using 3D Gaussian Splatting, it efficiently models dynamic elements. This sets a precedent for creating live, detailed digital twins of humans interacting with environments, which will be crucial for advancements in robotics (so they have real-time perception), virtual and/or augmented reality, and human-computer interaction.

Eventually, this means a lot of positive knock-on effects:
- Smarter Robots: Robots could use this live 3D view to navigate complex, changing environments and work much more safely and effectively alongside your human workforce.
- Hyper-Realistic Training: You could drop trainees into virtual or AR simulations that perfectly replicate live operational conditions for unparalleled realism.
- Remote Expertise: Remote experts could literally "walk through" the live digital twin to troubleshoot issues or guide on-site staff with complete, real-time context.

This bridges the gap between the physical world and digital systems instantly, enabling much smarter automation, collaboration, and analysis.

Paper: https://lnkd.in/eH6VmmCg
-
Meet the Surgical Robot Transformer❗

These surgical tasks were NOT performed by a surgeon - they were done by #AI, using a machine learning technique called Imitation Learning (IL).

What is Imitation Learning (IL)? It's a method where #robots learn tasks by observing and mimicking human actions, much like how people learn by watching others. Instead of programming every step, the robot uses data from expert demonstrations to replicate actions.

Why is this important? As of 2021, over 10 million surgeries have been performed using 6,500 da Vinci robotic systems. These surgeries generate a wealth of recorded data, including videos and robot kinematic data, which can be used to train machine learning models. Unlike other robotics companies, which hire operators to collect teleoperation data, the da Vinci robot already operates via a surgeon-controlled console. This makes it a great platform for imitation learning.

#research: https://lnkd.in/dim8CuXW
#authors: Ji Woong (Brian) Kim, Tony Z. Zhao, Samuel Schmidgall, Anton Deguet, Marin Kobilarov, Chelsea Finn, Axel Krieger, The Johns Hopkins University, Stanford University
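For intuition, the simplest form of imitation learning is behavior cloning: fit a policy to the expert's state-action pairs with supervised learning. Here is a minimal PyTorch sketch on synthetic data - illustrative only, not the Surgical Robot Transformer's architecture or training code:

```python
# Behavior cloning in its simplest form: supervised regression from states to
# the expert's actions. Toy synthetic data - not the paper's model or dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pretend demonstrations: 1,000 (state, action) pairs from an "expert".
state_dim, action_dim = 16, 6              # e.g., kinematic state -> joint deltas
states = torch.randn(1000, state_dim)
expert = nn.Linear(state_dim, action_dim)  # stand-in for the human demonstrator
with torch.no_grad():
    actions = expert(states) + 0.01 * torch.randn(1000, action_dim)

# Policy network trained to mimic the expert's actions.
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    pred = policy(states)
    loss = nn.functional.mse_loss(pred, actions)   # match the demonstrations
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```

The appeal of the da Vinci setting is exactly that the "demonstrations" - console inputs and kinematics recorded during real procedures - already exist at scale, so the expensive data-collection step is largely done.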
-
TECHNOLOGY BEHIND AI PARKING ROBOTS - XINJIN SHANI'S SMART INNOVATION

- Uses AI for real-time space detection.
- Operates without human intervention.
- Scans vehicles using 3D LiDAR.
- Rotates cars for optimal placement.
- Machine learning improves efficiency.
- Works in multi-level parking structures.
- Uses automated lifts for stacking.
- Detects car size and weight.
- Reduces parking time significantly.
- GPS-based navigation ensures accuracy.
- Cloud integration for remote control.
- Prevents collisions with obstacle detection.
- IoT connectivity enables seamless updates.
- Handles electric and traditional vehicles.
- Facial recognition allows vehicle retrieval.
- Thermal sensors detect overheating issues.
- Voice commands enable interaction.
- Adaptive algorithms optimize space usage.
- Enhances urban parking efficiency.
-
EMBODIMENT

One of the hottest problems in Robotics and AI today is embodiment. Although experts disagree on what exactly embodiment is, most researchers understand it as the use of representations of the body and of what the body does (i.e., actions) in solving various tasks.

Considering embodiment changes the kinds of questions that may be asked about a cognitive system. In computer vision, for example, if a system with vision is moving in its environment, the questions asked are about finding the egomotion and segmenting the scene. But when biological systems move, their brains tell their vision systems about the movement they are generating. Equipped with this knowledge, we can ask new questions, like: given the current image, can we predict the next one? Consideration of embodiment introduces a large variety of new, interesting problems. In some sense, we can redo many computer vision problems by incorporating embodiment - it is a basic concept in Active Perception.

One such problem that we investigate in our group (prg.cs.umd.edu) is the use of embodiment in creating visual space descriptions. Today's robots for the most part employ 3D sensors, i.e. cameras that provide the exact distance of any object in the scene, in meters or feet. A consequence of this is that all robots need to be calibrated (they have to be taught what a meter is). Biological systems, however, do not work this way. For example, as you read these words, you can see many objects in your immediate environment, and although you don't know their metric distance, you have intimate knowledge of exactly where they are, because you can put your finger on any point of those objects. Thus, visual space could be encoded in motor coordinates - we demonstrate this by building robotic mechanisms that can decide to go through a hole (a door) without knowing the opening's absolute size, or jump over a gap without knowing the gap's absolute size. This is the subject of the thesis of our student Levi Burner.

Our paper "Embodied visuomotor representation", introducing these concepts, just appeared in the journal Nature Robotics. Here is the link: https://lnkd.in/eX9Hnfgc

Congratulations to our student @Levi Burner, who also just successfully defended his PhD thesis. He proposed a new theory for doing robotics without a ruler.
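As a toy illustration of the general idea of calibration-free, body-relative decisions - my own simplified example, not the method from the paper - whether an opening is passable can be judged from the ratio of its apparent size to the apparent size of the robot's own body in the same image, so no metric units are ever needed:

```python
# Toy "can I fit through this opening?" check that never uses meters.
# My own simplified illustration of calibration-free, body-relative perception,
# NOT the method from the "Embodied visuomotor representation" paper.

def passable(opening_apparent_width: float,
             body_apparent_width: float,
             margin: float = 1.2) -> bool:
    """Both widths are apparent sizes measured in the same image (pixels or
    radians), assumed to be taken at a comparable distance from the opening.
    Because only their ratio matters, the decision is independent of absolute
    scale and requires no metric calibration."""
    return opening_apparent_width >= margin * body_apparent_width

# A doorway that looks 250 px wide vs. a shoulder span that looks 220 px wide:
print(passable(250.0, 220.0))   # False - 250 < 1.2 * 220 = 264, too tight
print(passable(300.0, 200.0))   # True  - 300 >= 1.2 * 200 = 240, clears margin
```

The broader claim in the post is stronger than this toy: space is encoded in motor coordinates (what the body can do), not just in image ratios, but the example shows why "knowing what a meter is" need not be a prerequisite for acting.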