Robotics Engineering Technical Skills

Explore top LinkedIn content from expert professionals.

  • Keesjan (Case) Engelen

    Titoma, Electr. Design & Mfg Colombia, Taiwan, China

    98,078 followers

    Feel the Invisible… Most haptic gear is still stuck in the buzz-and-vibrate phase. Big gloves. Full suits. Nothing that really feels right. Turns out, a team at the University of Illinois found a better way with a fingertip patch. It weighs just 0.3 grams. Inside are nine micro actuators that press and stretch like tiny muscles to simulate pressure, vibration, and texture. Each one’s built from a soft elastomer and a spring. When it expands, it pushes back. That’s how you get real feedback without adding bulk. The patch also senses touch. It can send and receive signals in real time. All from something the size of your fingertip. Still a lab project, but it makes sense. It’s light, flexible, and built to sit directly on the skin. If it scales, it could mean sensation in prosthetics, feedback in surgery, or touch in small robots. A small patch with a lot of reach. Where else could this kind of tech make a difference? Daily #electronics insights from Asia—follow me, Keesjan, and never miss a post by ringing my 🔔. #technology #innovation

  • Aishwarya Srinivasan (Influencer)
    627,898 followers

    If you are building AI agents or learning about them, then you should keep these best practices in mind 👇

    Building agentic systems isn't just about chaining prompts anymore; it's about designing robust, interpretable, and production-grade systems that interact with tools, humans, and other agents in complex environments. Here are 10 essential design principles you need to know:

    ➡️ Modular Architectures
    Separate planning, reasoning, perception, and actuation. This makes your agents more interpretable and easier to debug. Think planner-executor separation in LangGraph or CogAgent-style designs.

    ➡️ Tool-Use APIs via MCP or Open Function Calling
    Adopt the Model Context Protocol (MCP) or OpenAI's Function Calling to interface safely with external tools. These standard interfaces provide strong typing, parameter validation, and consistent execution behavior. (A minimal sketch of this pattern follows the post.)

    ➡️ Long-Term & Working Memory
    Memory is non-optional for non-trivial agents. Use hybrid memory stacks: vector search tools like MemGPT or Marqo for retrieval, combined with structured memory systems like LlamaIndex agents for factual consistency.

    ➡️ Reflection & Self-Critique Loops
    Implement agent self-evaluation using ReAct, Reflexion, or emerging techniques like Voyager-style curriculum refinement. Reflection improves reasoning and helps correct hallucinated chains of thought.

    ➡️ Planning with Hierarchies
    Use hierarchical planning: a high-level planner for task decomposition and a low-level executor to interact with tools. This improves reusability and modularity, especially in multi-step or multi-modal workflows.

    ➡️ Multi-Agent Collaboration
    Use protocols like AutoGen, A2A, or ChatDev to support agent-to-agent negotiation, subtask allocation, and cooperative planning. This is foundational for open-ended workflows and enterprise-scale orchestration.

    ➡️ Simulation + Eval Harnesses
    Always test in simulation. Use benchmarks like ToolBench, SWE-agent, or AgentBoard to validate agent performance before production. This minimizes surprises and surfaces regressions early.

    ➡️ Safety & Alignment Layers
    Don't ship agents without guardrails. Use tools like Llama Guard v4, Prompt Shield, and role-based access controls. Add structured rate-limiting to prevent overuse or sensitive tool invocation.

    ➡️ Cost-Aware Agent Execution
    Implement token budgeting, step-count tracking, and execution metrics. Especially in multi-agent settings, costs can grow exponentially if unbounded.

    ➡️ Human-in-the-Loop Orchestration
    Always have an escalation path. Add override triggers, fallback LLMs, or routes to a human for edge cases and critical decision points. This protects quality and trust.

    PS: If you are interested in learning more about AI Agents and MCP, join the hands-on workshop I am hosting on 31st May: https://lnkd.in/dWyiN89z

    If you found this insightful, share it with your network ♻️ Follow me (Aishwarya Srinivasan) for more AI insights and educational content.
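
    To make the tool-use principle concrete, here is a minimal, self-contained sketch of typed tool registration and validation. The `get_weather` tool and the schema format are invented for illustration; a production agent would register tools through MCP or a provider's function-calling API rather than a hand-rolled registry.

    ```python
    # Minimal sketch of typed tool-calling with parameter validation.
    # The tool and schema format are illustrative, not MCP or any vendor API.
    import json
    from typing import Any, Callable

    TOOLS: dict[str, dict[str, Any]] = {}

    def tool(name: str, params: dict[str, type]) -> Callable:
        """Register a tool together with a typed parameter schema."""
        def wrap(fn: Callable) -> Callable:
            TOOLS[name] = {"fn": fn, "params": params}
            return fn
        return wrap

    @tool("get_weather", {"city": str})
    def get_weather(city: str) -> str:
        return f"Sunny in {city}"  # stub standing in for a real API call

    def dispatch(call_json: str) -> str:
        """Validate a model-emitted tool call against the schema, then execute it."""
        call = json.loads(call_json)
        spec = TOOLS[call["name"]]
        for key, typ in spec["params"].items():
            if not isinstance(call["arguments"].get(key), typ):
                raise TypeError(f"parameter {key!r} must be {typ.__name__}")
        return spec["fn"](**call["arguments"])

    print(dispatch('{"name": "get_weather", "arguments": {"city": "Austin"}}'))
    ```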

  • Supriya Rathi

    110k+ | India #1. World #10 | Physical-AI | Podcast Host - SRX Robotics | Connecting founders, researchers, & markets | DM to post your research | DeepTech

    112,806 followers

    Introducing InflatableBots: shape-changing inflatable #robots for large-scale encountered-type haptics in #VR. Unlike traditional inflatable shape displays, which are immobile and limited in interaction area, this approach combines mobile robots with fan-based inflatable structures, enabling safe, scalable, and deployable haptic interactions on a large scale.

    The team developed three coordinated inflatable mobile robots, each consisting of an omni-directional mobile base and a reel-based inflatable structure. Each robot can rapidly change its height and position at the same time (horizontal: 58.5 cm/s; vertical: 10.4 cm/s, from 40 cm to 200 cm), allowing quick, dynamic haptic rendering of multiple touch points to simulate various body-scale objects and surfaces in real time across large spaces (3.5 m x 2.5 m). A user study (N = 12) confirmed the system's advantages in safety, deployability, and large-scale interactability, significantly improving realism in VR experiences.

    #research #paper: https://lnkd.in/dKxW23tY
    #authors: Ryota Gomi, Ryo Suzuki, Kazuki Takashima, Kazuyuki Fujita, Yoshifumi Kitamura (University of Calgary)
    #robotics #innovation #technology #future
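
    A quick back-of-the-envelope check on the numbers quoted above (constant-speed assumption, ignoring acceleration):

    ```python
    # Traversal times implied by the quoted speeds; rough figures only.
    h_speed = 58.5  # horizontal speed, cm/s
    v_speed = 10.4  # vertical speed, cm/s

    diag_cm = (350**2 + 250**2) ** 0.5  # corner to corner of the 3.5 m x 2.5 m space
    height_sweep_cm = 200 - 40          # full 40 cm -> 200 cm extension

    print(f"corner-to-corner dash: {diag_cm / h_speed:.1f} s")         # ~7.4 s
    print(f"full height sweep:     {height_sweep_cm / v_speed:.1f} s")  # ~15.4 s
    ```

    Repositioning across the workspace takes well under ten seconds, while a full height change is the slower operation, which helps explain why three coordinated robots are used to cover multiple touch points.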

  • Clem Delangue 🤗 (Influencer)

    Co-founder & CEO at Hugging Face

    302,461 followers

    🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune & deploy in their robots!

    π₀.₅ is a Vision-Language-Action model which represents a significant evolution from π₀ to address a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training. Generalization must occur at multiple levels:

    - Physical Level: understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
    - Semantic Level: understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
    - Environmental Level: adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

    The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:

    - Multimodal Web Data: image captioning, visual question answering, object detection
    - Verbal Instructions: humans coaching robots through complex tasks step by step
    - Subtask Commands: high-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
    - Cross-Embodiment Robot Data: data from various robot platforms with different capabilities
    - Multi-Environment Data: static robots deployed across many different homes
    - Mobile Manipulation Data: ~400 hours of mobile robot demonstrations

    This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously. Huge thanks to the Physical Intelligence team & contributors!

    Model: https://lnkd.in/eAEr7Yk6
    LeRobot: https://lnkd.in/ehzQ3Mqy
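
    For anyone who wants to experiment right away, a minimal sketch of pulling the checkpoint. The `snapshot_download` call is the standard `huggingface_hub` API; the repo id and the commented-out LeRobot import follow the naming in this post and LeRobot's usual `from_pretrained` pattern, so treat them as assumptions and check the LeRobot docs for the current paths.

    ```python
    # Fetch the pi0 checkpoint from the Hugging Face Hub.
    from huggingface_hub import snapshot_download

    ckpt_dir = snapshot_download(repo_id="lerobot/pi0")  # repo id assumed from the LeRobot port
    print(f"weights downloaded to {ckpt_dir}")

    # Loading for inference -- ASSUMED import path, based on LeRobot's usual
    # <Policy>.from_pretrained(...) layout; verify against the LeRobot docs:
    # from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy
    # policy = PI0Policy.from_pretrained("lerobot/pi0")
    # action = policy.select_action(obs)  # images + instruction in, motor commands out
    ```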

  • Chandandeep Singh

    AI Manipulation & Robot Learning Engineer | Robotics Learning Systems Architect | Founder @ Learn Robotics & AI

    63,376 followers

    🚀 The Importance of Kinematics in Robotics Software 🤖
    (Open Source Robots for learning Robotics: https://lnkd.in/ec44NKQe)

    Kinematics is a fundamental aspect of robotics that deals with the motion of objects without considering the forces that cause that motion. Understanding kinematics is crucial for developing effective robotic systems. Here's why it matters:

    1️⃣ Understanding Robot Motion
    🦾 Robot Movement: Kinematics describes how robots move, including positions, velocities, and accelerations.
    📐 Path Planning: Essential for determining how to move from point A to point B while avoiding obstacles.

    2️⃣ Forward and Inverse Kinematics
    ➡️ Forward Kinematics: Calculates the position of the robot's end effector (e.g., the tip of a robotic arm) from joint angles and configurations.
    🔄 Inverse Kinematics: Determines the joint angles required to achieve a desired end-effector position. This is vital for tasks like grasping and manipulation. (A worked example of both follows this post.)

    3️⃣ Motion Control and Planning
    🎯 Trajectory Generation: Kinematic equations are used to generate smooth trajectories for robotic motion, ensuring efficient and precise movements.
    🚦 Real-Time Control: Helps implement control algorithms that enable robots to follow paths accurately in dynamic environments.

    4️⃣ Simulation and Testing
    🛠️ Robotic Simulators: Kinematics plays a key role in simulating robot behavior, allowing developers to test algorithms and strategies before deploying them to physical robots.
    🔍 Visualization: Tools like RViz provide visual feedback on kinematic models, aiding debugging and development.

    5️⃣ Applications Across Industries
    ⚙️ Manufacturing: Robotic arms for assembly, welding, and painting rely on kinematics for precise operations.
    🏥 Healthcare: Surgical robotics relies on kinematic models to guide instruments with high accuracy.
    🚗 Autonomous Vehicles: Kinematics is essential for motion planning and navigation, enabling safe and efficient driving.

    6️⃣ Foundation for Advanced Robotics
    📚 Building Blocks for Dynamics: Kinematics is the first step toward understanding more complex concepts like dynamics, control theory, and robot learning.
    🧠 Interdisciplinary Knowledge: Combines concepts from geometry, physics, and engineering, providing a comprehensive foundation for robotics development.

    Understanding kinematics is not just about math; it's about bringing robots to life and making them function in the real world. Embrace kinematics as a key skill in your robotics journey! 🌟

    #Robotics #Kinematics #SoftwareEngineering #MotionPlanning #RoboticSystems
    (Open Source Robots for learning Robotics: https://lnkd.in/ec44NKQe)
    Image Source: https://lnkd.in/eutUhgSw
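
    As promised above, a worked example of forward and inverse kinematics for the simplest interesting case, a 2-link planar arm. This is a textbook sketch with made-up link lengths, not tied to any particular robot:

    ```python
    # Forward and inverse kinematics of a 2-link planar arm.
    from math import atan2, acos, cos, sin, hypot

    L1, L2 = 1.0, 0.8  # link lengths in meters (illustrative values)

    def fk(t1: float, t2: float) -> tuple[float, float]:
        """Forward kinematics: joint angles (rad) -> end-effector (x, y)."""
        return (L1 * cos(t1) + L2 * cos(t1 + t2),
                L1 * sin(t1) + L2 * sin(t1 + t2))

    def ik(x: float, y: float) -> tuple[float, float]:
        """Inverse kinematics: end-effector (x, y) -> one of the two joint solutions."""
        r = hypot(x, y)
        assert abs(L1 - L2) <= r <= L1 + L2, "target out of reach"
        c2 = (r**2 - L1**2 - L2**2) / (2 * L1 * L2)  # law of cosines
        t2 = acos(c2)
        t1 = atan2(y, x) - atan2(L2 * sin(t2), L1 + L2 * cos(t2))
        return t1, t2

    t1, t2 = ik(1.2, 0.5)
    print(fk(t1, t2))  # ~(1.2, 0.5): FK of the IK solution recovers the target
    ```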

  • Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    778,861 followers

    These students were challenged to build a robot capable of scaling a vertical wall in record time, a task that mirrors real engineering problems faced by aerospace, manufacturing, and autonomous robotics teams worldwide. Will you be able to win?

    To succeed, each group had to master a full engineering cycle:
    🔹 Mechanical design: calculating torque, motor ratios, surface grip, and center of gravity
    🔹 Material selection: optimizing weight-to-strength ratios (aluminum, carbon fiber, 3D-printed composites)
    🔹 Control algorithms: PID tuning, sensor feedback loops, and stability control (a minimal PID sketch follows this post)
    🔹 Energy efficiency: maximizing battery output and motor load under vertical stress
    🔹 Failure analysis: testing, measuring, iterating, and rebuilding

    And this isn't just academic. Challenges like this reflect real-world robotics breakthroughs:
    📌 NASA's Valkyrie robot uses similar balance and grip logic for climbing unstable surfaces in disaster response missions.
    📌 Boston Dynamics spent over 10 years perfecting the control systems students experiment with on a smaller scale.
    📌 Industrial robots used in warehouses face the same physics constraints — friction, payload, torque, and trajectory planning.
    📌 Spacecraft design teams use the same modeling principles to ensure robots can maneuver on asteroids with extremely low gravity.

    And student innovation is accelerating fast:
    🚀 University robotics teams report up to 40% faster prototype cycles thanks to rapid 3D printing.
    🚀 High-school robotics programs now routinely use LIDAR, machine vision, and ROS, tools once limited to major research labs.
    🚀 Over 90% of global robotics firms hire from hands-on competition pipelines like FIRST, VEX, and Eurobot.
    🚀 The educational robotics market is growing 17% annually, driven by demand for engineers who can build, code, and troubleshoot under real conditions.

    Competitions like this create the mindset industry needs: not memorization, but building, breaking, fixing, optimizing — the same loop that drives innovation at the world's leading tech companies. One student prototype at a time, the future of automation, AI, and robotics is already climbing upward. 🚀🤝

    #Engineering #Robotics #STEM #Innovation #Education #AI #Automation #FutureOfWork #NextGenTech
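
    Since PID tuning comes up in the control-algorithms step, here is the minimal sketch promised above: a discrete PID loop driving a toy first-order plant. The gains and the plant model are invented for illustration; a real wall-climber would tune against measured dynamics.

    ```python
    # Discrete PID controller on a toy first-order plant (all values illustrative).
    class PID:
        def __init__(self, kp: float, ki: float, kd: float, dt: float):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint: float, measured: float) -> float:
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    pid, speed = PID(kp=2.0, ki=2.0, kd=0.01, dt=0.01), 0.0
    for _ in range(500):                                # simulate 5 seconds
        cmd = pid.update(setpoint=0.3, measured=speed)  # target: 0.3 m/s ascent
        speed += (cmd - speed) * 0.05                   # toy first-order lag plant
    print(f"speed after 5 s: {speed:.3f} m/s")          # converges near 0.3
    ```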

  • Swami Sivasubramanian (Influencer)

    VP, AWS Agentic AI

    189,962 followers

    Agentic AI systems are moving beyond digital environments and into the physical world. We can now see this technology in motion through robotics, autonomous vehicles, and smart infrastructure. How do agents work alongside us in real environments? Our latest AWS Open Source blog explains how teams can build intelligent physical AI systems that bridge edge and cloud computing. By combining Strands Agents SDK, Amazon Bedrock AgentCore, Claude 4.5, NVIDIA GR00T, and Hugging Face LeRobot, customers can create agentic systems that leverage cloud-scale reasoning while maintaining millisecond responsiveness for real-time physical interaction. The architecture enables edge devices to handle fast, instinctual responses while the cloud provides deliberate reasoning and fleet-wide learning. We're seeing remarkable results—from robotic arms performing complex manipulation tasks to autonomous systems that continuously improve through shared experience. Learn about building intelligent physical AI with agentic systems in this deep dive from our team: https://lnkd.in/gEJVuF5F
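
    The edge/cloud split the blog describes can be caricatured in a few lines of plain asyncio: a fast on-device loop reacting every couple of milliseconds while a slow "cloud" loop periodically revises the plan. Every name here is hypothetical; this is not the Strands Agents or AgentCore API, just the shape of the architecture.

    ```python
    # Toy sketch of the edge/cloud split: fast local reflexes, slow remote reasoning.
    # All names hypothetical -- NOT the Strands Agents or Bedrock AgentCore API.
    import asyncio

    async def edge_reflex_loop(state: dict) -> None:
        """Millisecond-scale reactions stay on-device."""
        while state["running"]:
            state["action"] = -0.5 * state["error"]  # stand-in local controller
            await asyncio.sleep(0.002)               # ~2 ms control period

    async def cloud_planner(state: dict) -> None:
        """Slow deliberate loop, standing in for a cloud-hosted reasoning call."""
        while state["running"]:
            await asyncio.sleep(0.5)                 # network + model latency
            state["error"] *= 0.5                    # pretend each new plan halves the error

    async def main() -> None:
        state = {"running": True, "error": 1.0, "action": 0.0}
        tasks = [asyncio.create_task(edge_reflex_loop(state)),
                 asyncio.create_task(cloud_planner(state))]
        await asyncio.sleep(2.0)                     # both loops run side by side
        state["running"] = False
        await asyncio.gather(*tasks)
        print(f"error after 2 s: {state['error']:.3f}")

    asyncio.run(main())
    ```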

  • Vaibhava Lakshmi Ravideshik

    AI for Science @ GRAIL | Research Lead @ Massachusetts Institute of Technology - Kellis Lab | LinkedIn Learning Instructor | Author - "Charting the Cosmos: AI's expedition beyond Earth" | TSI Astronaut Candidate

    20,067 followers

    Massachusetts Institute of Technology researchers just dropped something wild; a system that lets robots learn how to control themselves just by watching their own movements with a camera. No fancy sensors. No hand-coded models. Just vision. Think about that for a second. Right now, most robots rely on precise digital models to function - like a blueprint telling them exactly how their joints should bend, how much force to apply, etc. But what if the robot could just... figure it out by experimenting, like a baby flailing its arms until it learns to grab things? That’s what Neural Jacobian Fields (NJF) does. It lets a robot wiggle around randomly, observe itself through a camera, and build its own internal "sense" of how its body responds to commands. The implications? 1) Cheaper, more adaptable robots - No need for expensive embedded sensors or rigid designs. 2) Soft robotics gets real - Ever tried to model a squishy, deformable robot? It’s a nightmare. Now, they can just learn their own physics. 3) Robots that teach themselves - instead of painstakingly programming every movement, we could just show them what to do and let them work out the "how." The demo videos are mind-blowing; a pneumatic hand with zero sensors learning to pinch objects, a 3D-printed arm scribbling with a pencil, all controlled purely by vision. But here’s the kicker: What if this is how all robots learn in the future? No more pre-loaded models. Just point a camera, let them experiment, and they’ll develop their own "muscle memory." Sure, there are still limitations (like needing multiple cameras for training), but the direction is huge. This could finally make robotics flexible enough for messy, real-world tasks - agriculture, construction, even disaster response. #AI #MachineLearning #Innovation #ArtificialIntelligence #SoftRobotics #ComputerVision #Industry40 #DisruptiveTech #MIT #Engineering #MITCSAIL #RoboticsResearch #MachineLearning #DeepLearning

  • Ted Strazimiri

    Drones & Data

    28,172 followers

    Researchers at the University of Hong Kong's MaRS Lab have just published another jaw-dropping paper featuring their safety-assured high-speed aerial robot path planning system, dubbed "SUPER". With a single MID360 lidar sensor, they repeatedly achieved autonomous one-shot navigation at speeds exceeding 20 m/s in obstacle-rich environments. Since it only requires a single lidar, these vehicles can be built with a small footprint and navigate completely independently of light, GPS, and radio link.

    This is not just #SLAM on a #drone. The SUPER system continuously computes two trajectories in each re-planning cycle: a high-speed exploratory trajectory and a conservative backup trajectory. The exploratory trajectory is designed to maximize speed by considering both known free space and unknown areas, allowing the drone to fly aggressively and efficiently toward its goal. In contrast, the backup trajectory is entirely confined within the known free space identified by the point-cloud map, ensuring that if unforeseen obstacles are encountered or the system's perception becomes uncertain, it can safely switch to a precomputed, collision-free path. (A skeletal sketch of this pattern follows this post.)

    The direct use of lidar point clouds for mapping eliminates the need for time-consuming occupancy-grid updates and complex data-fusion algorithms. Combined with an efficient dual-trajectory planning framework, this leads to significant reductions in computation time, often an order of magnitude faster than comparable SLAM-based systems, allowing the MAV to operate at higher speeds without sacrificing safety.

    This two-pronged planning strategy is particularly innovative because it directly addresses the classic speed-safety trade-off in autonomous navigation. By planning an exploratory trajectory that pushes the speed envelope and a backup trajectory that guarantees safety, SUPER can achieve high-speed flight (demonstrated speeds exceeding 20 meters per second) without compromising on collision avoidance.

    If you've been tracking the progress of autonomy in aerial robotics and matching it to the winning strategies emerging in Ukraine, it's clear we're likely to experience another ChatGPT moment in this domain very soon. #LiDAR scanners will continue to get smaller and cheaper, solid-state VCSEL-based sensors are rapidly improving, and it is conceivable that vehicles with this capability can be built and deployed with a bill of materials below $1,000.

    Link to the paper in the comments below.
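
    The dual-trajectory pattern is simple enough to skeletonize. This is my own simplification, not the SUPER codebase: trajectories are plain waypoint lists and `known_free` stands in for the lidar point-cloud map, but the invariant is the one the paper describes, with the backup always confined to known free space.

    ```python
    # Skeleton of dual-trajectory replanning: fly fast, fall back safely.
    # A simplification for illustration, not the SUPER implementation.

    def replan_cycle(pos, goal, known_free, plan_fast, plan_safe):
        """One cycle: the exploratory path may cross unknown space; the backup may not."""
        exploratory = plan_fast(pos, goal)          # aggressive, speed-optimized
        backup = plan_safe(pos, goal, known_free)   # confined to known free space
        assert all(known_free(p) for p in backup)   # safety invariant
        return exploratory, backup

    def execute(exploratory, backup, still_safe):
        """Fly the fast path; switch to the precomputed backup when safety is in doubt."""
        flown = []
        for p in exploratory:
            if not still_safe(p):        # unforeseen obstacle or uncertain perception
                return flown + backup    # no stop-and-replan needed
            flown.append(p)
        return flown

    # Toy demo on a 1-D corridor: space is known to be free only up to x = 5.
    fast = lambda pos, goal: list(range(pos, goal + 1))
    safe = lambda pos, goal, free: [x for x in fast(pos, goal) if free(x)]
    expl, back = replan_cycle(0, 9, lambda x: x <= 5, fast, safe)
    print(execute(expl, back, still_safe=lambda x: x <= 7))  # falls back past x = 7
    ```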

  • Andriy Burkov (Influencer)

    PhD in AI, author of 📖 The Hundred-Page Language Models Book and 📖 The Hundred-Page Machine Learning Book

    486,885 followers

    VLA models are systems that combine three capabilities into one framework: seeing the world through cameras, understanding natural language instructions like "pick up the red apple," and generating the actual motor commands to make a robot do it. Before these unified models existed, robots had separate modules for vision, language, and movement that were stitched together with manual engineering, which made them brittle and unable to handle new situations. This review paper covers over 80 VLA models published in the past three years, organizing them into a taxonomy based on their architectures—some use a single end-to-end network, others separate high-level planning from low-level control, some use diffusion models for smoother action sequences. The paper walks through how these models are trained using both internet data and robot demonstration datasets, then maps out where they're being applied. The later sections lay out the concrete technical problems that remain unsolved. Read online with an AI tutor: https://lnkd.in/eZdzYfdu PDF: https://lnkd.in/ezzncewE
