Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful bottleneck in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down: 1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human’s hand pose and retargets the motion to the robot hand, all in real time. From the human’s point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data. 2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen’s keynote video below, the humanoid is placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placements. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation. 3. Finally, we apply MimicGen, a technique that multiplies the above data even further by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g., those that drop the cup) to form a much larger dataset. To sum up: 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is how we trade compute for expensive human data via GPU-accelerated simulation. A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!
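The 1 -> N -> NxM multiplication can be sketched in a few lines of Python. This is a toy illustration, not the RoboCasa or MimicGen APIs: the scene variation, motion perturbation, and success filter below are all stand-ins.

```python
import random

def augment_demonstrations(human_demo, n_scenes=4, m_motions=8, success_rate=0.75):
    """Toy sketch of the data multiplication described above.

    One human trajectory is replicated across `n_scenes` randomized
    environments (the RoboCasa step), then each variant is perturbed into
    `m_motions` candidate trajectories, keeping only those whose rollout
    succeeds (the MimicGen step). All names and the success check are
    illustrative placeholders.
    """
    scene_variants = [{"demo": human_demo, "scene_id": i} for i in range(n_scenes)]
    dataset = []
    for variant in scene_variants:
        for j in range(m_motions):
            # Stand-in for simulating a perturbed trajectory;
            # real systems filter on task success (e.g. cup not dropped).
            if random.random() < success_rate:
                dataset.append({**variant, "motion_id": j})
    return dataset

random.seed(0)
data = augment_demonstrations("pick_cup_demo")
print(len(data))  # at most n_scenes * m_motions = 32 trajectories survive
```

The upper bound is N×M trajectories per human demonstration; the failure filter is what keeps the enlarged dataset clean.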
We are creating tools to enable everyone in the ecosystem to scale up with us:
- RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
- MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
- We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Open-source libraries from Xiaolong Wang's group laid the foundation: https://lnkd.in/gUYye7yt
- Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG
Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
Engineering Simulation Tools Overview
-
Assessing Aeroacoustics of Fan Noise in CFD by ENGYS 🚗 Automotive cooling fans are a major noise source, reaching up to 85 dBA at certain frequencies, so manufacturers seek efficient CFD solutions. To tackle this, Johnson Electric partnered with ENGYS to simulate and reduce fan noise using advanced CFD techniques. The project combined two computational approaches: an unsteady RANS (uRANS) simulation to analyze tonal noise within a 12-hour CPU time, and a detached eddy simulation (DES) to assess broadband noise, validated against experimental results. Simulating turbulent flows is challenging, requiring accurate modeling of both the noise source and acoustic wave propagation. ENGYS leveraged its HELYX software, using an acoustic-analogy approach to balance accuracy and computational efficiency. A CAD model of the fan, including the anechoic chamber, was analyzed to optimize the mesh, time steps, and numerical schemes. Their extrude meshing algorithm improved boundary-layer resolution while maintaining smooth transitions, cutting turnaround times by 20-30%. More on tackling CFD aeroacoustic challenges here: https://lnkd.in/eJxNhuAX #CFD #Aeroacoustics #NoiseReduction #AutomotiveEngineering #Simulation
-
Your plan to become a BIM specialist:
1) Learn Revit
Take online courses or tutorials (e.g., Udemy, LinkedIn Learning, or Autodesk’s official training). Practice creating models for different disciplines (architectural, structural, MEP). Focus on:
- Creating walls, floors, roofs, and structural elements.
- Adding families (parametric components like doors, windows, etc.).
- Generating construction documents (plans, sections, schedules).
2) Learn Navisworks
- Use Navisworks for clash detection and project coordination.
- Learn to merge models from different disciplines and run clash tests.
3) Explore Dynamo (Optional but Recommended)
Dynamo is a visual programming tool for Revit that automates repetitive tasks. Learn to create scripts for tasks like placing families, generating geometry, or extracting data.
4) Gain Knowledge of BIM Standards and Processes
- Study BIM Level 2 standards (common in many countries).
- Understand the COBie (Construction Operations Building Information Exchange) format for data delivery.
- Learn about Common Data Environments (CDEs) like BIM 360 or Aconex for collaboration.
5) Build a Portfolio
Create sample projects showcasing your BIM skills. Include:
- 3D models of buildings or structures.
- Construction documentation (plans, sections, schedules).
- Examples of clash detection and coordination.
- Any automation scripts (if you’ve learned Dynamo).
6) Get Certified (Optional but Helpful)
Certifications can boost your credibility:
- Autodesk Certified Professional (ACP) in Revit.
- Certified BIM Manager (from organizations like AGC or RICS).
- ISO 19650 certification for BIM standards.
7) Gain Practical Experience
- Internships: Look for internships or entry-level roles in AEC firms.
- Freelancing: Take on small BIM modeling projects on platforms like Upwork or Fiverr.
- Networking: Join BIM communities (e.g., LinkedIn groups, forums like Revit Forum) to connect with professionals.
8) Stay Updated
- Follow industry trends like BIM Level 3, Digital Twins, and AI in BIM.
- Attend webinars, conferences, and workshops on BIM.
- Keep learning new tools and techniques.
-
A few years ago, I learned the hard way that jumping straight into hardware, sensors, motors, and wiring can lead to costly mistakes and late-night headaches. That’s when I discovered the true importance of #simulation in robotics and engineering. During the early phase of my final-year thesis, I spent weeks recreating our school cafeteria with Iman Tokosi in Blender, exporting it as an SDF model and loading it into Gazebo using #ROS2. Suddenly, I could drive a virtual robot through aisles and around tables without the fear of damaging anything real. It was challenging and eye-opening, and it saved me countless hours and resources. Then came the moment that changed everything: integrating #SLAM so the robot could build its own map while moving, and setting up #Nav2 to let it plan and follow paths autonomously. Watching it navigate the environment with precision and independence was a powerful confirmation that the system worked. Now, imagine a world where every structure, product, and system is simulated down to the smallest detail. The result? Reduced costs, faster development, increased reliability, enhanced safety, and stronger adherence to standards. Some may still view simulation as “just for show,” but I’ve experienced firsthand that it’s the foundation of true innovation. Are you leveraging simulation in your next robotics or engineering project? Let’s connect and exchange ideas!
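The mapping half of that SLAM step can be illustrated with a toy occupancy grid: each range reading marks the cells along the beam as free and the beam endpoint as occupied. This assumes a known robot pose (real SLAM stacks in ROS 2, such as slam_toolbox, estimate the pose too); the grid size, cell resolution, and beam model here are all illustrative.

```python
import math

def integrate_scan(grid, pose, ranges, angle_step=math.pi / 4, cell=1.0):
    """Mark cells along each beam as free (0) and the endpoint as occupied (1)."""
    x0, y0, theta = pose
    for i, r in enumerate(ranges):
        a = theta + i * angle_step
        # Walk along the beam, marking traversed cells as free space.
        for s in range(int(r / cell)):
            cx = int(round(x0 + s * cell * math.cos(a)))
            cy = int(round(y0 + s * cell * math.sin(a)))
            if 0 <= cx < len(grid[0]) and 0 <= cy < len(grid):
                grid[cy][cx] = 0
        # Mark the beam endpoint as an obstacle.
        ex = int(round(x0 + r * math.cos(a)))
        ey = int(round(y0 + r * math.sin(a)))
        if 0 <= ex < len(grid[0]) and 0 <= ey < len(grid):
            grid[ey][ex] = 1
    return grid

grid = [[-1] * 10 for _ in range(10)]  # -1 = unknown
integrate_scan(grid, pose=(5, 5, 0.0), ranges=[3.0, 2.0])
```

Nav2 consumes exactly this kind of grid (as a costmap) when planning paths, which is why mapping quality directly affects navigation quality.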
-
I had the chance a few weeks ago to sit down (virtually, of course) with Woodrow Bellamy III, host of SAE’s Aerospace & Defense Technology podcast, for a candid conversation about virtual prototyping. While I work across many industries these days, I always enjoy a chance to share insights from my years working in aerospace. The key question was about the reliability of virtual prototyping as a replacement for physical prototyping. How much faith can be put in a digital model to accurately simulate a real-world system, when you have real-world stakes and consequences? The answer is: a lot. When we first started using simulation for some of the aircraft programs I worked on, we questioned whether we could trust the results to predict the performance of a new aircraft. It was an appropriate question to ask. It was only after conducting extensive testing to validate the simulation results against an existing aircraft that we started to trust the simulation models while designing a new one. The same litmus test will be needed for companies in any industry to take the leap with virtual prototyping: a company can start by developing the virtual model and ensuring the validity and robustness of the simulation by comparing against existing physical products. Once the team is sufficiently confident in the performance of the digital model, they can take the insights from existing products and systems and apply them toward the development of future projects. The digital twin of a physical system is infinitely more malleable. It can be designed, tested, and experimented upon with far more ease and using significantly fewer resources. Virtual prototypes allow for limitless design exploration, help identify design issues early, before building physical prototypes, and make physical testing more effective.
Before real-life testing, virtual analyses highlight critical areas in the design, and test plans can be adjusted to focus on the areas of greatest concern. The aerospace industry adopted the digital twin as a revolutionary design tool decades ago, and it has evolved quite a bit over the ensuing decades. It is no longer just a 3D model of a product or process. Today, the comprehensive digital twin is a precise virtual representation of the product that replicates its physical form, function, and behavior, encompassing all cross-domain models and data, from mechanical and electrical through software code. So, although the current ratio of prototype testing worldwide may be 90% physical and 10% digital, it isn’t an overstatement to conceive of a future where the ratio is flipped, maybe even 100% digital. I'm excited about the opportunities offered by virtual prototyping and testing as a means to enable companies to develop and validate innovative products faster! To hear my conversation with Woodrow, check out the link in the comments. #digitaltransformation #siemensxcelerator
-
𝗗𝗼𝗻’𝘁 𝗝𝘂𝘀𝘁 𝗥𝗲𝗮𝗱 𝗔𝗯𝗼𝘂𝘁 𝗔𝗜 𝗶𝗻 𝗠𝗮𝗻𝘂𝗳𝗮𝗰𝘁𝘂𝗿𝗶𝗻𝗴. 𝗔𝗽𝗽𝗹𝘆 𝗜𝘁. The AI headlines are exciting. But if you're a founder, engineer, or educator in manufacturing, here's the question that actually matters: 𝗪𝗵𝗮𝘁 𝗰𝗮𝗻 𝘆𝗼𝘂 𝗱𝗼 𝘵𝘰𝘥𝘢𝘺 𝘁𝗼 𝘁𝘂𝗿𝗻 𝘁𝗵𝗲𝘀𝗲 𝗶𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻𝘀 𝗶𝗻𝘁𝗼 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻? Let’s get tactical. 𝟭. 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗔𝗜 𝗱𝗲𝗺𝗮𝗻𝗱 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴 Tool to try: Lenovo’s LeForecast A foundation model for time-series forecasting. Trained on manufacturing-specific datasets. 𝗨𝘀𝗲 𝗶𝘁 𝗶𝗳: You’re battling supply chain volatility and need better inventory planning. 👉 Tip: Start by connecting your ERP data. Don’t wait for perfect integration: small wins snowball. 𝟮. 𝗕𝘂𝗶𝗹𝗱 𝗮 𝗱𝗶𝗴𝗶𝘁𝗮𝗹 𝘁𝘄𝗶𝗻 𝗯𝗲𝗳𝗼𝗿𝗲 𝗯𝘂𝘆𝗶𝗻𝗴 𝘁𝗵𝗮𝘁 𝗻𝗲𝘅𝘁 𝗿𝗼𝗯𝗼𝘁 Tools behind the scenes: NVIDIA Omniverse, Microsoft Azure Digital Twins Schaeffler + Accenture used these to simulate humanoid robots (like Agility’s Digit) inside full-scale virtual factories. 𝗨𝘀𝗲 𝗶𝘁 𝗶𝗳: You’re considering automation but can’t afford to mess up your live floor. 👉 Tip: Simulate your current workflows first. Even without a robot, you’ll find inefficiencies you didn’t know existed. 𝟯. 𝗕𝗿𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗤𝗔 𝗽𝗿𝗼𝗰𝗲𝘀𝘀 𝗶𝗻𝘁𝗼 𝘁𝗵𝗲 𝟮𝟬𝟮𝟬𝘀 Example: GM uses AI to scan weld quality, detect microcracks, and spot battery defects: before they become recalls. 𝗨𝘀𝗲 𝗶𝘁 𝗶𝗳: You’re relying on spot checks or human-only inspections. 👉 Tip: Start with one defect type. Use computer vision (CV) models trained with edge devices like NVIDIA Jetson or AWS Panorama. 𝟰. 𝗘𝗱𝗴𝗲 𝗶𝘀 𝗻𝗼𝘁 𝗼𝗽𝘁𝗶𝗼𝗻𝗮𝗹 𝗮𝗻𝘆𝗺𝗼𝗿𝗲 Why it matters: If your AI system reacts in seconds instead of milliseconds, it's too late for safety-critical tasks. 𝗨𝘀𝗲 𝗶𝘁 𝗶𝗳: You're in high-speed assembly lines, robotics, or anything safety-regulated. 👉 Tip: Evaluate edge-ready AI platforms like Lenovo ThinkEdge or Honeywell’s new containerized UOC systems. 𝟱. 𝗕𝗲 𝗲𝗮𝗿𝗹𝘆 𝗼𝗻 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 The EU AI Act is live. China is doubling down on "self-reliant AI." The U.S.? Deregulating. 𝗨𝘀𝗲 𝗶𝘁 𝗶𝗳: You're deploying GenAI, predictive models, or automation tools across borders. 
👉 Tip: Start tagging your AI systems by risk level. This will save you time (and fines) later. Here are 5 actionable moves manufacturers can make today to level up with AI: pulled straight from the trenches of Hannover Messe, GM's plant floor, and what we’re building at DigiFab.ai. ✅ Forecast with tools like LeForecast ✅ Simulate before automating with digital twins ✅ Bring AI into your QA pipeline ✅ Push intelligence to the edge ✅ Get ahead of compliance rules (especially if you operate globally) 🧠 Each of these is something you can pilot now: not next quarter. Happy to share what’s worked (and what hasn’t). 👇 Save and repost. #AI #Manufacturing #DigitalTwins #EdgeAI #IndustrialAI #DigiFabAI
-
A humanoid robot costs $90K to break once. AI lets you break thousands... and learn from every fall. My background is mechanical engineering, robotics, and integration & test. But this field is moving so fast with AI that reading articles wasn't cutting it anymore. I felt out of the loop, so... I recently upgraded my personal setup to support AI training workloads and ran my first experiment: Teaching a bipedal (two-legged) humanoid robot to navigate a custom parkour course using reinforcement learning in NVIDIA Isaac Lab 5.1. But before I share what I learned, let me explain what's actually happening under the hood. A GPU-accelerated AI agent runs thousands of virtual robots in parallel. Each one learns from its own falls and successes simultaneously. The AI develops a "control policy," which is the brain that tells a robot how to move through the physical world. Why does this matter? Because what once required million-dollar labs and months of physical testing can now run on a single AI-capable GPU in hours. Robotics R&D is becoming software-first. Here's what that looked like for this experiment: 76 minutes of CUDA-accelerated training time. 393 million training steps. 4,096 robots learning in parallel on my RTX 5080. So what did I learn so far? Three things stood out to me: 》The setup before you can hit "Run" is a challenge. It took me seven hours to troubleshoot versioning, packages, and dependencies before I could run anything. I forced myself to do it manually because I wanted to understand what's under the hood. YouTube tutorials hit their limit quickly, but thankfully the NVIDIA developer forums saved me. 》The cost case is undeniable. A Unitree H1 costs around $90K. I *virtually* crashed thousands of them. My damage bill? $0. Simulation lets you fail-forward at scale. This gets you to a solid starting point for physical testing, but... 》The Sim-to-Real gap is real. 
This policy works well in simulation, but I couldn't get a feel for stress points, sensor behavior, or true stability. Failure is not predictable and happens at the edges. The next step would be to transfer this policy to a physical robot, gather real-world data, and continuously align the simulation to close that gap. The key thing here is: testing real hardware is expensive, simulation in software is cheap. How can you leverage both, intelligently? The benefit isn't limited to cost savings. This workflow also compresses development cycles and allows you to field systems faster. Do you think virtual simulation is a game-changer that is here to stay, or a fad? How would you build confidence in a robotic control policy that was trained in a virtual world? #robotics #ai #nvidia #omniverse #isaaclab ~~~~~~~~ Citations: NVIDIA Isaac Lab -> https://lnkd.in/ekVMDnDc RSL-RL -> https://lnkd.in/eJye3XTW Unitree H1 -> unitree.com/h1/ Note: this is an educational personal project. Opinions are my own, no affiliation or endorsement.
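The "thousands of robots learning in parallel" idea can be sketched without a simulator: score many rollouts at once and shift the policy distribution toward the best performers. This toy uses a cross-entropy-style search on a stand-in reward, not Isaac Lab's actual RL pipeline (which trains neural-network policies, typically with PPO via RSL-RL); every name and number is illustrative.

```python
import random
import statistics

def rollout_reward(action):
    # Hypothetical stand-in for a simulated rollout's score:
    # reward peaks at action = 2.0.
    return -(action - 2.0) ** 2

def train(num_envs=4096, iters=20, seed=0):
    """Cross-entropy-style search over a 1-D 'policy' (a Gaussian)."""
    rng = random.Random(seed)
    mean, std = 0.0, 2.0
    for _ in range(iters):
        # Sample one action per parallel environment and score them all.
        actions = [rng.gauss(mean, std) for _ in range(num_envs)]
        ranked = sorted(actions, key=rollout_reward, reverse=True)
        elite = ranked[: num_envs // 10]  # keep the top 10% of rollouts
        # Refit the policy distribution to the elite rollouts.
        mean = statistics.fmean(elite)
        std = max(statistics.stdev(elite), 1e-3)
    return mean

print(round(train(), 2))  # converges near the optimal action, 2.0
```

The payoff is the same as in the post: each "crash" is just a low-reward sample, and the damage bill is $0.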
-
Without 𝔹𝕀𝕄 it was impossible to bring the Museum of the Future to life in Dubai… 😯 𝗕uilding 𝗜nformation 𝗠odelling and parametric modelling were key to the design and construction of the Museum of the Future in Dubai. Dealing with a shape that curves in multiple directions, the team had to coordinate the services with the façade and the inner lining of the rooms. 𝗙𝗔𝗖𝗔𝗗𝗘 𝗗𝗘𝗦𝗜𝗚𝗡 A façade formed from more than 1,000 stainless steel-clad composite panels covers an optimized grid of steel tubes and nodes. The panels contain the windows, which are formed by calligraphy cutouts. The digital model was used to ensure that none of the grid structure was visible through the windows formed by the letters. There are 10,000 pieces of glass, each cut to shape using water jets controlled with dimensional data extracted from the model. The MEP team was able to take the model and use it to develop the environmental systems based on daylight and solar modelling. 𝗠𝗘𝗣 BIM was key to enabling the MEP design team to thread the services through the building. Equally importantly, it enabled the detection of clashes early in the design process, ensuring problems could be resolved in the model long before work moved to site. The team modelled everything in 3D, imported the model into IES software, and ran energy modelling to determine the peak load conditions for the various spaces, enabling the HVAC design. Stakeholders also used immersive solutions (𝗔𝗥/𝗩𝗥) to do a virtual walkthrough of the 3D model and to check each element for clashes and design complexities. 𝗠𝗔𝗡𝗔𝗚𝗘𝗠𝗘𝗡𝗧 The BIM model was also used to analyze how people would move around the building, to identify circulation pinch points and, critically, to simulate evacuation strategies in case of a fire. The BIM model is now being used to support the facilities management team running the building and to optimize its operation. The model is also used to re-evaluate the movement of people around the spaces as the exhibits and exhibitions change and evolve over time, in keeping with the building’s ambition to be a gateway to the future. 𝗕𝗜𝗠 𝗕𝗘𝗡𝗘𝗙𝗜𝗧𝗦 The use of BIM led to a 65% reduction in rework on-site and a 50% improvement in productivity. The project achieved LEED Platinum certification, helping stakeholders cut water usage by 45% and total energy by 25%. 𝗠𝗔𝗗𝗘 𝗣𝗢𝗦𝗦𝗜𝗕𝗟𝗘 𝗕𝗬: Buro Happold 𝗨𝗦𝗜𝗡𝗚: Autodesk Revit - Tekla BIMsight - Autodesk Dynamo Studio - Autodesk Navisworks - Autodesk Robot Structural Analysis Professional - Autodesk 3ds Max #arafnan #Architecture #design #BIM #Revit #navisworks #dynamo #coordination #clashes #tekla #dubai #uae
-
Build your first robot in simulation! 👾 📌 If you’re self-learning robotics, this is genuinely one of the better repos to save for later. NVIDIA Robotics released a "Getting Started with Isaac Sim" tutorial series covering everything from building your first robot to hardware-in-the-loop deployment. What's inside? → Building Your First Robot Explore the Isaac Sim interface, construct a simple robot model (chassis, wheels, joints), configure physics properties, implement control mechanisms using OmniGraph and ROS 2, integrate sensors (RGB cameras, 2D lidar), and stream sensor data to ROS 2 for real-time visualization in RViz. → Ingesting Robot Assets Import URDF files, prepare simulation environments, add sensors to existing robot models, and access pre-built robots to accelerate development. → Synthetic Data Generation Learn perception models for dynamic robotic tasks, understand synthetic data generation, apply domain randomization with Replicator, generate synthetic datasets, and fine-tune AI perception models with validation. → Software-in-the-Loop (SIL) Build intelligent robots, implement SIL workflows, use OmniGraph for robot control, master Isaac Sim Python scripting, deploy image segmentation with ROS 2 and Isaac ROS, and test with and without simulation. → Hardware-in-the-Loop (HIL) Understand HIL fundamentals, learn the NVIDIA Jetson platform, set up the Jetson environment, and deploy Isaac ROS on Jetson hardware. The progression makes sense: start with basics (build a robot), add perception (sensors and data), generate training data (synthetic generation), develop software (SIL), then deploy to hardware (HIL). Each module builds on the previous one. For robotics teams, this is the path to faster iteration. Simulate first, validate in software-in-the-loop, generate synthetic training data at scale, then deploy to hardware with confidence. 🎓 If this helps at least one engineer become more fluent in the world of robotics, it means a lot to me! 
🫶🏼 Here's the course (it's free): https://lnkd.in/dRYdkmdi ~~ ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
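The domain-randomization module above boils down to sampling scene parameters from ranges for every synthetic frame, so a perception model sees wide visual variety. A minimal sketch of the idea, with made-up parameter names (this is not Replicator's actual API):

```python
import random

def randomize_scene(rng):
    """Draw one randomized scene configuration (all names/ranges illustrative)."""
    return {
        "light_intensity": rng.uniform(300, 1500),        # lux
        "floor_texture": rng.choice(["wood", "tile", "concrete"]),
        "object_pose": (rng.uniform(-0.5, 0.5),           # x offset, m
                        rng.uniform(-0.5, 0.5),           # y offset, m
                        rng.uniform(0, 360)),             # yaw, degrees
        "camera_height": rng.uniform(0.8, 1.6),           # m
    }

# Each synthetic training image would be rendered under one of these configs.
rng = random.Random(42)
dataset = [randomize_scene(rng) for _ in range(1000)]
textures = {s["floor_texture"] for s in dataset}
print(len(dataset), sorted(textures))
```

In the real workflow, each sampled configuration drives a render plus auto-generated labels (bounding boxes, segmentation masks), which is what makes synthetic data cheap at scale.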
-
4-DOF Dual Robotic Arm Pick & Place Simulation in MATLAB ➡ Coordinated dual-arm manipulation for cubes, spheres, and cylinders ➡ Analytical Inverse Kinematics for fast and accurate joint computation ➡ DH-parameter-based kinematic modeling ➡ Smooth trajectory planning with multi-stage interpolation ➡ Real-time 3D visualization with end-effector path tracing ➡ Automated simulation video generation ✨ Why this matters: In robotics, dual-arm coordination is crucial for industrial automation, collaborative robots, and intelligent material handling. This simulation demonstrates how accurate kinematics, workspace-safe IK, and trajectory planning enable two manipulators to work together seamlessly in a 3D environment. Beyond visualization, the project reinforces core concepts in joint coordination, kinematic modeling, and end-effector path planning, making it highly valuable for academic learning, prototyping, and portfolio building. 📊 Key Highlights: ✔ Dual 4-DOF manipulators working collaboratively ✔ Analytical IK for precise motion and stability ✔ Real-time 3D animation with labeled joints and links ✔ Smooth multi-stage trajectory interpolation ✔ Workspace-safe motion planning ✔ Supports multiple object shapes (cube, cylinder) 💡 Future Potential: This framework can be extended toward: ➡ Dynamic modeling & torque-based control ➡ Obstacle avoidance & path optimization ➡ ROS integration for real-world deployment ➡ AI-based trajectory planning and reinforcement learning 🔗 For students, engineers & robotics enthusiasts: This is a ready-to-use MATLAB project for learning, teaching, and prototyping advanced dual-arm robotic systems. 🔁 Repost to support robotics learning & engineering innovation! 🔁 #Robotics #MATLAB #Automation #4DOF #RobotArm #Kinematics #TrajectoryPlanning #InverseKinematics #ForwardKinematics #PickAndPlace #ControlSystems #Mechatronics #EngineeringProjects #Simulation #3DAnimation #STEM #RoboticsEngineering #TechInnovation
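The project above is in MATLAB; as a language-neutral illustration of the analytical IK it relies on, here is the closed-form solution for a 2-link planar arm in Python, the planar subproblem that a 4-DOF arm's IK typically reduces to after handling base rotation. Link lengths and the elbow convention are assumptions for the sketch.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0, elbow_up=True):
    """Closed-form inverse kinematics for a 2-link planar arm.

    Returns joint angles (q1, q2) that place the end effector at (x, y),
    or None if the target lies outside the reachable workspace.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target unreachable: workspace-safe check
    q2 = math.acos(c2) if elbow_up else -math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1=1.0, l2=1.0):
    # Forward kinematics, used here to verify the IK solution.
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q = two_link_ik(1.2, 0.7)
print(forward(*q))  # recovers (1.2, 0.7) up to floating-point error
```

The unreachability check is the "workspace-safe IK" mentioned above: rejecting targets before commanding joints is what keeps trajectory planning stable.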