When did we start trusting assumptions more than measurements?

Simulation has transformed engineering. We can model complex systems, iterate designs quickly, and explore conditions that would be difficult or expensive to test. But every simulation begins with assumptions. Material properties are simplified. Boundary conditions are estimated. Interfaces are idealized. And sometimes, those small assumptions quietly evolve into large errors.

That is where measurement brings us back to reality.

In the following application, engineers needed to understand how a piston head truly behaves under load. Not in theory, but in practice. The piston head carries the full impact of combustion, influencing performance, efficiency, durability, and emissions. There is no margin for uncertainty.

Strain gage sensors were installed directly on the piston head, and the assembly was placed into a simulated cylinder head. Instead of firing the engine, controlled air pressure was used to replicate loading conditions seen in high-performance operation.

This approach revealed something important. Even without extreme temperature effects, the measured strain data provided immediate insight into how the structure responded. It allowed engineers to validate their FEA model early, identify discrepancies, and refine the design before moving into more complex and costly testing.

Testing did not replace simulation. It grounded it. Because in the end, the goal is not just to predict behavior. It is to understand it. And that understanding starts with measurement.
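As a small illustration of the validation step described above, here is a hedged sketch of comparing FEA-predicted strain against measured strain at a few gauge locations under one load case. All numbers, names, and the 10% tolerance are illustrative placeholders, not data from the test in the post.

```python
# Hypothetical FEA-vs-measurement comparison at strain gauge locations.
# Values are made up for illustration (microstrain, "ue").
gauges = {
    "crown_center": {"fea_ue": 412.0, "meas_ue": 438.0},
    "crown_edge":   {"fea_ue": 295.0, "meas_ue": 301.0},
    "pin_boss":     {"fea_ue": 180.0, "meas_ue": 152.0},
}

errors = {}
for name, g in gauges.items():
    errors[name] = (g["fea_ue"] - g["meas_ue"]) / g["meas_ue"]
    print(f"{name:13s}: FEA {g['fea_ue']:.0f} ue, measured {g['meas_ue']:.0f} ue, "
          f"error {errors[name]:+.1%}")

# Locations whose error exceeds an agreed tolerance (say 10%) flag where
# boundary conditions or material assumptions need revisiting.
flagged = [n for n, e in errors.items() if abs(e) > 0.10]
print("revisit model at:", flagged or "none")
```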
Simulation and Data Analysis in Engineering
Summary
Simulation and data analysis in engineering means using computer models and real-world measurements to predict and understand how systems, machines, or processes behave before investing in physical testing or making costly changes. These techniques help engineers explore scenarios, uncover risks, and validate designs, turning assumptions into informed decisions.
- Question assumptions: Always challenge the inputs and premises behind your simulation to avoid hidden errors and improve model accuracy.
- Validate with data: Compare simulation results to actual measurements and sensor data to ground your predictions and refine your designs.
- Explore scenarios: Use simulation tools to test different possibilities and identify potential bottlenecks, failures, or improvements before making real-world changes.
Do you understand the "why" behind the engineering simulations you conduct?

My greatest technical mentors always instilled a strong foundation in first principles, or "the first basis from which a thing is known." Unless you already know what to expect, or how to interpret what you see, how can you confirm the validity of a simulation's results? When dealing with complex, multi-variable systems, I think of this as the "why" of what is being observed.

With the increasing penetration of power electronic converters and the rapid change of the power system, the need for modeling and simulation engineers continues to grow. Access to modeling tools has made everyone capable of installing software and appearing to be an expert. The most talented engineers I know are lifelong learners. I'd encourage everyone to keep their sense of curiosity. Ask questions and strive for a deeper understanding of what the simulation tools seem to be telling us.

Don't assume the output is correct, or even the input. We should always employ good engineering judgement and make ethical decisions about how we treat assumptions. Situations such as those in the comment below likely occur to make a simulation result look "nicer", or stem from misunderstandings of the physics. A poor-looking result doesn't always mean an inaccurate result. When best practices and sound engineering judgement have been used, care should be taken when making adjustments to ensure validity. Otherwise, it's not the simulation result that's in error, but the input data or assumptions (garbage in equals garbage out).

Loading input files and hitting run is not a valid approach to performing a technical study. Similar to the mantra of "measure twice, cut once", the testing and evaluation of sub-systems or components should ensure the quality of what's being used for the task at hand (be it harmonic analysis, dynamic stability, small-signal stability, or any other analysis type). Take your time to understand the behavior observed, or run sensitivities to determine its root cause (a minimal example follows below); know how things are modeled and why; then research the control theory, electromagnetic response, or physical machine properties. Ask questions of those who can help shed some light on what's being observed.

Keeping our focus on the "why" will continue to make for more informed decisions and incredible engineers.

#PowerSystems #PowerElectronics #ControlSystems #Modeling #SystemStudies #RenewableEnergy
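A minimal one-at-a-time sensitivity sweep, the kind of quick check the post recommends before trusting an output. The model here is a hypothetical stand-in (the classic second-order overshoot fraction scaled by a gain); in practice you would wrap your actual simulation run.

```python
import math

def model(params):
    # Illustrative stand-in for a simulation: second-order overshoot
    # fraction exp(-pi*zeta/sqrt(1-zeta^2)), scaled by a gain.
    k, zeta = params["gain"], params["damping"]
    return k * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

base = {"gain": 1.0, "damping": 0.3}
baseline = model(base)
for name in base:
    for scale in (0.9, 1.1):                        # perturb each input +/-10%
        p = {**base, name: base[name] * scale}
        print(f"{name} x{scale}: output change {model(p) - baseline:+.4f}")
```

Whichever input moves the output most is where to focus the "why" questions first.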
---
**What if your factory only works because reality hasn't tested it yet?**

Most plants look stable, until demand shifts, a resource slips, or variability shows up where no one expected it. That's when leaders realize the system wasn't designed for reality. It was designed for assumptions.

This is why simulation-based decision making, especially Discrete Event Simulation (DES), has become essential for smart plants. Not to predict the future, but to stress-test the system before the system is forced to respond.

Here's what DES actually validates, end to end:

1. **Process Flow Optimization.** DES shows how material and information truly move, not how the routing sheet claims they do.
2. **Equipment Utilization Analysis.** High utilization can hide starvation and blocking. DES exposes when assets look busy but flow is unhealthy.
3. **Bottleneck Identification.** Constraints aren't static. DES reveals where the bottleneck migrates under different conditions.
4. **Production Capacity Planning.** Capacity isn't a fixed number. DES models how throughput behaves under variability, downtime, and mix changes.
5. **Buffer Sizing.** Too much buffer masks instability. Too little amplifies it. DES finds the point where flow stays resilient.
6. **Cycle Time Distribution.** Averages lie. DES reveals the spread, and where volatility is introduced.
7. **Resource Allocation.** People, machines, and automation interact as a system. DES tests the balance before locking it in.
8. **Demand Flow Optimization.** DES connects demand patterns to execution reality, without overloading the system.
9. **Trial Build Scenario Analysis.** Instead of learning after launch, DES lets teams explore "what if" scenarios before they become problems.
10. **Data-Driven Investment Decisions.** Every capex decision is validated against system behavior, not isolated ROI logic.

This is the real shift leaders are making: from trial builds to validated scenarios, from opinions to evidence, from firefighting to designed stability.

Simulation doesn't improve factories. It reveals whether the system was ever ready.

If you're scaling production, introducing automation, or rebalancing capacity, the question isn't "can the line run?"
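To make points 2 and 6 above concrete, here is a minimal sketch of a two-station line simulated part by part in plain Python. It is a toy Lindley-style recursion rather than a full DES engine, and the release rate and service-time distributions are assumptions for illustration; even so, it shows how healthy-looking utilization can coexist with a long cycle-time tail.

```python
import random
import statistics

random.seed(7)
N = 50_000                          # parts to simulate

def s1():                           # station 1 service time [min], assumed
    return random.lognormvariate(0.0, 0.4)

def s2():                           # station 2: similar mean, wider spread
    return random.lognormvariate(-0.05, 0.6)

t_arrive = 0.0                      # release time of the current part
d1 = d2 = 0.0                       # latest departure from stations 1 and 2
busy1 = busy2 = 0.0
cycle_times = []

for _ in range(N):
    t_arrive += random.expovariate(1 / 1.25)   # ~1 part released per 1.25 min
    x1 = s1()
    d1 = max(t_arrive, d1) + x1                # queue if station 1 is busy
    busy1 += x1
    x2 = s2()
    d2 = max(d1, d2) + x2                      # queue if station 2 is busy
    busy2 += x2
    cycle_times.append(d2 - t_arrive)          # release -> exit

makespan = d2
p = statistics.quantiles(cycle_times, n=20)    # 5% steps; index 18 is p95
print(f"throughput : {N / makespan:.3f} parts/min")
print(f"utilization: st1 {busy1 / makespan:.1%}, st2 {busy2 / makespan:.1%}")
print(f"cycle time : median {statistics.median(cycle_times):.2f} min, "
      f"p95 {p[18]:.2f} min")
```

The gap between the median and the p95 cycle time is exactly the information a single average hides.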
---
Most engineering and business forecasts still rely on single-number estimates: one MTBF, one warranty-return rate, one "expected" portfolio return. Monte Carlo simulation flips that mindset by treating every key input as a distribution instead of a constant, then running thousands of virtual futures to see the full range of possible outcomes. Instead of asking "what will happen," you start asking "what is the probability that we hit our reliability target or our financial goal under realistic variability and uncertainty?"

For reliability engineers and decision makers, this becomes a virtual test lab and a virtual market at the same time. You can combine ALT or run-to-failure data, usage variability, and stress profiles to project field failures, while also modeling revenue, cost, or portfolio risk within the same framework. The result is a more honest conversation with stakeholders, framed in probabilities and risk envelopes instead of optimistic point estimates.
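A hedged sketch of that workflow in plain Python follows. Every named quantity (the Weibull life parameters notionally inferred from ALT, the usage spread, the 10,000-unit fleet, the 150-returns target) is an assumption chosen for illustration, not a recommendation.

```python
import math
import random
import statistics

random.seed(1)
N = 100_000                        # number of "virtual futures"
FLEET = 10_000                     # hypothetical fleet size
returns = []

for _ in range(N):
    # Inputs as distributions, not constants (all assumed):
    eta = max(random.gauss(12_000, 1_500), 100.0)   # Weibull scale [h]; clamp tail
    beta = 1.8                                       # Weibull shape, taken as known
    hours = random.triangular(300, 2_500, 1_100)     # annual usage per unit [h]
    p_fail = 1.0 - math.exp(-((hours / eta) ** beta))  # P(fail within a year)
    returns.append(p_fail * FLEET)                   # expected warranty returns

hit = sum(r <= 150 for r in returns) / N
p95 = statistics.quantiles(returns, n=20)[18]
print(f"P(<= 150 returns/yr): {hit:.1%}")
print(f"returns median/p95  : {statistics.median(returns):.0f} / {p95:.0f}")
```

The deliverable is the probability and the risk envelope (median/p95), not a single point forecast.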
---
If you run service and maintenance, you're managing a moving system, not a checklist. The energy transition multiplies this complexity: assets interact across electricity, heat, fuels, storage, and conversion. That means troubleshooting can't stop at the asset level. It has to read the system.

Here's what's working: bring design models and operational data into one living view. This is the shift toward the digital twin and, beyond it, the executable digital twin. Simulation models built during design are extended into operations, learning from sensor inputs to predict issues before they become outages. In practice, that looks like predicting turbine blade stress with only a few physical sensors, or using hybrid multiphase CFD to qualify equipment performance before deployment, so field testing isn't the first test.

This approach addresses the energy trilemma with day-to-day control. Affordability and access through higher efficiency and fewer truck rolls. Security through better visibility across critical parameters and faster root-cause analysis. Sustainability through tuned combustion, smarter storage, and cleaner fuel blends. It's not new tech for tech's sake. It's a single source of truth that lets teams see cause and effect across engineering, production, and service.

One takeaway you can apply now: standardize a closed-loop workflow between engineering and ops. Reuse design models, connect real-time sensor data, and track changes in one place. If maintenance finds a recurring issue, feed it back into the model, simulate fixes, then roll the approved settings to the field. Over time, the system gets easier to run, not harder.

If you're balancing safety, cost, and sustainability targets, and want system performance you can trust, let's compare notes on how you're closing the loop between design and operations.
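One way to picture the executable-digital-twin pattern is a virtual sensor: a reduced model, seeded from design-phase simulation, that keeps correcting itself against the few physical measurements available. The sketch below uses recursive least squares as the update rule; the signals, gains, and the linear model itself are hypothetical stand-ins, far simpler than a real blade-stress twin.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([4.0, 1.5, 9.0])     # initial gains from the design model (assumed)
P = np.eye(3) * 100.0             # coefficient uncertainty
lam = 0.995                       # forgetting factor: weight recent data more

def rls_update(w, P, x, y):
    """One recursive-least-squares step toward measurement y at inputs x."""
    Px = P @ x
    k = Px / (lam + x @ Px)       # Kalman-style gain
    w = w + k * (y - w @ x)       # correct by the prediction error
    P = (P - np.outer(k, Px)) / lam
    return w, P

true_w = np.array([4.6, 1.2, 9.8])    # the "reality" the twin must learn
for _ in range(500):                   # streaming operational samples
    x = rng.normal(size=3)             # e.g. speed, inlet temp, one strain gauge
    y = true_w @ x + rng.normal(scale=0.1)   # noisy reference measurement
    w, P = rls_update(w, P, x, y)

print("learned gains:", np.round(w, 2))    # drifts toward true_w over time
```

The design choice worth noting is the forgetting factor: it is what lets the operational data gradually override the design-phase assumptions instead of discarding them outright.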
---
**How Earthquake Engineering Taught Me to (Love to) Code** 🖤

I'm a structural engineer with a background in structural analysis and structural dynamics. More than fifteen years ago, I first came into contact with **earthquake engineering**, during my studies and as a student assistant. Since then, it's been a constant part of my professional journey. And it quickly became clear to me: earthquake engineering is a special kind of challenge. Not just technically, but also when it comes to the tools we use. Especially when dealing with the most advanced type of seismic analysis: **nonlinear response-history analysis**.

Let me highlight three key points:

1. **Limited capabilities of conventional software.** Many software providers advertise impressive features. But on closer inspection, it often reminded me of the glossy images on ready meals: expectations and reality rarely align. Even today, essential features are still missing in many software packages. And I've come across more than a few bugs. The standard response from vendors? "Nobody noticed; it's rarely used." Or: "Thanks! We will fix it next year. Maybe." That's why it's worth developing your own tools: to validate results or extend existing software. Especially for more advanced analyses like response-history simulations.

2. **Large data sets** and 3. **Repetitive tasks.** Time-history analysis means running a static analysis for thousands of time steps. Add iterations for nonlinear behavior, and the effort grows. And analyzing just one earthquake is like judging dart skills based on a single throw. To get meaningful results, you need to run simulations with at least seven different ground motions, often also for different structural configurations and intensity levels. That creates huge computational demands and massive data sets to process. After intense simulations (assuming no convergence issues!), I found the evaluation of results to be the real bottleneck. Excel quickly reached its limits. Tools like **MATLAB** or **Python** proved far more efficient. And all those repetitive tasks? I automated them, freeing up time to focus on what really matters.

👉 Earthquake engineering brought **programming** into my life as a structural engineer. I've built my own tools for **visualization** (see image), analysis, reporting, data management, and more. It's a skill I wouldn't want to work without today.

👋 **What about you?** Do you develop your own tools? What brought you to programming? What programming language do you use? Feel free to share in the comments!
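For readers who haven't written a response-history solver, here is a minimal linear SDOF example using Newmark's average-acceleration method, batched over several records. The oscillator properties and the synthetic decaying-noise "ground motions" are assumptions for illustration; real studies would use at least seven recorded, scaled accelerograms and, for nonlinear analysis, an iteration loop inside each step.

```python
import numpy as np

def newmark_sdof(ag, dt, T=0.5, zeta=0.05, beta=0.25, gamma=0.5):
    """Peak displacement of a linear, unit-mass SDOF oscillator under a
    ground-acceleration record ag [m/s^2], via Newmark (avg. acceleration)."""
    wn = 2 * np.pi / T
    c, k = 2 * zeta * wn, wn**2
    u, v = 0.0, 0.0
    a = -ag[0] - c * v - k * u                      # initial acceleration
    kh = k + gamma / (beta * dt) * c + 1 / (beta * dt**2)
    peak = 0.0
    for agi in ag[1:]:
        # Effective load from the current step's state (standard formulation)
        ph = (-agi
              + u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a
              + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                     + dt * (gamma / (2 * beta) - 1) * a))
        un = ph / kh
        vn = (gamma / (beta * dt)) * (un - u) + (1 - gamma / beta) * v \
             + dt * (1 - gamma / (2 * beta)) * a
        an = (un - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v, a = un, vn, an
        peak = max(peak, abs(u))
    return peak

# Batch over several synthetic records; real studies would loop over
# at least seven scaled ground motions, as noted above.
rng = np.random.default_rng(3)
dt = 0.01
records = [rng.normal(scale=1.5, size=2000) * np.exp(-np.linspace(0, 3, 2000))
           for _ in range(7)]
peaks = [newmark_sdof(ag, dt) for ag in records]
print("peak displacements [m]:", np.round(peaks, 4))
print("median of 7 records   :", round(float(np.median(peaks)), 4))
```

Once the solver is a function, the batching, post-processing, and visualization the post describes become ordinary scripting.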
---
"Lubrication Simulation of a Differential Gearbox Using the SPH Method" In automotive power transmission systems, the optimal performance of a differential gearbox plays a vital role in reducing friction, improving efficiency, and extending component lifespan. One of the key design challenges in this field is understanding lubricant behavior under complex rotational conditions and gear contact interactions. The PTEC Group technical team has conducted a high-fidelity simulation of the lubrication and oil distribution process in a differential gearbox using the SPH (Smoothed Particle Hydrodynamics) method. This simulation was performed on high-performance computing systems utilizing 104 parallel cores, enabling detailed analysis and precise modeling. To capture the physics accurately, the SPH numerical approach—a particle-based method rather than traditional meshing—was employed. This technique allows for a realistic representation of lubricant motion, splashing, and film formation under high-speed rotational conditions, providing valuable insights into the actual lubrication behavior within the gearbox. 🔸 Ehsan Saadati 🔸 PTEC Group 🔸 www.ptec-cae.com #CFD #CAE #FEM #PTEC #SPH #SmoothedParticleHydrodynamics #GearboxSimulation #DifferentialGear #LubricationAnalysis #MultiphaseFlow #FluidDynamics #ParticleBasedSimulation #MechanicalEngineering #OilFilm #FlowSimulation #DigitalEngineering #EngineeringSimulation #AutomotiveEngineering #ThermalModeling #ViscosityModeling #Tribology #DesignOptimization #NumericalSimulation
---
In simulation optimization, especially within complex systems, estimating the performance of a design under uncertainty is critical. This is where discrete event simulation comes into play. For a given design, we simulate different scenarios, capturing various outcomes to understand how well our design performs. The average result of these simulations gives us a solid estimate of the design's effectiveness.

**Practical Advice:**

✅ Start with a clear objective: Define what you want to optimize, be it cost, efficiency, or customer satisfaction. Knowing your end goal helps tailor the simulation scenarios effectively.

✅ Run enough simulations: To capture the variability of real-world conditions, ensure you run a sufficient number of simulations. More runs give a more accurate estimate of quality, but balance that against computational resources.

✅ Analyze and iterate: Use the results to identify areas of improvement. If certain designs perform poorly under specific conditions, use that insight to refine and test again. Continuous iteration helps in honing the optimal solution.

✅ Leverage software tools: Utilize simulation tools that can handle complex event-driven processes. This can save time and provide more detailed insights.

By following these steps, you can make your optimization efforts robust and reliable. Simulation-based optimization is more than just a tool; it's a way to drive better, data-informed decisions in uncertain environments.

I've attached a handwritten version of the formula! (On a sticky note, to honor Dr. Kruti Lehenbauer 🙇‍♂️)

What simulation tools have you used? What do you see as the tradeoffs between them?

#SimulationOptimization #DiscreteEventSimulation #DecisionMaking
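The "average of N replications" idea, often written as the sample average q̂(x) = (1/N) Σᵢ Yᵢ(x), is easy to sketch. The snippet below compares two hypothetical designs by sample mean with a rough 95% confidence half-width, one practical way to judge whether "enough" runs were made; `simulate_once` is a made-up stand-in for any real DES model.

```python
import random
import statistics

def simulate_once(design):
    """Hypothetical one-replication cost of a design (noisy stand-in)."""
    base = {"A": 100.0, "B": 96.0}[design]
    return base + random.gauss(0, 12)

def estimate(design, n_runs=200, z=1.96):
    runs = [simulate_once(design) for _ in range(n_runs)]
    mean = statistics.mean(runs)
    half = z * statistics.stdev(runs) / n_runs**0.5   # ~95% CI half-width
    return mean, half

random.seed(11)
for design in ("A", "B"):
    mean, half = estimate(design)
    print(f"design {design}: cost {mean:.1f} +/- {half:.1f}")
```

If the confidence intervals of two designs overlap heavily, that is the signal to add replications before declaring a winner.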
---
It is always easier to fix a problem observed in the layout design than to set up a simulation to assess its impact. Examples include routing over splits, running close to an antipad, and teardrops in the layout. As Eric Bogatin likes to say, a rough answer quickly is sometimes better than an accurate answer later.

Simulating mainframes 20 years ago is not that different from the simulations I run now for high-speed networks. I use similar tools and models that have advanced over the years to address serial links, such as IBIS, AMI, and PAM4, but I still face many of the same concepts and problems when analyzing and solving them. Ask anyone responsible for the SI of a complete system, and you'll find they spend as much time on I2C and SPI as on the high-speed links. Network, computer, and storage systems contain many printed circuit boards that use common logic components and technologies. New technologies are signaling faster, with higher edge rates and clocking on both edges, resulting in reduced system-level timing and noise margins. Reusing signal integrity and timing analysis environments, along with their associated models, yields significant manpower and schedule savings.

My upcoming posts explore the key aspects of the Design Analysis Reuse methodology. This includes examining the types of data that must be portable across designs and how mechanical changes to the PCB design can be handled without compromising reuse. Reuse within a design and between designs is discussed.

What is Signal Integrity (SI) and Timing Analysis Simulation Reuse? Design analysis reuse is simply the ability to leverage existing simulation setups for new designs. I define SI and timing simulation analysis reuse as the ability to fully leverage an entire simulation analysis environment within a design or across designs, independent of physical implementation. The simulation environment contains critical interface definitions, simulation and timing models, simulation conditions, and solution space. The more commonality among designs, the more leverage one obtains.

As a simple example of reuse, you can reuse the complete pre-layout environment (models, stimulus, populations, and variations) to drive the post-layout analysis of the same design. A more complex example is memory design: reuse allows full leverage of models, topologies, constraints, and variables from a single instantiation of a single-board design (e.g., memory chips on the board) to drive the multi-board implementation of the same function (e.g., memory chips in DIMMs).

With Samsung and Micron announcing end-of-life for DDR4 parts, prices have risen by as much as 100% and lead times have stretched to 26-52 weeks. The ability to simulate and explore different solutions and vendors becomes paramount.

The figure below shows the achievable waveform quality and timing margins for a clock relative to its data in a high-speed source-synchronous design when a robust reuse methodology is implemented.
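To make "an analysis environment independent of physical implementation" concrete, here is a hedged sketch of the setup expressed as portable data. Everything here (the class name, fields, and file names) is hypothetical; the point is only that the interface definition, models, stimulus, and solution space travel together as one object, while the physical representation is a single swappable field.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SimEnvironment:
    """Portable analysis environment: everything except the physical board."""
    interface: str                        # e.g. "DDR4-3200 data/strobe"
    driver_model: str                     # IBIS/AMI model reference
    receiver_model: str
    corners: tuple = ("slow", "typ", "fast")   # solution space
    stimulus: str = "PRBS7"
    topology: str = "pre_layout_tlines"   # swapped for extracted data later

ddr4_pre = SimEnvironment(
    interface="DDR4-3200 data/strobe",
    driver_model="ctrl_vendorA.ibs",
    receiver_model="dram_vendorB.ibs",
)

# Post-layout reuse: only the physical representation changes.
ddr4_post = replace(ddr4_pre, topology="extracted_board_rev2.s4p")

# Vendor exploration (e.g., after a DDR4 EOL notice): swap one model.
ddr4_alt = replace(ddr4_pre, receiver_model="dram_vendorC.ibs")
print(ddr4_post.topology, "|", ddr4_alt.receiver_model)
```

The more of the environment that lives in data like this rather than in a tool-specific project file, the more leverage reuse provides across designs.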