I found and fixed 18 issues in a control system before I ever set foot on the customer's site.

Last month I was preparing for commissioning on an industrial oven control system. Multi-zone temperature control. Recipe management. Safety interlocks. Data logging.

Before packing my tools, I ran the entire system in simulation: S7-1200 in PLCSIM, HMI in the WinCC Unified simulator, and a thermal model to simulate actual oven behavior.

Here's what I found:

**Thermal control (10 fixes)**
- Temperature scaling off by a factor of 2.5
- Setpoint ramping broke in OB1 (non-deterministic cycle times)
- PID outputs stuck at 0% due to mode configuration
- Recipe stop command failed mid-execution
- ...and more (safety fault chain, setpoint initialization, thermal model tuning)

**State machine logic (4 fixes)**
- Fault reset button worked, but faults didn't clear through the full chain
- Zones showed "HEATING" when they should have shown "STOPPED"
- Circular dependency caused a deadlock during startup
- Hold time and cooldown logic issues

**HMI and data logging (4 fixes)**
- CSV export only triggered on manual stop, not recipe completion
- Alarms fired constantly from unmapped simulation IO
- WinCC Unified Basic panel's TagLogging API returns null - had to build custom CSV logging from scratch
- Alarm API fails from scheduled tasks

Every single one of these would have surfaced during on-site commissioning. Instead of troubleshooting a scaling bug while the customer watches, I fixed it at my desk. Instead of realizing at 6 PM that data logging doesn't work, I discovered it in simulation and had time to build a proper solution.

It's simple math, really. A day in simulation costs me my time. A day on-site costs travel, accommodation, the customer's production time, and the pressure of fixing problems live.

This wasn't complex factory automation. It was a multi-zone oven. And simulation still found 18 issues.

How many issues are hiding in the systems you're about to commission?
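The thermal-model-plus-control-loop setup above can be approximated in a few lines. A minimal sketch, assuming a first-order oven zone and a PI loop with made-up gains and time constants (nothing here is the actual S7-1200 program):

```python
# Minimal sketch of pre-commissioning simulation: a first-order oven-zone
# model driven by a PI loop. Gains, time constants, and the 0-100 % heater
# range are illustrative, not the real project's values.

def simulate_zone(setpoint=180.0, t_amb=20.0, dt=1.0, steps=3600,
                  tau=600.0, gain=2.0, kp=5.0, ki=0.02):
    temp = t_amb                          # zone temperature, degC
    integral = 0.0                        # PI integrator state
    for _ in range(steps):
        error = setpoint - temp
        raw = kp * error + ki * integral
        out = max(0.0, min(100.0, raw))   # heater output clamped to 0-100 %
        if raw == out:                    # anti-windup: freeze integrator while saturated
            integral += error * dt
        # first-order plant: heat input vs. loss to ambient
        temp += dt * (-(temp - t_amb) + gain * out) / tau
    return temp

final_temp = simulate_zone()              # settles near the 180 degC setpoint
```

Even a model this crude is enough to surface bugs like a wrong scaling factor or a PID stuck at 0% - without one, the first real test of the control loop happens on the customer's oven.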
By the way - if you've ever discovered a problem during live commissioning that simulation would have caught, I'd love to hear what it was. I guess we all have these stories. --- #otomakeit #efficiency #industrialautomation #controlsystems #controlpanel #plcprogramming #commissioning #simulation
Simulation Modeling in Production
Explore top LinkedIn content from expert professionals.
Summary
Simulation modeling in production is the process of creating digital replicas of manufacturing systems or operations to test, analyze, and improve them before real-world implementation. By using simulation models, businesses can safely solve problems, predict outcomes, and make smarter decisions without the risks or costs of live testing.
- Test ideas virtually: Use digital models to explore production changes, equipment setups, or workflows before making expensive or disruptive shifts in your facility.
- Spot hidden issues: Run simulations to uncover system bottlenecks, unexpected interactions, or potential failures that may not appear until full-scale operation.
- Make informed decisions: Compare multiple scenarios in a simulated environment to choose the best strategy for production planning, inventory control, or new product introductions.
When I was working with one of my customers, an automotive manufacturer, we were about to launch a new assembly line for a critical product. Everything was planned down to the last detail, and they felt confident.

But here's what I told them: "Let's run a Discrete Event Simulation first, just to be sure."

At first, they didn't see the need. After all, they had invested in top-tier equipment, trained the team, and scheduled everything perfectly. But I insisted, knowing the potential risks.

**And thank goodness we did.**

During the simulation, we discovered a potential bottleneck in a key station. Operators were expected to handle more than they realistically could, and the result, if left unchecked, would have been significant downtime and production delays.

→ Without DES, they would've found out the hard way - after launch.
→ **With DES**, we identified the issue in hours and adjusted the process before a single part hit the line.

Here's exactly how we did it:
- We mapped out the entire process in a simulation environment.
- We tested multiple production scenarios, including different demand levels and equipment breakdowns.
- We identified where the bottlenecks would occur and adjusted the line accordingly.
- We optimized the workflow, balancing the load across stations to ensure smooth operations.

The result? They launched the assembly line **on time**, avoided costly downtime and over $100K in potential rework and delays, and prevented future costs that would have compounded over time.

**That's the power of Discrete Event Simulation.** If we hadn't run the simulation, they would have lost weeks of production time fixing a problem they never saw coming.

So, if you're setting up a new assembly line, ask yourself:
→ Are you willing to risk delays and unexpected costs? Or would you prefer to identify and solve potential problems before they happen?

This is how modern manufacturing leaders avoid the pitfalls that kill efficiency.
If you're ready to see how DES can safeguard your operations, let's talk. 😊 → DM me, and I'll help you implement the same strategy that worked for my customer. It's practical, it's effective, and it's what separates the good from the great.
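For readers curious what an overloaded station looks like in code: a deterministic toy model of a serial line, with illustrative station times and takt (not the customer's actual line), shows how one station quietly accumulates all the waiting:

```python
def simulate_line(cycle_times, arrival_interval=60.0, n_parts=480):
    """Deterministic toy model of a serial line: parts arrive at a fixed
    interval and visit each station in order. Returns the total waiting
    time accumulated in front of each station; the largest wait marks the
    bottleneck. All times are illustrative."""
    free_at = [0.0] * len(cycle_times)   # when each station is next free
    waits = [0.0] * len(cycle_times)
    for k in range(n_parts):
        t = k * arrival_interval         # part k enters the line
        for i, cycle in enumerate(cycle_times):
            start = max(t, free_at[i])   # wait if the station is still busy
            waits[i] += start - t
            free_at[i] = start + cycle
            t = free_at[i]
    return waits

# A 75 s station fed every 60 s can never keep up: its queue grows all shift.
waits = simulate_line([45.0, 75.0])
bottleneck = max(range(len(waits)), key=waits.__getitem__)
```

The first station (45 s against a 60 s takt) never delays anything; the second absorbs every second of waiting. That asymmetry is exactly the kind of result that is obvious in simulation and expensive to discover after launch.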
-
Running simulations: base model vs. lookahead model

I see people posting on the use of "simulations" for planning inventory policies. If you are using a lookahead model (which is typical for most real-world inventory problems), there are two models where simulation can be used:

1. The base model, which can be a simulator or the real world.
2. The lookahead model, which is used in the policy for planning the future to make a decision now.

See the figure below - I use the same notational style for both models, but the lookahead model uses tildes on each variable, which also carry two time subscripts: the point in time we are making the decision, and the time period within the lookahead model.

The base model is used to evaluate the policy, and is needed to perform any parameter tuning. It can be based on history or a simulation of what you think the future can be.

When simulating inventory policies, special care has to be taken because we do not have historical data on market demand - we typically just have sales, which can be "censored" (a topic that has been recognized in the inventory literature for over 60 years). For example, if we run out of product (and there is no back ordering), we lose the sales, which typically means that we do not see (or record) them.

I find it is generally best to run simulations using mathematical models of uncertainty so that we can run many simulations, testing different policies. Stockouts depend on properly simulating the tails of distributions, along with market shifts, price changes and supply chain disruptions.

There are, of course, settings where you have no choice but to test your ideas in the field. It is expensive, risky, and slow, but sometimes you just have no choice, especially when you have to capture human behavior.

If your policy requires planning into the future, you really need to be using a stochastic (probabilistic) model of the future which properly captures the tails of distributions.
With long lead times, you should also plan for the possibility of significant disruptions, which can mean that you also have to capture the decisions you might make in the future. See chapter 19 of https://lnkd.in/dB99tHtM ("tinyurl.com/" with "RLandSO") for an in-depth treatment of direct lookahead policies. #supplychain #inventory Nicolas Vandeput Joannes Vermorel
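The censoring effect is easy to demonstrate. A toy base-stock simulation with made-up demand numbers (not from any real dataset) shows recorded sales systematically understating true demand whenever stockouts occur:

```python
import random

def simulate_censoring(base_stock=20, days=10_000, seed=0):
    """Toy daily base-stock policy with lost sales. Demand is uniform on
    0..36 (mean 18); all numbers are illustrative. Unmet demand is lost
    and never recorded -- exactly the censoring problem."""
    rng = random.Random(seed)
    true_total = sold_total = 0
    for _ in range(days):
        demand = rng.randint(0, 36)          # true market demand (unobserved)
        sales = min(demand, base_stock)      # what the sales record shows
        true_total += demand
        sold_total += sales
    return true_total / days, sold_total / days

true_mean, observed_mean = simulate_censoring()
```

Fitting a demand model to `observed_mean` instead of `true_mean` would systematically under-stock, which is one reason to simulate from a mathematical model of demand rather than replaying sales history.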
-
This is the moment simulation becomes more important than prototyping.

In our last posts, Pascalis and I showed two things: first, how you can generate a full production and warehouse environment in NVIDIA Omniverse using Claude Code and the USDA data format; second, how NVIDIA's new Kimodo model can generate robot motions from simple text prompts.

Now we are taking the next step: transferring robot motion into Omniverse and merging both use cases.

Omniverse is not just for static visualizations. It allows dynamic simulation of movements, interactions and behavior with CAD components inside a virtual environment. And this is where it gets interesting for future product development.

The vision is clear: if we can model production environments, warehouses, and the real operating environments of products, we can simulate mechatronic products in realistic conditions before they physically exist. Environment → sensor & actuator interaction → model-in-the-loop simulation. Very similar to how autonomous vehicles are developed today, but applied to all kinds of mechatronic products.

The effects are huge:
• Less physical prototyping
• Earlier insights without building hardware
• Faster iteration cycles
• Better product decisions earlier in development
• Simulation becomes the main development environment

Omniverse already shows how granular these simulations can be today. Not through months of manual modeling, but increasingly through prompts that generate environments, movements and soon maybe even control logic.

We are moving from designing products to designing behavior in simulated worlds first. And that will fundamentally change how we develop products.

Curious to hear your thoughts! When will simulation become the primary development environment in your industry?

Vlad Larichev | Rüdiger Stern | Rick Bouter | Ruben Hetfleisch | Dr.-Ing. Tobias Guggenberger
-
For Nigeria's Project 3M bopd, instead of defaulting to:
- Drilling more wells
- Increasing interventions

Operators should first ask:
- Are we operating at optimal network conditions?
- Which wells are constraining the system?
- Can selective shut-ins or choke optimisation unlock hidden capacity?

Today, during our retreat, I shared a case example with my colleagues from a project on integrated production system modelling (IPSM) I did over 10 years ago. Four oil wells were producing into a common manifold. One of the wells was shut in and, "unexpectedly", total production increased by almost a thousand barrels per day.

This appeared counterintuitive at first glance. How can producing from fewer wells result in higher output? This is where integrated production system modelling and analytics come in. With the right tools and expertise, operators can:
- Diagnose system constraints
- Simulate alternative operating scenarios
- Unlock production without additional CAPEX

When multiple wells produce into a shared system, they don't operate independently. They interact through flowlines and manifolds. Each additional well contributes to system backpressure, which in turn increases flowing wellhead pressure and reduces drawdown. By shutting in one well, the system experiences:
- Reduced backpressure
- Increased drawdown for the remaining wells
- Improved flow conditions

This, of course, leads to increased production. In many assets, especially where infrastructure is constrained, one poorly performing or high-backpressure well can penalise the entire network.

This is why CypherCrescent Limited recommends integrated production system modelling as a foundational step before well intervention and drilling. It is interesting that most operators in Africa still do not prioritize integrated production system modelling before well intervention decisions are made.
I'm a strong advocate of well intervention, but before we intervene, we must first get the fundamentals right through proper system housekeeping.
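One mechanism that can produce this counterintuitive result is crossflow: if the shared manifold pressure sits above a weak well's reservoir pressure, that well absorbs production instead of adding to it. A linear toy model with illustrative productivity indices and pressures (not the actual field data) makes the effect concrete:

```python
def network_rate(wells, p_sep=500.0, r=0.5):
    """Total rate Q for wells producing into a shared manifold.
    Each well: q_i = J_i * (P_res_i - P_manifold), and manifold
    backpressure rises with throughput: P_manifold = p_sep + r * Q.
    Substituting gives the closed form below. All numbers are
    illustrative (psi-like pressures, bpd-like rates)."""
    a = sum(j * (p_res - p_sep) for j, p_res in wells)   # sum of J_i*(P_i - p_sep)
    b = sum(j for j, _ in wells)                          # sum of J_i
    return a / (1.0 + r * b)

# Three strong wells plus one with low reservoir pressure.
wells = [(1.0, 3000.0), (1.0, 2900.0), (1.0, 2800.0), (1.0, 1500.0)]
q_all = network_rate(wells)           # manifold ends up above 1500 psi,
q_shut_in = network_rate(wells[:-1])  # so the weak well backflows; shut it in
```

With these numbers the manifold sits near 1,867 psi, above the weak well's 1,500 psi reservoir, so the fourth well takes fluid rather than delivering it; shutting it in lifts total output from roughly 2,733 to 2,880. A toy, but the same diagnosis an IPSM study makes rigorously.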
-
We build every production line twice. 🤖 First in 3D, then in reality.

The digital version comes first for a reason. We test FANUC Europe robot movements, check reach zones, detect collisions, and verify cycle times. All of this happens before a single piece of steel is cut or a robot is mounted.

It's not about showing off technology. It's about avoiding costly surprises on the production floor. If you catch a problem in the simulation, it costs you an hour of engineering work. If it shows up during installation, it's days of downtime and a crew standing idle… 😏

The math is simple. 👍 I've seen lines launched in a few weeks thanks to precise virtual testing, and I've also seen lines struggle for months when this step was skipped to "save effort." It's ironic: rushing at the expense of simulation always ends up costing more time. 😉
-
The Digital Twin of the Production System: A Key to Modern Manufacturing

Let's think about the factory as a big, complex machine - a machine that will outlive the products it produces. Would you develop such a machine without creating a digital model?

The digital twin of a factory is a virtual, real-time replica of its physical counterpart. This isn't just a static 3D model; it's a dynamic, living simulation that uses data from sensors, IoT devices, and other sources to accurately replicate the actual factory's operations, processes, and performance.

This technology is essential because it allows manufacturers to run "what-if" scenarios without halting real production or wasting resources. It creates a risk-free environment for testing new ideas, optimizing processes, and identifying potential problems before they can cause costly disruptions. The result is a more efficient, agile, and sustainable operation.

How Siemens Empowers the Factory Digital Twin

Siemens is a leader in this field, helping its customers develop and sustain their digital twins through its comprehensive Digital Enterprise portfolio. The company's approach isn't limited to a single product; it's a holistic ecosystem that integrates the entire product and production lifecycle. Here's how Siemens helps:

Designing and simulating: Siemens' software, such as the Xcelerator platform, enables companies to create a digital twin from the outset. This includes developing products, planning production lines, and simulating factory layouts to ensure everything is optimized before any physical assets are purchased.

Connecting the physical and digital: Siemens provides the automation and industrial IoT technology to collect real-time data from the factory floor. This constant stream of information ensures the digital twin is always an accurate, up-to-date reflection of the physical factory, enabling real-time monitoring and predictive analytics.

Long-term maintenance and optimization: A digital twin is an ongoing project, not a one-time build. Siemens provides the tools and expertise to maintain the twin over its entire lifecycle. The company's solutions enable continuous data analysis, identify areas for improvement, and simulate changes to support the factory's peak performance for years to come.

Siemens' comprehensive digital twin enables manufacturers to significantly reduce time-to-market, improve product quality, and increase overall efficiency. It's a game-changer for businesses looking to stay competitive in the era of Industry 4.0.

For example, here is a diagram of a battery production system. Here we achieved a 20% reduction in space, a 30% improvement in productivity, and 25% faster material replenishment.
-
ArcelorMittal Luxembourg invested in a new vacuum degasser and a 15% production increase for their plant in Belval. The obvious question: can the crane handle it?

The wrong way to answer: just rely on crane utilization percentages.

The right way: check whether cranes would actually delay production at the critical moments during tapping and casting.

We ran the simulation across two demanding scenarios.
Phase 1: maximum heats per day with the shortest casting times.
Phase 2: fewer heats, but heavy crane workload from deslagging and VD transports.

Yes, crane utilization increased: 12% in phase 1, 23% in phase 2. But that's not what mattered.

What mattered: in neither scenario did the cranes fail to deliver ladles when the furnace or caster needed them. No forced waiting. No production loss.

The investment moved forward, because we could prove the cranes wouldn't become the constraint. They're now finishing up the project. This year we will see if reality agrees with the model.

Planning a production increase or new equipment? The question isn't how much busier your cranes are going to be, but whether they cause delays.
-
Every hand in the room went up. Every answer was wrong.

In 1993, I was 24, managing the largest cookware factory in Africa. I'd read The Goal and thought I understood TOC. Then I attended a workshop where Dr. Eli Goldratt himself taught us.

He asked: "What is the goal of manufacturing?"

Easy. Meet demand with acceptable lead times and quality, as cost-effectively as possible.

"Are you doing that?"

We weren't. None of us in the room were. He asked why. We listed everything - bad forecasts, breakdowns, unreliable suppliers. Everything that was out of our control.

Then he said something I never forgot: "If all those problems disappeared - would manufacturing be easy?"

"Of course!"

"Switch on your computers."

A factory simulator appeared. No demand or supply variability or uncertainty. All we had to do was commit to a production plan, allocate resources, and deliver. It should have been trivial.

We all lost money. Not because of uncertainty or variability, but because we followed the "right" rules - highest margin, lowest cost, high resource utilization - and every one of them pointed us to an unprofitable product mix and resource allocation. When Goldratt challenged us to try the opposite, the factory turned profitable.

Here's what most people miss: finding the constraint was easy. We could all see the bottleneck. The breakthrough wasn't Step 1 of the Five Focusing Steps. It was Step 2 - exploit the constraint, stop wasting it - and Step 3 - subordinate everything to that decision. Question every policy and measurement that conflicts with better exploiting your constraint.

Traditional cost accounting was directing us to waste our bottleneck. Right constraint identified. Wrong rules for exploiting it.

That's the core problem in most organizations. Not finding the wrong constraint - but not changing the policies and measurements that cause them to waste it.

That workshop changed everything. Reading The Goal wasn't enough. TOC is an applied science. You have to apply it to learn it.
A simulator is a low-risk, focused way, with fast feedback, to do exactly that. That experience shaped 18 years of working with Dr. Goldratt until his passing in 2011. It's why, when we co-founded Goldratt Research Labs in 2008, simulators and digital twins became central to our mission.

Today I'm relaunching GSim - Production Simulator. I've rebuilt the original simulators from scratch. The same experience that transformed thousands of the original Jonah Program graduates - now available to everyone.

Discover for yourself:
→ Why you can't make reliable commitments if you ignore demand, capacity, supply, or cash constraints
→ Why traditional rules waste your constraint
→ How TOC offers a solution to both

All original scenarios included - plus build your own.

🚀 Launch price: $37/year or $79 lifetime
GSim-Production.com

Project version launching later this week at GSim-Projects.com

I'd love to hear what you discover.
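The margin-vs-constraint lesson can be reproduced in a few lines using the numbers from one common version of the classic P&Q exercise (illustrative, not the workshop's actual simulator): ranking products by unit margin loses to ranking by throughput per constraint minute.

```python
def plan_mix(products, capacity, key):
    """Greedy product mix: load the constraint in order of `key` until
    capacity (constraint minutes) runs out. Returns total throughput
    (price minus material cost). Numbers follow a common version of the
    P&Q exercise, not the workshop's actual scenario."""
    total = 0.0
    for p in sorted(products, key=key, reverse=True):
        units = min(p["demand"], capacity // p["minutes"])
        total += units * p["throughput"]
        capacity -= units * p["minutes"]
    return total

products = [
    {"name": "P", "throughput": 45.0, "minutes": 15, "demand": 100},
    {"name": "Q", "throughput": 60.0, "minutes": 30, "demand": 50},
]
cap = 2400  # constraint minutes available per week

by_margin = plan_mix(products, cap, key=lambda p: p["throughput"])
by_constraint = plan_mix(products, cap, key=lambda p: p["throughput"] / p["minutes"])
```

Q looks better per unit (60 vs 45), but P earns 3.00 per constraint minute against Q's 2.00 - so the "highest margin first" mix yields 5,700 while exploiting the constraint yields 6,300. Right constraint, wrong rule, money lost.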
-
Runway, Waymo, World Labs, Google: all shipped world models in the last 90 days for generating 3D environments. The reflexive read is that generative AI is eating simulation. That framing gets the causality backwards.

Sim's problem was never physics or compute. It was building well-calibrated environments. Hand-modeling a warehouse, a construction site, or a loading dock takes weeks before you run a single test. Most teams cap out at a few dozen environments and call it coverage. That's not coverage; that's a sample.

World models collapse that timeline. A phone walkthrough becomes a sim environment in minutes. Driving scenarios generate from fleet video. Edge-case variants spin up from text. What was dozens of hand-built environments becomes thousands of generated ones.

The analogy that matters here: since late last year, the conversation around language models rightly shifted from what the models can do to the harnesses that put them to work - the context, constraints, and orchestration that make them useful in production. The same shift is coming for sim. Sim infrastructure is to world models as coding harnesses are to coding agents.

Where does the constraint move? Three things I expect to play out this year:

1. Scenario design becomes the differentiating skill. When generating environments is cheap, the quality of what you generate matters more. Which edge cases expose real failure modes? Which conditions cover gaps in your stack? When the cost of building a world trends toward zero, taste is the constraint. Same conversation the coding world is having right now.

2. Orchestration and infrastructure become load-bearing. Running 3,000 scenarios requires coordination that wasn't necessary when your suite was 30 environments deep: parallel execution, deterministic replay, CI integration. World models feed the machine. Infrastructure is the machine.

3. Continuous regression becomes standard. When you can run thousands of scenarios in CI, sim stops being a project milestone and becomes a permanent feedback loop. Shipping cadences for autonomy start to look more like software. The teams that set this up early get a structural iteration advantage.

World model companies and sim infrastructure companies aren't competing. They're complements. One makes environments cheap to create; the other makes testing fast to run. Simulation works in production when both exist.
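A sketch of what that harness can look like: parallel scenario execution with per-scenario seeds, so every failure is deterministically replayable. The scenario body and pass criterion below are placeholders, not a real autonomy stack.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_scenario(seed):
    """Stand-in for one simulated scenario. Deterministic given its seed,
    so any failure can be replayed bit-for-bit. The 'safety margin' pass
    criterion is a placeholder, not a real autonomy metric."""
    rng = random.Random(seed)
    worst_margin = min(rng.uniform(0.0, 5.0) for _ in range(100))
    return {"seed": seed, "passed": worst_margin > 0.01}

def regression_suite(n_scenarios=1000, workers=8):
    # Parallel execution with per-scenario seeds: the same suite produces
    # the same ordered results on every CI run, and each failure ships
    # with the seed needed to replay it.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_scenario, range(n_scenarios)))
    failed_seeds = [r["seed"] for r in results if not r["passed"]]
    return len(results), failed_seeds

total, failed_seeds = regression_suite()
```

For CPU-heavy simulators you'd swap in a process pool or a cluster scheduler, but the contract stays the same: cheap generated scenarios in, an ordered, replayable pass/fail ledger out.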