Decision Analysis and Simulation Techniques

Explore top LinkedIn content from expert professionals.

Summary

Decision analysis and simulation techniques are methods used to systematically evaluate and predict outcomes in uncertain or complex scenarios, often by modeling risks, testing policies, or optimizing strategies. These approaches combine mathematical modeling and computational simulations—like Monte Carlo methods—to support smarter, data-driven decisions in fields such as risk management, supply chain planning, and business strategy.

- Quantify uncertainty: Use simulation models to explore a wide range of possible outcomes and better understand the risks and variability in your decisions.
- Test multiple scenarios: Run simulations to assess how changes in inputs or assumptions affect your strategy, helping you prepare for disruptions and market shifts.
- Tune decision models: Adjust and refine parameters within your analysis to improve performance and scalability, even when working with very large or complex systems.

-

Running simulations: base model vs. lookahead model

I see people posting on the use of “simulations” for planning inventory policies. If you are using a lookahead model (which is typical for most real-world inventory problems), there are two models where simulation can be used:

1. The base model, which can be a simulator or the real world.
2. The lookahead model, which is used in the policy for planning the future to make a decision now.

See the figure below - I use the same notational style for both models, but the lookahead model puts tildes on each variable, and each variable carries two time subscripts: the point in time when we make the decision, and the time period within the lookahead model.

The base model is used to evaluate the policy, and is needed to perform any parameter tuning. It can be based on history or on a simulation of what you think the future might be. When simulating inventory policies, special care is needed because we do not have historical data on market demand – we typically just have sales, which can be “censored” (a topic the inventory literature has recognized for over 60 years). For example, if we run out of product (and there is no back ordering), we lose the sales, which typically means that we never see (or record) them.

I find it is generally best to run simulations using mathematical models of uncertainty so that we can run many simulations, testing different policies. Stockouts depend on properly simulating the tails of distributions, along with market shifts, price changes, and supply chain disruptions. There are, of course, settings where you have no choice but to test your ideas in the field. It is expensive, risky, and slow, but sometimes you just have no choice, especially when you have to capture human behavior.

If your policy requires planning into the future, you really need a stochastic (probabilistic) model of the future that properly captures the tails of distributions. With long lead times, you should also plan for the possibility of significant disruptions, which can mean that you also have to capture the decisions you might make in the future.

See chapter 19 of https://lnkd.in/dB99tHtM (tinyurl.com/RLandSO) for an in-depth treatment of direct lookahead policies.

#supplychain #inventory Nicolas Vandeput Joannes Vermorel
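A minimal Python sketch of the censoring problem the post raises (the gamma demand distribution and the base-stock level are invented for illustration): recorded sales are capped at available inventory, so a demand model fit to raw sales is biased low exactly where stockouts occur.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration of demand censoring: true demand is never
# observed directly; recorded sales are capped by available inventory.
n_days = 10_000
order_up_to = 120                                            # assumed base-stock level
true_demand = rng.gamma(shape=4.0, scale=25.0, size=n_days)  # assumed demand, right-skewed
sales = np.minimum(true_demand, order_up_to)                 # lost sales are never recorded

print(f"mean true demand:      {true_demand.mean():.1f}")
print(f"mean recorded sales:   {sales.mean():.1f}")          # biased low
print(f"share of days censored: {(true_demand > order_up_to).mean():.1%}")
```

-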
A "sampled success metric" is a performance measure or evaluation criterion calculated from a sample or subset of data rather than the entire population. Its calculation often involves higher costs per sample, such as manual review, leading to a trade-off between sample size and metric accuracy/sensitivity. In this tech blog, written by the data science team from Shopify, the discussion revolves around how the team leverages Monte Carlo simulation to understand metric variability under various scenarios and make the right trade-offs.

First, the team defines simulation metrics that describe the variability of the sampled success metric. For instance, if the actual success metric is decreasing over time, one such metric could indicate how many months of the sampled success metric would show a decrease, termed "1-month decreases observed."

Then, the team defines the distribution used to run the Monte Carlo simulation. Monte Carlo simulation, a computational technique using random sampling to estimate outcomes of complex systems or processes with uncertain inputs, draws samples from a distribution chosen to match business needs. Based on past observations, the team's application follows a Poisson distribution.

Next comes the massive simulation phase, where the team runs multiple simulations for one parameter and then varies parameters to simulate different scenarios. The goal is to quantify how much the sample mean will differ from the underlying population mean under realistic assumptions. The final result is a clear statistical distribution showing how much extra sample size reduces metric variability and increases accuracy.

This case study demonstrates that Monte Carlo simulation can be a valuable addition to your decision-making and data science toolkit.

#datascience #analytics #metrics #algorithms #simulation #montecarlo #decisionmaking

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- Youtube: https://lnkd.in/gcwPeBmR

https://lnkd.in/dKnrZzzV
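A minimal sketch of the approach the post describes (the Poisson rate and the candidate sample sizes are invented for illustration): repeatedly draw samples of each size from an assumed population distribution, then summarize how far the sampled metric strays from the known population value.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sketch: assume the underlying metric follows a Poisson
# distribution and quantify how the sampled mean's error shrinks as the
# sample size grows. All numbers below are assumptions for illustration.
population_rate = 3.2          # assumed true Poisson rate
n_trials = 10_000              # Monte Carlo repetitions per scenario

for sample_size in (50, 200, 1_000):
    draws = rng.poisson(population_rate, size=(n_trials, sample_size))
    sample_means = draws.mean(axis=1)
    abs_err = np.abs(sample_means - population_rate)
    print(f"n={sample_size:>5}: 95th pct abs. error = {np.quantile(abs_err, 0.95):.3f}")
```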
-
Here's my cheat sheet for a first-pass quantitative risk assessment. Use this as your “day-one” playbook when leadership says: “Just give us a first pass. How bad could this get?”

1. Frame the business decision - Write one sentence that links the decision to money or mission. Example: “Should we spend $X to prevent a ransomware-driven hospital shutdown?”
2. Break the decision into a risk statement - Identify the chain: Threat → Asset → Effect → Consequence. Capture each link in a short phrase. Example: “Cyber criminal group → business email → data locked → widespread outage”
3. Harvest outside evidence for frequency and magnitude - Where has this, or something close, already happened? Examples: industry base rates, previous incidents and near misses from your incident response team, analogous incidents in other sectors.
4. Fill the gaps with calibrated experts - Run a quick elicitation for frequency and magnitude (5th, 50th, and 95th percentiles). Weight experts by calibration scores if you have them; use a simple average if you don't.
5. Assemble priors and simulate - Feed frequencies and losses into a Monte Carlo simulation. Use Excel, Python, R, whatever's handy.
6. Stress-test the story - Host a 30-minute premortem: “It's a year from now. The worst happened. What did we miss?” Adjust inputs or add/modify scenarios, then re-run the analysis.
7. Deliver the first-cut answer - Provide leadership with executive-ready extracts. Examples:
   - Range: “10% chance annual losses exceed $50M.”
   - Sensitivity drivers: highlight the inputs that most affect tail loss.
   - Value of information: which dataset would shrink uncertainty fastest.

Done. You now have a defensible, numbers-based initial assessment. Good enough for a go/no-go decision and a clear roadmap for deeper analysis. This fits on a sticky note.

#riskassessment #RiskManagement #cyberrisk
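Step 5 is where a first pass usually stalls, so here is a minimal Python sketch of that step (the Poisson frequency, the lognormal severity, and every parameter value are assumptions for illustration, loosely standing in for elicited 5th/50th/95th percentiles):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch for step 5: annual loss = sum of per-event losses,
# with Poisson event frequency and lognormal severity. All values invented.
n_years = 100_000
events_per_year = rng.poisson(lam=0.8, size=n_years)   # assumed event frequency
annual_loss = np.zeros(n_years)
for i, k in enumerate(events_per_year):
    if k:
        # lognormal severity: median ~$2M with a long right tail (assumed)
        annual_loss[i] = rng.lognormal(mean=np.log(2e6), sigma=1.2, size=k).sum()

threshold = 50e6
print(f"P(annual loss > $50M)        = {(annual_loss > threshold).mean():.1%}")
print(f"95th percentile annual loss  = ${np.quantile(annual_loss, 0.95):,.0f}")
```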
-
Risk matrices, although widely used, are profoundly limited and even misleading tools. By assigning arbitrary numbers to subjective categories of probability and impact, they give the false impression of being scientific, when in reality they lack quantitative logic. What does a “risk 25” mean? Nothing. It cannot be objectively compared or prioritized.

In contrast, techniques like Monte Carlo simulation, which take into account frequency and monetary impact based on justified distributions, allow for realistic risk assessment. We can answer key questions such as: How much should we invest in mitigation? Which strategy generates more value? How should we prioritize among risks?

Furthermore, matrices do not allow for risk aggregation, analysis of cause-effect relationships, or sensitivity or stress testing. And yet, many organizations still rely on them simply to comply with regulations. If we want risk management to be taken seriously, we must abandon the illusion of precision offered by matrices and adopt truly quantitative tools that enable informed decision-making with real economic insight.

What do you think?
-
🎯 How can we use a low-fidelity optimization model to achieve performance similar to a high-fidelity model?

Many decision-making algorithms can be viewed as tuning a low-fidelity model within a high-fidelity simulator to achieve improved performance. A great example comes from Cost Function Approximations (CFAs) by Warren Powell. CFAs embed tunable parameters, such as cost coefficients, into a simplified, deterministic model. These parameters are then refined by optimizing performance in a high-fidelity stochastic simulator, either via derivative-free or gradient-based methods. A similar philosophy appears in optimal control, where controllers are tuned using simulation optimization.

⚙️ Inspired by this paradigm, my student Asha Ramanujam recently developed the PAMSO algorithm. PAMSO—Parametric Autotuning for Multi-Timescale Optimization—tackles complex systems that operate across multiple timescales:

- High-level decision layer: makes strategic decisions (e.g., planning, design).
- Low-level decision layer: takes high-level inputs, makes detailed operating decisions (e.g., scheduling), applies detailed constraints and uncertainties, and computes the true objective.

However, one-way top-down communication between layers often results in infeasibility or poor solutions due to mismatches between the high-level model and the detailed low-level operating model.

💡 PAMSO augments the high-level model with tunable parameters that serve as a proxy for the complex physics and uncertainties embedded in the low-level model. Instead of attempting to jointly solve both levels, we fix the hierarchical structure: the high-level layer makes planning or design decisions, then passes them down to the low-level scheduling or operational layer, which acts as a high-fidelity simulator. We treat this top-down hierarchy as a black box: the inputs are the tunable parameters embedded in the high-level model; the output is the overall objective value after the low-level simulator evaluates feasibility and performance. By optimizing these parameters using derivative-free methods, PAMSO steers the entire system toward high-quality, feasible solutions.

🚀 Bonus: Transfer learning! If the parameters are designed to be problem-size invariant, they can be tuned on smaller problem instances and transferred to larger-scale problems with minimal extra effort.

⚙️ Case studies demonstrate PAMSO's scalability and effectiveness in generating good, feasible solutions:
✅ A MINLP model for integrated design and scheduling in a resource-task network with ~67,000 variables
✅ A massive MILP model for integrated planning and scheduling of electrified chemical plants and renewable energy with ~26 million variables

Even solving the LP relaxation of these problems exceeds memory limits, and their structure is not easily decomposable for standard optimization techniques. https://lnkd.in/gDfcvDaZ
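The tuning loop itself is simple to sketch. Below is a minimal Python illustration of the generic parametric-tuning pattern the post describes (the toy planner, the stand-in simulator, the cost coefficients, and the gamma demand are all invented; this is not the PAMSO code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Common random numbers: fix the uncertainty sample once so every
# candidate parameter vector is evaluated on the same scenarios.
demand = rng.gamma(4.0, 30.0, size=2_000)   # assumed uncertainty (stand-in)

def low_fidelity_plan(theta):
    """Toy deterministic 'planner' whose coefficients theta are tunable."""
    return 100.0 + 40.0 * theta[0] - 15.0 * theta[1]

def high_fidelity_objective(theta):
    """Evaluate the plan in a detailed (here: toy) stochastic simulator."""
    plan = low_fidelity_plan(theta)
    holding = np.maximum(plan - demand, 0.0) * 1.0    # overage cost (assumed)
    shortage = np.maximum(demand - plan, 0.0) * 4.0   # underage cost (assumed)
    return (holding + shortage).mean()

# Derivative-free tuning of the low-fidelity model's parameters
result = minimize(high_fidelity_objective, x0=[0.5, 0.5], method="Nelder-Mead")
print("tuned parameters:", result.x, "| expected cost:", round(result.fun, 2))
```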
-
🔍 1. Volumetric Method
Principle: Estimates hydrocarbons in place (STOIIP/GIIP) based on the reservoir’s geometry, porosity, saturation, and formation volume factor. Applies before production begins (static method).
Strengths:
- Useful in early field life (before production data).
- Straightforward and quick.
- Requires only geological and petrophysical data.
Weaknesses:
- Accuracy depends on data quality (porosity, thickness, area).
- Assumes uniformity—doesn't capture heterogeneity or compartmentalization.
- Does not account for reservoir connectivity.

🔍 2. Material Balance Method (MBE)
Principle: Uses the law of conservation of mass to estimate Original Hydrocarbon in Place (OHIP) by relating cumulative production to pressure depletion.
Strengths:
- Applicable once some production data is available.
- Good for estimating drive mechanisms.
- Integrates PVT and production data.
Weaknesses:
- Assumes average reservoir pressure is known accurately.
- Requires reliable PVT data.
- Sensitive to aquifer behavior assumptions.

🔍 3. Decline Curve Analysis (DCA)
Principle: Projects future production using historical rate-time trends, assuming reservoir behavior remains consistent. Types include exponential, harmonic, and hyperbolic declines.
Strengths:
- Simple and fast.
- Requires only production data.
- Effective in mature reservoirs.
Weaknesses:
- Poor prediction in early life or under unstable production.
- Doesn't directly estimate hydrocarbons in place.
- Assumes constant operating conditions and no interventions.

🔍 4. Reservoir Simulation (Numerical Modeling)
Principle: Uses mathematical models and computer simulations to predict reservoir performance under different scenarios. Integrates geology, petrophysics, PVT, SCAL, and production history.
Strengths:
- Handles complex reservoir geometries.
- Simulates different development strategies.
- Powerful for optimization and forecasting.
Weaknesses:
- Data- and labor-intensive.
- Requires skilled personnel and calibration.
- Can produce misleading results if poorly constrained.

🔍 5. Analog/Analytical Models
Principle: Estimates reserves by comparison with similar, previously developed fields (analogs).
Strengths:
- Quick and low cost.
- Useful for frontier areas with little data.
Weaknesses:
- Assumes similarity—can be misleading.
- Not suitable for unique or heterogeneous reservoirs.

🔍 6. Probabilistic Methods (Monte Carlo Simulation)
Principle: Applies probability distributions to input variables (porosity, saturation, area, etc.) to generate a range of reserves estimates (P90, P50, P10).
Strengths:
- Accounts for uncertainty.
- Provides risk-based estimates.
- Useful for decision-making and portfolio management.
Weaknesses:
- Requires proper input distributions.
- Needs computational resources.
- Can give false confidence if assumptions are wrong.
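Methods 1 and 6 combine naturally: run Monte Carlo over the volumetric equation STOIIP = 7758 · A · h · φ · (1 − Sw) / Bo (field units: acres, feet, stock-tank barrels). A minimal Python sketch, where every input distribution is invented for illustration and a real study would fit them to field data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Monte Carlo over the volumetric STOIIP equation.
# All distributions below are assumptions for illustration.
n = 100_000
area  = rng.triangular(600, 800, 1_100, n)          # drainage area, acres
thick = rng.triangular(40, 55, 75, n)               # net pay, ft
poro  = rng.normal(0.22, 0.02, n).clip(0.05, 0.35)  # porosity, fraction
sw    = rng.normal(0.30, 0.04, n).clip(0.05, 0.80)  # water saturation, fraction
bo    = rng.normal(1.25, 0.05, n)                   # formation volume factor, rb/stb

# STOIIP (stb) = 7758 * A * h * phi * (1 - Sw) / Bo
stoiip = 7758 * area * thick * poro * (1 - sw) / bo

# Reserves convention: P90 is exceeded 90% of the time (10th percentile)
p90, p50, p10 = np.quantile(stoiip, [0.10, 0.50, 0.90]) / 1e6
print(f"P90 = {p90:.0f} MMstb, P50 = {p50:.0f} MMstb, P10 = {p10:.0f} MMstb")
```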
-
Monte Carlo (MC) simulations are useful to determine the optimal sample size for complex statistical analyses, including path analysis, CFA, SEM, LCA, LPA, growth curves, and multilevel modeling. Simulations are also a valuable research tool to examine the performance and robustness of statistical techniques under various conditions.

How do MC simulations work?
1️⃣ We set up a statistical model with known parameter values—a so-called "population model."
2️⃣ Our computer draws samples of size N from the population model.
3️⃣ We estimate the model parameters in each MC sample and average the results across samples.

This allows us to determine whether the sample size N is appropriate for bias-free estimation & sufficient statistical power. Additionally, we can study special data & design characteristics such as non-normality, missing values, & clustered samples.

Which programs can you use for MC simulations?
👉🏽 Mplus is a user-friendly program for simulating a wide variety of statistical models, including SEM, mixture, & multilevel models.
👉🏽 A free R package for simulations is SimDesign.
👉🏽 MonteCarloSEM is a free R package for simulating SEMs under various conditions of sample size and non-normal data.
👉🏽 Another free R package for simulations is tidyMC.
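The three-step recipe is easy to prototype outside those packages too. A minimal Python sketch with a deliberately simple population model, a two-group mean difference of d = 0.3 SD (the effect size, sample sizes, and alpha are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Step 1: population model with known parameters (assumed: d = 0.3 SD)
true_d, n_reps, alpha = 0.3, 5_000, 0.05

for n_per_group in (50, 100, 200):
    est, sig = [], []
    for _ in range(n_reps):
        # Step 2: draw a sample of size N from the population model
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_d, 1.0, n_per_group)
        # Step 3: estimate the parameter in each MC sample
        t, p = stats.ttest_ind(b, a)
        est.append(b.mean() - a.mean())
        sig.append(p < alpha)
    # Average across samples: bias of the estimate and empirical power
    print(f"N/group={n_per_group:>3}: bias={np.mean(est) - true_d:+.3f}, "
          f"power={np.mean(sig):.2f}")
```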
-
This is a new model I built for a training workshop last week, designed to showcase how we can build scenarios with different options and dynamics. The model has two control variables.

The first is a Bernoulli variable that determines whether we receive fast track designation (or accelerated approval) for our treatment. If this variable is TRUE, we skip the Phase 3 duration and costs and go directly to registration after Phase 2. In this case, we also conduct the confirmatory trial specified in the model. Confirmatory success matters — if TRUE, sales continue as planned. If FALSE, sales stop when the confirmatory results are available.

The second control variable is a toggle called PH2 Use Clinical Parameters. If set to 0, we use the Phase 2 variables in the upper section of the form. If set to 1, we instead use clinical parameters to build up the Phase 2 duration based on recruitment rates, number of patients, and number of centers. The cost is then calculated as a function of the number of patients and centers and their respective unit costs. This creates a detailed clinical trial simulation where we can explore strategies such as speeding up trial timelines by adding centers or reducing treatment duration — at an associated cost.

The result is a model that supports clinical strategy testing. For example, we may adjust the Phase 2 CDP to increase the likelihood of securing fast track designation. This may involve higher costs but can shorten the time to market. This is an example of using a model to generate strategic insight: the objective is to explore alternative approaches and trade-offs, rather than simply estimate how long the current plan will take.

Please share your thoughts in the comments.

#captario #modelingandsimulation #strategy #decisionsupport #decisionanalytics #drugdevelopment #biotech #pharma #strategyimplementation #projectmanagement #portfoliomanagement
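A hedged sketch of how the first control variable might drive a Monte Carlo model of this kind (this is not the Captario model; every probability, duration, and cost below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical scenario model: a Bernoulli fast-track variable that skips
# Phase 3 but adds a post-launch confirmatory trial. All values invented.
n = 50_000
fast_track = rng.random(n) < 0.30                  # fast track granted?
ph3_duration = rng.triangular(2.0, 3.0, 4.5, n)    # Phase 3 duration, years
ph3_cost = rng.triangular(40, 60, 90, n)           # Phase 3 cost, $M
confirmatory_ok = rng.random(n) < 0.85             # confirmatory trial succeeds?

# Fast track: skip Phase 3 duration/cost, go straight to registration (+1y),
# but pay for a confirmatory trial (assumed $25M flat).
time_to_market = np.where(fast_track, 0.0, ph3_duration) + 1.0
dev_cost = np.where(fast_track, 25.0, ph3_cost)

print(f"mean time to market: {time_to_market.mean():.2f} years")
print(f"mean development cost: ${dev_cost.mean():.0f}M")
print(f"P(fast track & confirmatory failure): {(fast_track & ~confirmatory_ok).mean():.1%}")
```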
-
Risk Management Monday - "What is the chance I might lose money on this deal?"

My client asked me this question, and it stopped me in my tracks. He had been one of my longest-tenured clients, and this was maybe the fourth or fifth acquisition that I had worked on with him to that point. I had just completed the valuation of the target, and it looked like we were in a position to get a bargain. I had to confess that I didn't know. Conventional valuation models don't really answer that question.

But the question was entirely valid. As a small business owner, yes, he wanted to pay a fair price for the business, but what really concerned him was the possibility of losing money, such as having to put, say, $1 million into the company after acquiring it. So, we expanded the scope of the project to include something called a Value-at-Risk analysis. VaR is something that I had read about - maybe in business school, almost certainly in the CFA program - but had never actually done.

Value-at-Risk answers the question, "Assuming the worst-case scenario has an X% chance of happening, how much money do I lose if that occurs?" X can be anything. VaR is usually (but not always) performed using some sort of simulation model. I happened to know how to carry out Monte Carlo simulations, so that was my go-to tool.

Working closely with the client, we built a detailed model incorporating 20 or so risk variables and performed data analysis to determine those variables' variation (standard deviation) and distribution (shape of outcomes, such as a bell curve). We then ran the model to simulate 100,000 different cash flow outcomes, and we varied the percentage threshold for the loss scenario. It took an hour for a state-of-the-art computer to run one simulation. In fact, I had to buy more memory for the computer to keep it from crashing, so it took a few days to complete the analysis. (These go much faster today, about 7 years later.)

The result was that there was a 25% chance that my client, the buyer, would have to invest at least $1 million in the next five years to keep the company afloat. The bargain turned into a deal-breaker, and my client walked away.

By asking a simple question, we were motivated to find the tools to answer it and achieved a depth of financial analysis that we don't see very often. As a result, my client was much better informed to make a critical decision and manage his risk. You can't manage risk if you don't know what it is.

Would any of you like to see a video on this topic? If so, leave a comment below!

#riskmanagement

My name is Mike (Stage Name - Unblakeable) - Helping you win your best business decisions at your most critical moments. Liked this post? Want to see more?
- Ring the bell on my profile.
- Connect with me (also @unblakeable on Instagram, Facebook, and Twitter)
- Subscribe to my YouTube channel! (Your Business Value)
- Join the LinkedIn group "Unblakeable's Group That Doesn't Suck"

Welcome!
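Once the simulated cash-flow outcomes exist, the VaR number itself is a percentile lookup. A minimal Python illustration, where the single normal cash-flow distribution is an invented stand-in for the 20-variable model, with parameters chosen to roughly echo the 25%/$1M headline:

```python
import numpy as np

rng = np.random.default_rng(9)

# Stand-in for the detailed cash-flow model: one normal distribution of
# 5-year cumulative cash flow. Parameters are invented for illustration.
n_sims = 100_000
five_year_cash_flow = rng.normal(loc=0.5e6, scale=2.2e6, size=n_sims)

# 25% VaR: the loss level exceeded in the worst 25% of scenarios
var_25 = -np.percentile(five_year_cash_flow, 25)
print(f"25% of scenarios require injecting more than ${var_25:,.0f}")
print(f"P(cash flow below -$1M) = {(five_year_cash_flow < -1e6).mean():.1%}")
```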
-
Last week, I had fun playing with the old yet very popular Monte Carlo simulation. I explored building a Python app for portfolio optimization using this technique.

Monte Carlo simulations are fantastic for modeling uncertainty and exploring a range of possible investment outcomes, rather than relying on a single prediction. I explored how these simulations can enhance portfolio optimization by accounting for risk and variability. By simulating thousands of potential future scenarios, we can better understand the trade-off between risk and return, and make more informed investment decisions.

One of the fascinating aspects was implementing the mathematical foundations, like Geometric Brownian Motion, to model asset prices. This approach helps in assessing the probability of different outcomes in financial markets, which are inherently unpredictable due to numerous random variables.

If you're interested in how Monte Carlo simulations can be applied in finance, especially for portfolio optimization, feel free to check out my latest article. I delve into the math behind it and share how I built the Python app: https://lnkd.in/egU6Tf8S

You can access the app here: https://lnkd.in/euu5mpQC

#Finance #MonteCarloSimulation #PortfolioOptimization #Python #Math
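For readers who want the core of the math before clicking through: under Geometric Brownian Motion, S(t+dt) = S(t) · exp((μ − σ²/2)·dt + σ·√dt·Z) with Z ~ N(0, 1). A minimal single-asset Python sketch (the drift, volatility, and horizon are assumptions for illustration, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(21)

# Geometric Brownian Motion for one asset; parameters are assumed.
s0, mu, sigma = 100.0, 0.07, 0.20        # start price, annual drift, annual vol
n_paths, n_steps, horizon = 10_000, 252, 1.0
dt = horizon / n_steps

# S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma * sqrt(dt) * Z)
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
terminal = s0 * np.exp(log_paths[:, -1])

print(f"mean terminal price: {terminal.mean():.2f}")   # ~ s0 * exp(mu)
print(f"5th / 95th percentiles: {np.percentile(terminal, 5):.2f} / "
      f"{np.percentile(terminal, 95):.2f}")
```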