Most “sensitivity analyses” aren’t sensitive at all. They’re just a few +/- tweaks in Excel. If you want to do it properly, here are 5 quick tips:

1. Pick the right drivers → Focus on the 3–5 variables that truly move the business (price, volume, churn, CAC).
2. Test extremes, not just margins → Push assumptions until the model breaks; that’s where you find the risks.
3. Use scenarios, not scatter → Structure downside, base, and upside cases with clear triggers.
4. Visualize impact → Tornado charts, spider plots, or even simple waterfall views make risks tangible for leaders.
5. Connect to decisions → End every sensitivity test with: “If X happens, here’s what we’ll do.”

Sensitivity analysis isn’t about proving your model. It’s about showing leaders where it bends, and where it breaks.

P.S. What’s the most surprising variable you’ve seen sink a “bulletproof” plan?
Sensitivity Analysis in Planning
Explore top LinkedIn content from expert professionals.
Summary
Sensitivity analysis in planning helps organizations understand how changing different assumptions or variables can impact their outcomes, making it a valuable tool for addressing uncertainty and risk in forecasts, operational decisions, and technical projects.
- Identify key variables: Focus on the few factors that truly influence your results, so you can quickly spot which changes matter most.
- Explore scenarios: Test both moderate and extreme changes in your model to reveal risks and prepare backup plans for unexpected outcomes.
- Visualize results: Use charts and clear displays to show leaders how different choices or external events could affect your plans and goals.
You can't treat every forecast the same. More uncertainty means more risk, and you want to deal with it correctly. After building forecasting models at P&G, Unilever, and Squarespace, I've learned there are three ways to manage uncertainty:

1) Avoid Assumption Stacking
The more uncertainty, the fewer assumptions you should include. Why? Because if you stack multiple variables on top of each other, their margins of error multiply. If you base the forecast on many assumptions, it's nearly impossible to determine which one was accurate and which wasn't. So, keep your models as simple as possible. Isolate the variables. You can always add assumptions later, once you better understand the correlations.

2) Run What-If Analysis
It's your job as a finance leader to quantify the risk of a forecast. The easiest way to do that is by changing individual inputs and noting how much impact each change has on the forecast. For example, if a 5% price change affects the revenue forecast by 25%, that's a major risk you'll need to call out.

3) Show a Range
Sometimes analysts make the mistake of assuming ranges make it look like they aren't confident in their forecast. But a well-measured range is critical for two reasons. One, it shows the order of magnitude of the risk, so your CFO knows what's a conservative estimate to communicate to investors. Two, it enables scenario planning: leaders can prepare contingency measures in case results come in at the lower end of the range.

In sum, to manage uncertainty in a model:
1. Reduce the number of assumptions
2. Estimate the risk by running sensitivity analysis
3. Provide ranges instead of point estimates

Which approach do you find most useful? Comment below 👇

-Christian Wattig

📌 Get my Financial Modeling template + 46 best practices (free) here: https://lnkd.in/eBAmSF_6
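The what-if step above can be sketched in a few lines: perturb one input at a time and record the forecast's percentage response. The revenue model and all numbers below are illustrative assumptions, not figures from the post.

```python
def revenue(price, volume, churn):
    """Toy revenue model: retained volume times price (hypothetical)."""
    return price * volume * (1 - churn)

# Assumed base-case drivers for illustration only.
base = {"price": 100.0, "volume": 10_000, "churn": 0.05}
base_rev = revenue(**base)

def what_if(var, pct_change):
    """Return the % change in revenue for a % change in one input."""
    shocked = dict(base)
    shocked[var] *= 1 + pct_change
    return revenue(**shocked) / base_rev - 1

for var in base:
    impact = what_if(var, 0.05)  # +5% shock applied to each driver in turn
    print(f"+5% {var:6s} -> {impact:+.2%} revenue")
```

Drivers whose shocked impact far exceeds the size of the shock are the ones to call out as major risks.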
-
One of the most technically demanding projects I’ve worked on was the development of a yard scheduling simulator for a large automotive logistics operation. Unlike the elegance of linear or mixed-integer formulations, this problem was, by its very nature, hostile to static optimization. Vehicles arrive in waves by boat, then must be staged, shuttled, fueled, and sequenced before departing toward downstream facilities. The combinatorial explosion of vehicle–worker–shuttle interactions, compounded with stochastic arrivals and service times, renders a closed-form MILP model either intractable or entirely irrelevant. The reality is not one of “optimal scheduling” but of orchestrating thousands of micro-decisions under severe uncertainty.

We built a discrete-event simulator that encoded the entire process flow: shuttle departures triggered by configurable policies (e.g., minimum fill count vs. maximum wait time), stochastic service times drawn from empirical distributions, and worker assignments subject to geographic constraints on the yard topology. Each entity (vehicles, shuttles, workers) was modeled as an independent agent with a state machine, while global metrics such as throughput, utilization, and average dwell time were tracked in real time. The technical insight was to treat the simulator as the primary analytical instrument, capable of stress-testing operational rules against realistic perturbations: late vessel arrivals, uneven worker attendance, or sudden equipment outages.

Optimization, in this context, emerged from iterating over policies. We experimented with heuristic shuttle policies, from naïve FIFO to adaptive threshold-based triggers that accounted for both queue length and downstream bottlenecks. Sensitivity analysis revealed non-linear behaviors: for instance, aggressive shuttle dispatch reduced idle time but paradoxically increased average fuel station congestion, creating longer systemic delays. Such dynamics are invisible in static models but glaringly obvious once the simulation faithfully reproduces queueing cascades.

The lesson here is that many industrial problems masquerade as “optimization candidates” when in fact they are better approached as control problems under uncertainty. A MILP will happily generate a “solution” that collapses under the weight of reality because it presupposes deterministic flows and linearizable constraints. A simulator paired with policies, by contrast, embraces messiness: it allows us to observe emergent properties and to calibrate policies against the entropy inherent in physical operations. The rigor is no less; on the contrary, the technical challenge lies in ensuring the simulation remains both faithful to reality and computationally efficient enough to support thousands of replications across scenarios.

#supplychain #logistics #automotive #optimization
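To make the dispatch-policy idea concrete, here is a deliberately tiny pure-Python discrete-event sketch, not the production simulator: a shuttle departs when it reaches a minimum fill count or when the oldest waiting vehicle exceeds a maximum wait. The thresholds, arrival distribution, and fleet size are all invented for illustration.

```python
import heapq
import random

MIN_FILL, MAX_WAIT = 8, 15.0  # dispatch triggers (assumed values, minutes)

def simulate(n_vehicles=200, seed=42):
    """Run one replication; return (shuttle trips, mean dwell in minutes)."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(n_vehicles):          # stochastic vehicle arrivals
        t += rng.expovariate(1 / 2.0)    # mean 2 min inter-arrival (assumed)
        heapq.heappush(events, (t, "arrival"))

    queue = []                 # arrival times of vehicles awaiting a shuttle
    dispatches, dwell = 0, []

    def dispatch(now):
        nonlocal dispatches
        dwell.extend(now - a for a in queue)
        queue.clear()
        dispatches += 1

    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            queue.append(now)
        # For brevity the max-wait trigger is only checked at arrival events;
        # a fuller model would schedule an explicit timer event instead.
        if queue and (len(queue) >= MIN_FILL or now - queue[0] >= MAX_WAIT):
            dispatch(now)

    if queue:                  # flush the final partial load
        dispatch(now)
    return dispatches, sum(dwell) / len(dwell)

trips, avg_dwell = simulate()
print(f"{trips} shuttle trips, avg dwell {avg_dwell:.1f} min")
```

Sweeping MIN_FILL and MAX_WAIT across replications is exactly the kind of policy sensitivity analysis described above; the aggressive-dispatch congestion effect would only appear once downstream stations (fueling, staging) are added to the event loop.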
-
You’re not behind on your numbers. You’re behind on your assumptions. Most misses in FP&A aren’t execution problems... They’re planning blind spots. That’s where sensitivity analysis earns its keep.

In the example below, a business expects 2% volume growth and a 2% price increase, landing at $11.8M in EBITDA. But what if growth slows to 1%? To stay at the same EBITDA, you’d need to bump the price increase to at least 2.5%. And if you can’t? That’s a scenario worth planning for.

➤ Sensitivity matrices like this help you see the ripple effects of tiny changes.
➤ They give you options, not just forecasts.
➤ And they make conversations with leadership a lot sharper.

A 1% shift shouldn’t break your plan. It should trigger your backup.
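A two-way sensitivity matrix of this kind is simple to generate. In the sketch below, only the 2%/2% base case and the ~$11.8M EBITDA anchor come from the post; the revenue and cost base are hypothetical, calibrated so the base case lands near $11.8M.

```python
BASE_REVENUE = 100.0   # $M, assumed
BASE_COSTS = 90.43     # $M, assumed; calibrated so 2%/2% -> ~$11.8M EBITDA

def ebitda(vol_growth, price_incr):
    """Toy EBITDA: costs assumed fully variable, scaling with volume."""
    revenue = BASE_REVENUE * (1 + vol_growth) * (1 + price_incr)
    costs = BASE_COSTS * (1 + vol_growth)
    return revenue - costs

vol_axis = [0.00, 0.01, 0.02, 0.03]
price_axis = [0.00, 0.01, 0.02, 0.025, 0.03]

# Print the matrix: rows = volume growth, columns = price increase.
print(f"{'vol/price':>9} " + "".join(f"{p:>8.1%}" for p in price_axis))
for v in vol_axis:
    row = "".join(f"{ebitda(v, p):>8.1f}" for p in price_axis)
    print(f"{v:>9.1%} {row}")
```

Reading across a row shows how much price has to work when volume disappoints; the exact crossover depends on the cost structure, which is why the real matrix should be built from your own model, not this toy one.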
-
Concept Note: Sensitivity Analysis and Sizing Considerations for Reservoir and Electromechanical Ratings in Pumped Storage Projects

1. Reservoir Sizing through Sensitivity Analysis
The fixation of reservoir capacity—expressed in Million Cubic Meters (MCM)—must be guided by a detailed sensitivity analysis. This involves evaluating the trade-off between incremental live storage and the costs associated with raising the dam/embankment/dike height per meter, or through excavation options. The analysis should also include:
• Avoidance of submergence of critical infrastructure or environmentally/socially sensitive zones.
• Assessment of land acquisition requirements, forest clearance implications, and rehabilitation costs.
This sensitivity analysis must be accompanied by a cost-benefit evaluation to determine the most viable reservoir capacity that optimally supports project performance while minimizing socio-environmental impacts and capital expenditure.

2. Relationship Between Installed Capacity and Storage
For a fixed head differential and same-sized upper and lower reservoirs, multiple configurations of installed capacity are possible. The installed capacity (MW) and energy storage (MWh) are not linearly dependent, and their relationship is better expressed in terms of the number of hours of full-load generation that the plant can sustain in a single cycle.
(Figure: A comprehensive approach to reservoir sizing, capacity installation, and E&M equipment rating)
This relationship is critical for:
• Defining operational flexibility.
• Optimizing asset utilization under varying grid demands.
• Ensuring adequacy of storage for the target dispatch duration (e.g., 6-hour, 8-hour, or 10-hour class PSPs).

3. Pump-Turbine and Generator-Motor Ratings
The reversible Francis pump-turbine exhibits inherent variations in performance based on head conditions:
• As turbine: Rated at net average operating head, where it achieves maximum efficiency. At lower heads, turbine output decreases; at higher heads, output exceeds the rated capacity.
• As pump: Rated at minimum pumping head, where input power (MW) and discharge (m³/sec) are both at their maximum. At higher heads, both input power and discharge decrease.
These characteristics necessitate precise matching with the grid demand profile and the reservoir head variability.

4. Generator-Motor Sizing
The generator-motor unit is typically rated to:
• Handle the maximum turbine output at the maximum head, and
• Accommodate the maximum pump input at minimum head.
This ensures system reliability under the most demanding hydraulic conditions in both generating and pumping modes.

5. Overall Plant Rating
The plant’s installed capacity is usually defined as the combined output of all units at rated head. However, actual operational performance is a function of:
• Head variation across reservoir levels,
• Equipment efficiency curves, and
• System integration strategy for generation and pumping.
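The MW-versus-hours trade-off in section 2 can be illustrated with the standard hydropower energy relation E = ρ·g·H·V·η: for a fixed live storage and head, the recoverable energy (MWh) is fixed, so higher installed capacity simply means fewer hours of full-load generation. The head, efficiency, and storage volume below are assumed round numbers, not project data.

```python
RHO, G = 1000.0, 9.81          # water density (kg/m3), gravity (m/s2)
HEAD = 400.0                   # m, assumed gross head
ETA_GEN = 0.90                 # assumed overall generating efficiency
LIVE_STORAGE_MCM = 5.0         # assumed live storage, million cubic meters

def energy_mwh(storage_mcm=LIVE_STORAGE_MCM, head=HEAD, eta=ETA_GEN):
    """Recoverable energy of the upper reservoir in MWh: rho*g*H*V*eta."""
    joules = RHO * G * head * storage_mcm * 1e6 * eta
    return joules / 3.6e9      # J -> MWh

E = energy_mwh()
print(f"energy content: {E:,.0f} MWh")
for mw in (250, 500, 1000):
    print(f"{mw:>5} MW installed -> {E / mw:4.1f} h of full-load generation")
```

With these assumed inputs, the same reservoir supports roughly a 20-hour, 10-hour, or 5-hour class PSP depending on installed capacity, which is exactly why MW and MWh must be fixed together through the sensitivity analysis described above.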
-
As an actuary, you seldom have the information required to solve a problem with certainty. Nonetheless, you need to present your best solution. The ability to make decisions based on incomplete information is one of the most important skills an actuary can possess.

Depending on your mindset, developing this skill can be intimidating. Much of our academic training leads us to believe that problems have “right” or “wrong” solutions. But this distinction is not so clear in practice. Consider loss reserving as an example. While there are an unlimited number of “wrong” ways to estimate loss reserves, there is no single “right” approach.

Making decisions based on incomplete information can be a messy process. In this way, it mirrors life: you have to do the best with what you’ve got. Nonetheless, there are ways to optimize this process. Here are two tips to help you become comfortable making decisions under uncertain conditions:

#1 Sensitivity test assumptions. Suppose you estimate loss reserves at $10 million. What are the assumptions underlying your estimate? How would your estimate change if your assumptions changed by X%? Routinely sensitivity testing assumptions will give you a sense of how much sway they have. Once you have identified the most influential assumptions, refine your focus accordingly.

#2 Look at the implications inherent in selected estimates. Once you select an actuarial estimate, you can calculate the assumptions implicit in your selection. For example, what is the implied reported loss development pattern underlying your estimate of IBNR? How does it compare to historical development patterns? By reverse-engineering your own estimates, you can build comfort by comparing the implicit assumptions to comparable benchmarks.

What techniques do you use to get comfortable with estimates based on incomplete information?
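Tip #1 can be sketched with a single shocked assumption: here a paid-to-ultimate development factor is moved ±5% to see how far a reserve estimate swings. The paid losses and factor are invented, calibrated only so the base reserve lands near the $10M in the example.

```python
PAID_TO_DATE = 6.0    # $M, assumed losses paid to date
LDF_ULTIMATE = 2.667  # assumed paid-to-ultimate development factor
                      # (calibrated so the base reserve is ~$10M)

def reserve(ldf=LDF_ULTIMATE, paid=PAID_TO_DATE):
    """Unpaid reserve: projected ultimate losses minus paid to date."""
    return paid * ldf - paid

base = reserve()
for shock in (-0.05, 0.0, 0.05):
    r = reserve(ldf=LDF_ULTIMATE * (1 + shock))
    print(f"LDF {shock:+.0%}: reserve ${r:.2f}M ({r / base - 1:+.1%} vs base)")
```

Note the leverage: because paid losses are subtracted, a 5% shock to the development factor moves the reserve by about 8% in this toy setup. Spotting that kind of amplification is the point of routinely shocking each assumption.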
-
What if you could predict the future? 🌚

If you’ve been following my FP&A posts, you’ll notice a deliberate focus on topics that guide your 2025 business planning and budgeting. By now, all things being equal, you should have rounded up or be close to rounding up your preliminary forecasts. (If you haven’t started yet, I’m sending you love and light, dear 🥱 . Start soon, though—it’s already late in the game!)

Now, if you’ve completed your preliminary 2025 numbers, it’s time to take them to the next level: layering in Scenario and Sensitivity Analysis. These are the “little things” that separate great FP&A analysts from the rest. I can personally attest to how much of a game-changer they’ve been for me—since incorporating them into my forecasts, I’m rarely caught off guard. I know the curveballs to expect, and I’m able to help decision-makers plan with confidence. That’s the kind of FP&A professional you want to be.

If you’ve ever been asked, “What happens if sales drop by 10% and costs rise by 15%?” or “What if interest rates hit the roof?” or “What happens if the Naira depreciates beyond ₦2,000/US$1?”, you know the struggle. That’s where Scenario and Sensitivity Analysis step in as your go-to tools. Here’s a quick way to understand them:

Scenario Analysis: Big picture—“What if multiple things change?”
Sensitivity Analysis: Focused lens—“How does this one factor impact us?”

A mistake I often see is treating these analyses as mutually exclusive. They’re not! Using both gives you a balanced, holistic view and prepares you for almost anything. Here’s how to use both effectively:

1. Start with Sensitivity: Identify your most sensitive drivers—the variables that significantly impact your outcomes. Is it FX rates? Raw material prices? Workforce costs? Test those first.

2. Move to Scenarios: Build 2-3 compelling scenarios incorporating those sensitive variables. Example:
>> Base Case: Inflation in Nigeria stays between 30%-33% in 2025 and the Naira trades at ₦1,800/US$1
>> Optimistic/Best Case: Inflation in Nigeria scales back to single digits and the Naira appreciates back to ₦400/US$1
>> Pessimistic/Worst Case: Inflation in Nigeria goes north of 40%, and the Naira depreciates beyond ₦3,500/US$1 (God forbid abeg! 😭 )

3. Link to Actions: Scenarios and sensitivity analyses are meaningless without clear action points. Use your insights to plan. For instance, if FX volatility spikes, what’s your hedge plan? If costs rise, where can you cut?

Incorporate these into your 2025 budget—not just as good practice but as a way to stand out. I’d also love to hear from you: what scenarios have you built into your forecasts?

P.S. This topic is one of my go-to interview icebreakers when assessing candidates! Now you know... 🚶♀️

#FPA #FPATuesday #Budgeting #ScenarioAnalysis #SensitivityAnalysis #2025Planning
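Step 2 above can be encoded directly: define the scenarios over the sensitive drivers and push each one through the model. The inflation and FX figures echo the post's example; the cost model and its parameters are invented placeholders.

```python
# Scenario values drawn from the example above; "fx" is ₦ per US$.
scenarios = {
    "base":        {"inflation": 0.32, "fx": 1800},
    "optimistic":  {"inflation": 0.09, "fx": 400},
    "pessimistic": {"inflation": 0.42, "fx": 3500},
}

LOCAL_COSTS = 500.0   # ₦M, assumed Naira-denominated cost base
IMPORT_USD = 0.5      # $M, assumed dollar-denominated imported inputs

def total_cost(inflation, fx):
    """Projected ₦M cost: local costs inflate, imports convert at the FX rate."""
    return LOCAL_COSTS * (1 + inflation) + IMPORT_USD * fx

for name, s in scenarios.items():
    print(f"{name:>11}: ₦{total_cost(**s):,.0f}M")
```

The spread between the scenario outputs is what feeds step 3: the gap between base and pessimistic is the exposure your hedge plan has to cover.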
-
Over the past few months, I’ve been curious about a big question: How do electricity prices, and the time it takes to build new power capacity, actually impact the value of large tech companies?

Most financial models for hyperscalers focus on demand growth, margins, and discount rates. But what happens if we bring in the realities of the power grid (interconnection queues, lead times, and regional electricity costs) as first-order inputs? To test this, I built a discounted cash flow (DCF) model for Oracle that explicitly ties its valuation to power sector dynamics. The exercise revealed some striking results:

📊 Key Findings
- Base Case Valuation: $710B EV, or $227/share
- Time-to-Power Value: Every month of acceleration in bringing new capacity online adds ~$150M in enterprise value
- Fast Build (-6 mo): +8% EV (~+$57B)
- Slow Build (+12 mo): -12% EV (~-$85B)
- Electricity Price Sensitivity: Higher prices cut valuation by ~8%, while lower/renewable-driven prices boost it by ~5%
- Lead Times: On average, Oracle faces 42 months to energize new capacity, with risk extending to 58 months

🖼️ What the charts show:
- Top left: Different electricity price scenarios for the US East, a reminder that volatility alone can swing valuations.
- Top center: Regional S-curves of capacity deployment, showing just how uneven growth timelines can be across Ashburn, Chicago, Phoenix, and Frankfurt.
- Top right: The “value of time”: how shifting deployment forward or backward by months translates directly into tens of billions in enterprise value.
- Bottom left: Enterprise value across scenarios, illustrating that both power prices and build speed reshape valuation outcomes.
- Bottom center: Revenue curves for Oracle Cloud Infrastructure, unconstrained vs. power-constrained. The yellow wedge is lost revenue due to power bottlenecks.
- Bottom right: Sensitivity analysis across critical variables like WACC, PUE, utilization, and power delays.

💡 Big takeaway: In this framework, energy infrastructure — not demand — becomes the binding constraint. For Oracle and every hyperscaler racing to scale AI capacity, time-to-power is now a valuation driver.

I’ve uploaded the full financial model so you can explore the assumptions, scenarios, and sensitivities yourself: https://lnkd.in/ev_vPsjv

Curious to hear your perspective: should metrics like “time-to-power” be part of every investor and boardroom conversation going forward?

Disclaimer: This analysis is for entertainment and discussion purposes only. It does not represent investment advice, and it does not reflect the views or opinions of my employer.
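The mechanics of the “value of time” idea can be shown with a stripped-down DCF: shift a revenue ramp forward or backward by a few months and compare present values. None of the inputs below come from the model linked above; they are placeholders (a 10% WACC, a 42-month linear ramp, a flat $1B/month of steady-state free cash flow) chosen purely to demonstrate the shape of the effect.

```python
WACC_MONTHLY = 0.10 / 12   # assumed 10% annual discount rate
MONTHS = 120               # assumed 10-year horizon
RAMP_MONTHS = 42           # months to fully energize new capacity (assumed)
PEAK_FCF = 1.0             # $B/month free cash flow at full capacity (assumed)

def dcf(delay_months=0):
    """PV of a linear ramp to PEAK_FCF, shifted by delay_months."""
    pv = 0.0
    for m in range(1, MONTHS + 1):
        progress = min(max(m - delay_months, 0) / RAMP_MONTHS, 1.0)
        pv += progress * PEAK_FCF / (1 + WACC_MONTHLY) ** m
    return pv

base = dcf()
for d in (-6, 0, 12):
    v = dcf(d)
    print(f"shift {d:+3d} mo: PV ${v:.1f}B ({v / base - 1:+.1%})")
```

Even in this toy version, pulling the ramp forward adds value and pushing it back destroys more than a symmetric amount, because delayed cash flows are both later and discounted longer; the asymmetry between the fast-build and slow-build cases above has the same root.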
-
The concept of sensitivity analysis can often be shrouded in mystery. For many new to research, it's imagined as one specific type of analysis. However, sensitivity analysis isn't one singular test—it's about assessing how robust our findings are. 💡

When we say that estimates are "robust," we mean that the results remain stable even when assumptions are changed. If results change drastically with even small changes in assumptions, they're not robust. Here are key tips:

1️⃣ Measures of Central Tendency: If you use the mean in your main analysis, consider using the median in sensitivity analysis 📊
2️⃣ Contextual Definitions: For constructs without a universal definition, you might use a widely accepted definition for your primary analysis and test it with contextual modifications as part of sensitivity analysis 🧠
3️⃣ Exposure Variables: When exposure thresholds differ, try using various thresholds to define exposure. The main analysis could use the most commonly applied threshold, while sensitivity analysis explores others ⚖️
4️⃣ Coherence with Outcomes: Looking at different outcomes measuring related aspects can strengthen your conclusions 📈
5️⃣ Outcome Specificity: If your primary outcome is less specific, explore secondary outcomes that may be more specific. For instance, looking at deaths due to smoking (more specific) vs. all-cause mortality (less specific) 💀
6️⃣ Assessment Methods: If you have multiple methods for assessing the same outcome (e.g., self-reported vs. biomarker data), you can use the more accurate method for the main analysis and the less accurate one for sensitivity testing
7️⃣ Handling Missing Values: How does your result change when you adjust for missing data? Test different approaches like multiple imputation, listwise deletion, or inverse probability weighting 📉
8️⃣ Model Assumptions: Test how your results hold when adjusting key model assumptions (e.g., linearity, independence) 🔧
9️⃣ Outlier Handling: Consider how sensitive your results are to extreme values. Does removing outliers or using robust methods change the outcome? 🚨
🔟 Timeframe Adjustments: For time-dependent data, check how your results change with different observation periods ⏳
1️⃣1️⃣ Data Transformation: Examine how sensitive your findings are to data transformations (e.g., log transformation vs. Box-Cox transformation) 🔄
1️⃣2️⃣ Aggregation Level: Assess how results change when aggregating or disaggregating data (e.g., regional or demographic groupings) 🌍
1️⃣3️⃣ Uncertainty in Input Parameters: Monte Carlo simulations are a great way to test the range of possible outcomes with varying input assumptions 🎲

Bottom line: Sensitivity analysis isn't a one-size-fits-all process—it's context-driven. It's not about fixing a "bad" analysis; rather, it's about assessing how well a well-conducted analysis holds up under different assumptions and conditions 💪

Please reshare ♻️

#Chisquares #VillageSchool #SensitivityAnalysis
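Tip 13 in the list above (Monte Carlo over uncertain inputs) can be sketched in a few lines: draw the uncertain parameters from assumed distributions, propagate them through the model, and report the spread of outcomes. The toy model, distributions, and numbers are all illustrative, not from any real study.

```python
import random
import statistics

def outcome(exposure, effect):
    """Toy model: expected events per 1,000 people (hypothetical)."""
    return 1000 * exposure * effect

def monte_carlo(n=10_000, seed=7):
    """Return (2.5th percentile, median, 97.5th percentile) of outcomes."""
    rng = random.Random(seed)
    draws = [
        outcome(rng.uniform(0.2, 0.4),   # uncertain exposure prevalence
                rng.gauss(1.5, 0.2))     # uncertain effect size
        for _ in range(n)
    ]
    draws.sort()
    return draws[int(0.025 * n)], statistics.median(draws), draws[int(0.975 * n)]

low, mid, high = monte_carlo()
print(f"2.5%: {low:.0f}   median: {mid:.0f}   97.5%: {high:.0f}")
```

If the interval stays narrow, conclusions are robust to input uncertainty; a wide interval tells you which parameter assumptions deserve the most scrutiny.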
-
The Forecast That Died of Too Many What-Ifs

A finance team once showed me a model with 47 scenarios. Best case. Worst case. Middle case. And then: best-worst. Worst-middle. Best-best-middle. It was like watching someone drown in their own lifeboats.

Here’s the fallout: Every decision got delayed because “another scenario” needed to be tested. By the time the board approved one plan, the market had already moved on. It’s the financial version of over-training a fighter until they’re too exhausted to step into the ring.

So how do you stop sensitivity analysis from becoming paralysis analysis? Use the 3-Box Rule:
• One Strategic scenario (macro shifts you can’t control).
• One Operational scenario (levers you can actually pull).
• One Stress scenario (the “survive the punch” plan).
Everything else? Noise.

Because the real risk isn’t being wrong. It’s being so “thorough” you forget to decide. And here’s the twist: the companies that run fewer scenarios actually make faster moves—and that speed is what keeps them alive.