In Supply Planning, even a "perfect" forecast can quietly destroy your service levels.

Let me share a scenario I encountered during a diagnostic review at a manufacturing plant. Two SKUs. Both missed forecast by 5 units. One had a weekly volume of 10. The other moved 1,000 units. The system showed stellar overall forecast accuracy. But on the ground? One product constantly ran out of stock, while the other sat untouched in the warehouse.

What was going on? The metric used, MAPE, told a lopsided story. It exaggerated the error on the smaller SKU and almost ignored the impact on the high-runner. The bias? Not even tracked.

We replaced MAPE with MAE + Bias. The moment we did, patterns emerged. We saw where we were consistently over-forecasting one SKU and under-forecasting another. The team adjusted safety stocks, demand drivers, and even supplier lead times accordingly.

The result? Lower inventory, better service levels, and more trust in the numbers. Because in supply chain, real accuracy isn't about how close you look; it's about how well you perform.
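To make the contrast concrete, here is a minimal Python sketch of the scenario above. Only the volumes (10 and 1,000) and the 5-unit misses come from the post; the exact forecast values and the direction of each miss are assumptions for illustration:

```python
import numpy as np

# Illustrative numbers: both SKUs miss by 5 units, but their
# weekly volumes differ by two orders of magnitude.
actuals   = np.array([10.0, 1000.0])   # slow mover, high runner
forecasts = np.array([15.0,  995.0])   # assumed: slow mover over, high runner under

ape = np.abs(forecasts - actuals) / actuals * 100
print("APE per SKU (%):", ape)       # [50.0, 0.5] -- MAPE exaggerates the small SKU
print("MAPE (%):", ape.mean())       # 25.25, dominated by the 10-unit item

mae  = np.abs(forecasts - actuals).mean()
bias = (forecasts - actuals).mean()
print("MAE (units):", mae)           # 5.0 -- the same miss in units on both SKUs
print("Bias (units):", bias)         # 0.0 here because +5 and -5 cancel;
                                     # a persistent nonzero value reveals systematic drift
```

The same 5-unit miss produces a 50% error on one SKU and a 0.5% error on the other, while MAE and Bias report the misses in the units planners actually stock.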
Forecast Accuracy Measurement Methods
-
Frankly speaking, I didn't see the point in discussing MAPE when I wrote recent posts on error measures. However, I've received several comments and messages asking for clarification. So, here it is.

TL;DR: Avoid using MAPE! In fact, anything that has "APE" in it will cause issues (https://lnkd.in/eH7JQENX + see image).

MAPE, or Mean Absolute Percentage Error, is a still-very-popular-in-practice error measure, calculated by taking the absolute difference between the actual and the forecast, dividing it by the actual value, and averaging the result across observations. The rationale is clear: we want to get rid of scale, and we want something that measures accuracy well and is easy to calculate and interpret. Unfortunately, MAPE is none of these things, and here is why.

1. It is scale sensitive: if you have sales in the thousands, the actuals in the denominator will bring the overall measure down, and you will get a very low number even if the model is not doing well. Similarly, very low values will inflate the measure, easily pushing it to hundreds of percent.
2. It is well known that MAPE prefers it when you underforecast (https://lnkd.in/eKAaKVBZ). It is not symmetric, and it is misleading. BTW, "symmetric" MAPE is not better and is not symmetric either (https://lnkd.in/etfeTHQ4).
3. It cannot be calculated on intermittent demand: whenever the actual is zero, the division produces an infinite value.
4. Okay, it is easy to interpret. But the value itself does not tell you anything about the performance of your model (see point 1 above).
5. And it is not clear what it is minimised by (remember this post? https://lnkd.in/eSGV2ZMR).

So, how can we fix that? The main problem of MAPE is in the denominator. If we change it, we solve problems (1) and (2). Hyndman & Koehler (2006, https://lnkd.in/e2vxfKzD) proposed a solution: take the Mean Absolute Error (MAE) of the forecast and divide it by the mean absolute in-sample differences of the data. The latter step is done purely for scaling reasons, and we end up with something called "MASE" that does not have issues (1), (2) and (5), but is not easy to interpret. The problem with MASE is that it is minimised by the median and, as a result, is not appropriate for intermittent demand (https://lnkd.in/ezX_EVCC).

But there is a good alternative based on the Root Mean Squared Error (RMSE), called RMSSE (https://lnkd.in/e7SQznfG), that uses the same logic as MASE: take the RMSE and divide it by the in-sample root mean squared differences. It is still hard to interpret, but at least it ticks the other four boxes. If you really need "interpretation" in your error measure, consider dividing MAE/RMSE by the in-sample mean of the data (https://lnkd.in/enWyQHBs). This might not fix issue (1) completely, but at least it would solve the other four problems.

For more on error measures see my monograph: https://lnkd.in/e_URj36s

Read the full post here: https://lnkd.in/eWjhXtqD

#datascience #forecasting #machinelearning
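A minimal Python sketch of MASE and RMSSE as described above, assuming the usual convention of scaling by the one-step differences of the in-sample (training) data; the series values are invented for illustration:

```python
import numpy as np

def mase(actuals, forecasts, train):
    """MASE: MAE of the forecast, scaled by the in-sample MAE of one-step differences."""
    mae = np.mean(np.abs(actuals - forecasts))
    scale = np.mean(np.abs(np.diff(train)))        # mean absolute one-step differences
    return mae / scale

def rmsse(actuals, forecasts, train):
    """RMSSE: same logic as MASE, but with squared errors instead of absolute ones."""
    rmse = np.sqrt(np.mean((actuals - forecasts) ** 2))
    scale = np.sqrt(np.mean(np.diff(train) ** 2))  # root mean squared one-step differences
    return rmse / scale

# Invented series: values below 1 mean the forecast beats the in-sample naive benchmark.
train = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])
actuals, forecasts = np.array([136.0, 119.0]), np.array([140.0, 124.0])
print(mase(actuals, forecasts, train))   # ~0.54
print(rmsse(actuals, forecasts, train))  # ~0.46
```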
-
When a Metric Shift Sparked a 12% Forecast Accuracy Boost 🚀

In supply chain planning, the metric you choose can make or break your strategy. For years, I relied on MAPE (Mean Absolute Percentage Error) to judge forecast accuracy, until I realized it told only half the story. MAPE treats every SKU equally. That means a tiny miss on a low-volume item can distort your entire accuracy picture… even when you're doing great on the products that actually drive revenue.

Enter WMAPE (Weighted Mean Absolute Percentage Error). Unlike MAPE, WMAPE gives higher weight to forecast errors on high-volume or high-impact items, providing a more business-relevant, bottom-line view of accuracy.

Here's how I applied it:
- Extracted forecast and actual data from SAP S/4HANA across diverse SKUs.
- Built a side-by-side dashboard in Excel comparing MAPE and WMAPE.
- Found that traditional metrics were hiding key issues in top SKUs.
- Collaborated with demand planners to adjust statistical models where it truly mattered.

That switch led to tighter alignment between planning and production and a 12% sustained improvement in forecast accuracy. WMAPE transformed how we measured performance and responded to errors. It moved the conversation from "What's our overall accuracy?" to "Where does inaccuracy actually hurt the business?"

If you want your metrics to drive meaningful action, WMAPE deserves a spot in your toolkit.

#WMAPE #DemandPlanning #ForecastAccuracy #SupplyChainOptimization #PlanningExcellence
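A minimal sketch of the MAPE-vs-WMAPE contrast described above, in Python rather than the Excel dashboard from the post; the portfolio numbers are made up for illustration:

```python
import numpy as np

# Hypothetical portfolio: one slow mover and two high runners.
actuals   = np.array([8.0, 1200.0, 950.0])
forecasts = np.array([12.0, 1180.0, 960.0])

abs_err = np.abs(forecasts - actuals)

# MAPE averages per-SKU percentage errors, so every SKU counts equally.
mape = np.mean(abs_err / actuals) * 100       # ~17.6%, dominated by the 8-unit SKU

# WMAPE weights errors by volume: total absolute error over total actuals.
wmape = abs_err.sum() / actuals.sum() * 100   # ~1.6%, reflecting the revenue drivers

print(f"MAPE: {mape:.1f}%  WMAPE: {wmape:.1f}%")
```

The same three forecasts look alarming under MAPE and healthy under WMAPE, which is exactly the gap between "overall accuracy" and "accuracy where it hurts the business".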
-
Bad forecast = Bad inventory + Bad losses + Bad cash

This infographic shows 7 measures of forecast accuracy & bias for demand planners:

1️⃣ MAPE (Mean Absolute Percentage Error)
↳ Pros: easy to explain; lets you compare SKUs of any size
↳ Cons: explodes when actuals ≈ 0; over‑penalizes low‑volume items

2️⃣ WAPE / WMAPE (Weighted APE)
↳ Pros: volume‑weighted; tiny SKUs don't distort the big picture
↳ Cons: still collapses when actuals are zero; masks big misses on slow movers

3️⃣ MAE / MAD (Mean Absolute Error/Deviation)
↳ Pros: clear "units‑off" view; less sensitive to outliers
↳ Cons: hard to compare across products with very different scales

4️⃣ RMSE (Root Mean Squared Error)
↳ Pros: heavily penalizes large misses; great for high‑value SKUs
↳ Cons: extremely sensitive to outliers; a single spike skews results

5️⃣ MFE / Bias (Mean Forecast Error)
↳ Pros: shows direction (over‑ vs. under‑forecast); crucial for fixing systematic bias
↳ Cons: positive and negative errors cancel out; hides magnitude

6️⃣ sMAPE (Symmetric MAPE)
↳ Pros: reduces MAPE's inflation on low volumes; bounded between 0% and 200%
↳ Cons: still undefined when both forecast and actual are zero; less intuitive than plain MAPE

7️⃣ MASE (Mean Absolute Scaled Error)
↳ Pros: scale‑free; compares across SKUs and time series; benchmark: values < 1 beat a naïve forecast
↳ Cons: requires a naïve benchmark to compute; harder to communicate to non‑analysts

Any others to add?
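For the entries in this list not sketched elsewhere on the page (RMSE, MFE/Bias, sMAPE), a minimal Python version; the 0-200% sMAPE convention is the common one, though variants exist, and the sample values are invented:

```python
import numpy as np

def rmse(actuals, forecasts):
    """Root Mean Squared Error: squaring makes large misses dominate."""
    return np.sqrt(np.mean((forecasts - actuals) ** 2))

def mfe(actuals, forecasts):
    """Mean Forecast Error (bias): sign shows over- (+) vs under- (-) forecasting."""
    return np.mean(forecasts - actuals)

def smape(actuals, forecasts):
    """Symmetric MAPE, 0-200% convention; undefined when both values are zero."""
    denom = (np.abs(actuals) + np.abs(forecasts)) / 2
    return np.mean(np.abs(forecasts - actuals) / denom) * 100

# Invented values; note the zero-actual period still yields a finite sMAPE term (200%).
a = np.array([100.0, 0.0, 250.0])
f = np.array([110.0, 5.0, 240.0])
print(rmse(a, f))   # ~8.66 units
print(mfe(a, f))    # ~+1.67 units (slight over-forecast on average)
print(smape(a, f))  # ~71.2%
```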
-
“𝗢𝘂𝗿 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆 𝗶𝘀 𝟴𝟱%.” A great metric to track, but do you know what's behind that number?

In supply chain planning, 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆 isn't just a KPI; it's a reflection of how well your decisions align with reality. And like most metrics, it depends heavily on how it's measured. Different scenarios call for different approaches, and using the right metric helps you:
• Evaluate planning effectiveness
• Build trust in numbers
• Drive better inventory and service outcomes

Here's a breakdown of the 3 most common and useful forecast performance metrics:

𝟭. 𝗠𝗔𝗣𝗘 (𝗠𝗲𝗮𝗻 𝗔𝗯𝘀𝗼𝗹𝘂𝘁𝗲 𝗣𝗲𝗿𝗰𝗲𝗻𝘁𝗮𝗴𝗲 𝗘𝗿𝗿𝗼𝗿)
Formula: MAPE = (1/n) × Σ(|Forecast – Actual| / Actual) × 100
> Simple to interpret
> Can be sensitive when actual demand is low

𝟮. 𝗪𝗔𝗣𝗘 (𝗪𝗲𝗶𝗴𝗵𝘁𝗲𝗱 𝗔𝗯𝘀𝗼𝗹𝘂𝘁𝗲 𝗣𝗲𝗿𝗰𝗲𝗻𝘁𝗮𝗴𝗲 𝗘𝗿𝗿𝗼𝗿)
Formula: WAPE = Σ|Forecast – Actual| / ΣActual
> Stable across portfolios with high demand variability
> Common in CPG, retail, and multi-SKU environments

𝟯. 𝗙𝗼𝗿𝗲𝗰𝗮𝘀𝘁 𝗕𝗶𝗮𝘀
Formula: Bias = Σ(Forecast – Actual)
> Indicates whether forecasts consistently lean high or low
> Key to understanding planning behavior

𝗕𝗲𝘀𝘁 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲: Use 𝗪𝗔𝗣𝗘 for a realistic measure of error, 𝗕𝗶𝗮𝘀 to monitor forecast tendencies, and 𝗠𝗔𝗣𝗘 when demand is stable and volumes are meaningful.

𝗙𝗼𝗿𝗲𝗰𝗮𝘀𝘁 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆 𝗶𝘀𝗻’𝘁 𝗮𝗯𝗼𝘂𝘁 𝗽𝗲𝗿𝗳𝗲𝗰𝘁𝗶𝗼𝗻: 𝗶𝘁’𝘀 𝗮𝗯𝗼𝘂𝘁 𝗰𝗹𝗮𝗿𝗶𝘁𝘆, 𝗰𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆, 𝗮𝗻𝗱 𝗰𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗶𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁.

#SupplyChain #Demandforecasting #Accuracy #InventoryManagement #DemandPlanning #CostOptimization #Logistics #Procurement #InventoryControl #LeanSixSigma #Cost #OperationalExcellence #BusinessExcellence #ContinuousImprovement #ProcessExcellence #Lean #OperationsManagement
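To connect the "85% accuracy" headline to a formula: one common convention, an assumption here since the post does not say which definition is used, takes accuracy as 1 − WAPE. A minimal sketch with made-up numbers:

```python
import numpy as np

# Invented three-SKU portfolio for illustration.
actuals   = np.array([500.0, 300.0, 200.0])
forecasts = np.array([530.0, 270.0, 250.0])

wape = np.abs(forecasts - actuals).sum() / actuals.sum()  # 110 / 1000 = 0.11
bias = (forecasts - actuals).sum()                        # +50 units: leaning high

print(f"WAPE: {wape:.0%}  Accuracy (1 - WAPE): {1 - wape:.0%}  Bias: {bias:+.0f}")
# -> WAPE: 11%  Accuracy (1 - WAPE): 89%  Bias: +50
```

Note how Bias adds the direction that WAPE alone hides: the same 89% accuracy could come from balanced misses or from consistent over-forecasting.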
-
In one of my earlier roles in demand forecasting, our BI system used a very simple percentage error formula: (forecast - actuals)/forecast. At the time, I questioned whether dividing by the actuals might give a more accurate picture of forecast performance, but like many things in practice, it wasn't a high priority for discussion. Later, at another company, we used MAPE across all products, including those with highly intermittent demand. It was a consistent approach, but no one really questioned whether a different metric might better capture the nuances of different demand patterns.

It wasn't until I went back to university for my PhD that I encountered the broader landscape of forecast accuracy metrics. That's when I started asking a bigger question: which metric should be used for which purpose?

Forecast accuracy seems simple until you try to measure it consistently across products, teams, or tools. Most people start with MAPE or RMSE because that's what the software provides. But eventually, the questions come up:
– Why does one model look better on RMSE but worse on MAPE?
– Why do different teams report accuracy differently?
– Why does it feel like the numbers don't tell the full story?

I wrote this article to help unpack those questions: what each accuracy metric emphasizes, when it's most useful, and what happens when different metrics lead to different conclusions. It includes:
– A breakdown of common metrics like RMSE, MAE, MAPE, sMAPE, MASE, and more
– Practical examples of when each metric works best, and when it doesn't
– Guidance on how to choose the right metrics based on product portfolios and business goals

I'm curious: which forecasting error measures are being used where you work? Are you using more than one?
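To see why the denominator question in the first paragraph matters, a minimal sketch (numbers invented) comparing error-over-forecast with error-over-actuals on the same miss:

```python
# Same 20-unit miss, two denominators. Dividing by the forecast inflates the
# denominator whenever you over-forecast, shrinking the reported error, so that
# convention quietly rewards padding the forecast.
forecast, actual = 120.0, 100.0
err = forecast - actual

pe_over_forecast = abs(err) / forecast * 100  # 16.7% -- the BI system's convention
pe_over_actual   = abs(err) / actual * 100    # 20.0% -- the MAPE convention

print(f"{pe_over_forecast:.1f}% vs {pe_over_actual:.1f}%")
```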