Demand planning dashboard forecast accuracy metrics: MAPE vs RMSE vs MAE selection

Our demand planning team is building executive dashboards in IS 2023.2 to track forecast accuracy. We’re debating which error metrics to display prominently. MAPE (Mean Absolute Percentage Error) is intuitive for executives, but I’ve read it has issues with low-volume items and can’t handle zero actuals. RMSE penalizes large errors more than MAE, which might be good for identifying problematic forecasts. We also need to track forecast bias to detect systematic over/under-forecasting. What metrics do experienced demand planners actually rely on for dashboard KPIs?

MAPE is dangerous for low-volume items - you’ll get 200-300% errors that skew everything. We learned this the hard way. For dashboard display, we use weighted MAPE (wMAPE) which weights by actual demand volume. This gives you the percentage intuition executives want without the low-volume explosion problem.

Let me provide a comprehensive framework for forecast accuracy metrics in demand planning dashboards:

MAPE Calculation and Limitations: MAPE is calculated as: (|Actual - Forecast| / Actual) × 100, averaged across periods. Its limitations are significant:

  • Undefined when actual demand is zero (division by zero)
  • Asymmetric - penalizes over-forecasting more than under-forecasting (under-forecast error is capped at 100% of actuals, while over-forecast error is unbounded)
  • Explodes with low-volume items (1 unit actual vs 5 unit forecast = 400% error)
  • Not comparable across different demand scales

For dashboards, use weighted MAPE (wMAPE) instead: Sum(|Actual - Forecast|) / Sum(Actual) × 100. This avoids the low-volume explosion and provides a more stable metric.
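A quick sketch with made-up numbers illustrates the difference. The 1-unit period blows up the plain MAPE average, while wMAPE, which divides total absolute error by total actual demand, stays stable:

```python
# Hypothetical four-period demand series; note the 1-unit low-volume period.
actuals   = [100, 120, 1, 90]
forecasts = [110, 115, 5, 85]

# Plain MAPE: average of per-period percentage errors.
mape = sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

# wMAPE: total absolute error over total actual demand.
wmape = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

print(round(mape, 1))   # 104.9 - the single 400% period dominates
print(round(wmape, 1))  # 7.7   - volume weighting suppresses the explosion
```

Three of the four periods are forecast within 10%, yet plain MAPE reports over 100% error; wMAPE reflects the aggregate picture executives actually care about.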

RMSE vs MAE Trade-offs: RMSE (Root Mean Square Error) and MAE (Mean Absolute Error) serve different purposes:

  • MAE treats all errors equally - gives you average magnitude of error
  • RMSE penalizes large errors more heavily due to squaring - helps identify problematic outliers
  • RMSE is always ≥ MAE; a large gap between them indicates high variability in error magnitudes

For executive dashboards, display both. When RMSE is much larger than MAE, you have occasional large misses that need investigation. When they’re close, errors are consistent in magnitude.
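The effect of one large miss on the RMSE/MAE gap can be sketched with hypothetical numbers:

```python
import math

# Hypothetical series: three small misses and one large one.
actuals   = [100, 100, 100, 100]
forecasts = [ 98, 103,  99,  60]

errors = [a - f for a, f in zip(actuals, forecasts)]
mae  = sum(abs(e) for e in errors) / len(errors)                # average magnitude
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))      # squaring amplifies the 40-unit miss

print(mae)                   # 11.5
print(round(rmse, 1))        # 20.1
print(round(rmse / mae, 2))  # 1.75 - above the 1.5 rule of thumb, flags an outlier
```

Drop the one large miss and the two metrics converge, which is exactly the signal the RMSE/MAE ratio is meant to give.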

Bias Measurement: Critical but often overlooked. Calculate Mean Error (ME), i.e. the average signed error without taking absolute values: Sum(Forecast - Actual) / n

  • Positive ME = systematic over-forecasting (inventory buildup risk)
  • Negative ME = systematic under-forecasting (stockout risk)
  • Near-zero ME with high MAE = errors cancel out but accuracy is still poor

Display bias as a percentage of average demand for intuitive interpretation.
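Putting the two together, a minimal sketch (hypothetical data) of ME and bias as a percentage of average demand:

```python
# Hypothetical series where every forecast sits above the actual.
actuals   = [100, 90, 110, 100]
forecasts = [110, 95, 115, 108]

n = len(actuals)
me = sum(f - a for a, f in zip(actuals, forecasts)) / n   # mean signed error
bias_pct = me / (sum(actuals) / n) * 100                  # as % of average demand

print(me)        # 7.0 units - positive, so systematic over-forecasting
print(bias_pct)  # 7% of average demand
```

Note that this series also has MAE = 7.0: when bias and MAE are nearly equal, essentially all of the error is systematic and a level adjustment may help more than model tuning.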

Multi-Metric Dashboards: Our IS 2023.2 demand planning dashboard displays:

  1. wMAPE (primary metric for accuracy)
  2. MAE in actual units (helps planners understand magnitude)
  3. RMSE/MAE ratio (outlier detector - ratio > 1.5 indicates investigation needed)
  4. Bias % (systematic tendency indicator)
  5. Forecast Value Add vs naive forecast (justifies model sophistication)

Segment all metrics by:

  • Product category (A/B/C classification)
  • Demand pattern (stable/seasonal/erratic/lumpy)
  • Planning horizon (1-month, 3-month, 6-month accuracy)
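The segmentation itself is straightforward once the metric is defined; a plain-Python sketch (hypothetical rows, category as the segment key) of per-segment wMAPE:

```python
from collections import defaultdict

# Hypothetical (category, actual, forecast) records.
rows = [
    ("A", 100, 110), ("A", 120, 115),
    ("B",  10,  14), ("B",   8,   5),
]

num = defaultdict(float)  # sum of absolute errors per segment
den = defaultdict(float)  # sum of actual demand per segment
for cat, actual, forecast in rows:
    num[cat] += abs(actual - forecast)
    den[cat] += actual

wmape_by_cat = {cat: num[cat] / den[cat] * 100 for cat in num}
print({k: round(v, 1) for k, v in wmape_by_cat.items()})
```

The same two-accumulator pattern extends to any segmentation key (demand pattern, horizon) or any other volume-weighted metric.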

This multi-dimensional view helps executives understand where forecasting works well and where it needs improvement, while giving planners actionable insights for model tuning.

That makes sense. How do you set the tolerance thresholds for color coding? Does it vary by product category or is it uniform across the board?

Don’t forget about tracking forecast value add (FVA). This compares your statistical forecast accuracy to a naive forecast (like last year’s actuals). If your sophisticated models aren’t beating simple methods, that’s critical information for executives. We display FVA alongside MAPE on our executive dashboard, and it’s been eye-opening in some categories where our complex models weren’t actually adding value.
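One common way to express FVA (there are variants) is the naive forecast's error minus the statistical forecast's error, here using wMAPE on hypothetical series:

```python
# Hypothetical actuals, statistical forecast, and naive baseline.
actuals    = [100, 105,  98, 110]
stat_fcst  = [102, 104,  99, 107]
naive_fcst = [ 95, 100, 105,  98]  # e.g. same-period actuals from last year

def wmape(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual) * 100

fva = wmape(actuals, naive_fcst) - wmape(actuals, stat_fcst)
print(round(fva, 1))  # positive - the statistical model adds value over the baseline
```

A negative FVA in any segment is the "eye-opening" result described above: the sophisticated model is doing worse than simply repeating last year's numbers.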