Predictive analytics forecast chart in SSRS 2016 shows mismatched data after ML model update

We embedded a Python-based ML forecasting model into our SSRS 2016 reporting infrastructure. Everything worked perfectly until we updated the model last week. Now the forecast chart displays values that don’t match the actual predictions from the model when tested independently.

The model's output schema changed slightly: we added two new features and renamed one output field from 'Forecast' to 'PredictedDemand'. I updated the SSRS dataset to match, but the chart still shows incorrect values:

# Model output structure (new)
result = {
  'PredictedDemand': 1250.5,
  'ConfidenceInterval': [1100, 1400],
  'ModelVersion': '2.1'
}

The chart is showing values around 850 instead of the expected 1250 range. I suspect the SSRS dataset field mapping isn’t correctly handling the schema change, but I’ve refreshed the fields multiple times. Has anyone dealt with ML model schema changes breaking SSRS forecast charts?
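For reference, the model output can be sanity-checked outside SSRS with something like the sketch below; `predict` here is a hypothetical stand-in for however the real model is invoked, returning the v2.1 shape shown above:

```python
def predict():
    # Hypothetical stand-in for the real model call; returns the
    # v2.1 output structure described in the question.
    return {
        'PredictedDemand': 1250.5,
        'ConfidenceInterval': [1100, 1400],
        'ModelVersion': '2.1',
    }

result = predict()

# The renamed key must be present and the old one gone; a leftover
# 'Forecast' key is exactly what a stale SSRS field mapping would bind to.
assert 'PredictedDemand' in result
assert 'Forecast' not in result
assert result['ModelVersion'] == '2.1'
print(result['PredictedDemand'])  # 1250.5
```

This at least rules out the model side, so the mismatch has to come from how SSRS maps or renders the values.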

Have you checked whether your forecast chart has multiple data series defined? When a model schema changes, old series definitions sometimes remain hidden but still active, causing the chart to blend or average values from multiple sources. That would explain a discrepancy between the dataset preview and the chart display.
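If you'd rather audit that without clicking through the designer, RDL is plain XML, so you can pull every field expression out of the report definition. A rough sketch (the sample XML is a simplified stand-in, not the full RDL schema):

```python
import xml.etree.ElementTree as ET

def field_expressions(rdl_xml):
    """Return every '=Fields!...' expression anywhere in an RDL document.
    RDL is plain XML; matching on the expression text sidesteps the
    schema-version namespace entirely."""
    root = ET.fromstring(rdl_xml)
    return [el.text.strip() for el in root.iter()
            if el.text and el.text.strip().startswith('=Fields!')]

# Simplified stand-in for a report where a stale hidden series is
# still bound to the old field name:
sample_rdl = """<Report>
  <Chart Name="ForecastChart">
    <ChartSeries Name="Current"><Y>=Fields!PredictedDemand.Value</Y></ChartSeries>
    <ChartSeries Name="Stale"><Y>=Fields!Forecast.Value</Y></ChartSeries>
  </Chart>
</Report>"""

print(field_expressions(sample_rdl))
# ['=Fields!PredictedDemand.Value', '=Fields!Forecast.Value']
```

If the list contains any reference to the old field name, you've found the stale series.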

I’ve seen this exact issue with embedded Python models in SSRS 2016. The problem is usually in how SSRS converts the Python dictionary output into dataset fields: when the output structure changes, SSRS can misinterpret which value to use. Check your dataset’s field mapping; you may need to specify the field extraction logic explicitly rather than rely on automatic mapping. Also, the 850 could be coming from a default or fallback calculation if the field mapping fails silently.
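Concretely, explicit extraction on the Python side looks something like the sketch below (field names follow your new schema; the helper name is mine). The point is that each lookup is by name, so a future rename fails loudly with a KeyError instead of silently binding the wrong value:

```python
def to_dataset_row(result):
    """Flatten the model's dict output into explicitly named scalar
    columns for the SSRS dataset. Named lookups mean a renamed key
    raises KeyError rather than mapping a wrong value silently."""
    return {
        'PredictedDemand': float(result['PredictedDemand']),
        'ConfidenceLow': float(result['ConfidenceInterval'][0]),
        'ConfidenceHigh': float(result['ConfidenceInterval'][1]),
        'ModelVersion': str(result['ModelVersion']),
    }

row = to_dataset_row({
    'PredictedDemand': 1250.5,
    'ConfidenceInterval': [1100, 1400],
    'ModelVersion': '2.1',
})
print(row['PredictedDemand'])  # 1250.5
```

Splitting the confidence interval into two scalar columns also avoids handing SSRS a list, which is another spot where automatic conversion can pick the wrong element.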

Good catch on the field references. I found that the chart Value expression was still using =Fields!Forecast.Value, which I updated to =Fields!PredictedDemand.Value. However, the values are still incorrect. I also checked the Python script execution and confirmed it’s using model version 2.1. The dataset preview shows the correct 1250 values, but the chart renders 850. This is really puzzling.
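One thing worth checking next is the chart’s own aggregation: if a category group collapses several rows into one data point with an Avg aggregate, the rendered number can differ from every row the dataset preview shows. A toy illustration with purely hypothetical values:

```python
# Purely hypothetical row values: if a stale series or duplicate rows
# land in the same category group, a default Avg aggregate renders a
# number that matches no single row in the dataset preview.
rows = [1250.5, 449.5]
print(sum(rows) / len(rows))  # 850.0
```

That pattern, correct values in the preview but a blended number on the chart, fits the "dataset shows 1250 but chart renders 850" symptom.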