ML-based anomaly alerts not triggering in IoT Central monitoring dashboard for streaming sensor data

We’ve configured real-time ML anomaly detection in IoT Central for our manufacturing equipment, but alerts aren’t firing even when we know anomalies are occurring. The ML model is running on Azure Stream Analytics and outputs an anomaly score to IoT Central device telemetry.

Our setup: 50 machines send sensor data every 30 seconds. Stream Analytics processes this data, runs the ML model, and sends back an anomalyScore property (0.0 to 1.0). We’ve created alert rules in IoT Central that should trigger when anomalyScore > 0.7, but we’re not receiving any notifications despite seeing scores above 0.9 in the device data explorer.

I’ve verified the ML model output property is correctly mapped in the device template. The anomalyScore appears in raw telemetry, but the alert rules seem to be ignoring it. We’ve had three incidents in the past week where equipment failures occurred with high anomaly scores, but operators received no alerts.

Has anyone successfully configured alert rule mapping for ML model outputs in IoT Central? I’m wondering if there’s something specific about how the pipeline handles computed properties versus direct sensor readings.

Another thing to check is the alert rule condition syntax. IoT Central’s rule builder can be finicky with decimal comparisons. Try using >= instead of > for your threshold, and make sure the comparison isn’t silently falling back to a string comparison. Also check the diagnostic logs to see whether the rule is being evaluated at all.

Check if the anomalyScore is being sent as device telemetry or as a property update. Alert rules in IoT Central only work on telemetry streams, not on device property changes. If Stream Analytics is sending it as a property, that would explain why alerts aren’t triggering.
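To make the telemetry-vs-property distinction concrete, here is a rough stand-in in plain Python (a simplification, not the actual wire format): a device-to-cloud telemetry body carries the field at the top level, while a twin reported-properties update ends up nested under the twin document, where the rules engine never looks.

```python
import json

# 1) Device-to-cloud telemetry message body -- this is what alert rules evaluate.
telemetry_body = json.dumps({"anomalyScore": 0.93})

# 2) Device twin document after a reported-properties update -- visible in the
#    device view, but invisible to the rules engine.
twin_document = json.dumps({"properties": {"reported": {"anomalyScore": 0.93}}})

def visible_to_rules(payload: str) -> bool:
    """Rough stand-in for the distinction: rules see top-level telemetry
    fields, not values buried inside a twin document."""
    doc = json.loads(payload)
    return "anomalyScore" in doc

print(visible_to_rules(telemetry_body))  # True
print(visible_to_rules(twin_document))   # False
```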

That’s interesting. I need to check with our data engineering team on how Stream Analytics is sending the results back. The data explorer shows it under the telemetry tab, but maybe it’s not being classified correctly for alert evaluation.

I’ll walk through the three areas that usually matter here: alert rule mapping, ML model output property configuration, and diagnostics for pipeline errors.

1. Alert Rule Mapping Configuration: The core issue is likely how your alert rule is configured to evaluate the ML model output. In IoT Central, create your alert rule with these specific settings:

  • Target Devices: Select the device template or device group for your manufacturing equipment
  • Conditions: Add a telemetry condition for anomalyScore
    • Telemetry: anomalyScore
    • Aggregation: Maximum (not Average, to catch peak anomalies)
    • Operator: is greater than or equal to
    • Value: 0.7
    • Time aggregation: 5 minutes (adjust based on your data frequency and acceptable latency)
  • Actions: Configure email, webhook, or Azure Monitor action group
  • Enabled: Ensure the rule is actively enabled, not in draft mode
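Before touching the portal, you can sanity-check the rule semantics locally. This is a minimal simulation of the configuration above under the assumed semantics “Maximum over a 5-minute window >= 0.7” (the windowing model is simplified for illustration):

```python
from datetime import datetime, timedelta

THRESHOLD = 0.7
WINDOW = timedelta(minutes=5)

def rule_fires(samples, now):
    """samples: list of (timestamp, anomalyScore) tuples.
    Returns True if the Maximum aggregation over the trailing
    window meets the threshold."""
    in_window = [score for ts, score in samples if now - WINDOW <= ts <= now]
    return bool(in_window) and max(in_window) >= THRESHOLD

now = datetime(2024, 1, 1, 12, 5)
samples = [
    (datetime(2024, 1, 1, 12, 0), 0.40),
    (datetime(2024, 1, 1, 12, 1), 0.92),  # a single 30-second spike
    (datetime(2024, 1, 1, 12, 2), 0.30),
]
print(rule_fires(samples, now))  # True: Maximum aggregation catches the spike
```

Note why Maximum matters: with Average, the same window would evaluate (0.40 + 0.92 + 0.30) / 3 = 0.54 and never fire, even though one reading clearly indicated an anomaly.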

2. ML Model Output Property Configuration: Verify your device template capability model has anomalyScore defined correctly:

  • Open your device template in IoT Central
  • Navigate to the capability model
  • Ensure anomalyScore is defined as:
    • Capability type: Telemetry (NOT Property or Command)
    • Schema: double
    • Semantic type: None (or Event if appropriate)
    • Display unit: (leave blank or use “score”)
  • Publish the template if you made changes

In your Stream Analytics query, ensure the output is formatted correctly for IoT Central:

  • The output must be sent as device telemetry messages, not device twin updates
  • Include the original device ID so IoT Central can route it correctly
  • Send anomalyScore as a JSON property in the telemetry payload
  • Ensure timestamp alignment between original sensor data and ML output
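As a hedged sketch of what that output body should look like: every field name below except anomalyScore is an assumption for illustration, and in a real pipeline the device ID typically travels in the message’s system properties for routing rather than the body, but either way it must match a device registered in IoT Central.

```python
import json
from datetime import datetime, timezone

def build_ml_telemetry(device_id: str, score: float, ts: datetime) -> str:
    body = {
        "deviceId": device_id,          # must match a registered device
        "anomalyScore": float(score),   # a JSON number, matching schema "double"
        "timestamp": ts.astimezone(timezone.utc).isoformat(),  # aligned to the source reading
    }
    return json.dumps(body)

msg = build_ml_telemetry("press-line-07", 0.93,
                         datetime(2024, 5, 1, 8, 30, tzinfo=timezone.utc))
print(msg)
```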

3. Diagnostics and Pipeline Errors: Enable comprehensive diagnostics to identify where the pipeline is failing:

IoT Central Diagnostics:

  • Go to Settings > Diagnostics in IoT Central
  • Enable diagnostic logs for: Device connectivity, Device telemetry, Rules evaluation
  • Send logs to Log Analytics workspace for querying
  • Look for errors like: “Telemetry validation failed”, “Rule evaluation skipped”, “Property type mismatch”

Stream Analytics Diagnostics:

  • In your Stream Analytics job, check Activity Logs and Diagnostic Logs
  • Look for output errors or data conversion issues
  • Verify the output is successfully writing to IoT Central (check output metrics)

Common Issues to Check:

  1. Data Type Mismatch: If Stream Analytics sends anomalyScore as the string “0.85” instead of the double 0.85, the rule’s numeric comparison will never match, even though the value still appears in the data explorer
  2. Missing Timestamps: Ensure telemetry includes proper timestamp field that IoT Central recognizes
  3. Device Association: Verify the device ID in Stream Analytics output matches registered devices in IoT Central
  4. Rule Evaluation Frequency: IoT Central evaluates rules every 5 minutes by default; adjust time aggregation accordingly
  5. Alert Action Configuration: Ensure your action group has valid recipients and isn’t being filtered by email rules
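Item 1 is the easiest to verify and the easiest to miss, because both payloads look identical in a raw-data view. The difference only shows up in the JSON type:

```python
import json

good = json.loads('{"anomalyScore": 0.85}')    # JSON number
bad  = json.loads('{"anomalyScore": "0.85"}')  # JSON string

print(type(good["anomalyScore"]))  # <class 'float'>
print(type(bad["anomalyScore"]))   # <class 'str'>

# A numeric threshold comparison only behaves for the first one:
print(good["anomalyScore"] >= 0.7)  # True
# bad["anomalyScore"] >= 0.7 would raise a TypeError here; a rules engine
# comparing a string against a double similarly fails to match.
```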

Testing and Validation:

  • Use IoT Central’s “Raw data” view to confirm anomalyScore is arriving as expected
  • Manually trigger a test by sending a message with anomalyScore > 0.7 from a test device
  • Check rule evaluation history in IoT Central to see if the rule is being triggered but actions are failing
  • Set up a simple test rule with a very low threshold (e.g., anomalyScore > 0.1) to verify the evaluation pipeline works

The most common root cause is the capability model definition not matching what Stream Analytics is sending. Ensure perfect alignment between the data type, capability type (telemetry vs property), and field name.
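A quick local pre-flight check along these lines can confirm that alignment before you blame the rules engine. The field name and the 0.0–1.0 range come from this thread; the checks themselves are a hypothetical helper, not anything IoT Central provides:

```python
import json

def validate_payload(raw: str) -> list:
    """Return a list of problems with a sample telemetry body (empty = OK)."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    errors = []
    score = doc.get("anomalyScore")
    if score is None:
        errors.append("field 'anomalyScore' missing (check exact casing)")
    elif isinstance(score, bool) or not isinstance(score, (int, float)):
        errors.append(f"anomalyScore is {type(score).__name__}, expected double")
    elif not 0.0 <= score <= 1.0:
        errors.append(f"anomalyScore {score} outside expected 0.0-1.0 range")
    return errors

print(validate_payload('{"anomalyScore": 0.93}'))    # []
print(validate_payload('{"anomalyScore": "0.93"}'))  # type error reported
```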

We had a similar issue where the ML output was arriving at IoT Central with a slight delay, and the alert rule’s time window wasn’t configured correctly. Make sure your alert rule’s aggregation window accounts for any processing latency in your Stream Analytics pipeline. If the anomaly score arrives 2-3 minutes after the original sensor reading, a 1-minute aggregation window might miss it.
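The latency trap is easier to see with a toy model. This is an assumption about window semantics for illustration, not documented IoT Central behavior: an evaluation at eval_time covers event timestamps inside the trailing window, but can only include data that has actually arrived by eval_time.

```python
from datetime import datetime, timedelta

def window_catches(reading_ts, arrival_ts, eval_time, window):
    in_window = eval_time - window <= reading_ts <= eval_time
    already_arrived = arrival_ts <= eval_time
    return in_window and already_arrived

base = datetime(2024, 1, 1, 12, 0)
reading_ts = base                         # sensor reading stamped 12:00
arrival_ts = base + timedelta(minutes=3)  # ML score lands at 12:03

# 1-minute window: the evaluation covering 12:00 runs at 12:01, before the
# delayed score exists -> missed.
print(window_catches(reading_ts, arrival_ts,
                     base + timedelta(minutes=1), timedelta(minutes=1)))  # False

# 5-minute window: the 12:05 evaluation still covers 12:00 and the score
# has arrived by then -> caught.
print(window_catches(reading_ts, arrival_ts,
                     base + timedelta(minutes=5), timedelta(minutes=5)))  # True
```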

I’ve seen this issue before. The problem is often with the device template capability model. Make sure anomalyScore is defined as a telemetry capability with type double, not as a property. Also verify that the semantic type and unit are properly configured. IoT Central’s alert engine is very particular about data types and capability definitions matching exactly what’s being sent.

Check the diagnostic logs in IoT Central under Settings > Diagnostics. Look for any errors related to rule evaluation or telemetry ingestion. Sometimes there are validation failures that prevent the telemetry from being processed by the rules engine, but the data still shows up in the explorer because it’s stored regardless of validation status.