We’re running Cisco IoT Cloud Connect 25 and have a critical issue with our MQTT integration. When MQTT payloads fail to process at the edge gateway, no alerts are being generated in the IoT Operations Center. We’ve configured alert policies in the integration module, but they’re simply not firing for failed message delivery.
Our edge-to-cloud integration handles thousands of sensor messages daily, and we’re missing critical failures. Here’s a sample payload that’s failing silently:
{"deviceId":"sensor_3421","status":"error","code":500}
The MQTT error mapping looks correct in our configuration, but the alert policy isn’t triggering. We’ve checked the logs and can see the failed payloads, yet zero alerts reach our monitoring dashboard. As a result we’re missing critical device failures until someone does a manual review. Has anyone dealt with alert policy configuration issues in the MQTT integration layer?
Raj, the service restart partially helped. We’re now seeing some alerts, but not all failed payloads generate them. The topic subscription uses ‘devices/+/telemetry’, which should match our sensor topics. Is there a way to test the alert policy configuration without waiting for actual failures?
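One way to test without waiting for a real failure is to inject a synthetic failing payload on a topic that falls under the subscribed filter and watch whether an alert comes out the other end. A minimal sketch (the helper name and the test device ID are made up for illustration; the payload shape mirrors the sample from the original post):

```python
import json

def make_test_payload(device_id: str, code: int = 500) -> tuple[str, str]:
    """Build a synthetic error payload and a topic under the subscribed filter.

    Hypothetical helper, not a Cloud Connect API: it just mirrors the failing
    payload shape from this thread so the alert path can be exercised on demand.
    """
    topic = f"devices/{device_id}/telemetry"  # must match ‘devices/+/telemetry’
    payload = json.dumps({"deviceId": device_id, "status": "error", "code": code})
    return topic, payload

topic, payload = make_test_payload("test_sensor_0001")
print(topic)    # devices/test_sensor_0001/telemetry
print(payload)
```

You could then publish it with any MQTT client (e.g. `mosquitto_pub -t "$topic" -m "$payload"` against your gateway’s broker) and check whether an alert reaches the dashboard; if it doesn’t, the gap is downstream of ingestion.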
Thanks Sarah. I checked the alert mappings and found that 500 errors were mapped to ‘warning’ severity, but our alert policy only triggers on ‘critical’ and ‘high’ levels. However, even after updating the mapping to ‘critical’, alerts still aren’t showing up. The edge-to-cloud integration dashboard shows the failed payloads in the event log, but nothing propagates to the alerting system. Could there be a delay or caching issue?
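The failure mode described here (500 mapped to ‘warning’, policy triggering only on ‘critical’/‘high’) can be sketched as a simple set-membership check. The dictionary and set names below are hypothetical stand-ins for the Alert Mappings table and the policy’s trigger configuration, not Cloud Connect objects:

```python
# Hypothetical stand-ins for Integration Settings > Alert Mappings and the
# alert policy's configured trigger levels.
ERROR_SEVERITY_MAP = {400: "low", 500: "warning", 503: "critical"}
POLICY_TRIGGER_LEVELS = {"critical", "high"}

def alert_fires(error_code: int) -> bool:
    """An alert fires only if the mapped severity is in the policy's trigger set."""
    return ERROR_SEVERITY_MAP.get(error_code) in POLICY_TRIGGER_LEVELS

print(alert_fires(500))  # False: 500 -> "warning" is silently dropped by the policy
ERROR_SEVERITY_MAP[500] = "critical"  # the remapping described above
print(alert_fires(500))  # True
```

Note that the remapping only changes the outcome for payloads evaluated after it takes effect, which is consistent with the service-restart/caching behavior mentioned later in the thread.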
You might need to restart the alert service after changing severity mappings. In cciot-25, alert policy changes don’t always take effect immediately. Also verify that your MQTT topic subscriptions in the alert policy match the topics where failures are occurring. We had an issue where wildcard patterns weren’t matching properly.
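To sanity-check whether a wildcard filter actually covers the topics where failures occur, the standard MQTT matching rules (‘+’ matches exactly one level, ‘#’ matches all remaining levels and must be last) can be reproduced in a few lines. This is a simplified matcher for offline testing; it ignores edge cases like ‘$’-prefixed system topics:

```python
def topic_matches(filter_str: str, topic: str) -> bool:
    """Simplified MQTT topic-filter match: '+' = one level, '#' = rest (last only)."""
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return i == len(f_levels) - 1  # '#' is only valid as the final level
        if i >= len(t_levels):
            return False
        if f != "+" and f != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)

print(topic_matches("devices/+/telemetry", "devices/sensor_3421/telemetry"))        # True
print(topic_matches("devices/+/telemetry", "devices/site1/sensor_3421/telemetry"))  # False
print(topic_matches("devices/#", "devices/site1/sensor_3421/telemetry"))            # True
```

The second case is the kind of silent mismatch described above: if gateways publish under an extra site-level prefix, a single-level ‘+’ wildcard never matches, and those failures produce no alerts.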
I’ve seen similar behavior in our deployment. Check if your MQTT error codes are properly mapped to alert severity levels. The alert policy might be configured but not linked to the specific error conditions you’re encountering. Navigate to Integration Settings > Alert Mappings and verify that HTTP 500 errors are mapped to a severity that matches your alert policy triggers.
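Beyond checking the single 500 mapping, it can be worth auditing the whole mappings table for codes whose severity will never trigger the policy. A sketch, assuming you can export the mappings to a simple dict (the function and data here are illustrative, not a Cloud Connect API):

```python
def find_silent_codes(severity_map: dict[int, str], trigger_levels: set[str]) -> list[int]:
    """Return error codes whose mapped severity can never trigger the alert policy."""
    return sorted(code for code, sev in severity_map.items() if sev not in trigger_levels)

# Example export of an Alert Mappings table (hypothetical values).
mappings = {400: "low", 500: "warning", 503: "critical", 504: "high"}
print(find_silent_codes(mappings, {"critical", "high"}))  # [400, 500]
```

Any code this flags will show up in the event log but never reach the alerting system, which matches the symptom in the original post.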