We’re running Watson IoT Platform v25 and facing a critical issue with ML model updates in our device registry. After registering new sensor types with custom attributes (temperature_variance, pressure_delta, vibration_threshold), the ML analytics models aren’t picking up these new device schemas for retraining.
Device type schema synchronization seems to be broken: new devices appear in the registry, but the ML pipeline doesn't recognize the custom attribute mappings. We've tried triggering the model retraining workflow manually, but those triggers fail silently.
Error from API logs:
POST /api/v0002/device/types/sensor_v3/mappingRules
Response: 202 Accepted
Warning: ML model sync pending - attributes not indexed
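Worth noting about that log: 202 Accepted only means the mapping-rules request was queued for asynchronous processing, not that indexing finished, so client code that treats 202 as success will miss the pending sync. A minimal sketch of handling this (Python; the commented-out request is a hypothetical reconstruction from the log line above, with placeholder host, auth, and payload, not a verified API contract):

```python
from typing import Optional

def sync_is_complete(status_code: int, warning: Optional[str]) -> bool:
    """Treat 202 (accepted for async processing) and any 'pending'
    warning as an incomplete sync; only a 2xx response without a
    pending warning counts as a finished schema sync."""
    if status_code == 202:
        return False
    if warning and "pending" in warning.lower():
        return False
    return 200 <= status_code < 300

# Hypothetical usage mirroring the logged request (org host, auth,
# and payload are placeholders):
# resp = requests.post(
#     f"https://{ORG}.internetofthings.ibmcloud.com"
#     "/api/v0002/device/types/sensor_v3/mappingRules",
#     json=mapping_rules, auth=(API_KEY, API_TOKEN))
# if not sync_is_complete(resp.status_code, resp.headers.get("Warning")):
#     schedule_recheck()  # poll rather than assuming attributes are indexed
```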
Our analytics dashboard shows zero predictions for the new sensor types even though data is flowing. Has anyone dealt with device type schema changes not propagating to ML models? This is blocking our predictive maintenance rollout for 500+ new industrial sensors.
We had similar issues in wiot-24. The problem was timing: if you register devices before the logical interface schema is fully published and indexed, the ML model cache doesn't update. Try deleting the device type, waiting 10-15 minutes for cache invalidation, and then recreating it with all custom attributes in the initial schema definition. Also verify that your custom attribute mappings use the correct data types (float vs. double matters for ML feature extraction).
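A sketch of that delete-wait-recreate workflow (Python; the `/api/v0002/device/types` paths follow the pattern shown in the question, but the request body layout, the `metadata.customAttributes` placement, and the auth mechanism are assumptions, so check them against your actual device type JSON):

```python
import time

# Declare every custom attribute up front with an explicit numeric
# type; float vs. double matters for ML feature extraction.
CUSTOM_ATTRIBUTES = {
    "temperature_variance": "float",
    "pressure_delta": "float",
    "vibration_threshold": "float",
}

def recreate_device_type(session, base, type_id, attributes,
                         wait_seconds=15 * 60):
    """Delete the device type, wait out cache invalidation (10-15
    minutes per this post), then recreate it with the full attribute
    set so the ML model cache indexes everything on first publish."""
    session.delete(f"{base}/api/v0002/device/types/{type_id}")
    time.sleep(wait_seconds)
    body = {
        "id": type_id,
        # Hypothetical placement of custom attributes; adjust to
        # your real device type schema definition.
        "metadata": {"customAttributes": attributes},
    }
    return session.post(f"{base}/api/v0002/device/types", json=body)
```

`session` can be a `requests.Session` configured with your API key/token; pass `wait_seconds=0` when dry-running against a stub.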
Also check your Watson IoT Platform service instance limits. If you're near the quota for device types or custom attributes, the ML pipeline may silently fail to register new schemas. We hit this in production when we exceeded 150 device types: new registrations worked, but ML indexing stopped. The quota isn't enforced at registration time, only during background sync. Check your service plan limits in the IBM Cloud dashboard.
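Since the quota only bites during background sync, a pre-flight check before bulk registration can surface the problem early. A small sketch (Python; the 150-type figure comes from this post, and how you obtain the current count, e.g. by listing device types over the REST API, is left to you):

```python
def near_device_type_quota(current_count: int, plan_limit: int,
                           headroom: float = 0.9) -> bool:
    """Warn once registrations approach the plan limit, because the
    platform won't reject them at registration time; ML indexing
    just stops silently during background sync."""
    return current_count >= plan_limit * headroom
```

For example, with a 150-type plan limit, `near_device_type_quota(140, 150)` flags the risk before the silent failure window is reached.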
Good catch on the quota limits. One more thing about custom attribute data types: if your attributes use nested JSON structures or arrays, ML feature extraction can fail without any clear error. The analytics engine expects flat numeric or categorical values for model training.
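One way to guard against that is to flatten attribute payloads before they reach the pipeline, so only scalar values are ever submitted. A generic sketch (plain Python, no platform API involved):

```python
def flatten_attributes(obj, prefix=""):
    """Flatten nested dicts/lists into the flat key -> scalar shape
    the analytics engine expects, using dotted path keys."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten_attributes(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten_attributes(value, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = obj  # leaf: numeric or categorical
    return flat
```

For example, `{"vibration": {"axis": [0.1, 0.2]}}` becomes `{"vibration.axis.0": 0.1, "vibration.axis.1": 0.2}`, which the feature extractor can index directly.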
I’ve seen this exact behavior when custom attributes aren’t properly defined in the logical interface schema before device registration. The ML model retraining workflow requires explicit feature definitions in the schema metadata. Check whether your custom attributes have the ‘analyticsEnabled’ flag set to true in the device type definition; without that flag, the ML pipeline ignores new attributes even when data flows correctly.
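If you want to audit existing definitions for that flag, something like the following works. Note this is a sketch: the `metadata.customAttributes` layout is an assumed shape for illustration, and the ‘analyticsEnabled’ behavior is as described in this post, so verify both against your actual device type JSON:

```python
def attributes_missing_analytics_flag(device_type_def: dict) -> list:
    """Return the names of custom attributes whose 'analyticsEnabled'
    flag is absent or false, i.e. the attributes the ML pipeline
    would reportedly ignore even when data flows correctly."""
    attrs = device_type_def.get("metadata", {}).get("customAttributes", {})
    return [
        name for name, spec in attrs.items()
        if not (isinstance(spec, dict) and spec.get("analyticsEnabled"))
    ]
```

Running it over each device type definition returned by the registry gives you a quick list of attributes to fix before retriggering retraining.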