Your issue stems from the interaction between three retention policy layers in Watson IoT wiot-24, and understanding this hierarchy is critical for preventing future data loss.
Retention Policy Configuration Hierarchy:
Watson IoT applies retention policies in this order:
- Device-type-specific retention (highest priority)
- Organization-level storage quota policies
- Global retention policy (lowest priority)
Your global 90-day policy was being overridden by the 30-day device-type policies on temperature-sensors and pressure-monitors. To fix this systematically:
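The precedence above can be sketched as a small resolver. The layer names come from this answer; the function itself is purely illustrative, not a Watson IoT API:

```python
def effective_retention_days(device_type_policy=None,
                             org_policy=None,
                             global_policy=None):
    """Return the retention period that wins under the precedence
    described above: device-type > organization > global."""
    for policy in (device_type_policy, org_policy, global_policy):
        if policy is not None:
            return policy
    raise ValueError("no retention policy configured at any layer")

# A 30-day device-type policy overrides the 90-day global policy,
# which is exactly the conflict described above:
print(effective_retention_days(device_type_policy=30, global_policy=90))  # → 30
```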
# Use the bulk update API to set all device types to 90 days:
PUT /api/v0002/device/types/bulk/retention
{
  "retentionDays": 90,
  "deviceTypes": ["temperature-sensors", "pressure-monitors", "*"]
}
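Issued from Python with only the standard library, the request above could look like the sketch below. The endpoint path is the one shown above; the `{org_id}.internetofthings.ibmcloud.com` host and Basic-auth scheme are assumptions based on the usual Watson IoT URL layout, and `api_key`/`api_token` are placeholders for your credentials:

```python
import base64
import json
import urllib.request

def bulk_retention_request(org_id, retention_days, device_types):
    """Build the URL and body for the bulk-retention PUT shown above."""
    url = (f"https://{org_id}.internetofthings.ibmcloud.com"
           "/api/v0002/device/types/bulk/retention")
    payload = {"retentionDays": retention_days, "deviceTypes": device_types}
    return url, payload

def apply_bulk_retention(org_id, api_key, api_token, retention_days, device_types):
    """Send the PUT with Basic auth; raises on HTTP errors."""
    url, payload = bulk_retention_request(org_id, retention_days, device_types)
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 method="PUT")
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{api_key}:{api_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```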
Storage Quota Conflict Resolution:
Even though you’re at 45% of total storage, check the per-device-type quotas:
Settings → Storage → Device Type Quotas
If any device type exceeds its individual quota, Watson IoT will delete the oldest data regardless of retention policy settings. Increase device-type quotas to match your expected 90-day data volume.
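There is no single endpoint I can point at for quota inspection, so the following is a pure sketch: given per-device-type quota and average daily ingest figures read off the console page above, it flags any type whose quota would fill up before the 90-day window elapses, meaning quota-driven deletion would kick in first:

```python
def quota_risk(device_types, retention_days=90):
    """device_types: list of dicts with keys 'name', 'quota_gb', and
    'daily_gb' (average ingest per day). Returns (name, days_until_full)
    for every type whose quota fills before retention_days elapse."""
    at_risk = []
    for dt in device_types:
        days_until_full = dt["quota_gb"] / dt["daily_gb"]
        if days_until_full < retention_days:
            at_risk.append((dt["name"], round(days_until_full, 1)))
    return at_risk

print(quota_risk([
    {"name": "temperature-sensors", "quota_gb": 120, "daily_gb": 4.0},  # full in 30 days
    {"name": "pressure-monitors",   "quota_gb": 400, "daily_gb": 4.0},  # full in 100 days
]))
# → [('temperature-sensors', 30.0)]
```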
Audit Logging for Retention Events:
Enable comprehensive audit logging to track all retention-related deletions:
{
  "auditConfig": {
    "logRetentionDeletions": true,
    "logArchivalEvents": true,
    "logQuotaViolations": true,
    "alertOnUnexpectedDeletion": true
  }
}
This creates an audit trail for compliance and helps identify future retention policy conflicts before data loss occurs.
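Once those events are being logged, a periodic scan can surface deletions that were not ordinary retention expiry. The event keys (`action`, `reason`, `deviceType`) and reason codes below are illustrative assumptions, not a documented log schema:

```python
def unexpected_deletions(audit_events, approved_reasons=("RETENTION_EXPIRY",)):
    """audit_events: iterable of dicts with assumed keys 'action',
    'reason', 'deviceType'. Returns deletion events whose reason is
    anything other than an approved retention cleanup."""
    return [e for e in audit_events
            if e["action"] == "DELETE" and e["reason"] not in approved_reasons]

events = [
    {"action": "DELETE", "reason": "RETENTION_EXPIRY", "deviceType": "temperature-sensors"},
    {"action": "DELETE", "reason": "QUOTA_VIOLATION",  "deviceType": "pressure-monitors"},
]
# Only the quota-driven deletion is flagged for follow-up:
print(unexpected_deletions(events))
```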
For your deleted data: Check your AWS S3 archival bucket immediately. Watson IoT archives data before deletion if archival is configured. Your data should be in S3 with folder structure: {orgId}/{deviceType}/{year}/{month}/{day}/. If archival wasn’t running due to configuration issues, that data is permanently lost and you’ll need to document the gap for your compliance audit.
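The quickest way to check is to list the S3 prefix for each day in the gap. The prefix builder below follows the {orgId}/{deviceType}/{year}/{month}/{day}/ layout described above; the listing helper assumes boto3 is installed and your AWS credentials grant s3:ListBucket on the archival bucket:

```python
import datetime

def archive_prefix(org_id, device_type, day):
    """Build the S3 key prefix for one day of archived data, following
    the {orgId}/{deviceType}/{year}/{month}/{day}/ layout above."""
    return f"{org_id}/{device_type}/{day:%Y/%m/%d}/"

def day_was_archived(bucket, org_id, device_type, day):
    """True if at least one archived object exists for that day."""
    import boto3  # imported lazily so the prefix helper has no dependencies
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(
        Bucket=bucket,
        Prefix=archive_prefix(org_id, device_type, day),
        MaxKeys=1,
    )
    return resp.get("KeyCount", 0) > 0

print(archive_prefix("myorg", "temperature-sensors", datetime.date(2024, 3, 5)))
# → myorg/temperature-sensors/2024/03/05/
```

Any day for which `day_was_archived` returns False is a gap you will need to document for the audit.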
Going forward, set up archival validation alerts to ensure data is successfully archived before retention cleanup runs. This provides a safety net against similar data loss scenarios.