Cloud Monitoring alerts not triggering for Cloud SQL CPU spikes above threshold

We configured a Cloud Monitoring alert policy for Cloud SQL CPU utilization, but alerts fail to trigger when CPU exceeds 80%. The instance clearly shows 95% CPU in Metrics Explorer during incidents, yet no notifications are sent.

Our alert policy configuration:

conditions:
  displayName: "Cloud SQL CPU High"
  conditionThreshold:
    filter: resource.type="cloudsql_database" metric.type="cloudsql.googleapis.com/database/cpu/utilization"
    comparison: COMPARISON_GT
    thresholdValue: 0.80

The metric filter syntax appears correct when tested in Metrics Explorer. We've verified that notification channels are configured and working (test notifications succeed). What could cause a Cloud Monitoring alert policy to miss actual CPU utilization threshold breaches?

I added the database_id filter but still no alerts. Metrics Explorer shows data fine with that filter. How do I properly configure the alignment and duration parameters that monitoring_expert mentioned?
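To build intuition for what those two parameters do before wiring them into the policy, here is a simplified standalone Python sketch of the evaluation semantics (this is my own model for illustration, not the real Monitoring evaluator): raw samples are bucketed into alignmentPeriod windows, each window is reduced by an aligner such as a mean, and the condition fires only when every aligned point spanning the duration window breaches the threshold.

```python
# Simplified model of alert-condition evaluation (illustration only,
# not the Cloud Monitoring API). Samples are (seconds, value) pairs.

def align(samples, alignment_period, aligner=lambda vs: sum(vs) / len(vs)):
    """Bucket (t, value) samples into windows of alignment_period seconds,
    reducing each window with the aligner (mean by default)."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(t // alignment_period, []).append(v)
    return [aligner(buckets[k]) for k in sorted(buckets)]

def condition_fires(samples, alignment_period, threshold, duration):
    """True only if ALL aligned points covering `duration` exceed threshold."""
    points = align(samples, alignment_period)
    needed = max(1, duration // alignment_period)  # points that must breach
    if len(points) < needed:
        return False  # not enough aligned data yet -> no alert
    return all(p > threshold for p in points[-needed:])

# CPU held at 0.95 (95%) for two full 60s windows satisfies a 120s duration:
sustained = [(t, 0.95) for t in range(0, 120, 10)]
print(condition_fires(sustained, alignment_period=60, threshold=0.80, duration=120))  # True

# A single 10s spike gets averaged away by the aligner and never fires:
spike = [(0, 0.95)] + [(t, 0.30) for t in range(10, 120, 10)]
print(condition_fires(spike, alignment_period=60, threshold=0.80, duration=120))  # False
```

The practical consequence: with a mean aligner, a spike shorter than one alignment window can be averaged below the threshold, and any breach shorter than duration never fires, which is a common reason a 95% spike visible in Metrics Explorer produces no alert.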

You're missing the aggregation and duration parameters in your alert policy. Without them, the condition doesn't know how to align the raw time series or how long a breach must persist before it counts. Add an aggregations block (an alignmentPeriod plus a perSeriesAligner such as ALIGN_MEAN) and a duration to your conditionThreshold; note that alignmentPeriod lives inside aggregations, not directly under conditionThreshold.

Another gotcha: make sure your threshold value is in the right units. CPU utilization is reported as a fraction (0.0 to 1.0), not a percentage. Your thresholdValue: 0.80 is correct for 80%, but I've seen people use 80 thinking it's a percentage and wonder why alerts never fire.
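Putting the suggestions in this thread together, here is a sketch of the corrected condition. The 60s/300s values are illustrative assumptions (choose windows matching how long a spike must persist before you want to be paged), and note that conditions is a repeated field in the AlertPolicy schema, so the entry is a list item:

```yaml
conditions:
- displayName: "Cloud SQL CPU High"
  conditionThreshold:
    filter: resource.type="cloudsql_database" metric.type="cloudsql.googleapis.com/database/cpu/utilization"
    comparison: COMPARISON_GT
    thresholdValue: 0.80      # fraction, not percent
    duration: 300s            # breach must persist this long to fire
    aggregations:
    - alignmentPeriod: 60s    # bucket raw samples into 60s windows
      perSeriesAligner: ALIGN_MEAN
```

Double-check the exact field layout against however you deploy policies (gcloud, Terraform, or the REST API), since each surface spells the same AlertPolicy fields slightly differently.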