Log Analytics workspace storage capacity alerts not triggering despite exceeding quota

Our Log Analytics workspace has exceeded its daily storage quota multiple times in the past week, risking data loss, but the alerts we configured aren’t triggering. We set up an alert rule to notify us when the workspace approaches 90% capacity, but we’ve received zero notifications despite the portal showing we hit 100% quota twice and had ingestion throttling.

The alert rule is configured to monitor the Usage metric with a threshold of 90% of our 5GB daily cap. The action group is properly configured with email and SMS notifications that work fine when we test them manually. The alert rule shows as “Enabled” in the portal. I’m confused why the alert isn’t firing when the condition is clearly being met. Is there something specific about Log Analytics workspace alerts that I’m missing?

Also verify that your workspace hasn’t been configured with a daily cap that stops ingestion entirely when reached. If the daily cap is set to “Stop ingestion when daily cap is reached” rather than “Alert only”, you won’t get meaningful alert behavior because ingestion stops immediately. Check Workspace settings > Usage and estimated costs > Daily cap. The setting should be on “Alert” mode, not “Stop” mode, if you want alerts to fire before data loss occurs. In Stop mode, the workspace simply rejects new data without generating alert conditions.
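If you want to confirm whether ingestion was actually cut off, you can query the workspace's own operational log. This is a sketch assuming the standard _LogOperation table, which records "Ingestion" category events when collection is interrupted (exact Operation strings can vary):

```kql
// Look for daily-cap / ingestion events in the workspace's operational log.
// Assumes the standard _LogOperation table and its documented columns.
_LogOperation
| where TimeGenerated > ago(7d)
| where Category == "Ingestion"
| where Operation has "Data collection"   // e.g. "Data collection stopped"
| project TimeGenerated, Operation, Level, Detail
| order by TimeGenerated desc
```

If you see "Data collection stopped" entries lining up with the times the portal showed 100% quota, the cap in Stop mode is confirmed as the cause of the data loss.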

Ah, so I should be using a log query alert instead of a metric alert? What would the KQL query look like to properly calculate the ingestion volume against our daily cap? And how do I ensure the alert evaluates frequently enough to catch capacity issues before we hit 100%?

Here’s a KQL query I use for this exact purpose:


Usage
| where TimeGenerated > ago(24h)
| summarize DataVolume = sum(Quantity) / 1024     // Quantity is reported in MB; /1024 converts to GB
| extend PercentOfCap = (DataVolume / 5.0) * 100  // 5.0 = your daily cap in GB
| where PercentOfCap > 90

This calculates total data ingestion in GB over the last 24 hours and compares it to your 5GB cap. Set the alert to evaluate every 15 minutes with a lookback period of 24 hours. Make sure your action group’s notification preferences are set correctly - sometimes email notifications get filtered as spam or SMS messages fail silently due to carrier issues.
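When the cap is being approached, it also helps to know which tables are driving the volume so you can tune collection instead of just raising the cap. A variant of the same query, broken down by data type:

```kql
// Rank data types by billable volume over the last 24 hours
Usage
| where TimeGenerated > ago(24h)
| where IsBillable == true
| summarize GB = sum(Quantity) / 1024 by DataType
| order by GB desc
```

Typically one or two data types (often SecurityEvent or Perf) account for most of the ingestion.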

Let me provide a comprehensive solution covering all three key areas: alert rule configuration, the correct Usage metric query, and proper action group setup.

Alert Rule Configuration:

You need to create a log query alert rule, not a metric alert. In the Azure portal, go to your Log Analytics workspace > Alerts > New alert rule. The critical difference is that workspace capacity monitoring requires querying the Usage table directly.

For the alert query, use this KQL:


Usage
| where TimeGenerated > ago(1h)
| where IsBillable == true                     // only data that counts toward the cap
| summarize TotalGB = sum(Quantity) / 1024     // Quantity is reported in MB
| extend DailyCap = 5.0                        // daily cap in GB
| extend PercentUsed = (TotalGB / DailyCap) * 100
| where PercentUsed > 90

This query measures billable ingestion over the last hour against your daily cap, so it fires only when a single hour of ingestion exceeds 90% of the 5 GB cap - in practice a burst detector rather than a cumulative gauge. The key detail is filtering on IsBillable == true to exclude free data types from the calculation. For tracking total progress toward the cap across the day, use the cumulative since-midnight query shown further down.

Alert Rule Settings:

  • Measurement: Table rows (not metric measurement)
  • Aggregation granularity (Period): 1 hour
  • Frequency of evaluation: Every 15 minutes
  • Threshold: Greater than 0 (because the query filters for >90% in the KQL itself)
  • Severity: Sev 1 (Critical)

Understanding the Usage Metric:

The Usage table updates approximately every hour, which is why your alert rule needs appropriate timing. The table contains these key columns:

  • DataType: The type of data ingested (SecurityEvent, Perf, Syslog, etc.)
  • Quantity: Volume in MB
  • IsBillable: Boolean indicating if this data counts toward your cap
  • TimeGenerated: When the usage record was created
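To see these columns for yourself, a quick inspection query (purely illustrative):

```kql
// Inspect recent Usage records to confirm the schema
Usage
| where TimeGenerated > ago(1h)
| project TimeGenerated, DataType, Quantity, IsBillable
| take 10
```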

The workspace’s daily cap resets once per day at a fixed hour (shown on the Daily cap page in the portal; commonly around midnight UTC), so your alert should check cumulative ingestion since the start of the day:


Usage
| where TimeGenerated > startofday(now())    // cumulative since the start of the UTC day
| where IsBillable == true
| summarize TotalGB = sum(Quantity) / 1024   // Quantity is in MB
| extend PercentOfDailyCap = (TotalGB / 5.0) * 100
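You can extend that cumulative query to project whether today's run rate will blow through the cap before the reset, which gives earlier warning than a static 90% threshold. A sketch (the 5 GB cap is taken from the thread; the linear projection is a rough estimate, not a forecast):

```kql
// Project today's total ingestion from the rate observed so far (UTC day)
let DailyCapGB = 5.0;
Usage
| where TimeGenerated > startofday(now())
| where IsBillable == true
| summarize TotalGB = sum(Quantity) / 1024                  // Quantity is in MB
| extend HoursElapsed = (now() - startofday(now())) / 1h    // timespan / 1h yields a real number
| extend ProjectedGB = TotalGB / HoursElapsed * 24
| extend ProjectedPercentOfCap = (ProjectedGB / DailyCapGB) * 100
```

Alerting on ProjectedPercentOfCap > 100 catches a sustained spike hours before the cumulative total actually crosses 90%.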

Action Group Setup:

Verify your action group configuration:

  1. Action group name should be descriptive (e.g., “LogAnalytics-Capacity-Alerts”)
  2. Add multiple notification channels: Email, SMS, and consider adding a webhook to a ticketing system
  3. Test the action group using the “Test action group” feature in the portal - this sends test notifications through all configured channels
  4. Check that email addresses are verified and SMS numbers include country codes
  5. Review the Alert History under the action group to see if notifications were attempted but failed

Common Issues:

  1. Alert Suppression: If your workspace briefly exceeds the threshold and then drops back below it, a stateful alert can fire and auto-resolve almost immediately. Review the rule’s “Automatically resolve alerts” setting, and if repeated firings are flooding your action group, use the “Mute actions” option to space out notifications.

  2. Workspace Daily Cap Mode: Verify Settings > Usage and estimated costs > Daily cap is set to the correct value (5GB in your case) and that “Alert only” is selected, not “Stop ingestion when daily cap is reached.”

  3. Action Group Rate Limiting: Azure limits SMS to 1 per 5 minutes per phone number. If alerts fire frequently, notifications get throttled. Add email notifications as backup.

  4. RBAC Permissions: Log query alerts run with the permissions of the identity that created the rule, so make sure that identity can read the workspace (at least Log Analytics Reader) in addition to having the monitoring rights needed to create alert rules. A creator who loses query access can cause the rule to fail without an obvious error.

Monitoring Alert Effectiveness:

After implementing these changes, monitor the alert rule’s fire history:

  • Go to Alerts > Alert rules > [Your rule] > History
  • Verify “Fired” events appear when workspace ingestion exceeds 90%
  • Check action group execution logs to confirm notifications were sent
  • Review the Alert Processing Rules section to ensure no suppression rules are interfering
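If you route the subscription’s Activity Log into this workspace via a diagnostic setting, you can also check fired-alert records with KQL. A sketch assuming the standard AzureActivity table (this only returns rows if Activity Log export to the workspace is enabled):

```kql
// Requires Activity Log export to this workspace via a diagnostic setting
AzureActivity
| where TimeGenerated > ago(7d)
| where CategoryValue == "Alert"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller
| order by TimeGenerated desc
```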

Implementing these configurations with the correct KQL query, appropriate evaluation frequency, and verified action group setup should resolve your alerting issues and provide early warning before hitting the daily cap.

The issue is likely how you’re referencing the Usage metric in your alert rule. Log Analytics workspace capacity isn’t monitored through a simple “Usage” metric percentage. You need to query the Usage table in the workspace itself using a log query alert, not a metric alert. The Usage table contains records of data ingestion volume per data type. Your alert rule should be based on a KQL query that calculates total ingestion against your daily cap, not a metric threshold.

One more thing to check - your action group might be hitting rate limiting. Azure applies rate limits to action groups: max 1 SMS per 5 minutes, max 1 voice call per 5 minutes, and max 100 emails per hour per action group. If your workspace is generating multiple alert instances rapidly (which can happen with log query alerts that evaluate frequently), subsequent notifications get throttled. Check the action group’s Alert History to see if notifications are being suppressed due to rate limiting. You might need to adjust your alert rule’s evaluation frequency or aggregation granularity.