Custom dashboard loads slowly in app enablement module when querying device telemetry

Our custom dashboard in the app enablement module is experiencing significant performance issues when loading device telemetry data. Query execution times range from 45-90 seconds for dashboards showing the last 30 days of data across 8,000 devices.

The current BigQuery query scans the entire telemetry table:

SELECT device_id, AVG(temperature) AS avg_temp, timestamp
FROM `project.dataset.device_telemetry`
WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY device_id, timestamp

We’re looking for strategies to optimize BigQuery query performance, implement effective dashboard caching, and partition our tables properly. The slow load times are hurting decision-making for our operations team, which relies on near real-time visibility.

Partition by DAY for cost optimization - hourly partitioning creates too many partitions and increases metadata overhead. For caching, implement a two-tier strategy: use BigQuery’s cached results (24hr TTL by default) for unchanged queries, and add Redis or Memorystore for aggregated metrics that update every 5-15 minutes. Your dashboard should fetch pre-aggregated data, not raw telemetry.
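A minimal sketch of that last point, assuming a pre-aggregated hourly table (the `hourly_device_metrics` name is hypothetical here):

```sql
-- Hypothetical hourly rollup: one row per device per hour instead of
-- one per 5-minute reading, so the dashboard scans ~12x fewer rows.
SELECT device_id, AVG(avg_temp) AS avg_temp_30d
FROM `project.dataset.hourly_device_metrics`
WHERE hour >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY device_id
-- Note: an average of hourly averages only equals the raw average when
-- every hour has the same row count; carry SUM and COUNT if that matters.
```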

Here’s a comprehensive optimization strategy addressing all three focus areas:

BigQuery Query Optimization:

First, recreate your table with proper partitioning and clustering:

CREATE TABLE `project.dataset.device_telemetry_optimized`
PARTITION BY DATE(timestamp)
CLUSTER BY device_id, location
AS SELECT * FROM `project.dataset.device_telemetry`

Rewrite your query to leverage partitioning and avoid grouping by timestamp:

SELECT device_id, AVG(temperature) AS avg_temp, DATE(timestamp) AS date
FROM `project.dataset.device_telemetry_optimized`
WHERE DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY device_id, date

This reduces data scanned by 95%+ and eliminates row explosion from timestamp grouping.

Table Partitioning:

Partition by DAY (not HOUR) to balance query performance and metadata overhead. With 8,000 devices sending data every 5 minutes, that’s 288 readings per device per day, or ~2.3M rows per day. Daily partitions keep partition size manageable while enabling efficient pruning.

Set partition expiration to auto-delete old data:

ALTER TABLE `project.dataset.device_telemetry_optimized`
SET OPTIONS (partition_expiration_days = 365)

Cluster by device_id and a secondary dimension (location/device_type) to optimize filtering and aggregation.

Dashboard Caching Strategies:

Implement three-tier caching:

  1. BigQuery Result Cache: Results are cached automatically for roughly 24 hours. To maximize cache hits, keep queries deterministic: avoid non-deterministic functions like CURRENT_TIMESTAMP() anywhere in the query (not just the SELECT list).

  2. Materialized Views: Create them for your most common aggregations:

CREATE MATERIALIZED VIEW `project.dataset.hourly_device_metrics` AS
SELECT device_id, TIMESTAMP_TRUNC(timestamp, HOUR) AS hour,
       AVG(temperature) AS avg_temp, MAX(temperature) AS max_temp
FROM `project.dataset.device_telemetry_optimized`
GROUP BY device_id, hour

  3. Application Cache: Use Cloud Memorystore (Redis) to cache dashboard API responses for 5-10 minutes. Invalidate on data updates.
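For the first tier, one way to keep the query text deterministic is to compute the window boundary in the application and bind it as a query parameter (a sketch; `@start_date` is supplied by the client and only changes once per day, so the cached result should stay valid between refreshes):

```sql
-- Identical query text on every refresh -> eligible for the result cache.
-- @start_date is bound by the client, e.g. today minus 30 days.
SELECT device_id, AVG(temperature) AS avg_temp, DATE(timestamp) AS date
FROM `project.dataset.device_telemetry_optimized`
WHERE DATE(timestamp) >= @start_date
GROUP BY device_id, date
```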

Additional Optimizations:

  • Enable BI Engine (10GB capacity minimum) for sub-second dashboard queries
  • Use approximate aggregation functions (APPROX_COUNT_DISTINCT) where exact precision isn’t critical
  • Implement incremental data loading instead of full refreshes
  • Monitor query costs via Cloud Monitoring and set up slot reservation for predictable performance
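Incremental loading could look like the following sketch, assuming a daily rollup table (the `daily_device_metrics` name is hypothetical) that is topped up once per day instead of being rebuilt from 30 days of raw data:

```sql
-- Re-aggregate only yesterday's partition and upsert it into the rollup.
MERGE `project.dataset.daily_device_metrics` AS t
USING (
  SELECT device_id, DATE(timestamp) AS date, AVG(temperature) AS avg_temp
  FROM `project.dataset.device_telemetry_optimized`
  WHERE DATE(timestamp) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
  GROUP BY device_id, date
) AS s
ON t.device_id = s.device_id AND t.date = s.date
WHEN MATCHED THEN
  UPDATE SET avg_temp = s.avg_temp
WHEN NOT MATCHED THEN
  INSERT (device_id, date, avg_temp) VALUES (s.device_id, s.date, s.avg_temp)
```

A scheduled query can run this shortly after midnight, so each run scans only one day’s partition.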

These changes reduced our dashboard load time from 70s to 1.8s and cut BigQuery costs by 87%. Data scanned dropped from 2.4TB per query to 45GB.

Consider using BI Engine for sub-second query performance on frequently accessed data. It’s an in-memory analysis service that works seamlessly with BigQuery. Allocate 10-20GB of BI Engine capacity for your dashboard dataset and you’ll see dramatic improvements without code changes.

We implemented materialized views for common dashboard queries. BigQuery automatically refreshes them, and query times dropped from 60+ seconds to under 2 seconds. Create views for your most frequent aggregations - hourly averages, daily summaries, etc. The refresh lag is acceptable for operational dashboards.

Your table isn’t partitioned, which means every query scans the entire dataset. Partition by timestamp immediately. This will dramatically reduce the amount of data scanned. Also, you’re grouping by both device_id AND timestamp, which creates millions of rows. Consider pre-aggregating data hourly or daily depending on your dashboard granularity needs.

Good point on partitioning. Should we partition by DAY or HOUR? Our telemetry comes in every 5 minutes per device. Also, how do we handle the caching layer? We’re currently hitting BigQuery on every dashboard refresh.