We’re experiencing consistent 504 gateway timeouts on our Birst embedded analytics dashboards when users query datasets exceeding 500K records. The dashboards load fine with smaller datasets, but anything over that threshold fails after exactly 60 seconds. We’ve noticed that the ION data aggregation runs are completing successfully, but the query performance through CloudFront is problematic. Our API Gateway resource allocation seems adequate based on CloudWatch metrics, but I’m wondering if there’s a query optimization setting we’re missing.
HTTP/1.1 504 Gateway Timeout
X-Cache: Error from cloudfront
X-Amz-Cf-Pop: IAD89-C1
Has anyone dealt with Birst query timeouts in CloudSuite 2021? We need these dashboards operational for month-end reporting.
I’ve seen this before. The 60-second timeout is likely hitting the default API Gateway timeout limit. Check your Birst space settings for query timeout configurations. Also, are you using any pre-aggregated tables or are you querying raw transaction data directly? That makes a huge difference with datasets that size.
Thanks for the response. We’re querying mostly raw transaction data with some joins to dimension tables. The Birst space query timeout is set to 300 seconds, so that’s not the bottleneck. I checked the CloudFront behaviors and the origin request timeout is set to 60 seconds, so that’s probably our culprit. Should we increase that, or look at optimizing the queries themselves?
Check whether your Birst queries are using proper indexing. Also, verify that your ION data aggregation jobs are actually creating the materialized views correctly. Sometimes the aggregation completes but doesn’t refresh the Birst cache; you can force a cache refresh through the Birst admin console. One more thing: are you filtering data at the query level, or trying to load everything and filter in the dashboard? That’s a common mistake.
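To make the query-level-filtering point concrete, here is a minimal sketch of the idea: build the filter into the SQL sent to the source so only matching rows ever cross the wire, instead of pulling the full table and filtering in the dashboard. The function name and column names (`build_query`, `txn_date`) are illustrative, not Birst APIs.

```python
def build_query(table, date_from=None, date_to=None):
    """Push date filters into the SQL sent to the source.

    Filtering here means the database returns only the needed rows,
    rather than shipping 500K+ records for the dashboard to discard.
    """
    sql = f"SELECT * FROM {table}"
    clauses = []
    if date_from:
        clauses.append(f"txn_date >= '{date_from}'")
    if date_to:
        clauses.append(f"txn_date <= '{date_to}'")
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql
```

In production you would bind these values as parameters rather than interpolating strings, but the structural point is the same: the predicate belongs in the query, not in the dashboard.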
I’ve worked with several CloudSuite 2021 implementations facing similar Birst timeout issues. Here’s a comprehensive solution addressing all the performance factors:
Birst Query Optimization:
First, implement query-level optimizations in your Birst spaces. Create custom SQL subjects that pre-filter data at the source rather than loading entire tables. Enable query caching in Birst admin settings and set cache expiration to match your data refresh frequency. For your 500K+ record datasets, create aggregated tables that roll up transaction data by day/week/month depending on reporting granularity needs.
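As a sketch of what the roll-up step does, here is a plain-Python daily aggregation: it collapses raw transaction rows into one row per (day, entity), which is what the dashboard queries instead of the raw table. The field names (`txn_date`, `entity_id`, `amount`) are assumptions for illustration; in practice this would run in the warehouse as part of the aggregation job, not in application code.

```python
from collections import defaultdict

def rollup_daily(transactions):
    """Roll raw transaction rows up to one row per (day, entity).

    A 500K-row transaction table typically collapses to a few thousand
    daily rows, which is what dashboards should query.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for row in transactions:
        key = (row["txn_date"], row["entity_id"])
        totals[key] += row["amount"]
        counts[key] += 1
    return [
        {"txn_date": d, "entity_id": e,
         "total": totals[(d, e)], "txn_count": counts[(d, e)]}
        for (d, e) in totals
    ]
```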
ION Data Aggregation:
Modify your ION workflows to create materialized views during the data load process. Set up scheduled aggregation jobs that run during off-peak hours (typically 2-4 AM). Configure incremental loads rather than full refreshes; this dramatically reduces processing time and keeps your aggregated data current without overwhelming the system.
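The incremental-load pattern boils down to a high-water mark: each run extracts only rows modified since the previous run and advances the watermark. A minimal sketch, with an assumed `updated_at` field standing in for whatever change-tracking column your source exposes:

```python
def incremental_extract(rows, last_watermark):
    """Select only rows changed since the previous load's high-water mark.

    Returns the delta plus the new watermark to persist for the next run.
    ISO-8601 timestamp strings compare correctly lexicographically.
    """
    new_rows = [r for r in rows if r["updated_at"] > last_watermark]
    new_watermark = max(
        (r["updated_at"] for r in new_rows), default=last_watermark
    )
    return new_rows, new_watermark
```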
AWS CloudFront Caching:
Increase your CloudFront origin request timeout from 60s to 180s (note that values above 60 seconds require a CloudFront service quota increase), but more importantly, configure CloudFront to cache Birst query results. Add cache behaviors for your Birst endpoints with TTL settings matching your data refresh schedule. This prevents repeated execution of identical queries. Configure query string forwarding selectively: only forward parameters that actually change results.
# CloudFront Behavior Settings
Origin Request Timeout: 180
Origin Response Timeout: 180
Cache Based on Query Strings: Whitelist
Query String Whitelist: date_from, date_to, entity_id
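The settings above can be expressed as a fragment of a CloudFront `DistributionConfig` in the shape boto3 expects. This is a sketch only: the path pattern, origin ID, and TTL are placeholders to adapt, and the origin timeouts live separately in the origin's `CustomOriginConfig`.

```python
# Hypothetical cache-behavior fragment for Birst endpoints, in the
# boto3 CloudFront DistributionConfig shape. Placeholders: PathPattern,
# TargetOriginId, DefaultTTL.
birst_behavior = {
    "PathPattern": "/birst/*",           # placeholder path for Birst endpoints
    "TargetOriginId": "birst-origin",    # placeholder origin id
    "ViewerProtocolPolicy": "https-only",
    "MinTTL": 0,
    "DefaultTTL": 300,                   # match your data refresh schedule
    "ForwardedValues": {
        "QueryString": True,
        "QueryStringCacheKeys": {        # forward only result-changing params
            "Quantity": 3,
            "Items": ["date_from", "date_to", "entity_id"],
        },
        "Cookies": {"Forward": "none"},
    },
}

# Origin timeouts are configured on the origin, not the behavior.
origin_timeouts = {
    "OriginReadTimeout": 180,       # raised from the 60s limit (needs quota increase)
    "OriginKeepaliveTimeout": 60,
}
```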
API Gateway Resource Allocation:
Even though CloudWatch shows adequate resources, increase your API Gateway throttle limits specifically for Birst endpoints. Set burst limit to 1000 and rate limit to 500 requests per second. Enable API caching at the gateway level with a 5-minute TTL. Configure stage-level cache settings to reduce backend calls.
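For reference, these stage settings map onto API Gateway's `update_stage` patch operations (the `/*/*/...` paths apply a setting to all methods in the stage). A sketch of the patch document, with the burst/rate/TTL values from above; the REST API ID and stage name you would pass alongside it are placeholders:

```python
# Hypothetical patch operations for an API Gateway stage, in the shape
# accepted by apigateway update_stage. Values are strings per the API.
throttle_and_cache_patch = [
    {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "1000"},
    {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "500"},
    {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
    {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},  # 5-minute TTL
]
```

Note that enabling the stage cache cluster provisions a dedicated cache instance, which carries an hourly cost independent of traffic.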
Implement these changes in sequence: start with Birst query optimization and ION aggregation (biggest impact), then adjust CloudFront settings, and finally fine-tune API Gateway if needed. This approach reduced our largest dashboard load times from 90+ seconds to 8-12 seconds consistently.