Performance analysis API response time degrades significantly

Our real-time operations dashboard is experiencing severe performance degradation when requesting performance data through the API. When we query multiple KPIs (OEE, throughput, quality rate, downtime) for a 30-day period across 15 production lines, API response times have grown from 3-4 seconds to 25-30 seconds over the past month.

The API call structure:


GET /api/performance/metrics?lines=L1,L2,...,L15&kpis=oee,throughput,quality,downtime&startDate=2025-01-01&endDate=2025-01-30

We’re making sequential calls for each dashboard widget, which means users wait 2+ minutes for the full dashboard to load. The database doesn’t seem to be the bottleneck - our DBA says query execution is under 2 seconds. I’m wondering if parallel API requests would help, or if we need to limit the date range. Has anyone dealt with KPI data aggregation performance issues?

Good suggestions. I checked with our admin team and we don’t have pre-aggregated tables enabled. That’s probably a big part of the issue. For parallel requests, should we be concerned about overwhelming the API server? We have about 50 concurrent dashboard users during peak hours.

Also consider implementing progressive loading in your dashboard. Load critical KPIs first (maybe just OEE for current shift), then lazy-load historical trends and secondary metrics. This gives users immediate value while background requests complete. We reduced perceived load time from 2 minutes to 5 seconds using this approach, even though total data load time stayed similar.

Let me provide a comprehensive solution addressing all the performance issues you’re experiencing.

1. Implement Parallel API Requests

Your current sequential approach is the biggest performance killer. Instead of waiting for each KPI request to complete before starting the next, fire all requests simultaneously:

// Fire all four KPI requests simultaneously and parse each JSON body
const requests = [
  fetch('/api/performance/metrics?kpi=oee&...'),
  fetch('/api/performance/metrics?kpi=throughput&...'),
  fetch('/api/performance/metrics?kpi=quality&...'),
  fetch('/api/performance/metrics?kpi=downtime&...')
].map(req => req.then(res => res.json()));

Promise.all(requests).then(results => updateDashboard(results));

This reduces your dashboard load time from 120 seconds (4 × 30s sequential) to approximately 30 seconds (parallel execution). The API server can easily handle 4 concurrent requests per user - it’s designed for this.
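If you're worried about 50 peak-hour users each firing several requests at once, you can also cap client-side concurrency rather than going fully unbounded. Here is a minimal sketch of a concurrency limiter (names are illustrative): `tasks` is an array of functions that each return a promise, e.g. `() => fetch(...)`:

```javascript
// Run async tasks, but never more than `limit` at once, so 50 dashboard
// users firing several KPI requests each won't flood the API server.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index synchronously
      results[i] = await tasks[i]();
    }
  }
  // Start `limit` workers that cooperatively drain the task queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

With `limit` set to 3-4 per client, the dashboard still loads nearly as fast while keeping each user's burst against the server bounded.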

2. Limit the Date Range

A 30-day range is excessive for real-time operational dashboards. Implement tiered date ranges based on dashboard context:

  • Real-time view: Current shift only (8-12 hours) - loads in 1-2 seconds
  • Daily operations: Last 7 days - loads in 3-5 seconds
  • Weekly review: Last 30 days - loads in 8-12 seconds
  • Historical analysis: Custom range with date picker - user expects longer load time

Default to the 7-day view for most users. This reduces data volume by 75% while still providing actionable insights. Users can expand to 30 days when needed, but won’t wait for that data on every dashboard load.
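The tiers above can be captured in a small helper so every widget requests a consistent window. The view names and ISO date formatting here are assumptions to adapt to your actual API:

```javascript
// Map a dashboard context to an API date range (illustrative view names):
// 'shift' = last 12 hours, 'daily' = last 7 days, 'weekly' = last 30 days.
function dateRangeFor(view, now = new Date()) {
  const days = { shift: 0.5, daily: 7, weekly: 30 }[view];
  if (days === undefined) throw new Error(`unknown view: ${view}`);
  const start = new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
  return {
    startDate: start.toISOString().slice(0, 10), // YYYY-MM-DD
    endDate: now.toISOString().slice(0, 10),
  };
}
```

A widget then defaults to `dateRangeFor('daily')` and only passes a wider range when the user explicitly expands the view.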

3. Enable KPI Data Aggregation

Your Apriso instance needs pre-aggregated performance tables enabled. This is critical for performance at scale:

Configuration steps:

  1. Navigate to System Administration > Performance Analysis Configuration
  2. Enable “Hourly Aggregation” and “Daily Aggregation” options
  3. Set aggregation schedule to run every hour for hourly data, midnight for daily data
  4. Allow 24-48 hours for initial backfill of historical aggregates
  5. Verify aggregation jobs are completing successfully in system logs

Once enabled, queries for date ranges beyond 24 hours will automatically use aggregated tables instead of raw event data. This typically reduces query time by 80-90% for multi-day ranges. The API automatically routes to aggregated tables when appropriate - no code changes needed.

4. Database Indexing Optimization

Even with aggregation enabled, proper indexing is essential. Work with your DBA to verify these indexes exist:

Critical indexes for performance queries:

  • Composite index: (production_line_id, metric_timestamp, kpi_type, shift_id)
  • Composite index: (metric_timestamp, production_line_id) for time-range filters
  • Index on (aggregation_level, metric_timestamp) for aggregated tables

Run execution plans on your typical queries. You should see “Index Seek” operations, not “Table Scan”. If you see table scans, the indexes aren’t being used effectively and need adjustment.

5. Additional Performance Optimizations

Client-side caching: Cache API responses for 2-5 minutes depending on data freshness requirements. Most KPI data doesn’t need real-time updates every second. Implement a refresh button for users who want immediate updates.
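A sketch of such a cache, keyed by request URL with a time-to-live, assuming a browser-style `fetch`; the helper names are illustrative:

```javascript
// Tiny client-side cache for API responses, keyed by URL with a TTL.
const cache = new Map();

async function cachedFetchJson(url, ttlMs = 3 * 60 * 1000, fetchFn = fetch) {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.at < ttlMs) return hit.data; // still fresh
  const data = await fetchFn(url).then(res => res.json());
  cache.set(url, { at: Date.now(), data });
  return data;
}

// The refresh button just evicts the entry before re-fetching.
function forceRefresh(url) {
  cache.delete(url);
}
```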

Progressive loading: Load critical metrics first, then secondary data:


// Progressive dashboard loading (helper names below are illustrative)
async function loadDashboard() {
  // Priority 1: current-shift OEE for all lines, rendered immediately
  renderOeeWidgets(await fetchMetrics('oee', 'shift'));
  // Priority 2: throughput and quality load in parallel; the dashboard
  // updates as each metric completes
  fetchMetrics('throughput', '7d').then(renderThroughputWidgets);
  fetchMetrics('quality', '7d').then(renderQualityWidgets);
  // Priority 3: historical trend data loads last; show loading spinners
  // only for the sections still in flight
  fetchMetrics('trends', '30d').then(renderTrendCharts);
}
// Users see actionable data within 3-5 seconds

Request batching: Instead of separate requests per KPI, batch related metrics:


GET /api/performance/metrics?lines=L1-L15&kpis=oee,throughput,quality&range=7d

Single request with multiple KPIs reduces HTTP overhead and allows server-side optimization.
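As a sketch, one batched call can then be fanned out to the individual widgets client-side. This assumes the batched endpoint returns an object keyed by KPI name; verify the actual response shape against your API:

```javascript
// One batched request feeds several widgets. Assumed response shape:
// { oee: [...], throughput: [...], quality: [...] }
async function loadBatched(fetchFn = fetch) {
  const res = await fetchFn(
    '/api/performance/metrics?lines=L1-L15&kpis=oee,throughput,quality&range=7d'
  );
  const byKpi = await res.json();
  // Split the single payload into one entry per dashboard widget.
  return ['oee', 'throughput', 'quality'].map(kpi => ({
    kpi,
    points: byKpi[kpi] ?? [],
  }));
}
```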

Response compression: Ensure GZIP compression is enabled on API responses. Performance data compresses extremely well (often 80-90% reduction) since it’s mostly numeric arrays.

6. Monitoring and Alerting

Implement performance monitoring to catch degradation early:

  • Track API response times per endpoint (alert if >10s)
  • Monitor database query execution times (alert if >5s)
  • Track aggregation job completion (alert on failures)
  • Log slow queries for analysis (anything >3s)
  • Set up dashboard load time tracking in your frontend (alert if >15s)
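Frontend response-time tracking can start as a thin wrapper around `fetch`. The 10-second threshold matches the alert above; `console.warn` stands in for whatever real alerting hook you use:

```javascript
// Time every API call and flag the ones over the alert threshold.
const SLOW_MS = 10000;

async function timedFetch(url, fetchFn = fetch, nowFn = Date.now) {
  const started = nowFn();
  try {
    return await fetchFn(url);
  } finally {
    const elapsed = nowFn() - started;
    if (elapsed > SLOW_MS) {
      console.warn(`slow API call (${elapsed} ms): ${url}`);
    }
  }
}
```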

7. Expected Results

After implementing these optimizations:

  • Dashboard load time: 5-8 seconds (down from 120s)
  • API response time per request: 2-4 seconds (down from 25-30s)
  • Concurrent user capacity: 100+ simultaneous users without degradation
  • Database load: 60-70% reduction in query volume

The combination of parallel requests, date range optimization, pre-aggregation, and proper indexing will transform your dashboard performance. Prioritize enabling aggregation first (biggest impact), then implement parallel requests (quick win), then optimize indexing (sustained performance).

The problem is you’re requesting 30 days of raw data for 15 lines across 4 KPIs - that’s potentially millions of data points. The API has to aggregate all that data in real-time. Check if your Apriso instance has pre-aggregated performance tables enabled. These roll up hourly/daily metrics and can dramatically speed up queries for longer date ranges.

One thing people often miss - check your database indexing on the performance metrics tables. Even with aggregation enabled, you need proper indexes on (line_id, timestamp, kpi_type) for fast filtering. We had similar slowness issues that persisted even after enabling aggregation until we added the right composite indexes. Run an EXPLAIN PLAN on your queries to see if you’re getting index seeks or table scans.