Let me provide a comprehensive solution addressing all the performance issues you’re experiencing.
1. Implement Parallel API Requests
Your current sequential approach is the biggest performance killer. Instead of waiting for each KPI request to complete before starting the next, fire all requests simultaneously:
const requests = [
fetch('/api/performance/metrics?kpi=oee&...'),
fetch('/api/performance/metrics?kpi=throughput&...'),
fetch('/api/performance/metrics?kpi=quality&...'),
fetch('/api/performance/metrics?kpi=downtime&...')
];
Promise.all(requests).then(results => updateDashboard(results));
This reduces your dashboard load time from 120 seconds (4 × 30s sequential) to approximately 30 seconds (parallel execution). The API server can easily handle 4 concurrent requests per user - it’s designed for this.
2. Optimize Date Range Limitation
A 30-day range is excessive for real-time operational dashboards. Implement tiered date ranges based on dashboard context:
- Real-time view: Current shift only (8-12 hours) - loads in 1-2 seconds
- Daily operations: Last 7 days - loads in 3-5 seconds
- Weekly review: Last 30 days - loads in 8-12 seconds
- Historical analysis: Custom range with date picker - user expects longer load time
Default to the 7-day view for most users. This reduces data volume by 75% while still providing actionable insights. Users can expand to 30 days when needed, but won’t wait for that data on every dashboard load.
3. Enable KPI Data Aggregation
Your Apriso instance needs pre-aggregated performance tables enabled. This is critical for performance at scale:
Configuration steps:
- Navigate to System Administration > Performance Analysis Configuration
- Enable “Hourly Aggregation” and “Daily Aggregation” options
- Set aggregation schedule to run every hour for hourly data, midnight for daily data
- Allow 24-48 hours for initial backfill of historical aggregates
- Verify aggregation jobs are completing successfully in system logs
Once enabled, queries for date ranges beyond 24 hours will automatically use aggregated tables instead of raw event data. This typically reduces query time by 80-90% for multi-day ranges. The API automatically routes to aggregated tables when appropriate - no code changes needed.
4. Database Indexing Optimization
Even with aggregation enabled, proper indexing is essential. Work with your DBA to verify these indexes exist:
Critical indexes for performance queries:
- Composite index: (production_line_id, metric_timestamp, kpi_type, shift_id)
- Composite index: (metric_timestamp, production_line_id) for time-range filters
- Index on (aggregation_level, metric_timestamp) for aggregated tables
Run execution plans on your typical queries. You should see “Index Seek” operations, not “Table Scan”. If you see table scans, the indexes aren’t being used effectively and need adjustment.
5. Additional Performance Optimizations
Client-side caching: Cache API responses for 2-5 minutes depending on data freshness requirements. Most KPI data doesn’t need real-time updates every second. Implement a refresh button for users who want immediate updates.
Progressive loading: Load critical metrics first, then secondary data:
// Pseudocode - Progressive dashboard loading:
1. Immediately load current shift OEE for all lines (priority 1)
2. Display OEE widgets while background loads continue
3. Parallel load throughput and quality metrics (priority 2)
4. Update dashboard as each metric completes
5. Finally load historical trend data (priority 3)
6. Show loading spinners only for incomplete sections
// Users see actionable data within 3-5 seconds
Request batching: Instead of separate requests per KPI, batch related metrics:
GET /api/performance/metrics?lines=L1-L15&kpis=oee,throughput,quality&range=7d
Single request with multiple KPIs reduces HTTP overhead and allows server-side optimization.
Response compression: Ensure GZIP compression is enabled on API responses. Performance data compresses extremely well (often 80-90% reduction) since it’s mostly numeric arrays.
6. Monitoring and Alerting
Implement performance monitoring to catch degradation early:
- Track API response times per endpoint (alert if >10s)
- Monitor database query execution times (alert if >5s)
- Track aggregation job completion (alert on failures)
- Log slow queries for analysis (anything >3s)
- Set up dashboard load time tracking in your frontend (alert if >15s)
7. Expected Results
After implementing these optimizations:
- Dashboard load time: 5-8 seconds (down from 120s)
- API response time per request: 2-4 seconds (down from 25-30s)
- Concurrent user capacity: 100+ simultaneous users without degradation
- Database load: 60-70% reduction in query volume
The combination of parallel requests, date range optimization, pre-aggregation, and proper indexing will transform your dashboard performance. Prioritize enabling aggregation first (biggest impact), then implement parallel requests (quick win), then optimize indexing (sustained performance).