We’re experiencing significant dashboard performance issues in our production environment running HM 2022.2. Every morning when the shift supervisor opens the KPI dashboards around 6 AM, they take 30-45 seconds to load, sometimes timing out completely.
The pattern is consistent - this only happens after our nightly batch jobs complete around 5 AM. These jobs aggregate production data, calculate efficiency metrics, and update material consumption records. The dashboards include real-time production counts, OEE calculations, quality metrics, and downtime analysis.
I suspect there’s a connection between the batch job scheduling and how the dashboard queries are hitting the database. The cache seems to be cleared or invalidated somehow. During the day, dashboard performance is acceptable (3-5 second load times), but that first morning access is painful.
Has anyone dealt with similar dashboard lag after batch processing? We need our supervisors to have immediate access to overnight production data.
The dashboard configuration in HM 2022.2 has a known issue with cache invalidation after batch operations. Check your dashboard refresh settings - there’s a parameter that controls whether the cache persists across batch job completion. We typically set the cache retention policy to ‘persistent’ rather than ‘auto-refresh’ for overnight scenarios. This prevents the automatic cache clear that’s triggered by the batch job completion event.
This is a multi-layered optimization problem that requires addressing batch job scheduling, database query optimization, and dashboard cache management systematically.
Batch Job Scheduling Optimization:
First, restructure your batch job sequence to separate data aggregation from database maintenance. Move your statistics update to run at 4 AM, before the aggregation jobs start, not after them. This ensures your queries benefit from updated statistics during aggregation, and the cache built during batch processing remains valid for morning dashboard access. Configure your batch scheduler to include a cache warm-up phase as the final step: execute the top 10 most-used dashboard queries, staggered across a 15-minute window, to distribute the load.
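The warm-up phase can be sketched roughly like this, assuming the dashboard queries can be executed over a standard DB-API connection. The toy SQLite schema, table name, and query list below are placeholders for your real KPI queries, not anything from HM itself:

```python
import sqlite3
import time

def warm_cache(conn, queries, window_seconds=15 * 60):
    """Run each dashboard query once, staggered evenly across the
    window so the warm-up load is spread out rather than spiked."""
    delay = window_seconds / max(len(queries), 1)
    rows_touched = 0
    for sql in queries:
        rows_touched += len(conn.execute(sql).fetchall())
        time.sleep(delay)  # stagger between queries; 0 for testing
    return rows_touched

# Demo with a toy in-memory database standing in for the real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production_counts (line_id INTEGER, qty INTEGER)")
conn.executemany("INSERT INTO production_counts VALUES (?, ?)",
                 [(1, 100), (2, 150)])
queries = [
    "SELECT line_id, SUM(qty) FROM production_counts GROUP BY line_id",
    "SELECT COUNT(*) FROM production_counts",
]
print(warm_cache(conn, queries, window_seconds=0))  # -> 3
```

Scheduling this as the final batch step (rather than a separate cron entry) keeps it from ever running against half-aggregated data.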
Database Query Optimization:
Analyze your KPI dashboard queries using the Performance Analysis module. Focus on three areas: eliminate table scans on your aggregated fact tables by adding filtered indexes on date ranges and production line dimensions, implement query hints to force index usage on your OEE calculation queries, and partition your historical data tables by shift or day to reduce the scan volume. Your 30-45 second load times suggest you’re scanning millions of rows unnecessarily.
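As a rough illustration of the filtered-index idea, here is a SQLite partial index standing in for a SQL Server-style filtered index. The `fact_production` schema, column names, and date cutoff are invented for the example; adapt them to your actual aggregation tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fact_production (
    production_date TEXT, line_id INTEGER, shift_id INTEGER, qty INTEGER)""")
conn.executemany(
    "INSERT INTO fact_production VALUES (?, ?, ?, ?)",
    [("2024-06-01", 1, 1, 500), ("2024-06-01", 2, 1, 450),
     ("2023-12-01", 1, 1, 400)])  # older row outside the index filter

# Partial (filtered) index: only rows in the active date range are
# indexed, keeping the index small and the dashboard seeks cheap.
conn.execute("""CREATE INDEX ix_fact_recent
    ON fact_production (production_date, line_id)
    WHERE production_date >= '2024-01-01'""")

# The query repeats the index predicate so the planner can use it.
rows = conn.execute("""
    SELECT line_id, SUM(qty) FROM fact_production
    WHERE production_date >= '2024-01-01'
    GROUP BY line_id ORDER BY line_id""").fetchall()
print(rows)  # -> [(1, 500), (2, 450)]

for step in conn.execute("""EXPLAIN QUERY PLAN
    SELECT line_id, SUM(qty) FROM fact_production
    WHERE production_date >= '2024-01-01'
    GROUP BY line_id"""):
    print(step)
```

The same pattern applies on SQL Server (`CREATE INDEX ... WHERE ...`) for the date-range and line-dimension filters mentioned above.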
Dashboard Cache Management:
Configure your dashboard cache retention policy to ‘persistent’ mode in the Performance Analysis module settings. This prevents automatic invalidation when batch jobs complete. Set up a tiered caching strategy: a Level 1 cache (in-memory) for current-shift data with a 5-minute TTL, a Level 2 cache (database-backed) for the previous 24 hours with a 30-minute TTL, and a Level 3 cache (cold storage) for historical trends. Implement cache preloading by scheduling a background task at 5:30 AM that executes the dashboard queries for all active production lines and stores the results in the Level 1 cache.
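A minimal sketch of the Level 1 tier plus the 5:30 AM preload task, assuming results can be keyed per production line. `TTLCache`, `preload`, and the KPI payload are illustrative names, not part of the product; Levels 2 and 3 would follow the same pattern with longer TTLs and a database or cold-storage backing:

```python
import time

class TTLCache:
    """Minimal in-memory cache with a per-entry time-to-live,
    sketching the Level 1 tier described above."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: force a fresh query
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

def preload(cache, line_ids, run_query):
    """5:30 AM preload: run each line's dashboard query and stage
    the result in the Level 1 cache before supervisors log in."""
    for line_id in line_ids:
        cache.put(("kpi", line_id), run_query(line_id))

l1 = TTLCache(ttl_seconds=5 * 60)  # current-shift tier, 5-minute TTL
preload(l1, [1, 2], run_query=lambda line: {"oee": 0.85, "line": line})
print(l1.get(("kpi", 1)))  # -> {'oee': 0.85, 'line': 1}
```

The point of the preload is that the first 6 AM access hits a populated Level 1 tier instead of triggering cold queries.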
Implementation Steps:
- Reschedule statistics update to 4:00 AM
- Add cache warm-up phase to batch job at 5:45 AM
- Create filtered indexes on aggregation tables (production_date, line_id, shift_id)
- Update dashboard configuration to persistent cache mode
- Monitor query execution plans for the first week to identify remaining bottlenecks
This approach should reduce your morning dashboard load times to under 5 seconds consistently. The key is treating the batch job completion as an opportunity to prepare the system for the next shift, not as an event that resets performance.
I’ve seen this exact pattern before. The batch jobs are likely rebuilding indexes or updating statistics, which invalidates your dashboard cache. Check if your batch process includes any database maintenance tasks that run after the data aggregation. Those maintenance operations can clear the query cache entirely, forcing cold queries on the first dashboard access.
Pre-warming the cache is definitely worth trying, but you need to identify which specific dashboard queries are the bottlenecks. Run an execution plan analysis on your KPI dashboard queries - I bet you’ll find some table scans or missing indexes on your aggregated fact tables. The statistics update is necessary, but if your queries aren’t optimized, even a warm cache won’t help much. Also check if your dashboard is pulling all historical data or just the relevant timeframe.
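A quick way to spot those table scans, shown here against SQLite as a stand-in for whatever execution-plan tool your backend provides (the `oee_fact` table is made up for the demo; on SQL Server you would look for Table Scan / Clustered Index Scan operators instead):

```python
import sqlite3

def find_table_scans(conn, sql):
    """Return the plan steps that do a full table scan. SQLite's
    EXPLAIN QUERY PLAN reports these with a detail string that
    starts with 'SCAN'."""
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return [detail for *_ids, detail in plan if detail.startswith("SCAN")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oee_fact "
             "(production_date TEXT, line_id INTEGER, oee REAL)")

query = "SELECT AVG(oee) FROM oee_fact WHERE production_date = '2024-06-01'"

# No index yet: the dashboard query scans the whole fact table.
print(find_table_scans(conn, query))

conn.execute("CREATE INDEX ix_oee_date ON oee_fact (production_date)")
print(find_table_scans(conn, query))  # -> []
```

Running every dashboard query through a check like this before and after adding indexes makes it obvious which ones were relying on cold full scans.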