WIP reporting performance degrades during peak hours in performance analysis module

During peak production hours, WIP (work-in-progress) reports in the performance analysis module take 3-5 minutes to generate instead of the usual 10-15 seconds. We’re running GPSF 2023.1 with approximately 15,000 active work orders and 200+ concurrent users.

The slowdown happens consistently between 9 AM - 11 AM and 1 PM - 3 PM when production is at maximum capacity. I’ve monitored the database and see heavy table scans on the WorkOrderStatus and MaterialMovement tables during these periods. The query execution plans show:

SELECT wo.WorkOrderID, wo.Status, COUNT(mm.MaterialID)
FROM WorkOrders wo
JOIN MaterialMovements mm ON wo.WorkOrderID = mm.WorkOrderID
WHERE wo.Status IN ('RELEASED', 'IN_PROGRESS')
GROUP BY wo.WorkOrderID, wo.Status

This query is hitting 800K+ rows in MaterialMovements even though we only need current WIP data. Has anyone implemented SQL query optimization or table partitioning strategies to improve WIP report performance during high-load periods?

You can customize the query templates safely by creating a custom report view. Don’t modify the base WIP query directly. Instead, create a materialized view that pre-aggregates the WIP data and refresh it every 5-10 minutes. Point your dashboard to the materialized view instead of the base tables. This shifts the heavy computation to a background process and gives users instant results. We did this for our WIP reporting and saw response times drop from minutes to under 2 seconds even with 50K active work orders.

Good point, but the report definition doesn’t expose a date filter parameter - it’s generated by the standard WIP dashboard widget. I’d need to modify the underlying query template. Is that supported or would it break during upgrades?

That query needs date filtering. You’re scanning the entire MaterialMovements history when you probably only care about recent movements. Add a WHERE clause for mm.MovementDate >= CURRENT_DATE - 7 or whatever your WIP window is. That should dramatically reduce the row count.

Absolutely rebuild your statistics - stale statistics are a common cause of poor query plan choices. But also look at query caching. GPSF has a built-in query result cache for performance reports. Enable it for your WIP dashboard and set an appropriate TTL (time-to-live). Even a 2-minute cache can eliminate redundant executions when multiple users view the same dashboard during peak hours.

I’ll address all three optimization areas comprehensively since they work together for optimal WIP reporting performance.

SQL Query Optimization:

Your current query has several inefficiencies. First, it’s missing temporal filtering, which forces full scans of historical data. Second, grouping by both WorkOrderID and Status produces no additional groups, since WorkOrderID is unique and Status is functionally dependent on it (most engines still require Status in the GROUP BY when it appears in the SELECT list). Here’s an optimized version:

SELECT wo.WorkOrderID, wo.Status,
       COUNT(mm.MaterialID) as MaterialCount
FROM WorkOrders wo
JOIN MaterialMovements mm ON wo.WorkOrderID = mm.WorkOrderID
WHERE wo.Status IN ('RELEASED', 'IN_PROGRESS')
  AND mm.MovementDate >= DATEADD(day, -7, GETDATE())
  AND mm.IsActive = 1
GROUP BY wo.WorkOrderID, wo.Status

The MovementDate filter limits the scan to recent data (adjust the 7-day window to match your WIP definition). The IsActive flag ensures you’re only counting current movements, not historical corrections or cancelled transactions.

More importantly, create a filtered index specifically for this query pattern:

CREATE INDEX IX_MaterialMovements_WIP
ON MaterialMovements(WorkOrderID, MovementDate, MaterialID)
WHERE IsActive = 1

Note that a filtered index predicate must be deterministic, so it can’t include a GETDATE()-based date condition; keep the MovementDate range in the query’s WHERE clause instead. Filtering on IsActive = 1 alone still keeps the index much smaller than the full table, and with MovementDate as a key column the optimizer can seek directly to the recent range.
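On SQL Server you can verify the optimizer is actually picking the index up via the index usage DMVs (sketch; run from the GPSF database):

SELECT i.name, s.user_seeks, s.user_scans, s.last_user_seek
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON s.object_id = i.object_id AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
  AND i.name = 'IX_MaterialMovements_WIP'

If user_seeks stays at zero while scans climb elsewhere, the plan isn’t using the index, and stale statistics are the first thing to check.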

Table Partitioning:

The MaterialMovements table should be partitioned by MovementDate using a sliding window approach. For GPSF 2023.1, implement monthly partitions with automated maintenance:

  1. Create partition function for monthly boundaries
  2. Partition MaterialMovements on MovementDate
  3. Keep 3 months of data in active partitions (current month + 2 prior)
  4. Archive older partitions to separate filegroup or table

This reduces the working dataset from 800K+ rows to typically under 100K for current WIP queries. The database optimizer will use partition elimination to skip irrelevant historical data automatically.

Critically, ensure your WIP queries include the partition key (MovementDate) in the WHERE clause. Without it, partition elimination won’t occur and you’ll still scan all partitions.
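Steps 1 and 2 above can be sketched in T-SQL roughly as follows (boundary dates and the filegroup are placeholders; if MaterialMovements already has a clustered index, rebuild it with DROP_EXISTING = ON):

CREATE PARTITION FUNCTION pf_MovementDate (datetime)
AS RANGE RIGHT FOR VALUES ('20250101', '20250201', '20250301')

CREATE PARTITION SCHEME ps_MovementDate
AS PARTITION pf_MovementDate ALL TO ([PRIMARY])

CREATE CLUSTERED INDEX CIX_MaterialMovements_MovementDate
ON MaterialMovements(MovementDate)
ON ps_MovementDate(MovementDate)

For the sliding window (steps 3-4), a scheduled job adds next month’s boundary with ALTER PARTITION SCHEME ... NEXT USED followed by ALTER PARTITION FUNCTION ... SPLIT RANGE, and merges or switches out the oldest partition.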

Query Caching:

Enable the GPSF query result cache for performance analysis reports. Navigate to Performance Analysis > Configuration > Cache Settings:

  • Enable Query Result Caching: Yes
  • Cache TTL for WIP Reports: 120 seconds (2 minutes)
  • Cache Invalidation Events: Add 'MATERIAL_MOVEMENT_CREATED', 'WORK_ORDER_STATUS_CHANGED'
  • Maximum Cache Size: 500 MB

The 2-minute TTL balances data freshness with performance. During peak hours when 50+ users might view the WIP dashboard simultaneously, the cache serves most requests from memory instead of executing the database query.

The event-based invalidation ensures cache accuracy - when material movements occur or work order statuses change, the relevant cache entries are cleared immediately. This prevents stale data while maintaining performance benefits.

Additionally, implement a materialized view for the most common WIP aggregations:

CREATE MATERIALIZED VIEW MV_WIP_Summary
REFRESH EVERY 5 MINUTES
AS
SELECT wo.Status, wo.ProductLine,
       COUNT(*) as OrderCount,
       SUM(mm.Quantity) as TotalQuantity
FROM WorkOrders wo
JOIN MaterialMovements mm ON wo.WorkOrderID = mm.WorkOrderID
WHERE wo.Status IN ('RELEASED', 'IN_PROGRESS')
  AND mm.MovementDate >= DATEADD(day, -7, GETDATE())
GROUP BY wo.Status, wo.ProductLine

(The REFRESH EVERY clause is illustrative; materialized view syntax varies by database. On SQL Server the closest equivalents are an indexed view or an agent job that rebuilds a summary table on a schedule.)

This pre-computed view refreshes every 5 minutes in the background, providing instant results for summary-level WIP dashboards. Detailed drill-down reports can still use the optimized base query.
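Dashboards can then read the pre-aggregated rows directly (column names as defined in the view above):

SELECT Status, ProductLine, OrderCount, TotalQuantity
FROM MV_WIP_Summary
ORDER BY Status, ProductLine

Because the view holds only one row per Status/ProductLine combination, this returns near-instantly regardless of how many movement rows feed it.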

Finally, schedule statistics updates during your maintenance window (likely between shifts). Run UPDATE STATISTICS on WorkOrders and MaterialMovements tables nightly to ensure the query optimizer has accurate cardinality estimates. This prevents it from choosing inefficient execution plans during peak hours.
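A minimal nightly maintenance step (sketch, SQL Server syntax, scheduled via SQL Agent) is just:

UPDATE STATISTICS dbo.WorkOrders WITH FULLSCAN
UPDATE STATISTICS dbo.MaterialMovements WITH FULLSCAN

If FULLSCAN is too heavy for your window on the larger table, sp_updatestats or a sampled scan is a lighter alternative.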

Implement these changes in sequence: query optimization first (immediate impact), then caching (moderate effort, high impact), then partitioning (higher effort, long-term benefit). Monitor query execution times after each phase to measure improvement.

We do have indexes on those columns, but they’re not being used according to the execution plan. The optimizer is choosing full table scans instead. Could this be a statistics issue? Should I rebuild the index statistics during off-peak hours?

Materialized views help but don’t address the root cause. Your MaterialMovements table needs partitioning. With 800K rows being scanned, you’re likely dealing with millions of historical records. Partition by MovementDate (monthly or weekly depending on your data volume) and ensure your queries include the partition key. The database will then only scan relevant partitions instead of the full table. Also check if you have proper indexes on WorkOrderID and Status columns.