Compensation management salary review calculations stuck at 60% completion

Our annual merit review process is grinding to a halt. We’re running the salary review batch calculation for 8,500 employees across 12 departments, and it consistently fails at around 60% completion after 4-5 hours. The batch job timeout is currently set to 6 hours in our system configuration, so we should have runway. I’ve checked the compensation calculation rules and they seem properly indexed, but I’m wondering if there’s something with how the database statistics are being refreshed during the run. We’re also not chunking by department currently - everything runs as one massive batch. The error log shows:

Batch Process Timeout Warning
Compensation_Review_Calc exceeded 85% time threshold
Processed: 5,100/8,500 employees
Remaining chunks: Department rollup pending

This is blocking our entire compensation cycle. Has anyone dealt with batch performance issues in merit reviews at this scale? What’s the recommended chunking strategy?

This is a textbook case of batch optimization needed. Your 8,500 employee batch is too large for a single run, especially with complex compensation rules. The 60% failure point indicates you’re hitting resource constraints - likely a combination of memory pressure and database connection pooling limits. I’d focus on three things: First, implement departmental chunking with max 2K employees per chunk. Second, verify your calculation rule indexes are current. Third, schedule a database statistics update before each compensation cycle. The timeout setting is less important than fixing the underlying performance bottleneck.
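To make the chunking suggestion concrete, here's a minimal sketch of splitting the employee population into department-level chunks capped at 2K. All names here (`dept`, `department_chunks`) are illustrative, not your platform's actual API:

```python
MAX_CHUNK = 2000  # cap suggested above; tune to your environment

def department_chunks(employees, max_chunk=MAX_CHUNK):
    """Group employees by department, then split any department
    larger than max_chunk into smaller batches."""
    by_dept = {}
    for emp in employees:
        by_dept.setdefault(emp["dept"], []).append(emp)
    chunks = []
    for dept, members in sorted(by_dept.items()):
        for start in range(0, len(members), max_chunk):
            chunks.append((dept, members[start:start + max_chunk]))
    return chunks
```

With 8,500 employees across 12 departments you'd end up with roughly one chunk per department, each well under the cap, and oversized departments split automatically.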

The lack of departmental chunking is definitely hurting you at that scale. We process 12K employees and learned the hard way that single-batch processing doesn’t scale well past 5K. The system has to maintain state for all in-progress calculations, and memory pressure builds up. I’d recommend breaking this into department-level chunks with a maximum of 1,500 employees per batch. You’ll also want to look at your compensation plan configuration - make sure calculation rules are using indexed fields for eligibility checks. What’s your current timeout configuration in the batch job settings? The default 6 hours might need adjustment, but chunking will likely solve this before you need to extend it.

We went through this exact scenario last year with 9K employees. The solution involved both configuration changes and process redesign. For the technical side, we implemented department-based chunking which reduced individual batch sizes to under 1,500 employees. We also discovered that our compensation plan had accumulated multiple inactive calculation rules over the years that were still being evaluated - cleaning those up gave us a 30% performance boost. Make sure you’re running database maintenance before the compensation cycle starts.
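The inactive-rule cleanup is worth automating as a pre-cycle check. A rough sketch, assuming rules carry some kind of active flag (the dict structure here is invented for illustration):

```python
# Illustrative only: calculation rules modeled as dicts with an "active"
# flag; your compensation platform will store this differently.
def active_rules(rules):
    """Return only the rules that should be evaluated this cycle."""
    return [r for r in rules if r.get("active", False)]

rules = [
    {"name": "merit_matrix_2024", "active": True},
    {"name": "merit_matrix_2019", "active": False},  # stale rule still stored
    {"name": "lump_sum_cap", "active": True},
]
# active_rules(rules) keeps the two active rules and skips the 2019 one,
# so the batch never pays to evaluate dead rules.
```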

Your timeout configuration is actually fine at 6 hours - the real issue is that you’re not chunking effectively and your calculation rules need optimization. Let me walk through the complete solution we implemented for a similar scenario.

Batch Timeout Configuration: First, verify your current settings. The timeout should be in your batch process configuration:

<batch-config>
  <timeout-hours>6</timeout-hours>
  <chunk-size>1500</chunk-size>
  <parallel-processing>true</parallel-processing>
</batch-config>

Compensation Calculation Rule Indexing: This is critical. Navigate to your compensation plan configuration and verify that all calculation rules are using indexed fields for eligibility criteria. We found that custom fields used in merit matrices weren’t indexed, causing full scans. Work with your DBA to add indexes on:

  • Employee compensation grade fields
  • Performance rating fields
  • Any custom eligibility criteria fields
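You can see the effect of indexing these fields directly in the query plan. Here's a small demonstration using SQLite as a stand-in for the production database (table and index names are invented; your real DB has its own plan-inspection tooling):

```python
import sqlite3

# Compare the plan for an eligibility filter before and after indexing
# the fields it uses. SQLite is only a stand-in for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee_comp (
        emp_id INTEGER PRIMARY KEY,
        comp_grade TEXT,
        perf_rating INTEGER
    )
""")
conn.executemany(
    "INSERT INTO employee_comp VALUES (?, ?, ?)",
    [(i, f"G{i % 9}", i % 5) for i in range(1000)],
)

query = ("SELECT emp_id FROM employee_comp "
         "WHERE comp_grade = 'G3' AND perf_rating = 4")

def plan(conn, sql):
    """Flatten EXPLAIN QUERY PLAN output into one string."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

plan_before = plan(conn, query)  # contains "SCAN": full table scan
conn.execute("CREATE INDEX ix_grade_rating "
             "ON employee_comp (comp_grade, perf_rating)")
plan_after = plan(conn, query)   # a SEARCH via ix_grade_rating (exact wording varies by version)
```

The same before/after comparison against your production database is how you'd confirm the merit-matrix fields are actually being used by an index rather than forcing full scans.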

Database Statistics Refresh: Before each compensation cycle, run a statistics update. This ensures the query optimizer makes good decisions. Schedule this as a pre-process step:

-- T-SQL shown; adjust syntax for your platform
ALTER INDEX ALL ON compensation_data REBUILD;  -- rebuild also refreshes stats on those indexes
UPDATE STATISTICS compensation_employee_view;
EXEC sp_updatestats;

Chunking Strategy by Department: This is your biggest win. Redesign your batch process to chunk by organizational unit with a maximum of 1,500 employees per batch. In your batch configuration, set up department-level processing with sequential execution. This prevents memory pressure and allows each chunk to complete cleanly.
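The steps above can be sketched as a sequential per-department runner. This is a hypothetical orchestration outline, not our vendor's API; the point is that nothing from one chunk is carried into the next, so memory stays flat:

```python
MAX_CHUNK = 1500  # per-batch cap from the strategy above

def run_review(employees_by_dept, calculate):
    """employees_by_dept: {dept_name: [employee, ...]};
    calculate: callable applied to one batch at a time,
    returning that batch's result (here, a count/total)."""
    results = {}
    for dept in sorted(employees_by_dept):
        members = employees_by_dept[dept]
        for start in range(0, len(members), MAX_CHUNK):
            batch = members[start:start + MAX_CHUNK]
            results[dept] = results.get(dept, 0) + calculate(batch)
            # batch goes out of scope here: per-chunk state is released
            # before the next chunk starts, which is exactly what the
            # single 8,500-employee run never gets to do
    return results
```

Running sequentially (rather than firing all departments in parallel) also keeps database connection usage bounded, which matters if connection pooling limits are part of the 60% failure.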

The key insight: your 60% failure point suggests you’re running out of resources (memory/connections) as state accumulates. Chunking resets that state between batches. Combined with proper indexing and fresh statistics, you should see each chunk complete in 45-60 minutes instead of the current 4-5 hour attempt.

Implement all four of these focus areas together - they work synergistically. We went from 70% failure rate to 100% success with processing time dropping from 5+ hours to 2 hours total across all chunks.

I’ve seen this exact pattern before. The 60% mark suggests you’re hitting a calculation complexity wall, not just a timeout issue. Check your compensation rules - are you using nested eligibility criteria or complex matrix calculations? Those can exponentially increase processing time as the batch progresses. Also, verify your database statistics were refreshed recently. Stale stats can cause the query optimizer to choose terrible execution plans for the later portions of the batch.
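One cheap mitigation for nested eligibility criteria is to evaluate each underlying predicate once per employee and let every rule test the cached flags instead of repeating the raw checks. A rough sketch (field names and thresholds are invented for illustration):

```python
# Illustrative: compute each eligibility predicate once per employee...
def eligibility_flags(emp):
    return {
        "perm": emp["status"] == "permanent",
        "tenured": emp["months_service"] >= 6,
        "rated": emp.get("perf_rating") is not None,
    }

# ...so individual rules read cheap booleans instead of re-running
# the nested checks for every rule in the plan.
def merit_eligible(flags):
    return flags["perm"] and flags["tenured"] and flags["rated"]

emp = {"status": "permanent", "months_service": 14, "perf_rating": 4}
new_hire = {"status": "permanent", "months_service": 2, "perf_rating": 3}
```

With thousands of employees and dozens of rules, collapsing repeated nested checks into one pass per employee turns rules-times-criteria work into roughly one evaluation per criterion.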