Let me provide a comprehensive solution covering all the optimization dimensions you need.
Batch Job Optimization:
First, modify your job variant to enable packet processing with optimal batch sizes:
// Pseudocode - Key implementation steps:
1. Set packet size parameter: PACKET_SIZE = 5000 transactions
2. Enable commit work after each packet
3. Configure parallel processing: MAX_PARALLEL_JOBS = 4
4. Set company code ranges: Job1=[1000-1003], Job2=[2000-2003], etc.
5. Implement checkpoint logging to Z_RECON_CHECKPOINT table
// See documentation: SAP Batch Job Optimization Guide
This breaks your 50K transactions into 10 packets of 5K each, with 4 parallel jobs processing different company code ranges simultaneously. Each packet commits independently, releasing locks every 5-10 minutes instead of holding them for hours.
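The packet and parallelism math above can be sketched as follows. This is a language-agnostic illustration in Python, not SAP code: `build_packets` and `assign_ranges` are hypothetical helper names, and the round-robin assignment is one simple way to keep company-code ranges disjoint.

```python
# Split a transaction workload into fixed-size packets and assign
# non-overlapping company-code ranges to parallel jobs.
PACKET_SIZE = 5000
MAX_PARALLEL_JOBS = 4

def build_packets(total_transactions, packet_size=PACKET_SIZE):
    """Return (start, end) index pairs, one per packet."""
    return [(i, min(i + packet_size, total_transactions))
            for i in range(0, total_transactions, packet_size)]

def assign_ranges(company_codes, jobs=MAX_PARALLEL_JOBS):
    """Deal company codes round-robin so each job's range is disjoint."""
    buckets = [[] for _ in range(jobs)]
    for i, code in enumerate(sorted(company_codes)):
        buckets[i % jobs].append(code)
    return buckets

packets = build_packets(50_000)                      # 10 packets of 5 000
ranges = assign_ranges([str(c) for c in range(1000, 1008)])
```

Because each bucket is disjoint, no two parallel jobs ever touch the same company code, which is what keeps them from blocking each other.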
Table Partitioning Strategy:
Implement a hybrid partitioning scheme on BSEG combining fiscal year and company code:
-- Illustrative DDL (generic Oracle-style subpartition syntax;
-- adapt to your database's partitioning dialect before running)
ALTER TABLE BSEG PARTITION BY RANGE (GJAHR)
  SUBPARTITION BY LIST (BUKRS)
  (PARTITION p2023 VALUES LESS THAN ('2024')
     (SUBPARTITION p2023_1000 VALUES ('1000'),
      SUBPARTITION p2023_2000 VALUES ('2000')),
   PARTITION p2024 VALUES LESS THAN ('2025'));
This creates isolated data segments that your parallel jobs can access without cross-partition locking. Your UPDATE operations will only lock specific subpartitions (e.g., 2024/company 1000) rather than the entire BSEG table.
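To see why pruning works, here is a hypothetical routing function in Python that mimics how the database maps a row to a subpartition. The partition names follow the illustrative DDL above (which only spells out subpartitions for two company codes); this is a mental model, not database internals.

```python
def target_subpartition(gjahr: str, bukrs: str) -> str:
    """Map (fiscal year, company code) to the subpartition an UPDATE
    would lock, mirroring the RANGE/LIST scheme above.

    Years below '2024' fall into the p2023 range partition;
    '2024' falls into p2024 (VALUES LESS THAN '2025').
    """
    year_part = "p2023" if gjahr < "2024" else "p2024"
    return f"{year_part}_{bukrs}"
```

Two parallel jobs updating 2024/1000 and 2024/2000 therefore lock different subpartitions and never collide.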
Lock Timeout Tuning:
Adjust database lock parameters in HANA configuration:
On HANA these are ini parameters rather than a literal SET PARAMETER statement; they are changed with ALTER SYSTEM ALTER CONFIGURATION (...) SET (...) = '...' WITH RECONFIGURE:
- Increase the statement timeout: statement_timeout = '600000' (10 minutes)
- Bound lock waits: lock_wait_timeout = '180000' (3 minutes)
- Tighten deadlock detection: deadlock_detection_interval = '1000' (1 second)
These settings give individual statements more breathing room while detecting actual deadlocks quickly. However, with proper packet sizing, you shouldn’t hit these limits.
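A quick sanity check that the packet size and the timeouts are compatible: a packet must commit well inside the statement timeout. The throughput figure below is an assumption for illustration; measure your own system's rate before relying on this arithmetic.

```python
PACKET_SIZE = 5_000             # transactions per packet
THROUGHPUT = 20                 # assumed rows/second -- measure yours
STATEMENT_TIMEOUT_MS = 600_000  # 10 minutes
LOCK_WAIT_TIMEOUT_MS = 180_000  # 3 minutes

# At the assumed rate, one packet takes ~250 s (about 4.2 minutes),
# comfortably inside the 10-minute statement timeout.
packet_seconds = PACKET_SIZE / THROUGHPUT
assert packet_seconds * 1000 < STATEMENT_TIMEOUT_MS

headroom = STATEMENT_TIMEOUT_MS / 1000 - packet_seconds
```

If your measured throughput pushes packet time past roughly half the statement timeout, shrink PACKET_SIZE rather than raising the timeout.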
Auto-Restart Configuration:
Implement a resilient job framework using SAP Job Scheduling Service:
Create a monitoring job (Z_MONITOR_RECON) that runs every 10 minutes:
SELECT j.jobname, j.status, c.last_checkpoint
  FROM TBTCO AS j
  JOIN Z_RECON_CHECKPOINT AS c ON c.jobname = j.jobname
 WHERE j.jobname = 'FICO_INTERCO_RECON'
If status = 'FAILED' and error = 'LOCK_TIMEOUT':
- Read last successful checkpoint from Z_RECON_CHECKPOINT
- Submit new job variant with parameter START_FROM_PACKET = last_checkpoint + 1
- Log restart event to monitoring table
- Send notification to finance team
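The restart decision above can be sketched as pure control flow; in Python for clarity, with `submit` and `notify` as stand-ins for the real job-submission RFC and notification calls (both hypothetical names):

```python
def restart_if_lock_timeout(job, status, error, last_checkpoint,
                            submit, notify):
    """Resubmit the job from the packet after the last checkpoint
    when it failed on a lock timeout; otherwise do nothing."""
    if status == "FAILED" and error == "LOCK_TIMEOUT":
        start_from = last_checkpoint + 1
        submit(job, start_from_packet=start_from)
        notify(f"{job} restarted from packet {start_from}")
        return start_from
    return None
```

Keeping the predicate this narrow matters: only lock-timeout failures are retried automatically, so genuine data errors still surface to the finance team instead of looping.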
Modify FICO_INTERCO_RECON to write checkpoints:
LOOP AT lt_packets INTO ls_packet.
  PERFORM process_packet USING ls_packet.
  " Write the checkpoint in the same LUW as the packet, so a crash
  " can never commit packet data without its checkpoint record.
  " (Field names of ls_checkpoint are illustrative.)
  ls_checkpoint-jobname   = 'FICO_INTERCO_RECON'.
  ls_checkpoint-run_date  = sy-datum.
  ls_checkpoint-run_time  = sy-uzeit.
  ls_checkpoint-packet_no = ls_packet-number.
  INSERT z_recon_checkpoint FROM ls_checkpoint.
  COMMIT WORK.
ENDLOOP.
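On restart, the resumed variant simply skips already-committed packets. A minimal sketch in Python, assuming packet numbers are 1-based so that START_FROM_PACKET = last_checkpoint + 1:

```python
def remaining_packets(all_packets, start_from_packet):
    """Packets still to process after a restart.

    Packet numbering is 1-based: if packets 1..6 were committed,
    START_FROM_PACKET is 7 and packets 7..N remain.
    """
    return all_packets[start_from_packet - 1:]
```

A first run passes START_FROM_PACKET = 1 and processes everything, so the same code path serves both cold starts and recoveries.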
Additional Optimizations:
- Index Strategy: Ensure composite indexes exist on BSEG for (BUKRS, GJAHR, BELNR) and (BUKRS, AUGBL) to support reconciliation queries
- Parallel Processing Safety: Configure company code ranges so each parallel job works on non-overlapping master data, eliminating SKA1/T001 contention
- Memory Management: Set job memory limit to 4GB per parallel process to prevent swapping
- Monitoring Dashboard: Create a Fiori app displaying real-time progress from Z_RECON_CHECKPOINT table
With these changes, your 50K transactions should process in under 90 minutes with automatic recovery from transient failures. The combination of partitioning, parallel processing, and checkpoint-based restarts provides both performance and resilience for month-end close operations.