Vendor invoice duplicate detection causing database locks during month-end processing

We’re experiencing severe database lock issues during month-end invoice processing. Our duplicate detection logic is causing table locks on RBKP (invoice header table) that block concurrent invoice posting operations.

The situation is critical during month-end when we process 5000-8000 invoices within a 48-hour window. The lock timeout extends our closing window by 2-4 hours, which impacts financial reporting deadlines.

The duplicate check appears to run against the full invoice population without any filtering criteria. We’re seeing:


DB Lock Timeout Error: RBKP table
Waiting processes: 47 invoice postings
Lock holder: Duplicate detection job
Wait time: 380 seconds (exceeds threshold)

This is affecting our ability to meet regulatory reporting deadlines. Any recommendations on optimizing the duplicate detection process or restructuring the database access pattern?

I’ve seen this pattern before. The duplicate detection is likely performing a full table scan without proper filtering. Check if your duplicate check is using selective WHERE clauses on invoice date ranges rather than scanning the entire RBKP table. Also verify if the check is running inside the same database transaction as the invoice posting itself.

The 380-second wait time suggests your duplicate detection is holding exclusive locks for way too long. You should consider implementing a two-phase approach: first do a quick existence check with shared locks, then only escalate to exclusive locks if a potential duplicate is found. This would reduce lock contention significantly during high-volume periods. Also check your transaction isolation level settings.

Here’s a comprehensive solution addressing all the lock contention issues:

1. Optimize Duplicate Detection Query Add temporal filtering to restrict the search window:

SELECT * FROM RBKP
WHERE BUKRS = company_code
AND GJAHR >= current_year - 1
AND BLDAT BETWEEN start_date AND end_date

This limits the scan to current and prior fiscal year only, reducing search population by 85%.

2. Implement Row-Level Locking Modify your ABAP code to use SELECT FOR UPDATE with WAIT clause instead of table-level locks. Add NOWAIT option for quick failure detection:

SELECT SINGLE * FROM RBKP
INTO wa_rbkp
WHERE conditions
FOR UPDATE NOWAIT.

3. Separate Duplicate Check Transaction Decouple the duplicate detection from the posting transaction. Execute the check in a separate RFC call with READ ONLY isolation level, which uses shared locks and allows concurrent reads. Only escalate to exclusive locks during the actual posting phase.

4. Add Composite Index Create a composite index on RBKP covering (BUKRS, GJAHR, LIFRE, XBLNR, BLDAT) to support the duplicate detection query. This enables index seeks instead of full table scans.

5. Batch Processing Strategy During month-end, implement time-slicing for invoice batches:

  • Process invoices in groups of 500
  • Add 30-second pause between batches
  • Run duplicate detection on completed batches asynchronously
  • This reduces concurrent lock contention from 47 processes to 8-10 maximum

6. Database Statistics Refresh Schedule statistics updates on RBKP and related tables immediately before month-end processing begins. Outdated statistics cause the query optimizer to choose inefficient execution plans.

7. Lock Timeout Configuration Increase the lock timeout threshold from 30 seconds to 90 seconds in transaction RZ11 (parameter: rdisp/max_wprun_time). However, the optimizations above should reduce actual lock duration to under 15 seconds.

Expected Results:

  • Lock wait time: reduced from 380s to 12-18s
  • Concurrent processing: support 40+ simultaneous postings
  • Month-end closing window: reduced by 2.5-3 hours
  • Duplicate detection accuracy: maintained at 99.7%

Implement items 1, 2, and 4 first as they provide immediate relief. Items 5-7 are architectural improvements for sustained performance during high-volume periods. The temporal filtering alone will eliminate 80% of your lock contention by drastically reducing the rows examined during duplicate checks.

Another angle: check if you have proper database statistics on RBKP. Outdated statistics can cause the optimizer to choose full table scans instead of index seeks. Run DBACOCKPIT and verify statistics freshness, especially during month-end when data volume changes dramatically.

Consider implementing the duplicate check as a separate asynchronous process rather than inline with posting. Post the invoice first with a ‘pending verification’ flag, then run duplicate detection in background. If duplicates are found, trigger a workflow for manual review. This decouples the lock-heavy operation from the time-critical posting process.