Pricing table synchronization fails with SQL deadlock error during bulk price updates in pricing management module

We’re experiencing critical failures during bulk price updates in our pricing management module. Our nightly batch job processes around 15,000 price records across multiple price lists, but we’re consistently hitting SQL deadlock errors that cause the entire job to roll back.

The job fails with the following error:

Msg 1205, Level 13, State 51
Transaction (Process ID 87) was deadlocked on lock resources
with another process and has been chosen as the deadlock victim

The bulk update job runs at 2 AM daily, updating promotional prices across retail and wholesale price lists. When the deadlock occurs, all changes roll back and we have to manually retry smaller batches. This is creating significant operational overhead and delaying price activations for our sales channels.

Has anyone dealt with similar deadlock issues during bulk pricing operations? I’m trying to understand if this is a locking strategy issue or if we need to restructure how our batch jobs handle concurrent updates.

To keep chunked updates consistent, maintain an audit table that tracks which batches completed successfully. Each chunk commits independently, and you log the batch ID, the start/end item range, and a status. If the job fails midway, your retry logic checks the audit table and skips the already-completed batches.
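A minimal sketch of that audit pattern, using Python with SQLite as a stand-in for the real database (the `batch_audit` and `prices` table names and columns are illustrative, not your actual schema):

```python
import sqlite3

def run_chunked_update(conn, items, chunk_size=500):
    """Process (item_no, new_price) pairs in chunks, committing each chunk
    and logging it to an audit table so a retry can skip completed chunks."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS batch_audit (
            batch_id   INTEGER PRIMARY KEY,
            first_item TEXT,
            last_item  TEXT,
            status     TEXT
        )""")
    done = {row[0] for row in conn.execute(
        "SELECT batch_id FROM batch_audit WHERE status = 'completed'")}
    for batch_id, start in enumerate(range(0, len(items), chunk_size)):
        if batch_id in done:
            continue  # this chunk already committed on a previous run
        chunk = items[start:start + chunk_size]
        for item_no, new_price in chunk:
            conn.execute("UPDATE prices SET price = ? WHERE item_no = ?",
                         (new_price, item_no))
        conn.execute(
            "INSERT OR REPLACE INTO batch_audit VALUES (?, ?, ?, 'completed')",
            (batch_id, chunk[0][0], chunk[-1][0]))
        conn.commit()  # each chunk commits independently
```

On a rerun after a mid-job failure, only the chunks without a 'completed' audit row are reprocessed, so the retry is idempotent at chunk granularity.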

You might also want to add an “effective date” column to your price updates instead of directly modifying active prices. This way, your bulk job stages the changes, and a separate lightweight process activates them atomically. This pattern separates the heavy data processing from the actual price activation, reducing the window where locks are held on active pricing data.
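The stage-then-activate pattern could look roughly like this, again with SQLite as a stand-in (the `staged_prices` table and its columns are assumptions for illustration, and this sketch assumes at most one staged row per item):

```python
import sqlite3

def stage_price(conn, item_no, new_price, effective_date):
    # The heavy bulk job only inserts staged rows; it never touches
    # active prices, so it holds no locks on the live pricing data.
    conn.execute(
        "INSERT INTO staged_prices (item_no, price, effective_date) "
        "VALUES (?, ?, ?)",
        (item_no, new_price, effective_date))

def activate_prices(conn, today):
    # Lightweight activation: one short transaction applies every
    # staged row that has come due, then clears it from staging.
    with conn:  # commits on success, rolls back on error
        conn.execute("""
            UPDATE prices SET price = (
                SELECT s.price FROM staged_prices s
                WHERE s.item_no = prices.item_no AND s.effective_date <= ?)
            WHERE item_no IN (
                SELECT item_no FROM staged_prices WHERE effective_date <= ?)
            """, (today, today))
        conn.execute(
            "DELETE FROM staged_prices WHERE effective_date <= ?", (today,))
```

The activation transaction touches only the rows that are due, so the lock window on live pricing data shrinks from the full batch run to one short statement.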

I’ve seen this before with pricing updates. The deadlock typically happens when your bulk job locks records in a different order than other concurrent processes (like real-time price lookups or sales order processing). SQL Server detects the circular lock dependency and kills one transaction.
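The usual fix for that circular dependency is to make every process touch rows in the same canonical order. A toy Python illustration of the principle, with `threading.Lock` objects standing in for row locks:

```python
import threading

# One lock per "row"; in the real system these are SQL Server row locks.
row_locks = {item: threading.Lock() for item in ["A", "B", "C", "D"]}

def update_rows(items, apply_fn):
    """Acquire per-row locks in one canonical order (sorted by item
    number) so two concurrent updaters can never form a circular wait."""
    for item in sorted(items):  # canonical order is what prevents deadlock
        row_locks[item].acquire()
    try:
        for item in items:
            apply_fn(item)
    finally:
        for item in items:
            row_locks[item].release()
```

Without the `sorted()`, two threads updating `["A", "B"]` and `["B", "A"]` can each grab their first lock and then wait forever on the other's; with it, both always take A before B. The same idea applies to the bulk job and the POS sync: pick one ordering (e.g. item number) and make both follow it.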

First step: check whether any scheduled jobs or integrations run around the same time - even automated price synchronization from external systems can cause conflicts. Also look at your transaction isolation level: if it's set higher than you need (REPEATABLE READ or SERIALIZABLE), you're holding locks longer than necessary.

Thanks for the suggestions. I ran the deadlock trace and found that our bulk update is conflicting with the automated price synchronization from our POS systems, which runs every 30 minutes. Both processes are updating the same price list tables but accessing records in different orders (our bulk job sorts by item number, POS sync sorts by timestamp).

We’re currently processing all 15,000 records in one transaction. Breaking it into chunks makes sense, but I’m concerned about partial updates if the job fails midway. How do you handle consistency when you can’t wrap everything in a single transaction?

What’s your batch size for the bulk updates? Processing 15,000 records in a single transaction is asking for trouble. I’d recommend breaking it into smaller chunks - maybe 500-1000 records per transaction. This reduces lock duration and gives other processes a chance to access the pricing tables between batches.
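Alongside chunking, it's worth wrapping each chunk in retry logic so a single 1205 error no longer kills the whole run - a deadlock victim can simply rerun its chunk. A hedged sketch (here `DeadlockVictim` is a placeholder for however your driver surfaces error 1205 / SQLSTATE 40001):

```python
import time

class DeadlockVictim(Exception):
    """Stand-in for SQL Server error 1205 (SQLSTATE 40001)."""

def run_chunk_with_retry(do_chunk, max_retries=3, backoff=0.5):
    """Run one chunk, retrying with exponential backoff if it is chosen
    as a deadlock victim. `do_chunk` must execute the chunk in its own
    transaction so a rollback loses only that chunk's work."""
    for attempt in range(max_retries + 1):
        try:
            return do_chunk()
        except DeadlockVictim:
            if attempt == max_retries:
                raise  # give up after the final retry
            time.sleep(backoff * (2 ** attempt))  # back off before retrying
```

Because each chunk is its own transaction, a retry replays at most one chunk's worth of updates rather than the full 15,000 records.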

Also check whether you're using UPDLOCK hints anywhere in your custom code. Developers sometimes add them thinking they'll prevent conflicts, but they can make deadlocks more likely because update locks are held until the end of the transaction rather than released at the end of the statement.