Sales order update via API fails with concurrency error during peak processing hours

Our order management integration updates sales order statuses via the D365 10.0.42 REST API. During peak hours (typically 2-5 PM when order volume is highest), we’re experiencing frequent concurrency errors when trying to update order records.

The API returns HTTP 412 Precondition Failed with “The record has been modified by another user.” This happens even though we’re retrieving the current ETag value immediately before the update. The concurrency errors cause order status mismatches between our external system and D365, requiring manual reconciliation.

Here’s our update pattern:


GET /api/data/v9.0/salesorders('{orderId}')
// Extract ETag from response
PATCH /api/data/v9.0/salesorders('{orderId}')
If-Match: {etag_value}
{"orderStatus": "Processing"}

The time between GET and PATCH is typically under 500ms, but we still hit concurrency errors on about 15% of updates during peak hours. During off-peak hours, the error rate drops to 2-3%. How should I implement optimistic concurrency control and retry logic to handle these collisions?

The 500ms window is actually quite long in a high-concurrency environment. During peak hours, other processes (warehouse updates, customer service reps, automated workflows) are likely modifying the same orders simultaneously. Your retry logic needs to account for this. Implement exponential backoff with jitter - retry after 1s, 2s, 4s with random variation. This prevents all failed requests from retrying at the same time and creating a thundering herd problem.
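A minimal sketch of that backoff schedule (the function name and defaults are illustrative, not part of any D365 SDK):

```python
import random

def backoff_delays(base=1.0, factor=2.0, attempts=3, jitter=0.5):
    """Exponential backoff delays (seconds) with random jitter.

    Nominal delays for the defaults are 1s, 2s, 4s; each gets up to
    50% added at random so competing clients don't retry in lockstep.
    """
    delays = []
    for n in range(attempts):
        nominal = base * factor ** n
        delays.append(nominal + random.uniform(0, jitter * nominal))
    return delays
```

Sleep for `delays[attempt]` after each failed attempt. "Full jitter" (`random.uniform(0, nominal)` instead of a capped fraction) spreads retries even further, at the cost of occasionally retrying almost immediately.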

Another angle to consider: are you processing orders sequentially or in parallel? If you’re running multiple integration threads that might operate on the same orders, you’re creating your own concurrency conflicts. Implement order-level locking in your integration layer - use a distributed lock (Redis, etc.) to ensure only one thread can process a given order at a time. This prevents your own processes from competing with each other, leaving only external D365 modifications as the source of concurrency errors.

Splitting updates into separate calls would definitely make it worse. Instead, you need to understand D365’s ETag generation. The ETag changes whenever ANY field on the order is modified, not just the fields you care about. During peak hours, background processes (inventory allocation, pricing updates, tax calculations) are constantly touching orders. Your 500ms window means the ETag is likely stale by the time you issue the PATCH.

Consider implementing a conditional update pattern where you only update if specific fields haven’t changed, rather than relying on the entire entity ETag. You can do this by including a custom timestamp field that tracks when your integration last modified the record.
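A sketch of that timestamp check. The custom column name (`new_integrationmodifiedon` here) is hypothetical; the integration would stamp it on every write and compare it client-side against the system-maintained modified timestamp:

```python
from datetime import datetime

def touched_by_someone_else(record):
    """True if the record changed after our integration's last write.

    'new_integrationmodifiedon' is a hypothetical custom column the
    integration sets in every PATCH; 'modifiedon' is the system column.
    Both are assumed to be ISO-8601 strings with explicit offsets.
    """
    ours = datetime.fromisoformat(record["new_integrationmodifiedon"])
    system = datetime.fromisoformat(record["modifiedon"])
    return system > ours
```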

I looked into the status change action endpoint - that’s a good suggestion. However, our integration also updates other order fields like shipping method and delivery date, not just status. These updates seem to be the main source of concurrency conflicts. Would it help to split the updates into separate API calls, or would that make the concurrency problem worse by increasing the number of operations?

Let me provide a comprehensive solution for handling concurrency errors in your sales order updates. You’re dealing with optimistic concurrency control in a high-volume environment, which requires a multi-layered approach:

1. Optimistic Concurrency Control - Proper ETag Usage: The core issue is that D365’s entity-level ETags are too broad for your use case. Any modification to the sales order invalidates your ETag, even if it’s an unrelated field. Implement field-level change detection:


// Pseudocode - Enhanced update with field-level validation:
1. GET sales order with specific fields: status, shippingMethod, deliveryDate, lastModifiedDateTime
2. Store retrieved values and ETag
3. Build PATCH payload only for fields that need updating
4. Send PATCH with:
   If-Match: {etag}
   (the Web API honors If-Match/If-None-Match for conditional updates;
   keep any lastModifiedDateTime comparison on the client side)
5. On 412 error, re-GET and compare ONLY your target fields
6. If target fields unchanged, retry with new ETag

This approach distinguishes between conflicts that matter (someone changed your target fields) versus noise (background processes modified other fields).
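That distinction can be a one-liner; the field names are simply whatever your integration actually writes:

```python
def target_field_conflict(original, current, target_fields):
    """After a 412, re-GET and compare only the fields we intend to
    write; ETag churn from unrelated background updates is just noise."""
    return any(original.get(f) != current.get(f) for f in target_fields)
```

For example, if only `orderStatus` is in `target_fields`, a background change to freight or tax fields returns False and the update can be safely retried with the fresh ETag.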

2. Intelligent Retry Strategy - Conflict-Aware Resolution: Implement a smart retry mechanism that analyzes the conflict:


// Pseudocode - Conflict resolution logic:
1. On 412 error, immediately re-GET the order
2. Compare original values vs. current values for your target fields
3. If NO conflict in target fields:
   - Apply your changes to current version
   - Retry PATCH with new ETag (max 3 attempts)
4. If CONFLICT in target fields:
   - Log conflict details for manual review
   - Implement business logic: merge changes or defer to latest
5. Use exponential backoff: wait 500ms, 1.5s, 4s between retries
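The whole loop above might look like this sketch, with the HTTP calls abstracted into caller-supplied functions and `ConflictError` standing in for an HTTP 412 response:

```python
import time

class ConflictError(Exception):
    """Stand-in for an HTTP 412 Precondition Failed response."""

def patch_with_retry(get_record, patch_record, changes, target_fields,
                     max_attempts=3, delays=(0.5, 1.5, 4.0),
                     sleep=time.sleep):
    """get_record() -> (record_dict, etag); patch_record(changes, etag)
    raises ConflictError on 412. Returns 'updated', 'conflict', or
    'exhausted' (the last two go to logging / manual review)."""
    record, etag = get_record()
    original = {f: record.get(f) for f in target_fields}
    for attempt in range(max_attempts):
        try:
            patch_record(changes, etag)
            return "updated"
        except ConflictError:
            sleep(delays[min(attempt, len(delays) - 1)])
            record, etag = get_record()
            if any(original[f] != record.get(f) for f in target_fields):
                return "conflict"  # someone changed OUR fields: business logic
            # Only unrelated fields changed: retry with the fresh ETag.
    return "exhausted"
```

Injecting `get_record`, `patch_record`, and `sleep` keeps the loop unit-testable without a live D365 endpoint.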

3. Architectural Improvements - Reduce the Window of Vulnerability:

Pattern A - Atomic Operations: Use D365 action endpoints for atomic updates when available (verify the exact action names exposed in your environment; the ones below are representative):

  • UpdateOrderStatus action: Handles status transitions with built-in concurrency management
  • UpdateShippingDetails action: Updates shipping-related fields as a single unit

Pattern B - Optimistic Batch Updates: For multiple order updates, use batch requests with individual change sets. This ensures partial success - some orders update even if others fail.
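A rough sketch of assembling such a payload. The multipart layout follows the OData $batch convention the Web API uses, but boundary and header handling are simplified here; also note that partial success typically depends on sending `Prefer: odata.continue-on-error`, so verify both against your environment:

```python
def build_batch_body(batch_id, updates):
    """Build a simplified OData $batch payload. Each order update goes
    in its own change set so one failure need not roll back the others;
    'updates' is a list of (changeset_id, order_id, json_string) tuples."""
    lines = []
    for cs_id, order_id, payload in updates:
        lines += [
            f"--batch_{batch_id}",
            f"Content-Type: multipart/mixed; boundary=changeset_{cs_id}",
            "",
            f"--changeset_{cs_id}",
            "Content-Type: application/http",
            "Content-Transfer-Encoding: binary",
            "Content-ID: 1",
            "",
            f"PATCH /api/data/v9.0/salesorders('{order_id}') HTTP/1.1",
            "Content-Type: application/json",
            "",
            payload,
            f"--changeset_{cs_id}--",
        ]
    lines.append(f"--batch_{batch_id}--")
    return "\r\n".join(lines)
```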

Pattern C - Application-Level Locking: Implement distributed locking for your integration:


// Pseudocode - Prevent self-inflicted concurrency:
1. Before processing order, acquire distributed lock: LOCK:ORDER:{orderId}
2. Set lock timeout: 30 seconds
3. Perform GET, validation, PATCH operations
4. Release lock after completion or on error
5. If lock acquisition fails, queue order for retry (another thread processing it)

Pattern D - Queue-Based Processing: During peak hours, implement a processing queue:

  • Deduplicate order updates (only process latest update per order)
  • Process orders in batches of 10-15 with delay between batches
  • This reduces simultaneous update attempts on popular orders
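Deduplication plus batching from that queue can be sketched as two small helpers (the chunk size is the illustrative 10-15 from above):

```python
def coalesce_updates(events):
    """Collapse queued events to the latest payload per order, keeping
    the order in which each order first appeared; events is an iterable
    of (order_id, payload) pairs."""
    latest = {}
    for order_id, payload in events:
        latest[order_id] = payload  # later events replace earlier ones
    return list(latest.items())

def batched(items, size=10):
    """Yield successive chunks so each processing wave stays small."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Because Python dicts preserve insertion order, re-assigning an order's payload keeps its original queue position while discarding the stale update.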

Monitoring and Adaptation: Implement metrics to track:

  • Concurrency error rate by hour of day
  • Average retry count per successful update
  • Orders requiring manual reconciliation

Use these metrics to dynamically adjust retry behavior - increase retry attempts and backoff delays during known peak hours.
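A trivial way to encode that adjustment; the thresholds here are placeholders until your metrics say otherwise:

```python
def retry_policy(hour, peak_hours=range(14, 17)):
    """(max_attempts, base_delay_seconds) by hour of day; 14-16 covers
    the 2-5 PM peak. Replace these constants with values derived from
    your measured concurrency error rates."""
    if hour in peak_hours:
        return 5, 2.0  # more attempts, longer waits while contention is high
    return 3, 1.0
```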

Specific Recommendation for Your 15% Error Rate: Your high error rate suggests multiple processes are modifying orders simultaneously. Investigate:

  1. Check if warehouse management system is auto-updating orders during peak hours
  2. Verify if customer service team is manually modifying orders during this window
  3. Review scheduled batch jobs that might run at 2-5 PM

Once you identify competing processes, coordinate update timing or implement the queue-based approach to serialize updates. Combined with intelligent retry logic, you should reduce your error rate from 15% to under 3% even during peak hours.

Beyond retry logic, you should examine whether your integration actually needs to update the full order record. Order status updates often trigger business logic that locks the entire order entity. Consider using the order status change action endpoint instead of direct PATCH operations. This endpoint is designed for concurrent status updates and uses internal locking mechanisms that are more efficient than entity-level optimistic locking. It also ensures proper status transition validation.