PATCH requests to update material sustainability data fail with 'record locked' error during concurrent edits

Building a sustainability reporting integration that updates material compliance data via REST API (Windchill 11.1 M030). We’re experiencing intermittent ‘record locked’ errors when multiple PATCH requests target the same material record within short time intervals.

Scenario: Our batch process updates carbon footprint and recyclability data for ~500 materials every 6 hours. Random materials fail with:


PATCH /Windchill/servlet/odata/SustainMgmt/Materials('OR:2341')
Payload: {"CarbonFootprint": 4.2, "RecyclabilityScore": 85}
Error: 423 - Record is locked by another process

The failures are inconsistent - the same material might succeed in one batch and fail in the next. We've tried adding delays between requests, but that significantly increases total processing time. We need guidance on handling concurrent API updates and on implementing proper retry logic for record-locking scenarios.

Look into using optimistic locking instead of relying on Windchill’s default pessimistic locks. Include an ETag header in your PATCH requests with the object’s current version identifier. This allows the API to detect conflicts without holding locks, and you can handle the conflict resolution in your application logic.
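If you go the ETag route, the flow looks roughly like this in Python. This is a sketch, not a drop-in: the session object is assumed to behave like a requests.Session with authentication already configured, and the URL path is simply the one from the question.

```python
def update_with_etag(session, base_url, material_id, payload):
    """PATCH a material only if its ETag still matches what we last read.

    `session` can be a requests.Session (or anything with .get/.patch).
    Returns True on success, False on a 412 conflict so the caller can
    re-read the record and decide how to merge.
    """
    url = f"{base_url}/Materials('{material_id}')"
    current = session.get(url)                      # read current state + ETag
    current.raise_for_status()
    etag = current.headers.get("ETag")

    resp = session.patch(url, json=payload, headers={"If-Match": etag})
    if resp.status_code == 412:                     # someone else modified it
        return False
    resp.raise_for_status()
    return True
```

On a 412 you re-fetch the record (which also gives you the new ETag) and reapply your change, rather than waiting out a lock.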

The 423 status code indicates a WebDAV lock, which suggests that another process (possibly a background indexing job, workflow, or another API client) is accessing the same material record. Check Windchill’s active sessions and background tasks during your batch window. You might be competing with scheduled maintenance jobs or other integrations. Try running your batch during off-peak hours to see if the lock frequency decreases.

Your sustainability data update process needs a robust concurrency handling strategy. Here’s a comprehensive solution addressing all three focus areas.

1. Concurrent API updates: The root issue is that Windchill's REST API uses pessimistic locking with relatively long lock durations. When your batch process sends multiple requests in quick succession, they queue up and some time out before acquiring a lock. Implement request throttling:


// Pseudocode - Request throttling approach:
1. Divide 500 materials into batches of 50
2. Process each batch sequentially with 2-second delay between batches
3. Within each batch, send requests with 200ms stagger
4. Track failed requests in retry queue
5. Process retry queue after main batch completes
// This reduces lock contention by 70-80%
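In Python, the throttling scheme above might look like this (send_patch is a hypothetical stand-in for your real PATCH call; the batch sizes and delays are the ones from the pseudocode):

```python
import time

def process_throttled(material_ids, send_patch,
                      batch_size=50, batch_delay=2.0, stagger=0.2):
    """Send updates in staggered batches; collect failures for a retry pass.

    `send_patch(material_id)` should return True on success and False on a
    lock/conflict error. Failed IDs are returned for the retry queue.
    """
    retry_queue = []
    for i in range(0, len(material_ids), batch_size):
        batch = material_ids[i:i + batch_size]
        for mid in batch:
            if not send_patch(mid):
                retry_queue.append(mid)     # defer locked records
            time.sleep(stagger)             # 200 ms stagger within a batch
        time.sleep(batch_delay)             # 2 s pause between batches
    return retry_queue
```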

2. Record locking behavior: Understand Windchill’s lock lifecycle. Locks are held for the duration of the transaction plus a cleanup period (typically 5-10 seconds). Query the lock status before attempting updates:


GET /Windchill/servlet/odata/SustainMgmt/Materials('OR:2341')/LockInfo

If the response shows isLocked: true, defer that material to a retry queue. Don’t attempt the PATCH immediately.
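A small Python helper for that pre-check, assuming the LockInfo endpoint behaves as described above (treat the path and the isLocked field as assumptions to verify against your Windchill version, not confirmed API):

```python
def partition_by_lock(session, base_url, material_ids):
    """Split materials into (ready, deferred) lists using a lock pre-check.

    Assumes a LockInfo endpoint returning JSON with an `isLocked` flag;
    anything that does not answer 200 is treated as unlocked and attempted.
    """
    ready, deferred = [], []
    for mid in material_ids:
        resp = session.get(f"{base_url}/Materials('{mid}')/LockInfo")
        locked = resp.status_code == 200 and resp.json().get("isLocked", False)
        (deferred if locked else ready).append(mid)
    return ready, deferred
```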

Optimistic locking implementation: Use ETags to avoid lock conflicts:


// First, GET the current state
GET /Windchill/servlet/odata/SustainMgmt/Materials('OR:2341')
Response headers include: ETag: "v123"

// Then PATCH with ETag validation
PATCH /Windchill/servlet/odata/SustainMgmt/Materials('OR:2341')
Headers: If-Match: "v123"
Payload: {"CarbonFootprint": 4.2, "RecyclabilityScore": 85}

If the ETag doesn’t match (someone else modified the record), you get a 412 Precondition Failed instead of 423 Locked. This is cleaner to handle.

3. Retry logic implementation: Implement intelligent retry with exponential backoff and jitter:


// Pseudocode - Retry strategy:
1. On 423 error, add to retry queue with attempt counter
2. Calculate backoff: baseDelay * (2 ^ attemptCount) + random(0-1000ms)
3. Max attempts: 5, Max delay: 30 seconds
4. After max attempts, log to failed-updates queue for manual review
5. Track retry success rate to tune backoff parameters
// Jitter prevents thundering herd when multiple retries synchronize
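The backoff calculation above translates to something like this in Python (the sleep parameter is injectable purely so the function is easy to test; the defaults match the pseudocode):

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5,
                       base_delay=1.0, max_delay=30.0, sleep=time.sleep):
    """Retry `operation` (returns True on success) with exponential
    backoff plus random jitter, per the strategy above."""
    for attempt in range(max_attempts):
        if operation():
            return True
        if attempt == max_attempts - 1:
            break                               # hand off to manual review
        delay = min(base_delay * (2 ** attempt), max_delay)
        delay += random.uniform(0, 1.0)         # jitter avoids thundering herd
        sleep(delay)
    return False
```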

Complete batch processing workflow:

  1. Query all 500 materials to get current ETags and lock status
  2. Filter out currently locked materials (defer to retry queue)
  3. Process unlocked materials in throttled batches
  4. For each PATCH:
    • Include If-Match header with ETag
    • Set connection timeout to 30 seconds
    • Handle 423/412 responses by adding to retry queue
  5. After main batch, process retry queue with exponential backoff
  6. Log materials that fail after 5 retry attempts
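Tying the six steps together, here is a condensed control-flow sketch; each step is passed in as a callable standing in for the real calls described above, so only the orchestration is shown:

```python
def run_batch(materials, is_locked, patch_with_etag, retry_patch, log_failed):
    """Condensed version of the workflow above.

    is_locked / patch_with_etag / retry_patch are hypothetical stand-ins
    for the lock pre-check, If-Match PATCH, and backoff retry steps.
    """
    deferred = []
    for m in materials:
        if is_locked(m):                 # steps 1-2: defer locked records
            deferred.append(m)
        elif not patch_with_etag(m):     # steps 3-4: 423/412 -> retry queue
            deferred.append(m)
    for m in deferred:                   # step 5: retry with backoff
        if not retry_patch(m):
            log_failed(m)                # step 6: flag for manual review
```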

Configuration optimization: Reduce lock duration by adjusting transaction timeout in site.xconf:


wt.method.transaction.timeout=30000

This reduces the maximum lock hold time from the default of 60 seconds to 30 seconds, allowing faster lock release.

Monitoring: Add instrumentation to track:

  • Lock conflict rate (should be <5% after implementation)
  • Average retry count per material (target: <1.5)
  • Total batch completion time
  • Materials requiring manual intervention

With this approach, your 500-material batch should complete reliably in 15-20 minutes with <2% failure rate requiring manual review.

Record locking in Windchill is pessimistic by default. When an API request starts updating an object, it acquires a lock that prevents concurrent modifications. Your batch process likely has overlapping requests that collide. Implement exponential backoff retry logic - wait 1 second, then 2, then 4 seconds before retrying a locked record.

We had similar issues with our environmental compliance updates. The problem was that the REST API doesn’t automatically handle lock retries like the UI does. Implement a retry mechanism with jitter - not just fixed delays. Random backoff prevents multiple retrying clients from synchronizing and competing again. Also consider batching multiple attribute updates into a single PATCH request to reduce the number of lock acquisitions.