Our asset synchronization process is failing when updating asset lifecycle records through the REST API on ICS 2021. Multiple concurrent API calls result in record locking errors.
POST /AssetLifecycle/update
Response: 423 Locked
Error: "Asset record locked by another process"
We’re running parallel API updates from our external asset management system, and roughly 30% of requests fail with lock conflicts: the concurrent writes lock asset records and block otherwise-legitimate updates. Our retry logic resends the same request immediately, which usually fails again for the same reason.
Is there a recommended approach for handling concurrent updates to asset records? Should we implement exponential backoff in our retry logic, or is there a way to configure the API to queue requests instead of rejecting them?
Interesting point about rate limiting. We haven’t implemented any throttling on our side, so we could easily be exceeding 20 requests per second during peak sync periods. I’ll add rate limiting to our API client. For the retry logic, what’s a reasonable maximum retry count before giving up on a request? And should we implement a circuit breaker pattern if we see sustained lock errors?
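For the throttling side, here is a minimal sketch of a client-side token-bucket limiter, assuming a Python client. The class name, the 15 req/s figure, and the structure are illustrative choices on our part, not anything from the ICS API; the only number taken from this thread is the ~20 req/s threshold we're trying to stay under.

```python
import time
import threading

class TokenBucket:
    """Illustrative token-bucket rate limiter (our own sketch, not an
    ICS/CloudSuite API). Call acquire() before each outbound request."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # burst size
        self.tokens = capacity
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a token is available, then consume it."""
        with self.lock:
            now = time.monotonic()
            # Refill based on elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens < 1:
                # Not enough budget: sleep until the next token accrues.
                time.sleep((1 - self.tokens) / self.rate)
                self.tokens = 0
            else:
                self.tokens -= 1

# Stay comfortably under the ~20 req/s threshold mentioned in this thread.
bucket = TokenBucket(rate=15, capacity=15)
```

Calling `bucket.acquire()` immediately before each POST keeps peak sync periods from tripping the server-side limit in the first place, which is cheaper than retrying after the fact.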
If you’re getting locks on different asset records, it might be table-level or index-level locking. ICS 2021 has a known behavior where high concurrent write operations on the asset lifecycle table can trigger lock escalation from row-level to page-level or even table-level locks, especially if your database hasn’t been tuned for concurrent writes. Check your database lock escalation thresholds and consider reducing your batch size from 50 to maybe 10-15 concurrent requests. Also verify your API calls include proper transaction isolation level headers.
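One straightforward way to enforce the smaller concurrency window is to cap the worker pool rather than submitting all 50 updates at once. This is a sketch assuming a Python client; `send_update` is a placeholder for your actual REST call, not a real function:

```python
from concurrent.futures import ThreadPoolExecutor

# Cap in-flight updates at 10 (down from 50), per the suggestion above.
MAX_IN_FLIGHT = 10

def send_update(asset_id, payload):
    """Placeholder for the real POST /AssetLifecycle/update call."""
    ...

def sync_assets(updates):
    """Run updates with at most MAX_IN_FLIGHT concurrent requests.
    `updates` is an iterable of (asset_id, payload) pairs."""
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        futures = [pool.submit(send_update, aid, p) for aid, p in updates]
        return [f.result() for f in futures]
```

The pool size is the only tuning knob here; dropping it from 50 to 10-15 directly limits how many row locks are held at once, which is what keeps escalation thresholds from being hit.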
We faced similar issues and found that the problem was actually in how we were handling the API responses. When a 423 is returned, you need to implement proper retry logic with backoff AND check if your requests are idempotent. If you’re sending the same update payload on retry without checking the current asset state, you might be creating conflicts. Also, ICS 2021 REST API has a rate limiting mechanism that isn’t well documented - exceeding around 20 requests per second can trigger temporary blocks that manifest as lock errors.
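The retry-with-backoff part of this can be sketched as a small wrapper, assuming a Python client. The wrapper takes any callable that returns (status, body), so it stays independent of your HTTP library; the jitter term and the retryable-status set are our additions, not documented ICS behavior:

```python
import random
import time

RETRYABLE = {423, 429}  # locked / rate-limited (our assumption)

def with_backoff(send, max_retries=5, base_delay=2.0):
    """Call `send()` (returns (status_code, body)) and retry on
    retryable statuses with exponential backoff plus jitter.
    Delays grow as base_delay * 2**attempt: 2s, 4s, 8s, ..."""
    for attempt in range(max_retries + 1):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_retries:
            # Jitter spreads out retries from parallel workers so they
            # don't all hit the locked record at the same instant.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    return status, body
```

Per the idempotency point above, `send` should re-read the current asset state (or use a version/ETag check) before resending, rather than blindly replaying the original payload.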
Yes, circuit breaker is a good idea for sustained failures. We use a max of 5 retries with exponential backoff (2s, 4s, 8s, 16s, 32s). After 3 consecutive failures on different assets, we open the circuit breaker for 60 seconds to let the system recover. During that time, we queue the requests locally. Also make sure you’re reading the response headers - CloudSuite sometimes returns a ‘Retry-After’ header that tells you exactly how long to wait.
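A minimal version of that breaker, using the numbers from this post (open after 3 consecutive failures, recover after 60 s) plus a helper that prefers the Retry-After header when present, could look like this in Python. Class and function names are ours, purely illustrative:

```python
import time

class CircuitBreaker:
    """Opens after `failure_threshold` consecutive failures and stays
    open for `reset_timeout` seconds, then allows one probe through."""
    def __init__(self, failure_threshold=3, reset_timeout=60.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def allow(self):
        """Return True if a request may be sent now."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            self.opened_at = None  # half-open: let one attempt probe
            self.failures = 0
            return True
        return False  # breaker open: queue the request locally instead

    def record(self, success):
        """Report the outcome of a request."""
        if success:
            self.failures = 0
            self.opened_at = None
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

def backoff_delay(attempt, headers, base=2.0):
    """Honor a Retry-After header if the server sent one; otherwise
    fall back to exponential backoff (2s, 4s, 8s, ...)."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    return base * (2 ** attempt)
```

While `allow()` returns False, requests go into a local queue and are drained once the breaker half-opens, matching the recover-then-retry flow described above.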