We’re using ETQ’s REST API to batch update change control records from our PLM system. The integration works most of the time, but fails when some records in the batch are locked by users actively editing them. The API returns 200 OK but only processes unlocked records, silently skipping locked ones.
Batch request example:
{
  "updates": [
    {"recordId": "CC-001", "status": "Approved"},
    {"recordId": "CC-002", "status": "Approved"}
  ]
}
The batch processing logic doesn’t distinguish between successful updates and skipped records, so the error log shows “Batch completed successfully” even when half the records weren’t updated due to locks, and our downstream systems end up with inconsistent data. How do others handle record locks in batch API operations?
For ETQ 2021, there unfortunately isn’t a bulk lock-status query endpoint. You can, however, cut the overhead with a two-phase approach: attempt the batch update first, then parse the response body to identify which records were skipped, and retry only those records individually with proper error handling. The batch response should include per-record error details. This way you keep the performance benefit of batching for unlocked records while still handling locked records appropriately.
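A minimal sketch of that two-phase flow. The response shape (a per-record `results` array with `status` and `recordId` fields) is an assumption, not the documented ETQ format, so adapt the parsing to your actual payload; `send_batch` and `send_single` are injected callables standing in for whatever HTTP client calls you already make:

```python
def split_batch_response(response_body):
    """Partition a batch response into updated vs. skipped record IDs.
    Assumes a per-record "results" array -- verify against your ETQ version."""
    updated, skipped = [], []
    for result in response_body.get("results", []):
        if result.get("status") == "updated":
            updated.append(result["recordId"])
        else:  # e.g. "locked" or any other failure status
            skipped.append(result["recordId"])
    return updated, skipped


def sync_batch(send_batch, send_single, updates):
    """Phase 1: send the whole batch. Phase 2: retry skipped records
    one at a time. send_single returns True on success."""
    updated, skipped = split_batch_response(send_batch(updates))
    by_id = {u["recordId"]: u for u in updates}
    still_locked = []
    for record_id in skipped:
        if record_id in by_id and send_single(by_id[record_id]):
            continue
        still_locked.append(record_id)
    return updated, still_locked
```

Records that fail even the individual retry come back in `still_locked`, so the caller can queue them instead of letting them vanish silently.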
Thanks Pat, that’s a good approach but adds significant API call overhead for large batches. We’re processing 500+ change control records per sync. Is there a way to get lock status for multiple records in a single API call? Or should we handle this differently in the batch processing logic itself?
This is a known challenge with batch operations. ETQ’s API considers a batch “successful” if it processes without throwing an exception, even if individual records fail. You need to parse the response body carefully: it should contain details about which records were updated and which were skipped. Check whether the response includes a results array with a per-record status.
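If the response does carry such a results array, a small guard can turn the silent skips into a loud failure instead of the misleading “Batch completed successfully” log line. The `results`/`status`/`recordId` field names here are assumptions to adapt to your actual payload:

```python
def verify_batch_result(response_body, expected_ids):
    """Raise if any expected record is missing from the response or not
    marked updated, instead of trusting the HTTP 200 on the batch call."""
    results = {r["recordId"]: r.get("status")
               for r in response_body.get("results", [])}
    failed = [rid for rid in expected_ids if results.get(rid) != "updated"]
    if failed:
        raise RuntimeError(
            f"Batch skipped {len(failed)} record(s): {failed}")
```

Calling this right after each batch makes the sync fail fast, so the downstream systems never receive a partial update that looked successful.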
We had this exact issue. The solution is to validate individual records before batching: query each record’s lock status via a separate API call, then only include unlocked records in the batch. It adds overhead, and there’s still a small race window (a record can become locked between the check and the update), but it catches most conflicts before the batch is sent. For locked records, queue them for retry after a delay.
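A sketch of that pre-flight check plus retry queue. `is_locked` is a placeholder for whatever per-record lock-status call your ETQ version exposes, and the retry/delay parameters are illustrative defaults:

```python
import time
from collections import deque

def sync_with_lock_check(is_locked, send_batch, updates,
                         max_retries=3, delay_s=0.0):
    """Pre-filter locked records, batch the unlocked ones, and requeue
    locked records for later rounds until max_retries is exhausted."""
    queue = deque((u, 0) for u in updates)  # (update, attempts so far)
    done, gave_up = [], []
    while queue:
        ready, next_round = [], []
        while queue:
            update, attempts = queue.popleft()
            if is_locked(update["recordId"]):
                if attempts + 1 >= max_retries:
                    gave_up.append(update["recordId"])
                else:
                    next_round.append((update, attempts + 1))
            else:
                ready.append(update)
        if ready:
            send_batch(ready)  # only unlocked records go in the batch
            done.extend(u["recordId"] for u in ready)
        if next_round and delay_s:
            time.sleep(delay_s)  # give editors time to release locks
        queue.extend(next_round)
    return done, gave_up
```

Records returned in `gave_up` are the ones still locked after all retries; those are the candidates to surface in the error log rather than drop.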