Our automated batch updater processes CAPA records nightly to sync field values from our external risk management system. Recently, batch updates started failing with 423 Locked errors for certain records:
PUT /api/v23.2/objects/capa__c
Payload: [{"id": "00001", "priority__c": "High"}, {"id": "00002", "priority__c": "Medium"}]
Response: {"data": [{"responseStatus": "SUCCESS"}, {"responseStatus": "FAILURE", "errors": [{"type": "RECORD_LOCKED"}]}]}
The batch as a whole doesn’t fail; individual records within it come back with RECORD_LOCKED failures. These are records currently in workflow states like “Under Review” or “Pending Approval.” The problem is that our integration doesn’t handle partial failures well, and we lose track of which records need a retry. Is there a way to detect locked status before attempting updates, or should we implement better partial-failure handling?
From a business-process perspective, records locked by workflow are likely under active review by users. Automatically updating fields in this state could conflict with manual changes or approval decisions. I’d recommend a “skip and alert” pattern: log the locked records, notify the responsible team, and let them decide whether to apply the update after workflow completion.
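A minimal sketch of the partial-failure side of this, assuming Vault’s bulk response is positional (result N corresponds to request record N, as in the example above). `partition_batch_response` is a hypothetical helper name; the locked list is what you’d log and alert on.

```python
def partition_batch_response(payload, response_data):
    """Pair each request record with its positional result and split
    successes from lock failures (hypothetical helper)."""
    succeeded, locked, other_failures = [], [], []
    for record, result in zip(payload, response_data):
        if result.get("responseStatus") == "SUCCESS":
            succeeded.append(record["id"])
        elif any(e.get("type") == "RECORD_LOCKED"
                 for e in result.get("errors", [])):
            locked.append(record["id"])
        else:
            other_failures.append(record["id"])
    return succeeded, locked, other_failures

# Using the request/response from the question:
payload = [{"id": "00001", "priority__c": "High"},
           {"id": "00002", "priority__c": "Medium"}]
response = [{"responseStatus": "SUCCESS"},
            {"responseStatus": "FAILURE",
             "errors": [{"type": "RECORD_LOCKED"}]}]
ok, locked, other = partition_batch_response(payload, response)
# ok == ["00001"]; locked == ["00002"] -> log and notify the team
```

Keeping the three buckets separate matters: locked records go to the skip-and-alert queue, while other failure types may warrant immediate retry or escalation.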
The 423 Locked status is Vault’s way of enforcing workflow integrity. When a CAPA record enters certain workflow states, it becomes locked to prevent conflicting updates. You can’t bypass this lock, but you can query the record’s workflow status before attempting updates. Check the state__v and status__v fields to identify records in locked states and exclude them from your batch.
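A sketch of the pre-filtering approach, under stated assumptions: the state names in `LOCKED_STATES` are hypothetical (substitute the ones from your own CAPA lifecycle), and the `state_by_id` map would come from a prior query on `state__v` for the batch’s IDs.

```python
# Hypothetical lifecycle state names -- replace with the locked
# states from your own CAPA lifecycle configuration.
LOCKED_STATES = {"under_review_state__c", "pending_approval_state__c"}

def split_by_lock_state(batch, state_by_id, locked_states=LOCKED_STATES):
    """Split a batch into updatable vs. skipped records, given a map of
    record id -> current state__v built from a query run just before
    the batch update."""
    updatable = [r for r in batch
                 if state_by_id.get(r["id"]) not in locked_states]
    skipped = [r for r in batch
               if state_by_id.get(r["id"]) in locked_states]
    return updatable, skipped

batch = [{"id": "00001", "priority__c": "High"},
         {"id": "00002", "priority__c": "Medium"}]
states = {"00001": "open_state__c", "00002": "under_review_state__c"}
updatable, skipped = split_by_lock_state(batch, states)
# updatable contains 00001; 00002 is skipped for later retry
```

Note this check is inherently racy: a record can enter a locked state between your query and the PUT, so you still need the partial-failure handling as a backstop.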
Another approach is to use the record lock API endpoint to check lock status explicitly before updates. You can query /api/v23.2/objects/capa__c/{id}/lock to determine if a record is locked and by whom. This gives you more context than just getting a 423 error. You can then make intelligent decisions about retry timing or escalation based on who holds the lock and why.
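A hedged sketch of the explicit lock check. The endpoint path is taken from the answer above; the base URL and session header are placeholders, and the response shape in `is_locked` is an assumption (a `data` object carrying lock-holder details when locked, empty or absent when free) -- verify it against your Vault version’s API reference before relying on it.

```python
import json
import urllib.request

VAULT = "https://yourvault.veevavault.com"      # hypothetical Vault DNS
HEADERS = {"Authorization": "your-session-id"}  # hypothetical session token

def fetch_lock(record_id):
    """GET the record's lock resource (endpoint path from the answer above)."""
    req = urllib.request.Request(
        f"{VAULT}/api/v23.2/objects/capa__c/{record_id}/lock",
        headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_locked(lock_body):
    """Interpret the lock response. Assumed shape: 'data' holds lock
    details (e.g. the locking user) when locked, and is empty or
    absent when the record is free."""
    return bool(lock_body.get("data"))

# A locked response might look like (assumed, not confirmed):
# {"responseStatus": "SUCCESS", "data": {"locked_by_user__v": 12345}}
```

Since this is one GET per record, it is best reserved for the small set of records that already failed with RECORD_LOCKED, rather than a pre-check on the whole batch.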