I’m experiencing critical issues with batch updates to program milestones via the REST API in Aras Innovator 12.0. When updating multiple milestone records (typically 15-20 at once), the entire transaction rolls back if any single item fails validation. This raises data-loss concerns, since we lose all valid updates whenever one record has an issue.
The batch transaction handling appears to be all-or-nothing, and we’re not getting detailed enough error logging to identify which specific milestone caused the failure. I’ve seen partial commits work in the UI, but the API behaves differently. Here’s a sample of our batch request structure:
POST /server/odata/Milestone
[
  {"id": "M001", "completion_percent": 85},
  {"id": "M002", "completion_percent": 92}
]
Is there a way to enable partial commits or at least get granular error details per item in the batch?
The partial commit scenario you mentioned in the UI works because it uses different server methods. For REST API batch operations, you need to implement your own rollback logic. I recommend wrapping your batch in a try-catch on the client side, and maintaining a success/failure log. Another approach is to validate all items client-side before sending the batch to minimize transaction failures. What validation rules are causing the most rollbacks?
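To make the try-catch-plus-log idea concrete, here is a minimal sketch. The `send_batch` and `send_item` callables are hypothetical stand-ins for your actual REST calls (e.g. POSTs to /server/odata/Milestone), not Aras API functions:

```python
# Sketch of client-side rollback bookkeeping: try the batch, and if the
# server rolls it back, fall back to per-item calls with a success/failure
# log so valid updates are not lost. send_batch/send_item are assumptions.

def update_with_fallback(items, send_batch, send_item):
    log = {"succeeded": [], "failed": []}
    try:
        send_batch(items)                 # all-or-nothing batch call
        log["succeeded"] = [i["id"] for i in items]
        return log
    except Exception:
        pass                              # whole batch rolled back; fall through
    for item in items:                    # retry each item individually
        try:
            send_item(item)
            log["succeeded"].append(item["id"])
        except Exception as exc:
            log["failed"].append({"id": item["id"], "error": str(exc)})
    return log
```

The point of the log is that after a run you know exactly which milestone IDs went through and which error each failed one returned, which is the granularity the batch endpoint isn’t giving you.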
Thanks for the responses. The validation failures are mostly due to workflow state conflicts - some milestones are locked by active workflows. Client-side validation helps, but we can’t always predict workflow locks. The individual call approach might be our only option, though it’ll impact performance with 50+ programs being updated hourly.
I’ve seen teams implement a hybrid pattern that balances performance and reliability. Split your batch into smaller chunks of 5 items each, and process chunks sequentially with error accumulation. This gives you pseudo-partial commits where you lose at most 5 items per failure instead of the entire batch. The performance hit is minimal compared to individual calls.
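A rough sketch of that chunking pattern, assuming a hypothetical `send_batch(chunk)` that raises when the server rolls the chunk back:

```python
# Pseudo-partial commits via chunking: process fixed-size chunks
# sequentially and accumulate failures, so one bad item costs at most
# one chunk (here 5 items) rather than the entire batch.

def process_in_chunks(items, send_batch, chunk_size=5):
    succeeded, failed_chunks = [], []
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        try:
            send_batch(chunk)             # atomic per chunk, like the API
            succeeded.extend(i["id"] for i in chunk)
        except Exception as exc:
            failed_chunks.append({"ids": [i["id"] for i in chunk],
                                  "error": str(exc)})
    return succeeded, failed_chunks
```

The failed chunks can then be re-run item by item, so the expensive individual-call path is only taken for the small slice that actually failed.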
For workflow lock detection, you could query the workflow state before attempting updates. Add a preliminary GET request to check the current_state and is_locked properties. This adds overhead but prevents unnecessary batch failures. We use a two-phase approach: validation phase queries all items, then update phase only processes validated items. Cuts our rollback rate by about 70%.
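The two-phase flow might look like the following sketch, where `get_state(id)` is a hypothetical stand-in for the preliminary GET that reads `current_state`/`is_locked`, and `send_batch(items)` stands in for the update call:

```python
# Two-phase update: phase 1 queries each item's workflow state and drops
# locked ones; phase 2 batch-updates only the items that passed, which
# makes a server-side rollback far less likely.

def two_phase_update(items, get_state, send_batch):
    ready, skipped = [], []
    for item in items:
        state = get_state(item["id"])     # e.g. {"is_locked": False, ...}
        if state.get("is_locked"):
            skipped.append(item["id"])    # locked by an active workflow
        else:
            ready.append(item)
    if ready:
        send_batch(ready)
    return ready, skipped
```

The skipped list doubles as the retry queue for the next hourly pass, once the workflows holding those locks complete.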
I’ve encountered this exact behavior. Aras REST API treats batch operations as atomic transactions by default. The challenge is that there’s no built-in partial commit flag in the standard OData batch endpoint. You might want to look at the error response payload - it should contain an array of results with individual status codes, though the logging isn’t always verbose enough.
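If the payload does carry per-item results, pulling out the failing IDs is straightforward. The exact response shape varies, so treat the `results`/`status`/`id` keys below as assumptions to be checked against what your server actually returns:

```python
import json

# Hypothetical illustration of mining a batch error payload for per-item
# status codes; adjust the key names to match your actual response.

def failed_items(response_body):
    """Return ids of items whose individual status code is not 2xx."""
    payload = json.loads(response_body)
    return [r.get("id") for r in payload.get("results", [])
            if not 200 <= r.get("status", 0) < 300]
```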
Have you tried processing items individually instead of batching? I know it’s not ideal for performance, but it gives you fine-grained control. We implemented a client-side transaction manager that tracks successes and failures, then retries failed items. The API error logging improved significantly when we switched to individual calls because each response is isolated.
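A minimal sketch of such a client-side transaction manager, with `send_item` again a hypothetical per-item REST call rather than anything from the Aras SDK:

```python
# Individual calls with retry: each item is attempted up to
# 1 + max_retries times; persistent failures are reported with their
# last error so nothing disappears silently.

def run_individually(items, send_item, max_retries=2):
    succeeded, failed = [], []
    for item in items:
        for attempt in range(1 + max_retries):
            try:
                send_item(item)
                succeeded.append(item["id"])
                break
            except Exception as exc:
                if attempt == max_retries:
                    failed.append({"id": item["id"], "error": str(exc)})
    return succeeded, failed
```

Retries are useful here precisely because of the workflow-lock failures mentioned above: a lock that was held on the first attempt may have been released by the time the item is retried.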