I’ll provide a comprehensive solution covering all three optimization areas:
Timeout Configuration:
First, properly configure timeout settings in automation-config.properties:
automation.script.timeout.default=1800000
automation.script.timeout.release=3600000
automation.async.enabled=true
This sets a 30-minute default timeout and a 60-minute release-specific timeout (values are in milliseconds) and, more importantly, enables async execution mode.
Parallel Execution Implementation:
Refactor your script to use parallel processing with batch optimization:
import groovyx.gpars.GParsPool

GParsPool.withPool(4) {
    items.collate(50).eachParallel { batch ->
        // Bulk fetch all data for the batch in three calls
        def coverage  = bulkFetchTestCoverage(batch*.id)
        def defects   = bulkFetchDefectLinks(batch*.id)
        def approvals = bulkFetchApprovals(batch*.id)

        // Validate in memory -- no further network round trips
        batch.each { item ->
            validateItem(item, coverage, defects, approvals)
        }
    }
}
Key optimizations:
- GParsPool with 4 threads (adjust based on server CPU)
- Batch size of 50 items (balance between memory and network overhead)
- Bulk API calls per batch instead of per item
- In-memory validation after data fetch
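The `bulkFetch*` helpers above are not built-ins; here is a minimal sketch of one of them, assuming a REST client (`restClient`) and a bulk-query endpoint shaped like the one used later in this answer:

```groovy
// Hypothetical helper: one bulk call for a whole batch instead of one
// call per item. The endpoint path and response shape are assumptions,
// not a documented Codebeamer API.
Map bulkFetchTestCoverage(List itemIds) {
    def response = restClient.post(
        path: '/api/v3/items/bulk-query',
        body: [itemIds: itemIds, fields: ['testCoverage']]
    )
    // Index by item id so validateItem can do O(1) lookups
    response.items.collectEntries { [(it.id): it.testCoverage] }
}
```

Returning a map keyed by item id is what makes the in-memory validation step cheap: each `validateItem` call is a hash lookup rather than a network round trip.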
Async API Usage:
For very large releases (>300 items), use the async job pattern:
import com.intland.codebeamer.automation.JobService
def jobId = JobService.submitJob(
    type: 'ReleaseValidation',
    params: [releaseId: release.id],
    async: true,
    callback: '/api/webhooks/validation-complete'
)

// Job implementation (runs in background)
class ValidationJob {
    void execute(Map params) {
        // Process items in chunks
        // Update a progress tracker
        // No workflow timeout constraints
    }
}
The async job runs outside the workflow timeout window. It can process items over hours if needed and triggers a callback when complete.
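To make the three comments in the job skeleton concrete, here is a sketch of a chunked `execute` body. `fetchItemIds`, `processChunk`, and `tracker` are assumed helpers for illustration, not Codebeamer APIs:

```groovy
// Hypothetical job body: process in fixed-size chunks and record progress
// after each one, so a UI or webhook consumer can report status.
void execute(Map params) {
    def chunks = fetchItemIds(params.releaseId).collate(100)
    chunks.eachWithIndex { chunk, i ->
        processChunk(chunk)                                  // bulk fetch + validate
        tracker.progress(params.jobId, i + 1, chunks.size()) // e.g. "3 of 12 chunks"
    }
    tracker.complete(params.jobId)                           // fires the completion callback
}
```

Chunking matters even without a timeout: it bounds memory per iteration and gives you natural checkpoints for progress reporting and retries.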
Optimized Bulk API Calls:
Replace individual API calls with bulk operations:
// Instead of one GET per item:
items.each { item ->
    def coverage = GET("/items/${item.id}/test-coverage")
}

// Use a single bulk query (note: the request body is a Groovy map, not a closure):
def coverageMap = POST('/api/v3/items/bulk-query', [
    itemIds: items*.id,
    fields : ['testCoverage', 'linkedDefects', 'approvalStatus']
])
A single bulk call replaces 200+ individual calls; response time drops from 5-10 seconds per item to under 2 seconds for the entire batch.
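Since all three fields come back in one response, index it once and validate from memory. The response shape (`items` list with the requested fields) is an assumption based on the fields requested above:

```groovy
// Build one lookup table from the bulk response, then validate with
// in-memory lookups instead of per-item API calls.
def byId = coverageMap.items.collectEntries { [(it.id): it] }

items.each { item ->
    def data = byId[item.id]
    validateItem(item, data.testCoverage, data.linkedDefects, data.approvalStatus)
}
```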
Implementation Strategy:
- For releases <100 items: Use parallel execution with batching (5-10 minute runtime)
- For releases 100-300 items: Use parallel + bulk APIs (10-20 minute runtime)
- For releases >300 items: Use async job pattern (runs in background, no timeout)
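The size thresholds above can be turned into a simple dispatcher at the top of the script. The strategy functions named here are placeholders for the patterns shown earlier:

```groovy
// Sketch: choose an execution strategy from the release size.
// submitAsyncValidationJob, runParallelWithBulkApis, and runParallelBatched
// stand in for the three approaches described above.
def itemCount = items.size()
if (itemCount > 300) {
    submitAsyncValidationJob(release)   // background job, no timeout window
} else if (itemCount >= 100) {
    runParallelWithBulkApis(items)      // parallel execution + bulk queries
} else {
    runParallelBatched(items)           // parallel execution with batching
}
```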
Test with progressively larger releases. Monitor server CPU and memory - if parallel execution causes resource issues, reduce thread pool size or increase batch size. The goal is balancing parallelism with resource constraints.
After implementation, your 200-item release validation should finish well within the timeout window instead of failing at the 10-minute mark.