Automation scripts timing out during release validation (codebeamer 22)

Our automation-mgmt scripts for release validation in codebeamer 22 are exceeding timeout limits. We have a Groovy script that validates 200+ release items by checking test coverage, linked defects, and approval status. The script runs fine for small releases (<50 items) but times out on larger ones.

Current script structure:

items.each { item ->
  validateTestCoverage(item)
  checkDefectLinks(item)
  verifyApprovals(item)
}

Execution log shows timeout after 300 seconds with only 87 items processed. This forces manual intervention to complete release validation. We’ve increased the timeout to 600 seconds but that just delays the inevitable on our quarterly releases with 300+ items. Is there a way to optimize these automation scripts for parallel execution or use async API calls?

Consider splitting your validation into multiple smaller scripts that run in parallel. Instead of one monolithic script checking everything, have separate scripts for test coverage, defect links, and approvals. Run them concurrently as separate automation jobs. They can all update a validation tracker item with their results, and a final aggregator script checks if all validations passed.
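The aggregation step can be very simple. A minimal sketch of the aggregator logic, assuming each of the three jobs writes a pass/fail result to a shared tracker item (the field names and the plain map standing in for the tracker item are illustrative, not a codebeamer API):

```groovy
// Stand-in for the validation tracker item's fields; in practice each
// parallel job would write its own result field on the tracker item.
def validationResults = [
    testCoverage: 'PASSED',
    defectLinks : 'PASSED',
    approvals   : 'FAILED'
]

// Final aggregator check: the release is only valid if every job passed.
boolean allPassed = validationResults.values().every { it == 'PASSED' }
assert !allPassed   // approvals job reported FAILED, so the release is blocked
```

Because each job only writes its own field, the three jobs never contend with each other, and the aggregator stays trivial.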

For cb-22, you should use the async job API for long-running validations. Submit the validation as a background job that doesn’t block the release workflow. The job can run for hours if needed and update release status when complete. This is better than trying to squeeze everything into a 10-minute timeout window. Look at the JobService API in the Automation SDK.

Increasing timeout isn’t the solution - you need to optimize the validation logic. Are you making separate API calls for each item? That’s extremely inefficient. Use bulk API endpoints instead. The REST API has /api/v3/items/bulk-query that can retrieve test coverage for multiple items in one call. Restructure your script to fetch all data upfront, then validate in memory.

Your sequential processing is the bottleneck. Each validateTestCoverage() call probably makes API requests that block. Try batching your items into groups and processing the groups in parallel using Groovy's GPars library. Something like GParsPool.withPool { items.collate(25).eachParallel { batch -> processBatch(batch) } } — note that eachParallel is only available inside a withPool block. This can reduce execution time by 70-80% on large datasets.

I’ll provide a comprehensive solution covering all three optimization areas:

Timeout Configuration: First, properly configure timeout settings in automation-config.properties:


automation.script.timeout.default=1800000
automation.script.timeout.release=3600000
automation.async.enabled=true

This sets a 30-minute default timeout and a 60-minute release-specific timeout (values are in milliseconds), but more importantly, it enables async execution mode.

Parallel Execution Implementation: Refactor your script to use parallel processing with batch optimization:

import groovyx.gpars.GParsPool

GParsPool.withPool(4) {
  items.collate(50).eachParallel { batch ->
    // Bulk fetch all data for batch
    def coverage = bulkFetchTestCoverage(batch*.id)
    def defects = bulkFetchDefectLinks(batch*.id)
    def approvals = bulkFetchApprovals(batch*.id)

    // Validate in-memory
    batch.each { item ->
      validateItem(item, coverage, defects, approvals)
    }
  }
}

Key optimizations:

  • GParsPool with 4 threads (adjust based on server CPU)
  • Batch size of 50 items (balance between memory and network overhead)
  • Bulk API calls per batch instead of per item
  • In-memory validation after data fetch
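For the in-memory step, it helps to index each bulk response by item id so every per-item validation is an O(1) map lookup rather than a linear scan of the response. A small sketch, where indexById is a hypothetical helper (not part of any codebeamer SDK) and the row shape is illustrative:

```groovy
// Index a bulk-query response (a list of row maps, each with an 'id' key)
// by item id, so validateItem() can look up each item's data directly.
Map<Long, Map> indexById(List<Map> rows) {
    rows.collectEntries { row -> [(row.id): row] }
}

// Example: two rows from a hypothetical test-coverage bulk response.
def rows = [[id: 1L, testCoverage: 95], [id: 2L, testCoverage: 40]]
def coverageById = indexById(rows)
assert coverageById[1L].testCoverage == 95
assert coverageById[2L].testCoverage == 40
```

With 50-item batches, this keeps the validation loop itself negligible next to the network time of the bulk fetches.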

Async API Usage: For very large releases (>300 items), use the async job pattern:

import com.intland.codebeamer.automation.JobService

def jobId = JobService.submitJob(
  type: 'ReleaseValidation',
  params: [releaseId: release.id],
  async: true,
  callback: '/api/webhooks/validation-complete'
)

// Job implementation (runs in background)
class ValidationJob {
  void execute(params) {
    // Process items in chunks
    // Update progress tracker
    // No timeout constraints
  }
}

The async job runs outside the workflow timeout window. It can process items over hours if needed and triggers a callback when complete.

Optimized Bulk API Calls: Replace individual API calls with bulk operations:

// Instead of:
items.each { item ->
  def coverage = GET("/items/${item.id}/test-coverage")
}

// Use a single bulk query (note: the request body is a Groovy map literal):
def coverageMap = POST('/api/v3/items/bulk-query', [
  itemIds: items*.id,
  fields : ['testCoverage', 'linkedDefects', 'approvalStatus']
])

A single bulk call replaces 200+ individual calls. Response time drops from 5-10 seconds per item to under 2 seconds for the entire batch.

Implementation Strategy:

  1. For releases <100 items: Use parallel execution with batching (5-10 minute runtime)
  2. For releases 100-300 items: Use parallel + bulk APIs (10-20 minute runtime)
  3. For releases >300 items: Use async job pattern (runs in background, no timeout)
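The strategy choice above can be encoded as a small dispatcher so the same entry script handles every release size. A sketch with the thresholds from the list; the strategy names are illustrative labels, not a codebeamer API:

```groovy
// Pick a validation strategy from the release size, mirroring the
// thresholds above: parallel batching for small releases, parallel +
// bulk APIs for medium ones, async background job for the largest.
String pickStrategy(int itemCount) {
    if (itemCount < 100)  return 'PARALLEL_BATCHED'
    if (itemCount <= 300) return 'PARALLEL_BULK_API'
    return 'ASYNC_JOB'
}

assert pickStrategy(80)  == 'PARALLEL_BATCHED'
assert pickStrategy(200) == 'PARALLEL_BULK_API'
assert pickStrategy(400) == 'ASYNC_JOB'
```

Keeping the thresholds in one function also makes them easy to tune as you learn the actual runtimes on your server.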

Test with progressively larger releases. Monitor server CPU and memory - if parallel execution causes resource issues, reduce thread pool size or increase batch size. The goal is balancing parallelism with resource constraints.

After implementation, your 200-item release validation should complete in under 5 minutes instead of timing out at 10 minutes.