Automated test execution sync via CI/CD connectors reduced defect leakage by 65%

Sharing our implementation story: integrating automated test execution with ELM using GitHub Actions. Before automation, we had significant defect leakage caused by manual test result tracking and inconsistent coverage validation.

We built a CI/CD connector that syncs JUnit test results directly into ELM test execution records using the OSLC authentication framework. The GitHub Actions ELM publisher plugin handles result mapping and coverage gate enforcement automatically.

Key outcome: Defect leakage dropped from 23% to 8% in three months. The automation ensures every code change has validated test coverage before deployment. Would be happy to share technical implementation details if others are interested in similar automation.

How do you enforce coverage gates in the workflow? Do you fail the GitHub Actions job if coverage drops below threshold, or just report warnings to ELM? We’re trying to balance strict quality gates with developer velocity.

The coverage gate enforcement is crucial for the defect reduction you’re seeing. We implemented similar automation and found that blocking merges below 80% coverage cut regression defects by 40%. The key is making the feedback loop fast - developers need coverage results within 5 minutes of pushing code, not 30 minutes later.

Great questions. For OSLC authentication, we use GitHub encrypted secrets to store ELM service account credentials. The authentication token is refreshed at the start of each workflow run and cached for the duration. Token expiry is handled by the connector with automatic retry logic.

For JUnit mapping, we developed a custom transformer that maps JUnit test cases to ELM test execution records based on naming conventions. The plugin handles the heavy lifting of OSLC resource creation and linking.
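To make the naming-convention idea concrete, here is a minimal sketch of how a lookup key could be derived from JUnit metadata. The convention (`{TestClass}.{TestMethod}` with the package prefix stripped and parameter suffixes collapsed) is an assumption for illustration, not the exact logic of our transformer:

```python
def elm_test_case_key(classname: str, name: str) -> str:
    """Derive an ELM test case lookup key from JUnit metadata.

    Illustrative convention: strip the package prefix from the JUnit
    classname and join it with the method name, e.g.
    com.acme.CheckoutTest + testApplyDiscount -> CheckoutTest.testApplyDiscount
    """
    simple_class = classname.rsplit(".", 1)[-1]
    # Parameterized runs often append "[param]" to the method name;
    # drop the suffix so all variants resolve to the same test case definition.
    method = name.split("[", 1)[0]
    return f"{simple_class}.{method}"
```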

Impressive results! How did you handle the OSLC authentication in GitHub Actions? We’ve struggled with token management and refresh cycles when integrating with ELM APIs from external CI systems.

Here’s the complete technical implementation that achieved our 65% defect reduction:

GitHub Actions ELM Publisher: We use a custom GitHub Action that wraps the ELM OSLC API client. The action is triggered on every pull request and main branch push:

```yaml
- uses: company/elm-publisher@v2
  with:
    elm-server: ${{ secrets.ELM_SERVER_URL }}
    project-area: ${{ vars.PROJECT_AREA_UUID }}
    junit-results: 'target/surefire-reports/*.xml'
```

The publisher authenticates using OAuth tokens stored in GitHub secrets. Token refresh is handled automatically with a 2-hour expiry and 15-minute refresh window.
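The expiry-plus-refresh-window policy can be sketched as a small cache: refresh whenever the token is within 15 minutes of its expiry. This is an illustrative sketch, not the connector's actual code; `fetch_token` stands in for any callable that returns a `(token, expires_in_seconds)` pair:

```python
import time

class TokenCache:
    """Cache an OAuth token and refresh it before expiry.

    Sketch of the policy described above: tokens expire after a fixed
    lifetime, and we proactively refresh 15 minutes before that point.
    """

    REFRESH_WINDOW = 15 * 60  # refresh this many seconds before expiry

    def __init__(self, fetch_token):
        self._fetch = fetch_token
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh on first use, or when we are inside the refresh window
        if time.time() >= self._expires_at - self.REFRESH_WINDOW:
            token, expires_in = self._fetch()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token
```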

JUnit Result Mapping: Our custom transformer maps JUnit XML to ELM test execution records using a three-stage process:

  1. Parse JUnit XML and extract test case metadata (name, class, duration, status)
  2. Query ELM for matching test case definitions using naming convention: `{TestClass}.{TestMethod}`
  3. Create or update test execution records with results, linking to the source commit SHA

The mapping handles nested test suites and parameterized tests by creating separate execution records for each parameter combination. Failed tests automatically create defect records with stack traces and failure context.
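Stage 1 of the pipeline above can be sketched with the standard library's XML parser. The record field names here are illustrative placeholders, not the real OSLC resource shape; note that parameter suffixes like `[2]` are kept in the name so each parameter combination yields its own record:

```python
import xml.etree.ElementTree as ET

def parse_junit(xml_text: str):
    """Extract per-test metadata from JUnit XML, including nested suites
    and parameterized cases. Returns a list of dicts ready to be turned
    into execution records (field names are illustrative)."""
    records = []
    root = ET.fromstring(xml_text)
    # iter() descends through <testsuites>/<testsuite> nesting,
    # so arbitrarily nested suites are handled without double counting
    for case in root.iter("testcase"):
        status = "passed"
        if case.find("failure") is not None or case.find("error") is not None:
            status = "failed"
        elif case.find("skipped") is not None:
            status = "skipped"
        records.append({
            "class": case.get("classname", ""),
            "name": case.get("name", ""),  # keeps "[param]" suffixes distinct
            "duration": float(case.get("time", "0")),
            "status": status,
        })
    return records
```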

Coverage Gates: We enforce three coverage thresholds:

  • Line coverage: 80% minimum (blocks merge)
  • Branch coverage: 70% minimum (blocks merge)
  • Changed lines coverage: 95% minimum (warning only)

The coverage gate runs after test execution and reads the overall `line-rate` from the Cobertura-style `coverage.xml` (JaCoCo's native XML uses counter elements instead, so its output is converted to Cobertura format first):

```shell
- name: Check Coverage Gates
  run: |
    # The root <coverage> element carries the overall line-rate; take the first match
    coverage=$(grep -oP 'line-rate="\K[0-9.]+' coverage.xml | head -1)
    if (( $(echo "$coverage < 0.80" | bc -l) )); then
      echo "Line coverage $coverage is below the 80% threshold"
      exit 1
    fi
```

Coverage results are published to ELM as test metrics, creating a historical trend visible in project dashboards.

OSLC Authentication: Authentication uses the OAuth 2.0 client credentials flow with a service account:

```python
import os
import requests

def get_elm_token(elm_server: str) -> str:
    """Fetch a bearer token from the ELM OAuth endpoint using service
    account credentials supplied via environment variables."""
    response = requests.post(
        f"{elm_server}/jts/auth/oauth2/token",
        data={
            "grant_type": "client_credentials",
            "client_id": os.getenv("ELM_CLIENT_ID"),
            "client_secret": os.getenv("ELM_CLIENT_SECRET"),
        },
        timeout=30,
    )
    response.raise_for_status()  # fail fast on bad credentials or server errors
    return response.json()["access_token"]
```

Tokens are cached in GitHub Actions cache for the workflow duration. The connector includes retry logic with exponential backoff for transient authentication failures.
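The retry policy can be summarized in a few lines. This is a generic sketch of exponential backoff, not the connector's actual implementation; the attempt count and base delay are made-up defaults:

```python
import time

def with_backoff(fn, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry `fn` with exponential backoff (1s, 2s, 4s, ... between tries).

    `sleep` is injectable so tests can skip real waiting. Re-raises the
    last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```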

Key Implementation Details:

  • Test execution records are created asynchronously to avoid blocking the CI pipeline
  • Failed tests trigger automatic defect creation with priority based on test category
  • Flaky test detection: Tests that fail intermittently are flagged but don’t block merges
  • Performance tracking: Test execution duration is compared to historical baseline, alerts on >20% regression
  • Traceability: Every test execution links to the commit SHA, pull request, and related requirements
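The flaky-test flag in the list above can be approximated with a pass-rate heuristic: a test that fails sometimes but not always over recent runs is intermittent. The thresholds and minimum run count here are illustrative, not our production values:

```python
def is_flaky(history, min_runs=10, low=0.05, high=0.95):
    """Flag intermittent failers over a window of recent runs.

    `history` is a list of booleans (True = passed). A consistently
    passing or consistently failing test is not flaky; only a pass rate
    strictly between the thresholds counts.
    """
    if len(history) < min_runs:
        return False  # not enough data to judge
    pass_rate = sum(history) / len(history)
    return low < pass_rate < high
```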

Results and Metrics: After six months of operation:

  • Defect leakage: 23% → 8% (65% reduction)
  • Test execution latency: 45 min → 12 min (automation reduced manual steps)
  • Coverage: 67% → 84% (developers respond to immediate feedback)
  • False positive rate: <2% (flaky test detection works well)
  • Developer satisfaction: 8.2/10 (fast feedback loop appreciated)

The automation pays for itself through reduced defect triage time and faster release cycles. Our release frequency increased from monthly to weekly with higher quality.

The JUnit result mapping is what interests me most. ELM’s test execution model is quite different from standard JUnit XML. Did you build custom transformers, or does the GitHub Actions plugin handle the format conversion automatically? We’re evaluating similar automation and format mapping is our biggest concern.