Automated traceability matrix cut defect escape rate by 40% using a custom AppSDK app

Sharing our implementation of an automated traceability matrix that reduced defect escapes by 40% over six months. We built a custom Rally AppSDK application that continuously monitors requirement-to-test coverage and flags gaps before releases.

The problem was that manual traceability reviews were inconsistent and happened too late in the sprint cycle. By the time we identified untested requirements, we were already in UAT. The AppSDK app runs coverage analysis daily using Lookback API snapshots and sends alerts when coverage drops below our 85% threshold. This gives teams 3-5 days to write missing tests before sprint end.

The app also generates automated audit reports showing coverage trends over time, which our compliance team uses for release approvals. Implementation took about 3 weeks with one developer. Happy to share technical details if anyone’s interested in building something similar.

How do you integrate the coverage alerts with your CI/CD pipeline? We’d want to block releases automatically if coverage drops below threshold rather than just sending alerts. Also interested in whether the app tracks test execution results or just test case existence.

Here’s the complete technical implementation of our automated traceability matrix AppSDK application:

AppSDK Traceability App Architecture: Built as a Rally AppSDK 2.1 custom app using JavaScript and the Rally SDK libraries. The app runs in a custom dashboard page and executes coverage analysis on a configurable schedule (default: daily at 6 AM). Core components:

// Coverage calculation engine: load requirement snapshots for the iteration.
// Note: Lookback matches on ObjectIDs, not refs, so we pass the iteration OID.
Ext.create('Rally.data.lookback.SnapshotStore', {
  find: { _TypeHierarchy: 'Requirement', Iteration: iterationOid },
  fetch: ['ObjectID', 'TestCases', '_ValidFrom', '_ValidTo'],
  autoLoad: true,
  listeners: {
    load: function(store, snapshots) {
      calculateCoverage(snapshots);
    }
  }
});

The app queries Requirements using Lookback API to get point-in-time snapshots, then cross-references with TestCase relationships to calculate coverage percentages. We chose Lookback over WSAPI because it provides historical trend data and handles large datasets more efficiently with server-side aggregation.
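The cross-referencing step boils down to a simple calculation over the snapshot results. Here's a minimal sketch, assuming each snapshot's TestCases field hydrates to an array of linked test-case ObjectIDs (the exact return shape and rounding are my assumptions, not the app's actual code):

```javascript
// Sketch of the coverage calculation over Lookback requirement snapshots.
// Each snapshot carries an ObjectID and a TestCases array of ObjectIDs
// (empty or absent when no test cases are linked).
function calculateCoverage(snapshots) {
  var total = snapshots.length;
  var covered = snapshots.filter(function (s) {
    return (s.TestCases || []).length > 0;
  }).length;
  return {
    total: total,
    covered: covered,
    gaps: total - covered,
    // Percentage rounded to one decimal; 100% when there is nothing to cover
    percent: total === 0 ? 100 : Math.round((covered / total) * 1000) / 10
  };
}
```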

Lookback Snapshots for Performance: To keep performance acceptable with large datasets (we have 2000+ requirements per release), we implemented three optimizations:

  1. Batch Processing: Query requirements in batches of 200 by iteration, process each batch separately, then aggregate results. This keeps memory usage manageable and avoids timeout errors.

  2. Incremental Updates: Store previous coverage calculations in Rally custom fields on the Requirement object. Only recalculate coverage for requirements that changed since last run (detected via _ValidFrom timestamp). This reduces processing time by 70% for stable releases.

  3. Cached Lookback Results: Cache Lookback snapshot results for 4 hours using Rally’s client-side cache. Multiple users viewing the dashboard share the same cached data, reducing API load.
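The batch-and-aggregate pattern from optimization 1 can be sketched as two small helpers (a simplified illustration; the per-batch query itself would be a SnapshotStore load as shown earlier):

```javascript
// Sketch of batch processing: split requirement ObjectIDs into batches of
// 200, run a per-batch coverage query, then aggregate the partial results.
var BATCH_SIZE = 200;

function toBatches(items, size) {
  var batches = [];
  for (var i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Combine per-batch partial results ({total, covered}) into one summary.
function aggregate(results) {
  var total = 0, covered = 0;
  results.forEach(function (r) {
    total += r.total;
    covered += r.covered;
  });
  return {
    total: total,
    covered: covered,
    percent: total === 0 ? 100 : Math.round((covered / total) * 1000) / 10
  };
}
```

Processing batches sequentially rather than in parallel is what keeps peak memory bounded to one batch's snapshots at a time.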

Coverage Thresholds Configuration: Thresholds are configurable at three levels:

  • Workspace Default: Set in app settings panel (default 85%)
  • Project Override: Custom field on Project object allows project-specific thresholds
  • Requirement Type: Different thresholds for User Stories (85%), Defects (70%), and Technical Tasks (50%)

The app reads these settings on startup and applies the appropriate threshold when evaluating coverage. Requirements tagged with “Coverage_Exempt” are excluded from calculations; we use this for documentation changes, configuration updates, and third-party library upgrades that don’t require test cases.
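The three-level lookup plus the exemption check can be sketched as below. The precedence order (type-specific beats project override beats workspace default) and the field names are my assumptions about how the levels combine, not confirmed details:

```javascript
// Sketch of threshold resolution across the three configuration levels.
// Type-specific thresholds winning over a project override is an assumed
// precedence; the numbers come from the configuration described above.
var TYPE_THRESHOLDS = { UserStory: 85, Defect: 70, TechnicalTask: 50 };
var WORKSPACE_DEFAULT = 85;

function resolveThreshold(requirementType, projectOverride) {
  if (TYPE_THRESHOLDS.hasOwnProperty(requirementType)) {
    return TYPE_THRESHOLDS[requirementType];
  }
  if (typeof projectOverride === 'number') {
    return projectOverride;
  }
  return WORKSPACE_DEFAULT;
}

// Requirements tagged Coverage_Exempt are dropped before any calculation.
function isExempt(requirement) {
  return (requirement.Tags || []).indexOf('Coverage_Exempt') !== -1;
}
```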

Audit Reporting Output: The app generates three types of audit reports:

  1. Daily Coverage Dashboard: Real-time view showing current coverage percentage by project, iteration, and requirement type. Includes drill-down to see specific uncovered requirements.

  2. Trend Analysis Report: Uses Lookback snapshots to show coverage trends over the past 90 days. Exports to CSV with columns: Date, Project, Coverage%, Requirements Count, Test Cases Count, Gaps Count. This feeds our compliance dashboards and executive reporting.

  3. Release Readiness Report: Generated on-demand before releases, shows coverage status for all requirements in the release scope. Includes risk assessment (Red: <70%, Yellow: 70-85%, Green: >85%) and lists specific gaps. Exports to PDF for compliance documentation.
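The risk bands from the release readiness report map directly to a classification function. A sketch, following the cutoffs stated above (with 85% itself falling in Yellow, since Green is defined as strictly above 85%):

```javascript
// Sketch of the release-readiness risk bands:
// Red below 70%, Yellow from 70% through 85%, Green above 85%.
function riskBand(coveragePercent) {
  if (coveragePercent < 70) {
    return 'Red';
  }
  if (coveragePercent <= 85) {
    return 'Yellow';
  }
  return 'Green';
}
```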

All reports use Rally’s REST API to export data in JSON format, which we transform to CSV or PDF using server-side scripts. Historical trend analysis queries Lookback API with date range filters to show coverage improvements across releases.
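For the CSV leg of that transform, a minimal sketch using the trend report's column order (the quoting rule follows RFC 4180; the actual server-side scripts may differ):

```javascript
// Sketch of the JSON-to-CSV transform for the trend report, using the
// column order from the report description.
var COLUMNS = ['Date', 'Project', 'Coverage%', 'Requirements Count',
               'Test Cases Count', 'Gaps Count'];

function toCsv(rows) {
  var escape = function (v) {
    var s = String(v);
    // Quote fields containing commas, quotes, or newlines, doubling quotes.
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  var lines = [COLUMNS.map(escape).join(',')];
  rows.forEach(function (row) {
    lines.push(COLUMNS.map(function (c) { return escape(row[c]); }).join(','));
  });
  return lines.join('\n');
}
```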

CI/CD Integration: We integrated coverage alerts with our Jenkins pipeline using Rally’s WSAPI webhooks. When coverage drops below threshold, the app creates a Rally Defect tagged “Coverage_Gap_Critical” and triggers a webhook to Jenkins. Our pipeline has a gate that queries Rally for open Coverage_Gap_Critical defects and blocks deployment if any exist. This enforces coverage requirements automatically without manual intervention.
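On the Jenkins side, the gate reduces to one WSAPI query plus a yes/no decision. A sketch of both pieces, assuming WSAPI v2.0 query grammar and a simple "not Closed" definition of an open defect (the actual gate script and state filter may differ):

```javascript
// Sketch of the Jenkins-side gate: build a WSAPI query for open defects
// tagged Coverage_Gap_Critical, then decide whether to block deployment
// from the query result. The tag name comes from the app described above.
function gateQueryUrl(server, projectRef) {
  var query = '((Tags.Name = "Coverage_Gap_Critical") AND (State != "Closed"))';
  return server + '/slm/webservice/v2.0/defect' +
    '?query=' + encodeURIComponent(query) +
    '&project=' + encodeURIComponent(projectRef) +
    '&fetch=FormattedID,State&pagesize=1';
}

// WSAPI wraps results in a QueryResult envelope with TotalResultCount,
// so one small page is enough to know whether any gap defects exist.
function shouldBlockDeployment(queryResult) {
  return queryResult.QueryResult.TotalResultCount > 0;
}
```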

The app also tracks test execution results, not just test case existence. It queries TestCaseResult objects to verify tests have been executed and passed within the current iteration. A requirement is only considered “covered” if it has associated test cases AND those tests have passing results in the current sprint.
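That stricter predicate can be sketched as follows. Requiring *every* linked test case to have a passing result in the current iteration is my reading of the description, and the record shape ({testCase, iteration, verdict}) is illustrative rather than the actual TestCaseResult schema:

```javascript
// Sketch of the execution-aware coverage predicate: a requirement counts
// as covered only if it has linked test cases AND each of them has a
// passing TestCaseResult in the current iteration.
function isCovered(requirement, results, currentIteration) {
  var testCases = requirement.TestCases || [];
  if (testCases.length === 0) {
    return false;
  }
  return testCases.every(function (tc) {
    return results.some(function (r) {
      return r.testCase === tc &&
             r.iteration === currentIteration &&
             r.verdict === 'Pass';
    });
  });
}
```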

Implementation Summary: Three-week timeline with one developer:

  • Week 1: AppSDK app skeleton, Lookback API integration, basic coverage calculation
  • Week 2: Threshold configuration, alert system, dashboard UI
  • Week 3: Audit reporting, CI/CD integration, performance optimization

Key technical decisions: Using Lookback API over WSAPI for historical data access, implementing incremental updates to handle scale, and integrating with CI/CD via webhooks rather than polling. These choices enabled the 40% defect escape reduction by catching coverage gaps 3-5 days earlier in the sprint cycle, giving teams time to write missing tests before release.

The automated audit reporting piece is crucial for us. We spend hours manually compiling traceability data for compliance reviews. What format does your app export the audit reports in? Can it generate historical trend analysis showing coverage improvements over multiple releases?