Here’s the complete technical implementation of our automated traceability matrix AppSDK application:
AppSDK Traceability App Architecture: Built as a Rally AppSDK 2.1 custom app using JavaScript and the Rally SDK libraries. The app runs in a custom dashboard page and executes coverage analysis on a configurable schedule (default: daily at 6 AM). Core components:
// Coverage calculation engine: query point-in-time Requirement snapshots
// for the iteration, then hand the records to calculateCoverage.
// Note: Lookback stores ObjectIDs (not refs), and SnapshotStore takes
// `fetch` rather than `fields`; Ext store loads report via callback.
Ext.create('Rally.data.lookback.SnapshotStore', {
    find: { _TypeHierarchy: 'Requirement', Iteration: iterationOid },
    fetch: ['ObjectID', 'TestCases', '_ValidFrom', '_ValidTo']
}).load({
    callback: function(records, operation, success) {
        if (success) { calculateCoverage(records); }
    }
});
The app queries Requirements using Lookback API to get point-in-time snapshots, then cross-references with TestCase relationships to calculate coverage percentages. We chose Lookback over WSAPI because it provides historical trend data and handles large datasets more efficiently with server-side aggregation.
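The cross-referencing step can be sketched as a pure function over the snapshot records. This is a minimal illustration, not the app's actual code: it assumes each snapshot record exposes a `TestCases` array of linked ObjectIDs, and treats a requirement as covered when that array is non-empty.

```javascript
// Illustrative sketch: compute coverage from Lookback snapshot records.
// Assumes each snapshot has a TestCases array of linked TestCase ObjectIDs.
function calculateCoverage(snapshots) {
    var total = snapshots.length;
    var covered = snapshots.filter(function(snap) {
        return Array.isArray(snap.TestCases) && snap.TestCases.length > 0;
    }).length;
    return {
        total: total,
        covered: covered,
        // one decimal place; an empty scope counts as fully covered
        percentage: total === 0 ? 100 : Math.round((covered / total) * 1000) / 10
    };
}
```

For example, three requirements of which two have linked test cases would report 66.7% coverage.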
Lookback Snapshots for Performance: To keep the app responsive with large datasets (we have 2000+ requirements per release), we implemented three optimizations:
- Batch Processing: Query requirements in batches of 200 by iteration, process each batch separately, then aggregate results. This keeps memory usage manageable and avoids timeout errors.
- Incremental Updates: Store previous coverage calculations in Rally custom fields on the Requirement object. Only recalculate coverage for requirements that changed since the last run (detected via the _ValidFrom timestamp). This reduces processing time by 70% for stable releases.
- Cached Lookback Results: Cache Lookback snapshot results for 4 hours using Rally’s client-side cache. Multiple users viewing the dashboard share the same cached data, reducing API load.
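The batching and incremental-update logic boils down to two small helpers. This is a sketch with illustrative names (`toBatches`, `changedSince` are not from the app); it assumes requirement records carry an ISO-8601 `_ValidFrom` string, which compares correctly as plain text.

```javascript
// Split requirements into batches of 200 so each Lookback query stays small.
var BATCH_SIZE = 200;

function toBatches(requirements, size) {
    var batches = [];
    for (var i = 0; i < requirements.length; i += size) {
        batches.push(requirements.slice(i, i + size));
    }
    return batches;
}

// Incremental pass: only requirements whose latest snapshot is newer than
// the previous run need their coverage recalculated.
function changedSince(requirements, lastRunIso) {
    return requirements.filter(function(req) {
        return req._ValidFrom > lastRunIso; // ISO-8601 strings sort lexically
    });
}
```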
Coverage Thresholds Configuration: Thresholds are configurable at three levels:
- Workspace Default: Set in app settings panel (default 85%)
- Project Override: Custom field on Project object allows project-specific thresholds
- Requirement Type: Different thresholds for User Stories (85%), Defects (70%), and Technical Tasks (50%)
The app reads these settings on startup and applies the appropriate threshold when evaluating coverage. Requirements tagged with “Coverage_Exempt” are excluded from calculations; we use this for documentation changes, configuration updates, and third-party library upgrades that don’t require test cases.
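The three-level lookup and the exemption check can be sketched as follows. The precedence shown (type-specific threshold, then project override, then workspace default) is one plausible reading of the configuration described above, and the record shapes are assumptions.

```javascript
// Illustrative threshold resolution; values mirror the configuration above.
var TYPE_THRESHOLDS = { UserStory: 85, Defect: 70, TechnicalTask: 50 };
var WORKSPACE_DEFAULT = 85;

// Assumed precedence: requirement-type threshold > project override > workspace default.
function thresholdFor(reqType, projectOverride) {
    if (TYPE_THRESHOLDS[reqType] != null) { return TYPE_THRESHOLDS[reqType]; }
    if (projectOverride != null) { return projectOverride; }
    return WORKSPACE_DEFAULT;
}

// Requirements tagged Coverage_Exempt are skipped entirely.
function isExempt(requirement) {
    return (requirement.Tags || []).indexOf('Coverage_Exempt') !== -1;
}
```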
Audit Reporting Output: The app generates three types of audit reports:
- Daily Coverage Dashboard: Real-time view showing current coverage percentage by project, iteration, and requirement type. Includes drill-down to see specific uncovered requirements.
- Trend Analysis Report: Uses Lookback snapshots to show coverage trends over the past 90 days. Exports to CSV with columns: Date, Project, Coverage%, Requirements Count, Test Cases Count, Gaps Count. This feeds our compliance dashboards and executive reporting.
- Release Readiness Report: Generated on-demand before releases, shows coverage status for all requirements in the release scope. Includes risk assessment (Red: <70%, Yellow: 70-85%, Green: >85%) and lists specific gaps. Exports to PDF for compliance documentation.
All reports use Rally’s REST API to export data in JSON format, which we transform to CSV or PDF using server-side scripts. Historical trend analysis queries Lookback API with date range filters to show coverage improvements across releases.
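The JSON-to-CSV transform for the trend report is straightforward to sketch. The row field names here are invented for illustration; only the column layout comes from the report description above.

```javascript
// Flatten JSON trend rows into the CSV layout used by the Trend Analysis
// Report. Field names on the row objects are illustrative.
var CSV_HEADER = 'Date,Project,Coverage%,Requirements Count,Test Cases Count,Gaps Count';

function toCsv(rows) {
    var lines = rows.map(function(r) {
        return [r.date, r.project, r.coveragePct,
                r.requirementCount, r.testCaseCount, r.gapCount].join(',');
    });
    return [CSV_HEADER].concat(lines).join('\n');
}
```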
CI/CD Integration: We integrated coverage alerts with our Jenkins pipeline using Rally’s WSAPI webhooks. When coverage drops below threshold, the app creates a Rally Defect tagged “Coverage_Gap_Critical” and triggers a webhook to Jenkins. Our pipeline has a gate that queries Rally for open Coverage_Gap_Critical defects and blocks deployment if any exist. This enforces coverage requirements automatically without manual intervention.
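The deployment gate's Rally query can be sketched as below. The query syntax follows Rally WSAPI conventions, but the exact endpoint path and field names are assumptions, not confirmed details of our pipeline.

```javascript
// Hedged sketch: build the WSAPI URL the Jenkins gate would hit to find
// open coverage-gap defects. Endpoint path is an assumption.
function coverageGateUrl(baseUrl) {
    var query = '((Tags.Name = "Coverage_Gap_Critical") AND (State != "Closed"))';
    return baseUrl + '/slm/webservice/v2.0/defect?query=' +
           encodeURIComponent(query) + '&fetch=FormattedID,State';
}

// The gate blocks deployment when any matching defect comes back, i.e.
// when the response's QueryResult.TotalResultCount is greater than zero.
```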
The app also tracks test execution results, not just test case existence. It queries TestCaseResult objects to verify tests have been executed and passed within the current iteration. A requirement is only considered “covered” if it has associated test cases AND those tests have passing results in the current sprint.
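The stricter "covered" predicate can be expressed as a small function. This is a sketch with assumed record shapes: a requirement's `TestCases` array, a map from TestCase ObjectID to its results, and per-result `Verdict` and iteration fields.

```javascript
// A requirement counts as covered only if it has test cases AND at least one
// of them has a passing result in the current iteration (shapes illustrative).
function isCovered(requirement, resultsByTestCase, currentIterationOid) {
    var testCases = requirement.TestCases || [];
    if (testCases.length === 0) { return false; }
    return testCases.some(function(tcOid) {
        return (resultsByTestCase[tcOid] || []).some(function(result) {
            return result.Verdict === 'Pass' &&
                   result.IterationOid === currentIterationOid;
        });
    });
}
```

A requirement with linked test cases that only have failing results, or passing results from a prior sprint, is still reported as a gap.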
Implementation Summary: Three-week timeline with one developer:
- Week 1: AppSDK app skeleton, Lookback API integration, basic coverage calculation
- Week 2: Threshold configuration, alert system, dashboard UI
- Week 3: Audit reporting, CI/CD integration, performance optimization
Key technical decisions: Using Lookback API over WSAPI for historical data access, implementing incremental updates to handle scale, and integrating with CI/CD via webhooks rather than polling. These choices enabled the 40% defect escape reduction by catching coverage gaps 3-5 days earlier in the sprint cycle, giving teams time to write missing tests before release.