Quality metrics dashboard using DRE and escape rates cut cycle time

We implemented a comprehensive quality metrics dashboard in Rally that significantly reduced our release cycle time by improving visibility into defect trends. The dashboard tracks Defect Removal Efficiency (DRE) across all teams, monitors escape rates from each testing phase, and provides real-time analytics on quality trends.

Our approach combines Rally’s native reporting capabilities with custom API queries to pull data on defects found in each phase versus those escaping to production. We calculate DRE as (defects found in testing / total defects) and track escape rates by comparing production defects to pre-release totals. The dashboard refreshes every two hours during business hours via scheduled API calls:

var queryConfig = {
  type: 'Defect',                                            // WSAPI artifact type
  fetch: ['State', 'Environment', 'CreatedDate', 'Release'], // fields to return
  filters: [
    // startDate is a date string, e.g. '2024-01-01'
    {property: 'CreatedDate', operator: '>', value: startDate}
  ]
};
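For anyone unfamiliar with how a config like this turns into an actual request: here’s a minimal sketch of building the WSAPI query string by hand. The `buildQuery` helper is ours for illustration (Rally’s SDK stores normally do this for you); note that WSAPI expects multiple clauses nested pairwise.

```javascript
// Sketch: convert filter objects into a WSAPI query string.
// Single clause: (CreatedDate > "2024-01-01")
// Multiple clauses are nested pairwise: ((A) AND (B))
function buildQuery(filters) {
  return filters
    .map(f => `(${f.property} ${f.operator} "${f.value}")`)
    .reduce((acc, clause) => `(${acc} AND ${clause})`);
}

// e.g. GET /slm/webservice/v2.0/defect?query=(CreatedDate > "2024-01-01")
const query = buildQuery([
  {property: 'CreatedDate', operator: '>', value: '2024-01-01'}
]);
```

The pairwise nesting in the `reduce` matters because WSAPI rejects flat `A AND B AND C` chains.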

This enables side-by-side team comparisons and helps identify process gaps. Has anyone else built similar quality dashboards with DRE tracking?

Great questions from everyone. Let me provide comprehensive implementation details that address all the focus areas.

DRE Metrics Implementation: We calculate DRE at multiple levels using Rally’s WSAPI. Our formula is: DRE = (Defects Found Pre-Production / Total Defects) × 100. We query defects by Environment field and aggregate by Release and Sprint. The key is consistent defect categorization - we enforce Environment selection through workflow validation rules that prevent defect creation without this field.
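The DRE calculation itself is small; here’s a sketch assuming defects come back from WSAPI as objects with the Environment field populated (field values shown are the ones from our workflow, but treat them as illustrative):

```javascript
// Sketch of the formula above:
// DRE = (defects found pre-production / total defects) × 100.
// Anything not tagged Environment = 'Production' counts as pre-production.
function computeDre(defects) {
  const total = defects.length;
  if (total === 0) return null; // no data yet for this release/sprint
  const preProd = defects.filter(d => d.Environment !== 'Production').length;
  return (preProd / total) * 100;
}

const sample = [
  {Environment: 'Unit Test'},
  {Environment: 'System Test'},
  {Environment: 'UAT'},
  {Environment: 'Production'}
];
// 3 of 4 defects caught pre-production → DRE = 75
```

Returning `null` for an empty set (rather than 0 or 100) keeps brand-new sprints from skewing the rolling averages.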

Escape Rate Tracking: The escape rate is the complement of DRE: the share of defects that reach production. We categorize escapes by root cause using a custom field (Requirements Gap, Test Coverage Gap, Environment Difference, etc.). This gives us actionable data beyond simple percentages. For defects discovered post-release that trace to earlier phases, we track them as “latent escapes” in a separate metric to distinguish them from immediate escapes.
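A sketch of the escape summary, assuming the root-cause custom field surfaces in the API as `RootCause` (that field name is our assumption; yours will match whatever you named the custom field):

```javascript
// Sketch: escape rate plus a root-cause breakdown of production escapes.
function summarizeEscapes(defects) {
  const escapes = defects.filter(d => d.Environment === 'Production');
  const byCause = {};
  for (const d of escapes) {
    const cause = d.RootCause || 'Uncategorized'; // surfaces tagging gaps
    byCause[cause] = (byCause[cause] || 0) + 1;
  }
  return {
    escapeRate: defects.length ? (escapes.length / defects.length) * 100 : 0,
    byCause
  };
}

const summary = summarizeEscapes([
  {Environment: 'UAT'},
  {Environment: 'System Test'},
  {Environment: 'Production', RootCause: 'Test Coverage Gap'},
  {Environment: 'Production', RootCause: 'Requirements Gap'}
]);
// escapeRate 50, one escape per root cause
```

Bucketing untagged escapes as “Uncategorized” also gives you a cheap audit of how well the mandatory-field rules are holding up.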

Real-Time Analytics: We use Rally’s Custom HTML apps framework to build the dashboard, which queries the API every 2 hours during business hours. The app calculates trends using 30-day rolling averages and highlights teams that deviate more than 10% from baseline DRE. We also implemented alerts for escape rate spikes (>15% increase week-over-week) that trigger automated notifications to quality leads.
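The week-over-week spike check is the simplest piece to show; a sketch with the 15% threshold from above (the zero-baseline handling is our choice, not something Rally provides):

```javascript
// Sketch: flag an escape-rate spike when this week's rate is more than
// thresholdPct percent above last week's.
function escapeSpike(prevRate, currRate, thresholdPct = 15) {
  if (prevRate === 0) return currRate > 0; // any escape after a clean week
  const increasePct = ((currRate - prevRate) / prevRate) * 100;
  return increasePct > thresholdPct;
}

escapeSpike(10, 12); // 20% increase → true, would notify quality leads
escapeSpike(10, 11); // 10% increase → false
```

The relative (not absolute) comparison is what keeps low-volume teams from drowning in alerts after a single extra escape.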

Team Comparisons: The dashboard displays normalized DRE scores across teams, accounting for project complexity using a weighting factor based on story points and defect severity distribution. We found raw DRE comparisons unfair when teams worked on different complexity levels. Side-by-side views show each team’s trend over the last 6 sprints, with color-coded indicators (green >85% DRE, yellow 70-85%, red <70%).
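The color-coded indicator logic maps directly from the thresholds above (the complexity weighting itself is organization-specific, so it isn’t shown here):

```javascript
// Sketch of the dashboard indicator: green >85% DRE, yellow 70–85%, red <70%.
function dreColor(dre) {
  if (dre > 85) return 'green';
  if (dre >= 70) return 'yellow';
  return 'red';
}
```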

For visualization, we stayed with Rally’s Custom HTML apps to avoid integration complexity, but we export data weekly to Tableau for executive reporting. The combination of real-time Rally dashboards for teams and polished Tableau reports for leadership works well.

Implementation took about 6 weeks with a team of 3 (developer, quality analyst, and agile coach). The biggest challenge was achieving consistent defect tagging across 12 teams, which we solved through mandatory training and workflow enforcement. Since launch, our average release cycle time dropped 23% due to earlier defect detection, and production escapes decreased 41%.

Happy to share our Custom HTML app code and WSAPI query examples if anyone wants to replicate this approach.

This is exactly what we’ve been looking to implement. How are you categorizing defects by testing phase in Rally? We struggle with consistent tagging across teams, which makes DRE calculation unreliable. Do you use custom fields or rely on the Environment field?

We use a combination approach. The Environment field (Unit Test, System Test, UAT, Production) is mandatory for all defects, enforced through workflow rules. Additionally, we created a custom field called Detection Phase that maps to our quality gates. This dual-tagging ensures we can calculate DRE at different levels and compare teams using standardized categories. The key was getting buy-in from all teams during initial setup.

We settled on 2-hour refresh intervals during business hours and 4-hour intervals overnight to balance freshness with performance. For DRE tracking, we do both cumulative release-level metrics and sprint-level trends. Sprint-level DRE helps teams see immediate impact of process changes, while release-level shows overall quality trajectory. The sprint view proved more actionable for retrospectives and drove faster improvement.

What visualization tools are you using for the dashboard? Rally’s built-in charts or something external? We’re considering integrating with Tableau or Power BI for more advanced analytics and team comparisons. Curious if you’ve explored that route.
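For anyone wiring up the Environment-plus-Detection-Phase dual tagging discussed above, the mapping can be sketched as a lookup applied when defects are created or synced. The gate names below are purely illustrative; the real ones come from your quality-gate definitions:

```javascript
// Sketch: map the mandatory Environment field onto Detection Phase values.
// Gate names are hypothetical placeholders.
const detectionPhaseFor = {
  'Unit Test': 'Gate 1 - Build',
  'System Test': 'Gate 2 - Integration',
  'UAT': 'Gate 3 - Acceptance',
  'Production': 'Post-Release'
};

function tagDefect(defect) {
  if (!defect.Environment) {
    // Mirrors the workflow validation rule: no defect without an Environment.
    throw new Error('Environment is mandatory');
  }
  return {...defect, DetectionPhase: detectionPhaseFor[defect.Environment]};
}
```

Failing loudly on a missing Environment mirrors the workflow enforcement in Rally, so untagged defects can never silently skew the DRE numbers.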