After seeing this discussion and reflecting on implementations across multiple organizations, here’s what I’ve observed works best:
Automated Metric Collection Configuration:
Set up Rally dashboards to automatically track quantitative quality indicators:
- Defect density per sprint/release
- Test execution pass rate trends
- Defect escape rate from each testing phase
- Mean time to detect/resolve defects
- Code coverage from CI/CD integration
- Regression test stability metrics
These metrics should update automatically through Rally’s integration with your test automation framework and CI/CD pipeline. Configure alerts when metrics cross thresholds (e.g., defect density >15 per sprint, pass rate <85%).
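As a minimal sketch of the alerting step, the threshold check can be expressed as a small rule table evaluated against each sprint's metrics. The metric names and threshold values here mirror the examples above; the actual Rally query and notification hookup are omitted and replaced with sample data.

```python
# Hypothetical threshold rules matching the examples above
# (defect density > 15 per sprint, pass rate < 85%).
THRESHOLDS = {
    "defect_density": lambda v: v > 15,   # defects logged this sprint
    "pass_rate": lambda v: v < 0.85,      # fraction of tests passing
}

def breached_metrics(metrics: dict) -> list:
    """Return the names of metrics that crossed their alert threshold."""
    return [name for name, crossed in THRESHOLDS.items()
            if name in metrics and crossed(metrics[name])]

sprint_metrics = {"defect_density": 18, "pass_rate": 0.91}
print(breached_metrics(sprint_metrics))  # -> ['defect_density']
```

In practice the `sprint_metrics` dict would be populated from your CI/CD pipeline or Rally's Web Services API rather than hard-coded.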
Manual Test Result Integration:
Maintain manual test documentation in Rally for:
- Exploratory testing sessions (time-boxed)
- Complex integration scenarios requiring human judgment
- Usability and user experience validation
- Risk-based testing of critical business flows
- Edge cases discovered during development
Structure manual test cases with consistent fields: Risk Level, Test Type, Expected Result, Actual Result, Defects Found. This consistency is what lets dashboards aggregate manual results alongside automated metrics.
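The consistent field structure above can be sketched as a simple record type plus an aggregation step, so manual results roll up the same way automated ones do. The field names follow the list above; the mapping to Rally custom fields is an assumption.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class ManualTestCase:
    # Fields mirror the consistent structure described above.
    risk_level: str       # "High" / "Medium" / "Low"
    test_type: str        # e.g. "Exploratory", "Usability"
    expected_result: str
    actual_result: str
    defects_found: int

def summarize(cases: list) -> dict:
    """Aggregate manual results: overall pass rate and defects by risk level."""
    passed = sum(c.expected_result == c.actual_result for c in cases)
    defects = Counter()
    for c in cases:
        defects[c.risk_level] += c.defects_found
    return {"pass_rate": passed / len(cases),
            "defects_by_risk": dict(defects)}

cases = [
    ManualTestCase("High", "Exploratory", "Order saved", "Order saved", 0),
    ManualTestCase("High", "Usability", "Form validates", "Crash on submit", 2),
]
print(summarize(cases))  # -> {'pass_rate': 0.5, 'defects_by_risk': {'High': 2}}
```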
Dashboard Aggregation Strategy:
Create role-specific quality dashboards:
Team Dashboard - Real-time automated metrics (defect trends, test pass rates, velocity) with links to supporting manual test details. Updated continuously.
Leadership Dashboard - Aggregated quality health across all teams, combining automated KPIs with manual risk assessments. Updated weekly.
Product Dashboard - Feature-level quality view showing both automated test coverage and manual validation status. Updated per sprint.
Aggregation should present automated metrics as the primary view, with drill-down into the supporting manual test results for context.
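The roll-up from team dashboards to the leadership view can be sketched as averaging each automated KPI across teams. Team names and metric keys here are illustrative, not Rally field names.

```python
def leadership_rollup(team_metrics: dict) -> dict:
    """Average each automated KPI across teams for the weekly leadership view."""
    keys = {k for m in team_metrics.values() for k in m}
    n = len(team_metrics)
    return {k: round(sum(m.get(k, 0) for m in team_metrics.values()) / n, 2)
            for k in sorted(keys)}

# Hypothetical per-team metrics fed from the team dashboards.
teams = {
    "checkout": {"pass_rate": 0.92, "defect_density": 9},
    "payments": {"pass_rate": 0.84, "defect_density": 17},
}
print(leadership_rollup(teams))
# -> {'defect_density': 13.0, 'pass_rate': 0.88}
```

A simple average is the easiest starting point; weighting by team size or story throughput is a reasonable refinement once the basic roll-up is trusted.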
Risk-Based Test Selection:
This is where manual and automated approaches must work together. Use Rally’s custom fields to tag features/stories with risk levels (High/Medium/Low).
High-risk items get both automated regression coverage AND manual exploratory testing. Medium-risk items rely primarily on automated tests with periodic manual validation. Low-risk items use automated tests only.
Configure your Rally dashboards to show test coverage by risk level, highlighting gaps where high-risk features lack sufficient manual validation despite passing automated tests.
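The gap check described above (high-risk items passing automated tests but lacking manual validation) reduces to a simple filter. The story fields below are illustrative stand-ins for Rally custom fields, not actual API attribute names.

```python
def coverage_gaps(stories: list) -> list:
    """Return IDs of high-risk stories that pass automated tests
    but have no manual validation on record."""
    return [s["id"] for s in stories
            if s["risk"] == "High"
            and s["automated_pass"]
            and not s["manually_validated"]]

# Hypothetical story records tagged with the High/Medium/Low risk field.
stories = [
    {"id": "US101", "risk": "High", "automated_pass": True,  "manually_validated": False},
    {"id": "US102", "risk": "High", "automated_pass": True,  "manually_validated": True},
    {"id": "US103", "risk": "Low",  "automated_pass": True,  "manually_validated": False},
]
print(coverage_gaps(stories))  # -> ['US101']
```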
Practical Balance Recommendation:
For mature agile teams:
- 80% of ongoing quality assessment through automated metrics
- 20% through manual testing documentation and analysis
For new products/teams:
- 60% automated metrics (still essential for trend visibility)
- 40% manual testing (higher exploration and learning phase)
The balance should shift over time as product maturity increases and automated test coverage expands. Review quarterly and adjust based on defect escape trends and team feedback.
Most importantly: Don’t let perfect be the enemy of good. Start with basic automated metric collection in Rally, then incrementally add manual test integration as you identify gaps in quality visibility. The goal is actionable insights, not comprehensive documentation.