Our organization recently upgraded to Rally 2024, and we’re debating whether to track quality metrics with custom fields on defect objects or to rely on Rally’s standard quality metrics in portfolio dashboards. We have 8 teams across 4 product lines, and each team currently has its own custom field configuration for severity scoring, customer impact ratings, and technical debt classification.
The portfolio dashboard shows metrics based on standard fields like Priority and State, but these don’t align with how our teams actually measure quality. For example, our mobile team uses a custom “Performance Impact” field that’s critical to their release decisions, but it doesn’t feed into the standard quality dashboard calculations.
I’m curious about others’ experiences with WSAPI metric queries: can custom fields provide the same level of cross-team standardization and reporting accuracy as the built-in quality metrics? What are the tradeoffs in dashboard performance and maintenance overhead?
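For reference, here’s roughly what a WSAPI defect query against a custom field looks like — a minimal sketch, assuming a hypothetical `c_PerformanceImpact` field (Rally exposes custom fields in WSAPI with a `c_` prefix) and an API key you’d supply yourself:

```python
# Sketch of a Rally WSAPI defect query filtering on a custom field.
# "c_PerformanceImpact" is a hypothetical field name for illustration;
# Rally prefixes custom fields with "c_" in WSAPI responses and queries.
import urllib.parse

WSAPI_BASE = "https://rally1.rallydev.com/slm/webservice/v2.0"

def defect_query_url(field: str, value: str,
                     fetch: str = "FormattedID,Priority,State") -> str:
    """Build a WSAPI URL that filters defects on one field/value pair."""
    params = {
        "query": f'({field} = "{value}")',
        "fetch": fetch,        # attributes to hydrate in the response
        "pagesize": 200,       # results per page (WSAPI default is 20)
    }
    return f"{WSAPI_BASE}/defect?{urllib.parse.urlencode(params)}"

url = defect_query_url("c_PerformanceImpact", "High")
# Then issue it with your HTTP client of choice, passing the API key
# in the ZSESSIONID header, e.g.:
#   requests.get(url, headers={"ZSESSIONID": api_key})
```

The same pattern works for any artifact type (`defect`, `hierarchicalrequirement`, etc.); only the endpoint segment changes.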
The dashboard inaccuracy you mentioned is a real concern with custom fields. We had a situation where two teams used identically named custom fields (“Customer Impact”) but with completely different value sets and scoring logic. The portfolio dashboard aggregated them together, giving executives totally misleading quality trends. It took weeks to untangle. Rally’s standard quality metrics have well-defined semantics that prevent this kind of confusion. If your teams need different severity models, consider using Rally’s built-in Severity field with team-specific mappings in your workflow automation rather than creating separate custom fields.
We went through this exact evaluation last year. Custom fields give you flexibility but create serious problems for cross-team standardization. Each team starts adding their own fields, and within 6 months you have 50+ custom fields with overlapping purposes. The portfolio dashboard becomes meaningless because there’s no consistent way to roll up metrics. We ended up mapping our custom severity scores back to Rally’s standard Priority field using workflow rules, which gave us both team-specific granularity and executive-level consistency.
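The mapping described above can be sketched as a simple lookup — team names, score scales, and the sync-script framing here are all hypothetical, assuming each team’s custom severity score gets normalized to Rally’s standard defect Priority values:

```python
# Hypothetical mapping from team-specific severity scores to Rally's
# standard defect Priority values, as a workflow rule or sync script
# might apply it. Team names and score scales are made up.
TEAM_SEVERITY_TO_PRIORITY = {
    "mobile": {1: "Resolve Immediately", 2: "High Attention",
               3: "Normal", 4: "Low"},
    "web":    {"S1": "Resolve Immediately", "S2": "High Attention",
               "S3": "Normal"},
}

def to_standard_priority(team: str, severity) -> str:
    """Translate a team-local severity score to the shared Priority value."""
    try:
        return TEAM_SEVERITY_TO_PRIORITY[team][severity]
    except KeyError:
        # Unmapped teams/scores fall back to an empty Priority so they
        # surface as gaps in the rollup instead of silently skewing it.
        return "None"
```

Keeping the table in one place is the point: each team edits only its own row, and the portfolio dashboard reads nothing but the standard Priority field.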
I’d push back on the idea that standard fields can’t handle team-specific needs. Rally’s quality metrics are designed to be extended through calculated fields and custom queries. For your mobile team’s Performance Impact tracking, you could create a calculated field that combines standard Priority with custom attributes, then expose it through the portfolio dashboard using WSAPI filtered queries. This gives you both standardization and customization without fragmenting your data model. The key is having a governance model for when custom fields are actually necessary versus when you can adapt standard fields.
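A filtered query like the one suggested above combines standard and custom fields in one WSAPI expression. One wrinkle: WSAPI requires binary nesting, where each AND/OR joins exactly two parenthesized clauses. A small helper handles the folding (`c_PerformanceImpact` is again a hypothetical field):

```python
# Sketch of a compound WSAPI filter mixing a standard field (Priority)
# with a hypothetical custom field (c_PerformanceImpact). WSAPI's query
# grammar is binary: each AND/OR must join exactly two parenthesized
# clauses, so multiple conditions need pairwise nesting.
def and_clauses(*clauses: str) -> str:
    """Fold clauses into WSAPI's pairwise-nested AND syntax."""
    result = clauses[0]
    for clause in clauses[1:]:
        result = f"({result} AND {clause})"
    return result

query = and_clauses(
    '(Priority = "High Attention")',
    '(c_PerformanceImpact = "High")',
    '(State < "Closed")',
)
```

Passing the three clauses above yields `(((Priority = "High Attention") AND (c_PerformanceImpact = "High")) AND (State < "Closed"))`, which you can drop into the `query` parameter of a dashboard app or custom report.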