WSJF vs MoSCoW vs RICE for requirements prioritization in regulated industries

I’m leading a portfolio planning initiative for a healthcare software company where we manage 200+ requirements across 8 product lines. We’re evaluating three prioritization frameworks and I’d like to hear from teams who’ve implemented these in Azure DevOps:

WSJF (Weighted Shortest Job First): SAFe’s approach, where Cost of Delay (the sum of business value, time criticality, and risk reduction) is divided by job size. Seems mathematically rigorous but requires significant estimation effort.

MoSCoW (Must/Should/Could/Won’t): Simple categorical prioritization. Easy to understand but lacks granularity for portfolio-level decisions where we need to rank 50+ “Must Have” items.

RICE (Reach, Impact, Confidence, Effort): Product management framework that scores based on user reach and business impact. Appears more product-focused than compliance-focused.

Our constraints: FDA regulatory requirements mean we need audit trails for prioritization decisions, quarterly release cycles, and we must balance innovation features against compliance updates. How have teams structured custom fields in Azure DevOps to support these frameworks? Which method provides the best traceability for regulatory audits?

Consider how each framework handles dependency chains and technical debt. WSJF naturally accounts for risk reduction, which covers technical debt, but it doesn’t model dependencies well. MoSCoW requires manual dependency management through work item links. RICE ignores dependencies entirely. In regulated industries, you often have compliance requirements that must be sequenced due to validation dependencies; this is where Azure DevOps link types become critical. We use a custom “Depends On” link type combined with WSJF scoring, and our portfolio board shows dependency violations when high-priority items are blocked by low-priority dependencies.
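The violation check described above is straightforward to model outside the board as well. A minimal sketch (the `WorkItem` structure, the `depends_on` field, and the example scores are hypothetical; real data would come from the Azure DevOps work item links):

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    id: int
    title: str
    wsjf: float                                       # precomputed WSJF score
    depends_on: list = field(default_factory=list)    # ids of prerequisite items

def dependency_violations(items):
    """Return (item, prerequisite) pairs where a higher-WSJF item
    is blocked by a lower-WSJF prerequisite."""
    by_id = {i.id: i for i in items}
    violations = []
    for item in items:
        for dep_id in item.depends_on:
            if item.wsjf > by_id[dep_id].wsjf:
                violations.append((item.id, dep_id))
    return violations

backlog = [
    WorkItem(1, "Audit trail export", wsjf=12.0, depends_on=[2]),
    WorkItem(2, "Validation framework", wsjf=4.0),
    WorkItem(3, "UI refresh", wsjf=6.0),
]
print(dependency_violations(backlog))  # [(1, 2)]
```

Running this as a nightly check against exported work item data is one way to surface sequencing problems before sprint planning rather than during it.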

We implemented WSJF across 12 teams in a medical device company. We created custom fields for the Cost of Delay components (business value, time criticality, risk reduction) and job size, plus a calculated field for the WSJF score. The calculated field uses a simple formula that updates automatically when inputs change. For audit trails, every field change is logged in work item history with user and timestamp. The main challenge is getting consistent estimation across teams; we run quarterly calibration sessions where teams compare reference stories.
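The calculated field boils down to the standard SAFe formula. A minimal sketch (field names and the sample values are illustrative, not the actual Azure DevOps field references):

```python
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """WSJF = Cost of Delay / Job Size, with CoD as the sum of the
    three components (typically estimated on a Fibonacci scale)."""
    if job_size <= 0:
        raise ValueError("job size must be positive")
    cost_of_delay = business_value + time_criticality + risk_reduction
    return round(cost_of_delay / job_size, 2)

print(wsjf(8, 5, 3, 2))  # 8.0
```

Rounding to two decimals keeps scores readable on the board; the guard on job size matters because an unestimated (zero) job size would otherwise produce a division error or a misleadingly infinite priority.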

MoSCoW works well for our pharma software products, but we added numeric sub-priorities within each category. So “Must Have” items get scored 1-100, “Should Have” items get 101-200, etc. This gives us the simplicity of categories with the granularity of numeric ranking. In Azure DevOps, we use a custom picklist for MoSCoW category and an integer field for sub-priority. Portfolio queries sort by category first, then sub-priority. The approach scales better than pure WSJF when you have non-technical stakeholders who struggle with abstract scoring models.
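Because the sub-priority bands encode the category (1–100 for Must, 101–200 for Should, and so on), a single ascending sort on the integer field reproduces the full portfolio order. A minimal sketch of that design, with a guard against an item landing outside its category's band (the band boundaries and sample items are illustrative):

```python
# Sub-priority bands encode the MoSCoW category.
BANDS = {"Must Have": (1, 100), "Should Have": (101, 200),
         "Could Have": (201, 300), "Won't Have": (301, 400)}

def validate(item):
    """Reject a sub-priority that falls outside its category's band."""
    lo, hi = BANDS[item["moscow"]]
    if not lo <= item["sub_priority"] <= hi:
        raise ValueError(f"item {item['id']}: sub-priority outside band")
    return item

backlog = [
    {"id": 11, "moscow": "Should Have", "sub_priority": 150},
    {"id": 12, "moscow": "Must Have", "sub_priority": 40},
    {"id": 13, "moscow": "Must Have", "sub_priority": 12},
]
ranked = sorted((validate(i) for i in backlog), key=lambda i: i["sub_priority"])
print([i["id"] for i in ranked])  # [13, 12, 11]
```

The validation step is worth automating because nothing in a plain integer field stops someone from giving a "Should Have" item a sub-priority of 50, which would silently promote it above Must Have items in every query.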

From a regulatory audit perspective, RICE provides the weakest traceability because “Confidence” is subjective and hard to defend in audits. WSJF is stronger since each component maps to business justification, but the division operation can produce counterintuitive results that auditors question. We’ve had success with a hybrid approach: MoSCoW for initial categorization (which aligns with regulatory risk levels), then WSJF scoring within the Must/Should categories. This satisfies auditors who want clear compliance prioritization while giving product teams flexibility for feature ranking.
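The hybrid ordering described above (MoSCoW category first, WSJF descending within Must/Should) can be expressed as a two-key sort. A minimal sketch, with hypothetical field names; items outside the scored categories simply fall to the back of their band:

```python
def hybrid_rank(items):
    """Order by MoSCoW category ascending, then WSJF score descending.
    Could/Won't items carry no WSJF score and default to 0."""
    order = {"Must Have": 0, "Should Have": 1, "Could Have": 2, "Won't Have": 3}
    return sorted(items, key=lambda i: (order[i["moscow"]], -i.get("wsjf", 0)))

backlog = [
    {"id": 7, "moscow": "Should Have", "wsjf": 9.5},
    {"id": 4, "moscow": "Must Have", "wsjf": 3.2},
    {"id": 9, "moscow": "Must Have", "wsjf": 7.8},
    {"id": 2, "moscow": "Could Have"},
]
print([i["id"] for i in hybrid_rank(backlog)])  # [9, 4, 7, 2]
```

Note that a Should Have item with a high WSJF (id 7 at 9.5) still ranks below every Must Have item, which is exactly the property auditors tend to look for: compliance categorization dominates, and WSJF only orders within a category.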