Let me provide a detailed walkthrough of our automated defect prioritization implementation in cb-24 that you can adapt for your environments.
Architecture Overview:
Our system uses three layers of automation rules working together:
- Data enrichment rules that calculate component criticality scores
- Priority scoring rules that evaluate defect attributes
- Release assignment rules that match defects to appropriate releases based on capacity
Component Criticality Configuration:
We maintain a Component Master tracker with fields for each component including a calculated “Criticality Score” (1-10 scale). This score adapts dynamically based on:
- Production incident count (last 90 days)
- User base size affected by component
- Regulatory/compliance requirements
- Manual override by architecture team
An automation rule recalculates these scores weekly:
// Pseudocode - Component criticality calculation:
1. Query production incidents for component (last 90 days)
2. Calculate incident_weight = (critical_count * 10) + (high_count * 5)
3. Retrieve user_base size from component metadata
4. Set base_score = (incident_weight * 0.6) + (user_base/1000 * 0.3)
5. Apply compliance_multiplier if component is regulated
6. Update component.criticalityScore field
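The steps above can be sketched in Python. This is a minimal illustration of the scoring logic, not Codebeamer API code: the function name, the 1.5 compliance multiplier value, and the clamp to the 1-10 scale are my assumptions filled in around the pseudocode.

```python
def criticality_score(critical_count: int, high_count: int,
                      user_base: int, is_regulated: bool,
                      compliance_multiplier: float = 1.5) -> float:
    """Recompute a component's criticality score (1-10 scale).

    critical_count / high_count: production incidents in the last 90 days.
    user_base: number of users affected by the component.
    compliance_multiplier: assumed value; tune per regulatory regime.
    """
    incident_weight = critical_count * 10 + high_count * 5
    base_score = incident_weight * 0.6 + (user_base / 1000) * 0.3
    if is_regulated:
        base_score *= compliance_multiplier
    # Clamp to the tracker's 1-10 scale (assumption: scores outside the
    # scale are snapped to the nearest bound).
    return max(1.0, min(10.0, base_score))
```

In a real rule this would run weekly, query the incident tracker for the counts, and write the result back to the component's criticality field; the arithmetic is the part shown here.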
Priority Scoring Automation Rule:
When a defect is created or updated, this rule calculates the priority score:
// Trigger: Defect created or severity/impact changed
// Conditions: Status != Closed
// Actions:
1. Get severity_value (Critical=100, High=75, Medium=50, Low=25)
2. Count affected_customers from custom reference field
3. Get component.criticalityScore from linked component
4. Calculate age_factor = MIN(days_since_creation / 30, 1.0)
5. Calculate final_score =
(severity_value * 0.4) +
(affected_customers * 5 * 0.3) +
(component.criticalityScore * 10 * 0.2) +
(age_factor * 100 * 0.1)
6. Set defect.priorityScore = final_score
7. Set defect.confidence based on field completeness (what fraction of the triage-relevant fields are populated)
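The weighted sum in steps 4-5 translates directly to code. A minimal sketch, assuming the severity labels and weights exactly as listed above; the function name and signature are illustrative:

```python
def priority_score(severity: str, affected_customers: int,
                   component_criticality: float,
                   days_since_creation: int) -> float:
    """Compute a defect's priority score from the rule's four inputs."""
    severity_value = {"Critical": 100, "High": 75, "Medium": 50, "Low": 25}[severity]
    # Age contribution saturates after 30 days.
    age_factor = min(days_since_creation / 30, 1.0)
    return (severity_value * 0.4
            + affected_customers * 5 * 0.3
            + component_criticality * 10 * 0.2
            + age_factor * 100 * 0.1)
```

For example, a High-severity defect affecting 10 customers on a criticality-5 component, open for 30+ days, scores 30 + 15 + 10 + 10 = 65, clearing the score threshold of 60 used by the release assignment rule.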
Release Assignment Logic:
A separate automation rule runs nightly to assign defects to releases:
// Pseudocode - Release assignment:
1. Query all unassigned defects WHERE priorityScore >= 60 AND confidence >= 70
2. Get active releases ordered by planned_date ASC
3. For each release:
a. Calculate remaining_capacity = planned_points - assigned_points
b. If remaining_capacity > 20% of total:
- Assign highest priority defects up to 80% capacity threshold
c. Tag defects as "Auto-assigned" for review
4. Defects below confidence threshold → "Manual Review" queue
5. Send daily digest email to release managers with assignments
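The nightly assignment loop can be sketched as follows. This is a simplified model, not the production rule: defects and releases are plain dicts, the field names are assumptions, and the digest email and "Auto-assigned" tagging are reduced to a tag append and a returned review list.

```python
def assign_defects(defects, releases, score_min=60, conf_min=70,
                   fill_to=0.8, min_free=0.2):
    """Assign eligible unassigned defects to releases by priority.

    Fills each release (earliest planned date first) up to 80% of its
    planned capacity; releases with less than 20% free capacity are
    skipped. Returns the low-confidence defects needing manual review.
    """
    eligible = sorted(
        (d for d in defects
         if d["release"] is None
         and d["priority_score"] >= score_min
         and d["confidence"] >= conf_min),
        key=lambda d: d["priority_score"], reverse=True)
    manual_review = [d for d in defects
                     if d["release"] is None and d["confidence"] < conf_min]

    for release in sorted(releases, key=lambda r: r["planned_date"]):
        remaining = release["planned_points"] - release["assigned_points"]
        # Capacity guard: keep 20% free for manual additions/urgent fixes.
        if remaining <= min_free * release["planned_points"]:
            continue
        for d in list(eligible):
            if (release["assigned_points"] + d["points"]
                    <= fill_to * release["planned_points"]):
                d["release"] = release["name"]
                d.setdefault("tags", []).append("Auto-assigned")
                release["assigned_points"] += d["points"]
                eligible.remove(d)
    return manual_review
```

The 80% fill ceiling and 20% free-capacity guard mirror the Capacity Guards described below; the thresholds are the ones the post states (score >= 60, confidence >= 70).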
Edge Case Handling:
We built in several safety mechanisms:
Override Workflow: Team leads can manually change priority with required justification. The automation respects manual overrides and won’t recalculate unless explicitly reset.
Review Queue: Defects with confidence < 70% (missing customer impact, unclear severity) go to a dedicated review board for manual triage.
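One simple way to derive that confidence percentage is field completeness. A minimal sketch, assuming an all-or-nothing "populated" check and an illustrative list of required fields (the actual field set and any per-field weighting are up to you):

```python
REQUIRED_FIELDS = ("severity", "affected_customers",
                   "component", "steps_to_reproduce")  # assumed field list

def confidence(defect: dict, required=REQUIRED_FIELDS) -> float:
    """Percentage of triage-relevant fields that are populated."""
    filled = sum(1 for f in required if defect.get(f) not in (None, ""))
    return 100.0 * filled / len(required)
```

With this definition, a defect missing customer impact and reproduction steps scores 50% and lands in the review queue rather than being auto-assigned.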
Capacity Guards: Automation never fills a release beyond 80% of capacity; the remaining 20% is reserved for manual additions and urgent fixes.
Validation Reports: A weekly dashboard shows automation accuracy. We track how often manual overrides occur and adjust weights quarterly based on the patterns.
Implementation Steps:
- Phase 1 (Weeks 1-2): Set up custom fields (Affected Customers reference, Priority Score number, Confidence percentage, Component Criticality)
- Phase 2 (Weeks 3-4): Build and test data enrichment rules in a sandbox tracker with historical defects
- Phase 3 (Weeks 5-6): Implement priority scoring rules with conservative thresholds, running in “advisory mode” where scores are calculated but not acted upon
- Phase 4 (Weeks 7-8): Enable release assignment automation for one release as a pilot and compare results against manual triage
- Phase 5 (Week 9+): Roll out to all releases and establish a quarterly review cadence for weight adjustments
Maintenance and Tuning:
We version our automation rules in a configuration tracker and review quarterly:
- Analyze override patterns to identify systematic scoring errors
- Adjust weights based on business priority shifts (we’ve modified weights 3 times in 8 months)
- Review component criticality scores monthly as production patterns change
- Collect feedback from release managers on automation accuracy
Results After 8 Months:
- Manual triage time reduced from 4-5 hours/week to 1-1.5 hours/week
- Prioritization consistency improved (measured by inter-rater agreement among release managers)
- Faster defect resolution: high-priority defects are now assigned to releases within 24 hours versus 1-2 weeks previously
- Better release predictability: fewer last-minute priority changes during sprint planning
Key Success Factors:
- Start with conservative automation; don’t try to automate everything immediately
- Maintain human oversight through confidence thresholds and review queues
- Make the automation transparent: team members can see exactly how scores are calculated
- Build in feedback loops so the system improves over time
- Document the business logic clearly for audit and onboarding purposes
The cb-24 automation capabilities are robust enough to handle complex prioritization logic while maintaining the flexibility to adapt as your business needs evolve. The key is starting simple and iterating based on real usage patterns.