I wanted to start a discussion about the performance trade-offs we’re experiencing with risk matrix calculations in ETQ 2022. Our organization has grown to managing about 3,500 active risk assessments, and we’re seeing some interesting challenges.
We initially implemented fully automated risk scoring using calculated fields and workflow rules: the system multiplies severity and probability scores to generate risk levels. However, with a register this large, users see 8-12 second delays when opening risk assessment forms, because the system recalculates dependencies for related risks and those updates cascade through multiple levels of linked records.
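For anyone unfamiliar with this style of scoring, here is a minimal sketch of the severity-times-probability logic described above. This is illustrative only, not ETQ's actual calculation engine, and the 1-5 scales and level thresholds are assumptions for the example:

```python
def risk_score(severity: int, probability: int) -> int:
    """Multiply severity and probability scores (assumed 1-5 scales)."""
    if not (1 <= severity <= 5 and 1 <= probability <= 5):
        raise ValueError("scores must be on a 1-5 scale")
    return severity * probability

def risk_level(score: int) -> str:
    """Map a raw score to a level. Threshold bands are hypothetical."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"
```

The point is that the calculation itself is trivial; the cost we're seeing comes from when and how often it fires across related records, not from the arithmetic.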
Some team members have suggested switching to manual scoring to improve performance, but I’m concerned about losing auditability and consistency. Others argue that the automated approach ensures standardization but acknowledge the user experience is suffering.
What approaches have others taken? How do you balance performance with audit trail requirements?
Another consideration is whether your risk matrix complexity matches your actual needs. Some organizations implement overly sophisticated multi-dimensional matrices with 10+ factors that require extensive calculations. Sometimes simplifying to a standard 5x5 matrix improves both performance and user understanding without sacrificing risk management effectiveness.
Looking at this from a pure performance perspective, the issue isn’t automation versus manual - it’s about when and how calculations execute. Real-time cascading calculations across 3,500 records will always create lag. Consider implementing lazy loading for related risks and deferring non-critical calculations to background jobs.
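To make the lazy-loading and deferral ideas concrete, here is a minimal sketch under assumed in-memory data structures. `RELATED` is a hypothetical adjacency map of linked risk IDs, not an ETQ API; in a real deployment the deferred queue would be drained by a scheduled background worker:

```python
from collections import deque

# Hypothetical link map: which risks reference which others.
RELATED = {"R1": ["R2", "R3"], "R2": ["R4"], "R3": [], "R4": []}
DEFERRED = deque()  # non-critical recalculations, drained outside the request path

def open_form(risk_id):
    """Open a form loading only the risk itself; queue related-risk work."""
    for rel in RELATED.get(risk_id, []):
        DEFERRED.append(rel)   # lazy: enqueue instead of recalculating now
    return {"id": risk_id}     # the form opens without any cascading work

def drain_background_jobs():
    """Run deferred recalculations later, e.g. from a scheduled job."""
    processed = []
    while DEFERRED:
        processed.append(DEFERRED.popleft())
    return processed
```

The user-facing path does constant work per form open; all cascade cost moves to the background drain.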
We faced this exact dilemma last year with 2,800 risk records. The lag was killing user adoption. We ended up with a hybrid approach - automated scoring but with cached calculations that only refresh on save rather than real-time. Reduced our load times from 10 seconds to under 2 seconds.
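The cache-on-save pattern described above can be sketched like this. The class and field names are hypothetical, but the structure shows why form opens get fast: reads hit a stored value, and the multiplication runs only when a save commits new inputs:

```python
class RiskAssessment:
    """Hybrid scoring: automated, but cached and refreshed only on save."""

    def __init__(self, severity: int, probability: int):
        self.severity = severity
        self.probability = probability
        self._cached_score = severity * probability  # computed once at creation

    @property
    def score(self) -> int:
        return self._cached_score  # form open reads the cache, no recompute

    def save(self, severity=None, probability=None):
        """Commit new inputs and refresh the cached score in one place."""
        if severity is not None:
            self.severity = severity
        if probability is not None:
            self.probability = probability
        self._cached_score = self.severity * self.probability
```

The trade-off is staleness between saves, which is usually acceptable because scoring inputs only change through a save anyway.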
Based on this discussion, I want to synthesize what we’ve learned about balancing these three critical factors:
Manual vs Automated Risk Scoring:
The consensus clearly favors automated scoring for consistency and compliance, but with important caveats. Pure manual scoring sacrifices auditability and introduces variability that auditors will challenge. However, naive automation that recalculates everything in real-time creates the performance problems we’re experiencing. The solution isn’t choosing one or the other, but implementing intelligent automation with proper caching and trigger optimization.
System Lag with Large Risk Registers:
Our 8-12 second delays stem from cascading calculations across related risks. The key insights are: (1) implement lazy loading so related risks don’t all load simultaneously, (2) use calculated field caching that refreshes only on relevant field changes, not every form interaction, (3) move non-critical cascade updates to scheduled batch jobs rather than real-time processing, and (4) configure workflow rules to trigger only on specific field updates (severity, probability, control effectiveness) rather than any form modification.
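Point (4) above, gating workflow execution on specific fields, reduces to a simple set-membership check. A minimal sketch, with the field names taken from the discussion but the function itself hypothetical:

```python
# Only these inputs affect the score; edits to anything else should not
# trigger recalculation.
SCORE_FIELDS = {"severity", "probability", "control_effectiveness"}

def should_recalculate(changed_fields) -> bool:
    """Fire the scoring workflow only when a scoring input changed."""
    return bool(SCORE_FIELDS & set(changed_fields))
```

Guarding every calculation trigger with a check like this is what turns "recalculate on any form modification" into "recalculate only when it matters".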
Auditability of Risk Decisions:
This is non-negotiable for regulated industries. Automated scoring provides demonstrable consistency and allows global risk matrix recalibration with a full audit trail. The calculation methodology must be documented in controlled procedures, and the system must log when and why risk scores change. Manual scoring makes it nearly impossible to prove systematic risk evaluation during audits.
Our Implementation Plan:
We’re adopting a hybrid approach: (1) maintain automated risk scoring for auditability, (2) implement calculated field caching with targeted refresh triggers, (3) move related risk updates to a nightly batch process instead of real-time cascading, (4) optimize workflow rules to execute only on specific risk factor changes, and (5) simplify our risk matrix from 7 factors to 5 to reduce calculation complexity.
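For step (3), the nightly batch job needs to propagate updates through linked risks without looping forever when links form cycles. A sketch of the traversal, assuming a hypothetical `links` map of risk IDs and a `recalc` callback supplied by the caller:

```python
from collections import deque

def nightly_cascade(changed_ids, links, recalc):
    """Propagate score updates through linked risks in one nightly pass,
    visiting each risk at most once even if links are cyclic."""
    seen = set(changed_ids)
    queue = deque(changed_ids)
    order = []
    while queue:
        rid = queue.popleft()
        recalc(rid)            # recompute this risk's cached score
        order.append(rid)
        for nxt in links.get(rid, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```

The visited-set is the important part: with 3,500 interlinked records, an unguarded cascade can revisit the same risks many times, which is exactly the real-time behavior we're moving away from.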
We’re also documenting our calculation logic in a controlled SOP and configuring ETQ to log all automated score changes with timestamps and triggering events. This maintains full auditability while addressing performance concerns.
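The shape of those log entries can be sketched as below. This is a generic audit-record structure, not ETQ's logging schema; the field names are assumptions:

```python
from datetime import datetime, timezone

AUDIT_LOG = []

def log_score_change(risk_id, old_score, new_score, trigger):
    """Record an automated score change with timestamp and triggering event."""
    AUDIT_LOG.append({
        "risk_id": risk_id,
        "old_score": old_score,
        "new_score": new_score,
        "trigger": trigger,  # e.g. "severity changed 3 -> 4"
        "at": datetime.now(timezone.utc).isoformat(),
    })
```

Capturing the old value, new value, and triggering event in every entry is what lets you answer an auditor's "when and why did this score change" without reconstructing history by hand.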
Thanks for the excellent perspectives - the key takeaway is that this isn’t a binary choice but rather an architecture optimization challenge that requires addressing all three dimensions simultaneously.
From an audit standpoint, I strongly recommend maintaining automated scoring but implementing smart caching. Document your calculation logic in a controlled procedure, then use ETQ’s calculated field caching with explicit refresh triggers. This gives you both auditability and performance. We configure our risk matrices to recalculate only when severity, probability, or control effectiveness fields change - not on every form interaction.
The auditability argument is critical here. Our risk management framework is aligned with ISO 31000, and our auditors specifically verify that risk scores are calculated consistently using documented methodology. Manual scoring introduces human variability and makes it very difficult to demonstrate systematic risk evaluation. That said, 8-12 second delays are unacceptable. I’d investigate whether your related risk calculations are necessary - do you really need real-time cascading updates across all linked risks, or could that be a scheduled batch process?
Manual scoring is a step backward from a compliance perspective. Your auditors will question the consistency of risk decisions, and you lose the ability to globally recalibrate your risk matrix. The performance issue is usually caused by inefficient workflow triggers firing on every field change. Have you looked at optimizing your business rules to execute only on specific field updates rather than any form modification?