We’re about six months into our process mining rollout, initially focused on order-to-cash and procure-to-pay. The tool is flagging dozens of bottlenecks—some are obvious (approval delays, missing owner assignments), but plenty are more subtle variations or edge cases we hadn’t documented. We’ve been manually triaging each one and deciding whether to fix via RPA, adjust the process model, or leave it as-is with human handling.
The problem is that this triage itself is becoming a bottleneck. Our small automation team can’t keep up, and we’re seeing cases where the same type of issue reappears across different process variants but gets routed inconsistently. We’re starting to experiment with a lightweight router that scores each discovered issue by complexity and frequency, then recommends automation for high-volume, low-complexity stuff and escalates the rest to process owners. But we’re not confident in the thresholds yet, and we’re worried about auto-routing something that actually needs human judgment.
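For reference, the scoring prototype is not much more than the sketch below. The field names and thresholds are placeholders we haven't validated yet, and the compliance check is just our current guess at a hard stop for things that need human judgment.

```python
from dataclasses import dataclass

# Rough sketch of the routing prototype. Thresholds and field names are
# placeholders still being tuned, not settled values.

@dataclass
class Finding:
    issue_type: str
    monthly_cases: int       # frequency reported by the process mining tool
    complexity_score: float  # 0 (trivial) .. 1 (needs real judgment)
    compliance_touchpoint: bool  # placeholder hard stop for regulated steps

def route(finding: Finding,
          min_cases: int = 50,
          max_complexity: float = 0.3) -> str:
    """Recommend a handling path for one discovered issue."""
    if finding.compliance_touchpoint:
        return "escalate_to_process_owner"
    if finding.monthly_cases >= min_cases and finding.complexity_score <= max_complexity:
        return "rpa_backlog"
    return "escalate_to_process_owner"
```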
Has anyone implemented intelligent routing between RPA deployment and manual review for process mining findings? What signals or attributes did you use to make the split, and how did you handle edge cases where the router got it wrong?
We tackled something similar last year when our process mining uncovered about eighty recurring issue types in invoice processing. What worked for us was a simple decision matrix: frequency (how often it happens), data quality (are all the inputs clean and structured?), and business risk (what's the cost of getting it wrong?). Anything that is high frequency, clean data, and low risk goes straight to the RPA backlog. Everything else gets flagged for a process owner review first. We run a weekly sync where someone from finance, IT, and the COE reviews the flagged items and either approves them for automation or marks them as exceptions requiring permanent human handling. It's not perfect, but it cut our triage time by maybe sixty percent.
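In code terms the matrix boils down to something like the sketch below. The cutoffs and field names here are illustrative; ours sit in a config the COE reviews at the weekly sync.

```python
from dataclasses import dataclass

# Illustrative sketch of the decision matrix. Cutoffs are placeholders;
# the real ones are owned by the COE and revisited at the weekly sync.

@dataclass
class IssueType:
    name: str
    monthly_frequency: int
    data_quality: float   # share of cases with clean, structured inputs (0..1)
    business_risk: str    # "low" | "medium" | "high"

def triage(issue: IssueType) -> str:
    high_frequency = issue.monthly_frequency >= 30   # placeholder cutoff
    clean_data = issue.data_quality >= 0.95          # placeholder cutoff
    low_risk = issue.business_risk == "low"
    if high_frequency and clean_data and low_risk:
        return "rpa_backlog"
    return "weekly_review"   # finance / IT / COE sync makes the call
```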
We use a hybrid approach. The router makes an initial recommendation, but it always goes to a human approver before execution. The approver can override, and every override gets logged with a reason code. After a few months we retrain the scoring model using those override decisions as ground truth. It’s a bit slower at first, but the router gets smarter over time and the approval step catches edge cases before they become production issues.
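Roughly, the override log is just the router's call sitting next to the approver's call, something like the sketch below. Field names are made up for illustration; ours live in the workflow tool's database.

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch of the override log used for retraining. Field names are
# illustrative, not the actual schema.

@dataclass
class RoutingDecision:
    finding_id: str
    router_recommendation: str   # e.g. "rpa_backlog" or "manual_review"
    approver_decision: str       # what the human actually chose
    reason_code: str             # required whenever the two disagree
    decided_at: datetime

def training_labels(log: list[RoutingDecision]) -> list[tuple[str, str]]:
    """Approver decisions become the ground-truth labels for the next
    retraining pass; the router's recommendations are just features."""
    return [(d.finding_id, d.approver_decision) for d in log]
```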
We had the same triage overload problem. Our solution was to implement a three-tier routing model. Tier one is fully automated RPA deployment for anything that meets strict criteria: appears in more than fifty cases per month, has complete structured data, and has zero regulatory or compliance flags. Tier two is semi-automated where we generate the RPA script but require sign-off from a process owner before deployment. Tier three is manual investigation for everything else. We also built in a feedback loop where if a tier-one automation gets rolled back or causes issues, that pattern gets moved to tier two permanently. It’s been running for about eight months now and we’ve only had to roll back two automations out of maybe forty deployed.
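Boiled down, the tiering plus the demotion rule look something like this simplified sketch. The tier-two/tier-three split in particular is a placeholder, since in practice the process owners decide what qualifies for semi-automation.

```python
from dataclasses import dataclass

# Simplified sketch of the three-tier router and the demotion rule.
# Field names are illustrative; the tier-one thresholds match the ones
# described above.

@dataclass
class Pattern:
    pattern_id: str
    cases_per_month: int
    data_complete: bool      # complete, structured inputs available
    compliance_flags: int    # count of regulatory/compliance touchpoints

# Patterns demoted after a rollback; they never return to tier one.
demoted: set[str] = set()

def tier(p: Pattern) -> int:
    if p.pattern_id in demoted:
        return 2   # permanent sign-off requirement after a rollback
    if p.cases_per_month > 50 and p.data_complete and p.compliance_flags == 0:
        return 1   # fully automated RPA deployment
    if p.data_complete:
        return 2   # placeholder split: script generated, owner signs off
    return 3       # manual investigation

def record_rollback(p: Pattern) -> None:
    """Feedback loop: a rolled-back tier-one automation is demoted for good."""
    demoted.add(p.pattern_id)
```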
What process mining platform are you using? Some of the newer ones have built-in simulation capabilities that let you model the impact of automating a specific bottleneck before you commit resources. We run a quick sim on anything that scores borderline—helps us see whether fixing that bottleneck just shifts the constraint somewhere else or actually improves end-to-end throughput. Saves us from wasting dev cycles on automation that doesn’t move the needle.