We rolled out an AI-based anomaly detection system in our MES a few months ago to catch equipment issues early and reduce unplanned downtime. Model performance looked solid in testing (around 92% accuracy, which in hindsight says little about precision when true faults are rare), and leadership was excited. But now we're seeing a real problem on the floor: operators are starting to ignore the alerts because we're getting too many false positives.
A typical example: the system flags a vibration spike on a motor, maintenance goes to check it, and everything is fine. Or it triggers an alert during a normal product changeover because it thinks the temperature pattern is unusual. After a few weeks of this, the experienced crew just started dismissing warnings without checking. One supervisor told me bluntly that the system “cries wolf” and wastes their time. Now I’m worried we’re conditioning people to ignore alerts right when we need them to pay attention.
We've tried tuning thresholds and adding some process context, like gating alerts on the production schedule (rough sketch at the end of this post), but it hasn't been enough. The alerts still don't match what operators recognize as real problems. How do other teams handle this? What kind of data or context integration actually moves the needle on false positives, and how do you rebuild trust once operators have already lost confidence in the system?
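For reference, here's roughly what our schedule gating looks like today. This is a simplified Python sketch, not our production code: the names (ScheduleWindow, is_expected_transient, should_alert) and the threshold values are placeholders I made up for this post, and the real MES integration is messier.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduleWindow:
    """One entry pulled from the production schedule (hypothetical shape)."""
    start: datetime
    end: datetime
    state: str  # e.g. "changeover", "startup", "steady_state"

def is_expected_transient(ts: datetime, schedule: list[ScheduleWindow]) -> bool:
    """True if the timestamp falls in a window where unusual sensor
    patterns are expected (changeover, startup), so a high anomaly
    score shouldn't page maintenance on its own."""
    return any(
        w.start <= ts <= w.end and w.state in ("changeover", "startup")
        for w in schedule
    )

def should_alert(score: float, ts: datetime, schedule: list[ScheduleWindow],
                 threshold: float = 0.90,
                 transient_threshold: float = 0.99) -> bool:
    # Instead of one global threshold, raise the bar during expected
    # transients rather than suppressing alerts entirely, so a truly
    # extreme reading during a changeover can still get through.
    if is_expected_transient(ts, schedule):
        return score >= transient_threshold
    return score >= threshold

# Illustrative usage with made-up timestamps:
windows = [ScheduleWindow(datetime(2024, 5, 1, 14, 0),
                          datetime(2024, 5, 1, 14, 45), "changeover")]
should_alert(0.95, datetime(2024, 5, 1, 14, 20), windows)  # False: expected transient
should_alert(0.95, datetime(2024, 5, 1, 16, 0), windows)   # True: steady state
```

Even with this gating in place, precision outside the expected-transient windows is still poor, which is why I'm asking what kinds of context actually help.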