Quality management module not detecting sensor calibration drift

Our quality management module in FT MES 11.0 is failing to detect calibration drift in temperature and pressure sensors connected via OPC-UA Gateway. We have 45 sensors across three production lines, and manual inspection revealed several sensors drifting beyond acceptable tolerances, but the system never flagged them.

This is creating false compliance reports - products are passing quality checks when they shouldn’t. The sensors are sending data through OPC-UA, and we can see the readings in real-time dashboards, but the drift detection isn’t working.

We’ve configured quality thresholds and the OPC-UA data aggregation appears functional:


Sensor: TEMP_LINE1_01
Expected: 185°C ±2°C
Actual Drift: +4.5°C over 3 weeks
System Status: PASSED (incorrect)

The drift velocity calculation should trigger alerts when sensors deviate gradually over time, but it’s only catching sudden spikes. How do we configure proper anomaly detection for gradual calibration drift?

The issue is likely in how you’re aggregating the OPC-UA data for trend analysis. Real-time threshold checks only catch instantaneous violations, not gradual drift. You need to enable historical data aggregation with a sliding window - typically 7 to 14 days - to calculate drift velocity. Check if your quality module is configured to store and analyze time-series data, not just current values.
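If you want to prototype the sliding-window idea outside the MES first, here's a minimal Python sketch. The function name and the (timestamp, value) data shape are my own for illustration - this is not the FT MES API, just the concept:

```python
from statistics import mean

def drift_velocity(readings, baseline, window_days=14):
    """Estimate drift velocity (units/week) by comparing the rolling
    average over the last `window_days` against a baseline value.
    `readings` is a list of (unix_timestamp, value) tuples."""
    if not readings:
        return 0.0
    latest_ts = max(ts for ts, _ in readings)
    cutoff = latest_ts - window_days * 86400
    window = [v for ts, v in readings if ts >= cutoff]
    if not window:
        return 0.0
    # Drift accumulated over the window, scaled to a per-week rate
    drift = mean(window) - baseline
    return drift / window_days * 7
```

A real-time check compares each sample against a limit; this instead compares a windowed average against the calibration baseline, which is what lets slow drift show up.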

I encountered this exact problem last year. The quality management module needs specific configuration for drift detection algorithms. By default, it only does threshold checks. You have to enable the ‘Sensor Health Monitoring’ feature in the quality module settings and configure baseline calibration profiles for each sensor type. Without baseline data, the system has no reference point to calculate drift from.

I found the Sensor Health Monitoring option - it was disabled! I’ve enabled it now and started defining baseline calibration profiles. But I’m not sure about the drift velocity calculation parameters. What’s a reasonable threshold for detecting gradual drift versus normal sensor variation? Our temperature sensors have ±0.5°C natural variation.

Don’t forget about the OPC-UA data quality flags. If your sensors aren’t reporting Good quality status consistently, the drift detection won’t work properly. Check that your OPC-UA gateway is configured to pass through the quality codes from the sensors. We had issues where the gateway was stripping quality metadata and everything appeared as ‘Good’ even when sensors reported ‘Uncertain’ during warm-up periods.

For temperature sensors with ±0.5°C natural variation, set your drift velocity threshold at 0.2°C per week. This gives you early warning before hitting your ±2°C tolerance limit. The key is balancing sensitivity - too tight and you get false alarms, too loose and you miss real drift. We use a three-tier alert system: yellow warning at 0.2°C/week, orange at 0.35°C/week, and red critical at 0.5°C/week. This gives maintenance time to schedule recalibration before quality is impacted. Also make sure your aggregation window is at least 14 days to smooth out normal variation.

Your calibration drift detection failure is due to missing configuration in multiple areas. Here’s the comprehensive solution addressing all four focus areas:

1. Drift Velocity Calculation: Enable advanced drift analytics in the quality module. Configure drift velocity thresholds based on sensor specifications:


driftVelocityThreshold: 0.2 // °C per week
baselineWindow: 14 // days
minDataPoints: 100 // readings

For your ±2°C tolerance with ±0.5°C natural variation, use 0.2°C/week as the warning threshold. The system calculates drift by comparing the current rolling average against the baseline calibration profile, using linear regression over the baseline window.
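For reference, the linear-regression drift rate is just the least-squares slope of the readings, scaled to a per-week figure. A standalone sketch (illustrative code, not the module's internals):

```python
def drift_slope_per_week(times_s, values):
    """Least-squares slope of sensor readings over time, converted
    from units/second to units/week.
    times_s: timestamps in seconds; values: matching readings."""
    n = len(values)
    t_mean = sum(times_s) / n
    v_mean = sum(values) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in zip(times_s, values))
    den = sum((t - t_mean) ** 2 for t in times_s)
    slope_per_s = num / den
    return slope_per_s * 7 * 86400  # seconds -> weeks
```

Regression over the whole window is less noise-sensitive than differencing the first and last readings, which matters when natural variation (±0.5°C here) is comparable to the weekly drift you're trying to catch.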

2. OPC-UA Data Aggregation: Your real-time dashboard shows current values, but drift detection requires historical aggregation. Configure time-series data collection:


aggregationInterval: 300 // 5 minutes
retentionPeriod: 90 // days
aggregationMethod: AVERAGE
qualityFilter: GOOD_ONLY

This creates 5-minute averaged data points, filtering out any readings with non-Good OPC-UA quality status. Store 90 days of aggregated data to support long-term trend analysis.
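The aggregation step amounts to a simple bucketing pass; the "Good"-quality filter below mirrors the qualityFilter: GOOD_ONLY setting. Sample format and names are assumptions for illustration, not gateway API:

```python
from collections import defaultdict
from statistics import mean

def aggregate(samples, interval_s=300):
    """Average raw samples into fixed time buckets, keeping
    Good-quality readings only.
    `samples` is a list of (unix_timestamp, value, quality) tuples."""
    buckets = defaultdict(list)
    for ts, value, quality in samples:
        if quality != "Good":  # drop Uncertain/Bad readings
            continue
        bucket_start = ts // interval_s * interval_s
        buckets[bucket_start].append(value)
    return {start: mean(vals) for start, vals in sorted(buckets.items())}
```

Note that if the gateway strips quality codes (as described earlier in the thread), every sample arrives tagged "Good" and this filter does nothing, so fixing the gateway pass-through comes first.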

3. Anomaly Detection Algorithms: The default threshold-based checking won’t catch gradual drift. Enable statistical anomaly detection with these algorithms:

  • Moving Average Convergence Divergence (MACD): Detects when short-term trend diverges from long-term baseline
  • Standard Deviation Bands: Flags readings beyond 2σ from historical mean
  • Linear Regression Slope: Calculates drift rate over sliding window

Configure multi-tier alerting:

  • Yellow Alert: Drift velocity > 0.2°C/week OR reading > 1.5σ from baseline
  • Orange Alert: Drift velocity > 0.35°C/week OR reading > 2σ from baseline
  • Red Alert: Drift velocity > 0.5°C/week OR reading exceeds tolerance (±2°C)
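The tier logic is straightforward to encode. A sketch using the example thresholds above (function name and return values are illustrative):

```python
def alert_tier(drift_per_week, sigma_dev, tolerance_exceeded):
    """Map drift velocity (°C/week) and deviation from baseline
    (in standard deviations) to a three-tier alert level.
    Thresholds are examples for a ±2°C tolerance sensor."""
    d = abs(drift_per_week)
    s = abs(sigma_dev)
    if d > 0.5 or tolerance_exceeded:
        return "red"
    if d > 0.35 or s > 2.0:
        return "orange"
    if d > 0.2 or s > 1.5:
        return "yellow"
    return "ok"
```

Checking the most severe tier first means a sensor that trips multiple conditions always reports its worst level.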

4. Sensor Health Monitoring: Create comprehensive health profiles for each sensor type. For your temperature sensors:

  • Baseline calibration date and reference values
  • Expected drift rate (from manufacturer specs)
  • Calibration interval (typically 6-12 months)
  • Historical drift patterns
  • Last known good calibration

Implement automated health scoring (0-100 scale):

  • Score = 100 - (currentDrift/toleranceLimit × 50) - (daysSinceCalibration/calibrationInterval × 30) - (qualityIssues × 20)

Schedule automatic recalibration when health score drops below 70.
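The scoring formula can be encoded directly, with the result clamped to the 0-100 scale since the three penalties can otherwise drive it negative (helper names are illustrative, not module configuration):

```python
def health_score(current_drift, tolerance_limit,
                 days_since_cal, cal_interval_days, quality_issues):
    """Sensor health on a 0-100 scale, following the weighted
    formula above: drift penalty up to 50, calibration-age penalty
    up to 30, plus 20 per quality issue. Clamped to [0, 100]."""
    score = 100.0
    score -= current_drift / tolerance_limit * 50
    score -= days_since_cal / cal_interval_days * 30
    score -= quality_issues * 20
    return max(0.0, min(100.0, score))

def needs_recalibration(score, threshold=70.0):
    """Flag a sensor for recalibration below the cutoff score."""
    return score < threshold
```

For example, a sensor drifted 0.5°C against a ±2°C tolerance, halfway through a 180-day calibration interval with no quality issues, scores 100 - 12.5 - 15 = 72.5 and would not yet trigger recalibration.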

Implementation Steps:

  1. Enable Sensor Health Monitoring in Quality Management module settings
  2. Import or manually create baseline profiles for all 45 sensors
  3. Configure OPC-UA gateway to preserve quality codes and timestamps
  4. Set up time-series aggregation with 5-minute intervals
  5. Enable statistical anomaly detection algorithms
  6. Configure three-tier drift velocity alerts
  7. Create dashboard showing sensor health scores and drift trends
  8. Run 30-day baseline learning period before enforcing alerts

After the 30-day learning period, the system will have sufficient historical data to accurately detect both sudden spikes and gradual drift. Your false compliance reports will be eliminated as the system properly flags sensors exceeding drift thresholds before they impact product quality.