ML-powered location inference fails for moving assets with intermittent GPS

Running into issues with our ML-based location inference for fleet tracking on c8y 1019.0.8. We have vehicles that move through areas with poor GPS coverage (tunnels, urban canyons), and we’re using an ML microservice to infer location from accelerometer, gyroscope, and last known GPS.

The problem is when GPS signal is lost, the ML model predictions become unreliable and the location property doesn’t update correctly:


// Current location update attempt
if (gpsData.fix === "VALID") {
  device.c8y_Position = gpsData.position;
} else {
  device.c8y_Position = mlModel.predict(sensorData);
}

We’re seeing tracking gaps of 10-15 minutes during GPS outages, and when the vehicle emerges, the location jumps instead of showing the inferred path. The sensor data preprocessing might not be handling the transition properly. Any suggestions on implementing proper fallback logic?

Here’s an approach covering three areas: fallback logic, sensor data preprocessing, and location property updates.

Fallback Logic for Missing GPS: Implement a multi-tier fallback strategy with confidence-based weighting:


// Pseudocode - Fallback hierarchy:
1. Check GPS fix quality and age
2. If GPS valid (<30s old): Use GPS directly, confidence=1.0
3. If GPS invalid: Calculate time since last valid fix
4. Apply confidence decay: confidence = 0.95^(minutes_elapsed)
5. Use ML prediction weighted by confidence
6. If confidence < 0.3: Mark location as uncertain

The key is maintaining state between updates: store the last valid GPS position, heading, velocity, and timestamp, and use these as the baseline for ML inference.
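The hierarchy above can be sketched in JavaScript. The state shape, `mlModel.predict`, and the `gpsData` fields are assumptions carried over from your original snippet, not a c8y API:

```javascript
const MAX_GPS_AGE_MS = 30000;      // step 2: fixes older than 30 s count as stale
const UNCERTAIN_THRESHOLD = 0.3;   // step 6

// state: { lastFix: { position, timestamp } } plus heading/velocity as needed
function resolvePosition(gpsData, sensorData, state, mlModel, nowMs) {
  // Steps 1-2: a fresh, valid GPS fix wins outright.
  if (gpsData.fix === "VALID" && nowMs - gpsData.timestamp <= MAX_GPS_AGE_MS) {
    state.lastFix = { position: gpsData.position, timestamp: gpsData.timestamp };
    return { position: gpsData.position, confidence: 1.0, uncertain: false };
  }
  // Steps 3-4: exponential confidence decay per minute since the last valid fix.
  const minutes = (nowMs - state.lastFix.timestamp) / 60000;
  const confidence = Math.pow(0.95, minutes);
  // Step 5: fall back to the ML prediction, tagged with its confidence.
  const position = mlModel.predict(sensorData);
  // Step 6: flag the result when confidence drops below the threshold.
  return { position, confidence, uncertain: confidence < UNCERTAIN_THRESHOLD };
}
```

The caller decides what "uncertain" means for the dashboard; the function only computes and attaches the score.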

Sensor Data Preprocessing: Your ML model needs properly preprocessed sequential data:

  1. Normalize sensors relative to vehicle frame: Convert accelerometer readings to vehicle coordinate system using last known heading
  2. Create feature windows: Feed the model sequences (e.g., last 60 seconds of sensor data) rather than point-in-time snapshots
  3. Calculate derived features: Velocity magnitude from accelerometer integration, heading change from gyroscope integration
  4. Handle noise: Apply Kalman filtering to smooth sensor readings, especially during high-vibration conditions
  5. Detect stationary periods: If accelerometer variance is very low, vehicle is likely stopped - don’t update position

Preprocess data in a sliding window buffer before feeding to the ML model. This dramatically improves prediction accuracy during GPS outages.
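A minimal sliding-window buffer with stationary detection (step 5 above) might look like this; the window length, sample shape, and variance threshold are illustrative assumptions:

```javascript
class SensorWindow {
  constructor(maxSamples = 60) {   // e.g. 60 s of 1 Hz samples
    this.maxSamples = maxSamples;
    this.samples = [];
  }

  // sample: { ax, ay, az, gz, timestamp } - accelerometer axes plus gyro yaw rate
  push(sample) {
    this.samples.push(sample);
    if (this.samples.length > this.maxSamples) this.samples.shift();
  }

  // Very low accelerometer variance => vehicle is likely stationary,
  // so the position should not be advanced by dead reckoning.
  isStationary(threshold = 0.02) {
    if (this.samples.length < 2) return false;
    const mags = this.samples.map(s => Math.hypot(s.ax, s.ay, s.az));
    const mean = mags.reduce((a, b) => a + b, 0) / mags.length;
    const variance = mags.reduce((a, b) => a + (b - mean) ** 2, 0) / mags.length;
    return variance < threshold;
  }
}
```

Feature extraction (velocity and heading integration, Kalman smoothing) would then read from `this.samples` rather than from individual readings.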

Location Property Update: Implement incremental position updates with quality indicators:


// Inferred position written to the standard c8y_Position fragment
device.c8y_Position = {
  lat: predictedLat,
  lng: predictedLng,
  alt: lastKnownAlt   // altitude carried over from the last valid fix
};
// Custom fragment marking the position as inferred, with its confidence score
device.c8y_PositionQuality = {
  source: "ML_INFERENCE",
  confidence: confidenceScore
};

Update position every 30-60 seconds during GPS outage (not just at the end) to create a continuous path. When GPS returns, use the first valid GPS fix to correct any drift accumulated during the outage.
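One way to correct the accumulated drift without a visible jump is to smear it linearly back over the inferred waypoints once the first valid fix arrives. A sketch (plain geometry, not a c8y API):

```javascript
// Distribute the drift between the last ML estimate and the first fresh GPS fix
// linearly over the inferred waypoints, so the stored path stays continuous.
// inferredPath: [{ lat, lng, timestamp }], firstFix: { lat, lng, timestamp }
function correctDrift(inferredPath, firstFix) {
  const last = inferredPath[inferredPath.length - 1];
  const dLat = firstFix.lat - last.lat;
  const dLng = firstFix.lng - last.lng;
  const t0 = inferredPath[0].timestamp;
  const span = last.timestamp - t0 || 1;   // avoid dividing by zero
  return inferredPath.map(p => {
    const f = (p.timestamp - t0) / span;   // 0 at outage start, 1 at the end
    return { ...p, lat: p.lat + f * dLat, lng: p.lng + f * dLng };
  });
}
```

The first waypoint stays where it was (it is anchored to the last valid fix), and the last one is moved onto the fresh GPS position.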

Testing Recommendations:

  • Validate ML predictions against known routes where you have complete GPS coverage
  • Test specifically in your problem areas (tunnels, urban canyons) with ground truth data
  • Monitor prediction accuracy degradation over time without GPS
  • Set confidence thresholds that trigger alerts when location uncertainty is too high
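For the first two bullets, a haversine error metric is enough to score predictions against ground-truth fixes on a fully covered route (the metric below is standard geometry; the function names are illustrative):

```javascript
// Great-circle distance in metres between two { lat, lng } points.
function haversineM(a, b) {
  const R = 6371000, rad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * rad;
  const dLng = (b.lng - a.lng) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Mean prediction error over a run, pairing predictions with ground truth by index.
function meanErrorM(predicted, truth) {
  const errs = predicted.map((p, i) => haversineM(p, truth[i]));
  return errs.reduce((a, b) => a + b, 0) / errs.length;
}
```

Plotting this error against time-since-last-fix gives you the degradation curve mentioned in the third bullet.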

This approach closes the tracking gaps and gives you smoother location continuity even during extended GPS outages, while staying transparent about prediction confidence.

One more note on the update cadence: rather than jumping straight from the last GPS fix to a single ML prediction when the vehicle emerges, publish the model’s incremental predictions every 30-60 seconds throughout the outage. That produces the continuous path visualization you’re looking for, and keeps the inferred position and the confidence level in separate fragments so consumers can tell them apart.

For confidence scoring, consider time decay - confidence drops exponentially with time since last GPS fix. Also factor in sensor quality metrics like accelerometer variance (high variance = vehicle turning/maneuvering = lower confidence in linear dead reckoning). You might want to create intermediate position events with a quality indicator so your tracking dashboard can visualize uncertainty.
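Putting those two factors together, a combined score could look like the sketch below; the decay base, variance threshold, and manoeuvre penalty are illustrative values you would tune against your own data:

```javascript
// Combined confidence: exponential time decay, discounted when accelerometer
// variance suggests the vehicle is turning or manoeuvring.
function combinedConfidence(minutesSinceFix, accelVariance) {
  const timeConf = Math.pow(0.95, minutesSinceFix);
  // High variance => manoeuvring => linear dead reckoning is less trustworthy.
  const manoeuvrePenalty = accelVariance > 0.5 ? 0.7 : 1.0;
  return timeConf * manoeuvrePenalty;
}
```

This score can be attached to the intermediate position events so the dashboard can shade the inferred segment by uncertainty.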