Mobile device fails to sync production data with shop-floor-control module

We’re experiencing intermittent data sync failures with AVEVA MES Mobile (AM 2021.2) on our shop floor. Operators enter production counts and quality data on tablets, but when connectivity drops, the app doesn’t properly queue the data for later sync.

When connection returns, we’re seeing duplicate records in the database and timestamp mismatches. Some entries show future timestamps or overlap with already-synced data.

Current sync code attempt:

SyncManager.queueData(productionEntry);
if (NetworkUtil.isConnected()) {
    SyncManager.flush();
}

How do we implement proper offline-first synchronization with deduplication and timestamp validation? This is causing production reporting discrepancies and operators are losing trust in the mobile system.

We’re not currently using local transaction IDs. The app just queues the production entry object as-is. Should we be generating UUIDs client-side before queueing? Also, how do we handle the timestamp validation - should we store both client timestamp and sync timestamp?

Here’s a complete solution addressing all three focus areas - offline-first sync, deduplication, and timestamp validation:

Offline-First Sync Implementation: Implement a persistent local queue using SQLite with a sync status table. Generate client-side UUIDs immediately when operators save data:

// Requires java.util.UUID; assign the ID and creation timestamp
// before anything touches the network
String txnId = UUID.randomUUID().toString();
productionEntry.setTransactionId(txnId);
productionEntry.setClientCreated(System.currentTimeMillis());
localDB.insertPendingSync(productionEntry);

Use WorkManager for reliable background sync that respects network constraints and survives app restarts. Configure it to retry with exponential backoff.
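As a rough sketch of what that retry schedule looks like (the base delay and cap below are illustrative values, not WorkManager or AVEVA defaults):

```java
// Sketch of an exponential-backoff schedule for the background sync
// worker. BASE_DELAY_MS and MAX_DELAY_MS are assumptions for
// illustration; tune them for your network conditions.
public class SyncBackoff {
    static final long BASE_DELAY_MS = 10_000;      // 10 s before first retry
    static final long MAX_DELAY_MS  = 15 * 60_000; // cap at 15 minutes

    // Delay before retry attempt n (n starts at 0): base * 2^n, capped.
    public static long delayForAttempt(int attempt) {
        long delay = BASE_DELAY_MS << Math.min(attempt, 20); // shift bound avoids overflow
        return Math.min(delay, MAX_DELAY_MS);
    }
}
```

The cap matters on a shop floor: without it, a tablet that was offline overnight would wait hours before its next attempt.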

Record Deduplication: On the server side, add a unique constraint on transaction_id in your shop_floor_data table. Your sync endpoint should use upsert logic; the exact syntax depends on the database (PostgreSQL and SQLite support ON CONFLICT ... DO NOTHING, while MySQL uses INSERT IGNORE):

INSERT INTO shop_floor_data (transaction_id, ...)
VALUES (?, ...) ON CONFLICT (transaction_id) DO NOTHING;

This prevents duplicate records even if the client retries a sync that partially succeeded. The client should only remove entries from the local queue after receiving a 200 OK response.
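A minimal sketch of that client-side rule, with a hypothetical SyncServer interface standing in for the real endpoint:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: entries leave the local queue only after the server confirms
// the write. SyncServer is a hypothetical stand-in for the real sync
// endpoint; with server-side deduplication, a retried entry that already
// exists should still report success (idempotent upsert).
public class SyncQueue {
    public interface SyncServer { boolean upload(String entryJson); }

    private final Deque<String> pending = new ArrayDeque<>();

    public void enqueue(String entryJson) { pending.addLast(entryJson); }
    public int pendingCount() { return pending.size(); }

    // Flush stops at the first failure; unsent entries stay queued for retry.
    public void flush(SyncServer server) {
        while (!pending.isEmpty()) {
            String entry = pending.peekFirst();
            if (!server.upload(entry)) return; // keep entry for next attempt
            pending.removeFirst();             // confirmed: safe to drop
        }
    }
}
```

In production this queue would be backed by SQLite rather than memory, but the dequeue-only-on-confirmation rule is the same.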

Timestamp Validation: Implement three-timestamp tracking:

  • client_created: When operator entered data
  • client_queued: When added to sync queue
  • server_received: Server timestamp on sync

Server-side validation pseudocode:

// Pseudocode - Timestamp validation steps:
1. Calculate time_delta = server_received - client_created
2. If time_delta > 24 hours, flag for review (possible clock skew)
3. If client_created > server_received, reject (future timestamp)
4. Use server_received for production reporting to ensure consistency
5. Store all three timestamps for audit trail
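The same steps as runnable Java (the 24-hour bound is the threshold suggested above, not a product default):

```java
// Runnable version of the validation pseudocode. Step 4 (reporting on
// server_received) and step 5 (audit trail) happen after this check.
public class TimestampValidator {
    public enum Result { ACCEPT, FLAG_FOR_REVIEW, REJECT }

    static final long MAX_DELTA_MS = 24L * 60 * 60 * 1000; // 24 hours

    public static Result validate(long clientCreated, long serverReceived) {
        if (clientCreated > serverReceived)
            return Result.REJECT;           // future timestamp
        if (serverReceived - clientCreated > MAX_DELTA_MS)
            return Result.FLAG_FOR_REVIEW;  // possible clock skew or stale queue
        return Result.ACCEPT;
    }
}
```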

Add clock synchronization checks when the app starts. If device clock differs from server by more than 5 minutes, show a warning to operators.
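The check itself is a one-liner once you can fetch the server's current time from the sync endpoint (how you fetch it depends on your API):

```java
// Startup clock check: warn the operator when the device clock drifts
// more than 5 minutes (the threshold suggested above) from server time.
public class ClockCheck {
    static final long MAX_SKEW_MS = 5 * 60_000;

    public static boolean skewExceeded(long serverTimeMs, long deviceTimeMs) {
        return Math.abs(serverTimeMs - deviceTimeMs) > MAX_SKEW_MS;
    }
}
```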

Additional Recommendations:

  • Implement a sync status indicator in the UI showing pending count
  • Add manual sync button for operators to trigger immediate sync
  • Log all sync attempts with outcomes for troubleshooting
  • Consider batch sync for efficiency (sync up to 50 records per request)
  • Implement conflict resolution if same work order is edited offline by multiple operators
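
For the batch-sync point, a simple way to split the pending list into requests of at most 50 records:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of batching pending records into sync requests of at most
// BATCH_SIZE entries each, per the recommendation above.
public class BatchSync {
    static final int BATCH_SIZE = 50;

    public static <T> List<List<T>> batches(List<T> pending) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < pending.size(); i += BATCH_SIZE) {
            int end = Math.min(i + BATCH_SIZE, pending.size());
            out.add(new ArrayList<>(pending.subList(i, end)));
        }
        return out;
    }
}
```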

This approach has resolved sync issues for multiple shop floor deployments. The key is treating mobile as the source of truth initially (offline-first), then reconciling with server state using UUIDs and proper timestamp validation.

Yes, client-side UUIDs are essential for offline-first architecture. Generate a UUID when the operator saves data, before it goes into the queue. For timestamps, you need three: client_created (when operator entered), client_queued (when added to sync queue), and server_received (when sync completes). The server should validate that client_created is within acceptable bounds and use server_received for actual production reporting. This prevents timestamp drift issues. Also implement a sync status table locally to track what’s been successfully synced versus what’s still pending.
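One possible shape for that local sync status table (column names here are assumptions for illustration, not the actual AVEVA MES Mobile schema):

```sql
-- Illustrative SQLite schema for the local pending-sync queue
CREATE TABLE pending_sync (
    transaction_id TEXT PRIMARY KEY,   -- client-generated UUID
    payload        TEXT NOT NULL,      -- serialized production entry
    client_created INTEGER NOT NULL,   -- when the operator entered the data
    client_queued  INTEGER NOT NULL,   -- when it was added to this queue
    sync_status    TEXT NOT NULL DEFAULT 'PENDING',  -- PENDING | SYNCED | FAILED
    last_attempt   INTEGER             -- timestamp of the last sync attempt
);
```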

We had duplicate records because the sync retry logic wasn’t checking if a record with the same UUID already existed on the server. The flush operation would retry failed syncs, but without server-side deduplication, each retry created a new database entry. Make sure your server endpoint checks for existing UUIDs before inserting. We added a unique constraint on the transaction_id column in the shop floor data table.

I’ve seen similar issues with offline sync. The main problem is your sync logic doesn’t handle the offline queue properly. You need to add unique identifiers to each entry before queuing and validate timestamps against server time when syncing. Are you storing a local transaction ID with each queued record? Without that, the system can’t detect duplicates when the same data gets queued multiple times during connection failures.

Another thing to watch for is the network check timing. Don’t just check if connected before flushing - that race condition can cause issues. Better to always attempt the sync and handle the network exception. The sync queue should persist across app restarts too. Are you using SQLite locally for the queue, or just in-memory storage? In-memory queues get lost if the app crashes or device reboots.
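To illustrate the attempt-first approach (Transport below is a hypothetical stand-in for the real HTTP call):

```java
import java.io.IOException;

// Sketch of "attempt first, handle failure" instead of pre-checking
// connectivity. The caller keeps the entry queued on any failure,
// including ones a prior isConnected() check would have missed.
public class AttemptFirstSync {
    public interface Transport { void send(String entry) throws IOException; }

    // Returns true only when the send actually succeeded.
    public static boolean trySend(Transport t, String entry) {
        try {
            t.send(entry);
            return true;
        } catch (IOException e) {
            return false; // leave queued; background retry will pick it up
        }
    }
}
```

This closes the race in the original snippet: the connectivity check and the flush are no longer two separate steps that the network can change between.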