Kepware connection timeout when mapping remote Thing to data storage for real-time sensor ingestion

Our data-storage module is failing to ingest sensor data from Kepware-connected PLCs. We're running ThingWorx 9.6 with Kepware 6.13 and have about 45 remote Things mapped to various OPC UA tags. The Kepware timeout configuration seems fine (30s connect timeout, 60s read timeout), but we're still seeing connection failures. When mapping remote Thing properties to Kepware channels, the connection drops after 15-20 seconds, well before our configured timeouts. We've verified network stability between the ThingWorx and Kepware servers - ping times are consistently under 5ms. The Kepware logs show 'client disconnect during tag browse' errors. This is causing critical gaps in our sensor data collection. Has anyone successfully resolved Kepware timeout issues with remote Thing mappings in high-tag-count scenarios?

Here’s the complete solution addressing all three focus areas:

Kepware Timeout Configuration: Your initial timeouts were actually too short. For 45 remote Things, you need longer timeouts to handle connection establishment overhead:


ConnectionTimeout = 45000
ReadTimeout = 90000
ReconnectInterval = 10000
MaxReconnectAttempts = 5

Also enable Kepware's connection pooling in the OPC UA driver settings, and set MaxSessionsPerChannel to 10 instead of the default of 5.

Remote Thing Mapping Optimization: Don’t map all tags directly to remote Thing properties. Use tag groups with priority-based scanning:


// High-priority tags (alarms, critical sensors): 1000ms
ThingShape: CriticalTags, ScanRate: 1000

// Medium-priority (process values): 3000ms
ThingShape: ProcessTags, ScanRate: 3000

// Low-priority (status, diagnostics): 10000ms
ThingShape: DiagnosticTags, ScanRate: 10000

This reduces the aggregate read rate from 540 reads per second (every tag scanned at 1000ms) to roughly 180, assuming most tags land in the medium- and low-priority tiers - a load the subscription processor can keep up with.
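
As a sanity check on that figure, here is the arithmetic under one illustrative tier split - the 100/150/290 distribution is an assumption for the example, not something taken from your system:

```python
# Aggregate OPC UA read rate before and after priority-based scan grouping.
# The tag counts per tier are hypothetical; only the scan rates come from
# the tiered configuration above.
SCAN_MS = {"critical": 1000, "process": 3000, "diagnostic": 10000}
TAGS = {"critical": 100, "process": 150, "diagnostic": 290}  # sums to 540

before = 540 * (1000 / 1000)  # all 540 tags scanned every second
after = sum(n * 1000 / SCAN_MS[tier] for tier, n in TAGS.items())

print(f"before: {before:.0f} reads/s, after: {after:.0f} reads/s")
# -> before: 540 reads/s, after: 179 reads/s
```

Any split weighted toward the slower tiers lands in the same ballpark; the point is the order-of-magnitude drop, not the exact count.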

Network Stability Checks: Even with 5ms ping times, you need to verify sustained throughput. Add network monitoring:

  • Enable Kepware diagnostics logging for connection metrics
  • Monitor ThingWorx subscription queue depth (target: under 100 pending)
  • Set up alerts for connection state changes
  • Implement exponential backoff reconnection strategy

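The last bullet (exponential backoff) can be sketched in a few lines; `connect` here is a placeholder for whatever call re-establishes your Kepware session, not a real ThingWorx or Kepware API:

```python
import random
import time

def reconnect_with_backoff(connect, max_attempts=5, base_delay=1.0, cap=60.0):
    """Retry `connect` with exponential backoff plus jitter.

    `connect` is a caller-supplied callable (hypothetical here) that raises
    ConnectionError on failure. Returns its result on success; re-raises
    after max_attempts failures.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            delay = min(cap, base_delay * 2 ** attempt)   # 1s, 2s, 4s, ...
            time.sleep(delay + random.uniform(0, delay / 2))  # jitter avoids a thundering herd
```

The jitter matters with 45 Things: without it, every Thing that dropped at the same moment retries at the same moment, reproducing the spike that caused the drop.
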
In platform-settings.json, increase subscription threads:


"SubscriptionProcessing": {
  "ThreadPoolSize": 50,
  "QueueCapacity": 2000,
  "ThreadTimeout": 120000
}

For 45 remote Things with your tag count, 50 threads is appropriate. Also enable connection health monitoring in your remote Thing template by adding a ConnectionStatus property that updates every 30 seconds. This helps identify which specific Things are timing out.
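
A minimal sketch of the staleness check behind that ConnectionStatus property - the 90s threshold (three missed 30s updates) and the Thing names are illustrative, and this runs outside the platform rather than using any ThingWorx API:

```python
import time

STALE_AFTER_S = 90  # assumed: three missed 30-second ConnectionStatus updates

def stale_things(last_update, now=None):
    """Return names of Things whose last ConnectionStatus update is too old.

    `last_update` maps Thing name -> UNIX timestamp of its most recent
    heartbeat (all names here are illustrative).
    """
    now = now if now is not None else time.time()
    return sorted(name for name, ts in last_update.items()
                  if now - ts > STALE_AFTER_S)
```

Feeding this the per-Thing timestamps tells you which of the 45 Things are silently timing out rather than just that something is.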

The root cause was a combination of aggressive scan rates overwhelming the subscription processor and insufficient timeout buffers for the connection handshake phase. With these changes, your connection stability should improve significantly, and you’ll have better visibility into any remaining issues through the health monitoring.

540 tags at 1000ms is definitely your problem. Kepware can handle it, but your network layer or ThingWorx subscription processing can’t keep up. The 15-20 second timeout you’re seeing is likely ThingWorx’s internal subscription timeout, not Kepware’s. When the subscription queue gets backed up, ThingWorx assumes the connection is dead and kills it. You need to either reduce tag count per Thing, increase scan rates to 2500-5000ms, or implement tag grouping with different priorities. Critical tags at 1000ms, non-critical at 5000ms or higher.
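
To see why the queue backs up, a rough capacity model helps; the 50ms per-update service time is an assumed figure for illustration, not something measured on this system:

```python
# Hypothetical capacity check: does the subscription thread pool keep up?
def backlog_growth(updates_per_s, threads, service_ms):
    """Net queue growth (updates/s) when arrivals exceed pool capacity."""
    capacity = threads * 1000 / service_ms      # updates/s the pool can drain
    return max(0.0, updates_per_s - capacity)

# 540 tags at 1000ms against the default 20-thread pool, 50ms per update:
print(backlog_growth(540, 20, 50))   # 540 - 400 = 140 updates/s of backlog
# The same load against a 50-thread pool clears with headroom:
print(backlog_growth(540, 50, 50))   # 0.0
```

Once the backlog grows unboundedly, the 15-20 second disconnect follows naturally: the queue depth crosses whatever internal threshold ThingWorx treats as a dead connection.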

Also check your ThingWorx subscription processing thread pool. The default is 20 threads, which isn't enough for 540 high-frequency subscriptions. You'll need to increase that in platform-settings.json.

I increased scan rates to 3000ms and it helped, but I'm still seeing occasional timeouts during peak hours. The thread pool suggestion makes sense - where exactly in platform-settings.json should I adjust the subscription threads? And what's a safe value for 45 remote Things?