Data stream visualization lags and times out when rendering large telemetry datasets

We’re experiencing severe performance issues with real-time data stream charts in IoT Operations Dashboard (iod-23). When rendering telemetry data from 500+ devices, the charts freeze or lag significantly, sometimes timing out completely.

Our current implementation subscribes to the full stream and pushes every incoming point straight into the chart:

streamAPI.subscribe('telemetry/all', (data) => {
  chartComponent.addDataPoints(data);  // one chart update per message, 10-15x per second
});

The browser becomes unresponsive when processing high-frequency updates (10-15 updates per second). We’ve tried throttling on the backend but that causes data loss. The chart rendering optimization seems insufficient for our scale. Are there data windowing techniques or frontend performance tuning strategies that work better with large datasets in iod-23?

Here’s a complete performance optimization strategy covering all three areas:

Chart Rendering Optimization: Switch to canvas-based rendering and batch your updates. Start by replacing your current chart configuration:

const chartConfig = {
  type: 'line',
  options: {
    animation: false,                // no tween work on every update
    parsing: false,                  // Chart.js decimation requires unparsed {x, y} data, sorted by x
    elements: {line: {tension: 0}},  // straight segments, no Bezier math
    plugins: {decimation: {enabled: true, algorithm: 'lttb'}}
  }
};

Disable animations for real-time charts - they’re unnecessary and consume CPU cycles on every update. Use the LTTB (Largest-Triangle-Three-Buckets) decimation algorithm, which intelligently reduces the number of plotted points while preserving the visual shape of the series.
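
For reference, LTTB is compact enough to implement directly if your charting library lacks it. Here is an illustrative sketch (not Chart.js’s internal code - the {x, y} point shape and triangle-area bucketing follow the published algorithm):

```javascript
// Illustrative LTTB (Largest-Triangle-Three-Buckets) downsampler.
// Keeps `threshold` points out of `data`, always preserving the first and
// last points and, per bucket, the point forming the largest triangle with
// its neighbours - which is what keeps the visual shape intact.
function lttb(data, threshold) {
  const n = data.length;
  if (threshold >= n || threshold < 3) return data.slice();

  const sampled = [data[0]];               // always keep the first point
  const bucketSize = (n - 2) / (threshold - 2);
  let a = 0;                               // index of last selected point

  for (let i = 0; i < threshold - 2; i++) {
    // Average of the *next* bucket, used as the third triangle vertex.
    const avgStart = Math.floor((i + 1) * bucketSize) + 1;
    const avgEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, n);
    let avgX = 0, avgY = 0;
    for (let j = avgStart; j < avgEnd; j++) { avgX += data[j].x; avgY += data[j].y; }
    avgX /= avgEnd - avgStart;
    avgY /= avgEnd - avgStart;

    // Pick the point in the current bucket with the largest triangle area.
    const rangeStart = Math.floor(i * bucketSize) + 1;
    const rangeEnd = Math.floor((i + 1) * bucketSize) + 1;
    let maxArea = -1, maxIdx = rangeStart;
    for (let j = rangeStart; j < rangeEnd; j++) {
      // Proportional to twice the triangle area; enough for comparison.
      const area = Math.abs(
        (data[a].x - avgX) * (data[j].y - data[a].y) -
        (data[a].x - data[j].x) * (avgY - data[a].y)
      );
      if (area > maxArea) { maxArea = area; maxIdx = j; }
    }
    sampled.push(data[maxIdx]);
    a = maxIdx;
  }

  sampled.push(data[n - 1]);               // always keep the last point
  return sampled;
}
```

Downsampling 1,000 points to 100 this way keeps spikes and dips visible that naive every-nth sampling would drop.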

Data Windowing Techniques: Implement a sliding window buffer client-side:

class DataWindow {
  constructor(maxSize = 300) {
    this.buffer = [];        // oldest point first
    this.maxSize = maxSize;  // cap on retained points
  }
  add(point) {
    this.buffer.push(point);
    // Drop the oldest point once the window is full. shift() is O(n),
    // which is acceptable at a few hundred points per chart.
    if (this.buffer.length > this.maxSize) this.buffer.shift();
  }
}

For your 5-minute window, 300 points at 1-second intervals is sufficient. But I recommend aggregating to 5-second intervals (60 points total) - users won’t notice the difference visually.
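
A minimal sketch of that aggregation step, assuming each raw point carries a millisecond timestamp t and a numeric value (the field names are illustrative, not part of the iod-23 API):

```javascript
// Average raw samples into fixed-width time buckets (default 5 s).
// Point shape {t, value} is illustrative - adapt it to your payload.
function aggregateByBucket(points, bucketMs = 5000) {
  const buckets = new Map();
  for (const p of points) {
    const key = Math.floor(p.t / bucketMs) * bucketMs;  // bucket start time
    const b = buckets.get(key) ?? { t: key, sum: 0, count: 0 };
    b.sum += p.value;
    b.count += 1;
    buckets.set(key, b);
  }
  // Emit one averaged point per bucket, oldest first.
  return [...buckets.values()]
    .sort((x, y) => x.t - y.t)
    .map((b) => ({ t: b.t, value: b.sum / b.count }));
}
```

Five minutes of 1-second samples (300 points) collapses to 60 points per device this way.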

Frontend Performance Tuning: Implement several optimizations:

  1. Batch Updates: Accumulate incoming data points and update charts every 500ms instead of on every message
  2. Virtual Scrolling: Only render charts in viewport using Intersection Observer API
  3. Web Workers: Offload data aggregation to a worker thread
  4. RequestAnimationFrame: Synchronize chart updates with browser repaint cycles

Implement batched updates:

let updateQueue = [];

// Enqueue instead of rendering on every message.
streamAPI.subscribe('telemetry/all', (data) => {
  updateQueue.push(data);
});

// Flush on a fixed cadence: at most two renders per second instead of 10-15.
setInterval(() => {
  if (updateQueue.length > 0) {
    processAndRenderBatch(updateQueue);
    updateQueue = [];
  }
}, 500);
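
Since one flushed batch mixes messages from many devices, it also helps to group the queue by device before rendering, so each chart receives a single update per flush rather than one call per message. A sketch, assuming each telemetry message carries a deviceId field (an assumption about your payload shape):

```javascript
// Split a flushed batch into per-device arrays so each chart gets one
// bulk update per flush. `deviceId` is an assumed message field.
function groupByDevice(queue) {
  const grouped = new Map();
  for (const msg of queue) {
    const arr = grouped.get(msg.deviceId) ?? [];
    arr.push(msg);
    grouped.set(msg.deviceId, arr);
  }
  return grouped;  // Map<deviceId, message[]>
}
```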

For 500+ devices, also implement device grouping - don’t show all devices simultaneously. Use tabs or expandable sections to limit visible charts to 10-15 at a time. This reduces the rendering workload dramatically.

Finally, consider using a specialized time-series visualization library like uPlot or Plotly, which are optimized for high-frequency data streams. They handle large datasets much better than general-purpose charting libraries.

With these optimizations, you should be able to handle 1000+ devices without noticeable lag.

Currently displaying the last 1000 data points per device across 20 visible charts (one per device group). So roughly 20,000 total points being rendered. The charts show the last 5 minutes of data with 1-second granularity per device.

That’s way too many points for real-time rendering. You need data downsampling and aggregation. Instead of showing every data point, aggregate to 5-second or 10-second intervals. For visualization purposes, users can’t distinguish between 1-second and 5-second granularity on a 5-minute window anyway. Also implement proper data windowing - only keep the visible time range in memory.

Tom’s right about aggregation. Additionally, use canvas-based rendering instead of SVG for high-frequency data. SVG creates DOM nodes for every element which kills performance. Libraries like Chart.js have canvas renderers that handle thousands of points much better. Also consider virtual scrolling if you’re showing multiple charts - don’t render charts that aren’t in viewport.

You’re hitting the DOM manipulation bottleneck. Adding individual data points triggers a full chart re-render each time. You need to batch updates and use a sliding window to limit the number of rendered points. How many data points are you trying to display simultaneously?

Yes, implement client-side windowing. Maintain a circular buffer that automatically drops old data points. The streaming API will keep sending all updates, but your client only retains what’s needed for visualization. This separates data ingestion from rendering concerns.
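
A circular (ring) buffer along those lines can be sketched as follows - unlike Array.prototype.shift(), which is O(n), overwriting a fixed-size slot is O(1) per point (illustrative, not an iod-23 API):

```javascript
// Fixed-capacity ring buffer: O(1) writes, old points overwritten
// automatically once capacity is reached.
class RingBuffer {
  constructor(capacity) {
    this.items = new Array(capacity);
    this.capacity = capacity;
    this.head = 0;   // next write position
    this.size = 0;   // number of valid entries
  }
  push(point) {
    this.items[this.head] = point;
    this.head = (this.head + 1) % this.capacity;
    if (this.size < this.capacity) this.size += 1;
  }
  // Return contents oldest-first, ready for rendering.
  toArray() {
    const start = (this.head - this.size + this.capacity) % this.capacity;
    const out = [];
    for (let i = 0; i < this.size; i++) {
      out.push(this.items[(start + i) % this.capacity]);
    }
    return out;
  }
}
```

The streaming subscription keeps writing into the buffer at full rate, while the renderer only ever reads the bounded toArray() snapshot.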