Custom dashboard widget not updating with latest Pub/Sub data in real time

We built a custom visualization dashboard for IoT device monitoring, but the widgets are showing stale data. Devices publish telemetry to Cloud IoT Core every 10 seconds, yet our dashboard widgets only update every 2-3 minutes.

Our setup uses a Pub/Sub push subscription that sends data to a backend API, which stores it in Firestore. The frontend polls Firestore every 30 seconds:


```javascript
setInterval(() => {
  fetchLatestTelemetry().then(data => updateWidget(data));
}, 30000);
```

The widget data binding seems correct, but there’s a noticeable lag between device state changes and dashboard updates. We need near real-time visualization (under 15 seconds) for our operations team to monitor critical equipment. Is polling the wrong approach here? Should we be using WebSockets or Server-Sent Events instead?

Consider using Firebase Realtime Database instead of Firestore if you need real-time updates. It has built-in listeners that trigger on data changes, no polling required. Your Pub/Sub push subscription writes to Realtime Database, and your frontend listens for changes. Simpler than managing WebSocket connections yourself.
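A minimal sketch of that listener approach, assuming the Firebase v9 modular API and a `telemetry/{deviceId}` path (both assumptions, not from the original post). The `ref` and `onValue` functions come from `firebase/database` in a real app; here they are injected so the sketch is self-contained:

```javascript
// Hedged sketch: Realtime Database listener instead of polling.
// In production, import { ref, onValue } from 'firebase/database'; they are
// injected here so the example runs without the SDK. The `telemetry/${deviceId}`
// path is an assumption.
function watchDeviceTelemetry({ ref, onValue }, db, deviceId, updateWidget) {
  // onValue fires once with the current value, then again on every write,
  // so the widget updates at the database's propagation delay, not a poll interval.
  return onValue(ref(db, `telemetry/${deviceId}`), (snapshot) => {
    updateWidget(snapshot.val());
  });
}
```

The return value is the unsubscribe function, which the widget should call when it unmounts.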

Use Socket.io for better browser compatibility and automatic reconnection handling. Keep Firestore for historical data and analytics, but add a parallel WebSocket path for real-time updates. Your backend should publish incoming Pub/Sub messages to both Firestore (for persistence) and connected WebSocket clients (for real-time). This dual-path approach gives you both real-time ops and historical reporting.
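The dual-path idea can be sketched as a single Pub/Sub message handler. `firestore` and `io` are injected, and the collection name and event name are assumptions, not from the original setup:

```javascript
// Hedged sketch: one handler, two sinks. 'telemetry' collection/event names
// are placeholders.
function makeTelemetryFanout(firestore, io) {
  return function onPubSubMessage(message) {
    const data = JSON.parse(message.data.toString()); // Pub/Sub message data is a Buffer
    // Path 1: persist for history and analytics
    firestore.collection('telemetry').doc(data.deviceId).set(data);
    // Path 2: push to dashboard clients subscribed to this device's room
    io.to(`devices:${data.deviceId}`).emit('telemetry', data);
    message.ack();
  };
}
```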

Your dashboard lag is caused by polling-based data fetching combined with suboptimal widget data binding. Here’s how to achieve true real-time updates:

Widget Data Binding Architecture: Replace polling with event-driven updates using WebSocket connections. Implement a message broker pattern where your backend acts as a bridge between Pub/Sub and WebSocket clients:


```javascript
// Backend: forward Pub/Sub messages to subscribed WebSocket clients
io.on('connection', (socket) => {
  socket.on('subscribe', (deviceIds) => {
    // Join one room per device so each client only receives the devices it asked for
    deviceIds.forEach(id => socket.join(`devices:${id}`));
  });
});

pubsubSubscription.on('message', (message) => {
  const data = JSON.parse(message.data.toString()); // Pub/Sub message data is a Buffer
  io.to(`devices:${data.deviceId}`).emit('telemetry', data);
  message.ack();
});
```

On the frontend, establish WebSocket connection and update widgets reactively:


```javascript
const socket = io(BACKEND_URL);

socket.emit('subscribe', deviceIds);
socket.on('telemetry', (data) => {
  updateWidgetReactive(data);
});
```

Pub/Sub Push Subscription Optimization: Configure your Pub/Sub push subscription for low latency:

  • Set the push endpoint to your WebSocket backend (Cloud Run or GKE)
  • Enable autoscaling with min instances: 2, max instances: 20
  • Set the subscription's acknowledgement deadline to 30 seconds so slow handlers don't trigger premature redelivery
  • Use subscription filters so each subscription only receives the relevant device data

Implement connection pooling on the backend to handle multiple simultaneous WebSocket clients efficiently. For 1000+ concurrent dashboard users, use Redis pub/sub as an intermediate layer to distribute messages across multiple backend instances.

Frontend Event Handling Performance: Optimize widget updates to prevent UI blocking:


```javascript
let updateQueue = [];
let rafScheduled = false;

function updateWidgetReactive(data) {
  updateQueue.push(data);
  if (!rafScheduled) {
    rafScheduled = true;
    requestAnimationFrame(processUpdates);
  }
}

function processUpdates() {
  const batch = updateQueue.splice(0, updateQueue.length);
  batch.forEach(data => applyWidgetUpdate(data));
  rafScheduled = false;
}
```

Implement selective DOM updates - only modify changed data points instead of re-rendering entire widgets. Use virtual DOM diffing for complex visualizations. For high-frequency telemetry (multiple updates per second per device), debounce updates with a 200ms window to balance responsiveness with performance.
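A minimal sketch of that debounce step, keeping only the latest payload per device within the window. The 200 ms default follows the suggestion above; the `deviceId` field matches the telemetry shape used earlier:

```javascript
// Hedged sketch: coalesce bursts of telemetry so each device triggers at most
// one widget redraw per window.
function createTelemetryCoalescer(applyUpdate, windowMs = 200) {
  const latest = new Map(); // deviceId -> most recent payload (older ones are dropped)
  let timer = null;

  function flush() {
    if (timer !== null) clearTimeout(timer);
    timer = null;
    for (const data of latest.values()) applyUpdate(data);
    latest.clear();
  }

  return {
    push(data) {
      latest.set(data.deviceId, data); // newer payload overwrites older
      if (timer === null) timer = setTimeout(flush, windowMs);
    },
    flush, // call on page hide/unload so the last values are not lost
  };
}
```

Wire `push` in as the `telemetry` event handler in place of a direct widget update.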

Monitoring and Fallback: Implement WebSocket connection health monitoring with automatic reconnection on disconnect. Add a fallback polling mechanism (every 60 seconds) as backup when WebSocket connection fails. Track these metrics:

  • WebSocket connection uptime
  • Message delivery latency (Pub/Sub publish to widget update)
  • Frontend update processing time
  • Dropped message count

With this architecture, our dashboard latency dropped from 2-3 minutes to under 5 seconds (including network transit time). During peak loads with 500+ devices updating every 10 seconds, the system maintains sub-10-second update latency with zero dropped messages.

Don’t forget about frontend event handling optimization. Even with WebSockets, if you’re updating the entire widget on every message, you’ll see performance issues with high-frequency telemetry. Implement selective updates - only redraw changed data points. Use requestAnimationFrame for smooth visual updates and debounce rapid successive updates from the same device.

Watch out for Pub/Sub push subscription endpoint scaling. If your backend can’t handle the message rate during peak hours, messages will queue and you’ll still see delays even with WebSockets. Make sure your backend is containerized and can autoscale based on incoming message volume. We use Cloud Run with min instances set to 3 for consistent real-time performance.
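For the Cloud Run setup described, the scaling bounds can be set at deploy time; the service and image names below are placeholders:

```shell
# Hypothetical service/image names; min instances keep warm capacity so push
# deliveries don't hit cold starts during traffic spikes.
gcloud run deploy dashboard-backend \
  --image=gcr.io/PROJECT_ID/dashboard-backend \
  --min-instances=3 \
  --max-instances=20
```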