Rules engine message filtering causes performance degradation in high-volume scenarios

Our Pub/Sub rules engine slows down dramatically when filtering IoT messages at high volume. We’re processing 50,000 messages/minute from sensors, applying filters based on device type, location, and sensor readings. Latency has increased from 200ms to 8+ seconds.


Filter expression: attributes.deviceType='temperature' AND attributes.location='warehouse-a'
Message rate: 833 msg/sec
Processing latency: 8.2s (p95), 12.5s (p99)
CPU utilization: 85% sustained

We’re using Pub/Sub message attributes for filtering, with 6 different attributes per message. The rules engine evaluates 15 different filter conditions across multiple subscriptions. Performance was acceptable at 10,000 msg/min but degrades exponentially beyond that. How do we optimize message filtering for high-volume IoT workloads without losing filter flexibility?

Quick wins: reduce the number of message attributes from 6 to the 3 most critical ones, since each attribute adds evaluation overhead. Where your engine supports it, use integer-valued attributes instead of strings; numeric comparisons are typically 3-4x faster than string equality checks. For location filtering, hash location names to integers. Finally, combine multiple filter conditions into a single composite attribute so the engine performs one equality check instead of several.
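A minimal sketch of the composite-attribute and location-hashing ideas. The function names (`composite_key`, `location_id`) and the `|` separator are illustrative choices, not part of any Pub/Sub API:

```python
import zlib

def composite_key(device_type: str, location: str, status: str) -> str:
    # Collapse three attributes into one composite value so a filter
    # does a single equality check instead of three separate ones.
    # The '|' separator is arbitrary; pick one that can't appear in values.
    return f"{device_type}|{location}|{status}"

def location_id(location: str) -> int:
    # Hash a location name to a stable integer. crc32 is deterministic
    # across processes and runs, unlike Python's built-in hash().
    return zlib.crc32(location.encode("utf-8"))
```

At publish time you'd set one attribute, e.g. `key=composite_key(...)`, and filter with a single equality check against the precomputed composite value.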

Your filter expression is inefficient. String equality checks on attributes are slower than you'd expect at high volume. Instead of filtering on attributes.location='warehouse-a', use separate topics per location. That way, filtering happens at publish time through topic selection, which is much cheaper than subscription-level filtering. Reorganize your topic structure to match your main filtering dimensions.
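One way to sketch that topic-per-dimension routing. The project id and the `iot-{deviceType}-{location}` naming scheme are assumptions for illustration:

```python
def topic_for(project: str, device_type: str, location: str) -> str:
    # Encode the filtering dimensions in the topic name so routing is
    # decided at publish time; subscribers attach to exactly the topics
    # they care about and need no subscription-level filter at all.
    return f"projects/{project}/topics/iot-{device_type}-{location}"
```

A publisher would then call something like `publisher.publish(topic_for("my-project", "temperature", "warehouse-a"), payload)`. The trade-off: topic count grows with the product of your dimensions, so this works best for low-cardinality dimensions like device type and site.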

The exponential degradation suggests you're hitting resource limits. Check your subscription configuration: are you running multiple subscribers with proper flow control? Subscribers should process messages in parallel, but a single subscriber instance becomes a bottleneck at 833 msg/sec. Scale out to 10-20 subscriber instances behind load balancing, and set flow-control limits so each instance pulls only what it can process.
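A self-contained sketch of the parallel-processing idea using a worker pool. With the actual Pub/Sub client you'd get the same effect from multiple subscriber instances plus flow-control settings on the streaming pull; the `process` function here is a stand-in for your real message handler:

```python
from concurrent.futures import ThreadPoolExecutor

def process(msg: str) -> str:
    # Placeholder for real per-message work (parse, filter, store).
    return msg.upper()

def handle_batch(messages: list[str], workers: int = 8) -> list[str]:
    # Fan messages out across worker threads; bounding max_workers acts
    # as a crude flow-control limit on in-flight work per instance.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, messages))
```

In the managed client the equivalent knob is the flow-control setting that caps outstanding messages per subscriber, so no single instance buffers more than it can drain.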