Opportunity pipeline dashboard aggregation optimized for large datasets

We recently completed a major performance overhaul of our opportunity pipeline dashboard in SAP CX, and I wanted to share our implementation approach for teams dealing with similar challenges.

Our sales organization manages 80,000+ active opportunities across 15 regions, and our executive dashboard was taking 45-60 seconds to load pipeline analytics. The bottleneck was real-time aggregation of opportunity values, stage distributions, and win probability calculations across the entire dataset.

Our solution focused on three key optimization strategies: implementing pre-aggregation tables that update incrementally, leveraging in-memory caching for frequently accessed metrics, and creating composite indexes on critical opportunity fields like stage, owner, and expected close date.
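To make the caching piece concrete, here is a minimal sketch of what an in-memory metric cache with prefix-based invalidation could look like. This is illustrative Python only: the `MetricCache` class, the key scheme, and the 15-minute default TTL are my own assumptions, not the author's actual implementation.

```python
import time


class MetricCache:
    """Toy TTL cache for dashboard aggregates (illustrative sketch only)."""

    def __init__(self, ttl_seconds=900):  # 900 s = the 15-minute refresh window
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() >= expires:
            del self._store[key]  # lazily evict stale entries on read
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

    def invalidate(self, key_prefix):
        # Drop every cached metric whose key starts with the prefix, e.g.
        # invalidate("pipeline:EMEA") after an EMEA opportunity changes.
        for key in [k for k in self._store if k.startswith(key_prefix)]:
            del self._store[key]
```

A hierarchical key scheme such as `pipeline:<region>:<metric>` keeps invalidation cheap: one opportunity change only evicts the aggregates for its own region rather than flushing the whole cache.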

The results have been transformative: dashboard load times dropped to 3-4 seconds, and we can now support drill-down analysis without performance degradation. I’ll walk through our technical implementation and lessons learned from production deployment.

Great question about the incremental updates. We implemented a hybrid approach using SAP CX’s custom event handlers. Whenever an opportunity is created, updated, or stage-changed, our event handler flags it for re-aggregation. We run a scheduled job every 15 minutes that processes only flagged records and updates the aggregate tables. For critical real-time metrics like today’s pipeline changes, we use a separate fast path that updates immediately but only affects current-day aggregates. This keeps the bulk historical data stable while ensuring fresh metrics where it matters most. The key was identifying which metrics truly need real-time updates versus which can tolerate a 15-minute delay.
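The flag-and-sweep pattern described above can be sketched roughly as follows. This is a toy model using Python and SQLite, not the actual SAP CX setup: the table and column names (`needs_reaggregation`, `pipeline_aggregate`, and so on) are assumptions, and in production the handler and the sweep would run as a CX event handler and a scheduled job, respectively.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE opportunity (
        id INTEGER PRIMARY KEY,
        region TEXT,
        sales_stage TEXT,
        amount REAL,
        needs_reaggregation INTEGER DEFAULT 0
    );
    CREATE TABLE pipeline_aggregate (
        region TEXT,
        sales_stage TEXT,
        total_amount REAL,
        PRIMARY KEY (region, sales_stage)
    );
""")


def on_opportunity_changed(opp_id):
    # Event-handler side: just set a flag and return, keeping the handler cheap.
    conn.execute(
        "UPDATE opportunity SET needs_reaggregation = 1 WHERE id = ?", (opp_id,)
    )


def run_scheduled_reaggregation():
    # Scheduled-job side: recompute only the (region, stage) buckets that
    # contain flagged records, then clear the flags.
    dirty = conn.execute("""
        SELECT DISTINCT region, sales_stage FROM opportunity
        WHERE needs_reaggregation = 1
    """).fetchall()
    for region, stage in dirty:
        conn.execute(
            "DELETE FROM pipeline_aggregate WHERE region = ? AND sales_stage = ?",
            (region, stage),
        )
        conn.execute("""
            INSERT INTO pipeline_aggregate (region, sales_stage, total_amount)
            SELECT region, sales_stage, SUM(amount) FROM opportunity
            WHERE region = ? AND sales_stage = ?
            GROUP BY region, sales_stage
        """, (region, stage))
    conn.execute("UPDATE opportunity SET needs_reaggregation = 0")
    conn.commit()
```

The point of the sketch is the split of responsibilities: writes stay fast because they only flip a flag, and the periodic sweep touches only the dirty buckets instead of re-scanning all 80K opportunities.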

I’m curious about the memory footprint of your in-memory caching solution. With 80K opportunities, how much memory are you allocating for the cache, and how do you handle cache invalidation when opportunity data changes? Also, did you implement any cache warming strategies for common dashboard views?

Impressive results! Can you elaborate on your indexing strategy for opportunity fields? We’ve added indexes before but didn’t see the performance gains we expected. What specific fields did you index and did you use composite indexes?

For indexing, we focused on query patterns rather than just adding indexes everywhere. Our composite indexes target the most common filter combinations: (sales_stage, expected_close_date), (owner_id, sales_stage, amount), and (region, sales_stage, created_date). We also added single-field indexes on frequently filtered columns like opportunity_type and lead_source. The game-changer was analyzing the actual dashboard queries’ execution plans in our analytics database to identify table scans. We also partitioned our opportunity table by fiscal quarter, which dramatically improved queries filtering by close-date ranges.
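To illustrate the idea, here is a small self-contained sketch (SQLite via Python, standing in for the actual analytics database) that creates one of the composite indexes named above and then inspects the query plan. `EXPLAIN QUERY PLAN` here is just the SQLite analog of the execution-plan analysis mentioned in the reply; the schema is a simplified assumption.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE opportunity (
        id INTEGER PRIMARY KEY,
        sales_stage TEXT,
        expected_close_date TEXT,
        owner_id INTEGER,
        amount REAL,
        region TEXT,
        created_date TEXT
    )
""")

# Composite indexes mirroring the filter combinations described above:
# equality column first (sales_stage), range column second (expected_close_date).
conn.execute(
    "CREATE INDEX idx_stage_close ON opportunity (sales_stage, expected_close_date)"
)
conn.execute(
    "CREATE INDEX idx_owner_stage_amount ON opportunity (owner_id, sales_stage, amount)"
)

# A typical dashboard filter: one stage, one close-date range.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT SUM(amount) FROM opportunity
    WHERE sales_stage = 'Negotiation'
      AND expected_close_date BETWEEN '2024-01-01' AND '2024-03-31'
""").fetchall()

# The plan should report a SEARCH using idx_stage_close, not a full table scan.
print(plan)
```

Column order in the composite index matters: putting the equality-filtered column (sales_stage) before the range-filtered column (expected_close_date) lets the index serve both predicates, which is likely why ad-hoc single-column indexes did not deliver the expected gains.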