Our team is designing a cross-object alerting system for our analytics dashboard and we’re debating two architectural approaches. The requirement is to notify account managers when specific combinations of opportunity, case, and contract data meet alert criteria.
Option A uses Platform Events published from record-triggered flows, with subscribers handling the alert logic and creating notification records. Option B uses scheduled flows running every 15 minutes to query across objects and generate alerts based on current state.
The Platform Events approach gives us real-time updates which is appealing, but I’m concerned about governor limits if we have high transaction volumes. The scheduled flow approach is simpler and batches the work, but introduces latency that might not be acceptable for urgent alerts.
Has anyone implemented both patterns and can share experiences on scalability, maintainability, and performance trade-offs? Our org processes about 5000 opportunity updates daily across 200 users.
From a governance perspective, Platform Events give you much better audit trails and debugging capability. Each event publish is logged and you can monitor delivery success rates. With scheduled flows, if an alert doesn’t fire, it’s harder to trace why. The governor limit concern is valid but manageable - at 5000 daily updates, you’re nowhere near the 250K event delivery limit. Just make sure your subscribers are idempotent since platform events guarantee at-least-once delivery, not exactly-once.
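To make the idempotency point concrete, here's a minimal sketch of a subscriber trigger that dedupes on an external-id key, so a redelivered event updates the same record instead of creating a duplicate. The event (`Alert_Event__e`), custom object (`Alert__c`), and all field names here are hypothetical:

```apex
// Hypothetical subscriber for an Alert_Event__e platform event.
// Idempotency: build a dedupe key from the payload and upsert on an
// external-id field, so at-least-once redelivery is a no-op, not a duplicate.
trigger AlertEventSubscriber on Alert_Event__e (after insert) {
    List<Alert__c> alerts = new List<Alert__c>();
    for (Alert_Event__e evt : Trigger.new) {
        alerts.add(new Alert__c(
            // Dedupe_Key__c is an external-id field; a replayed event
            // produces the same key and therefore the same record.
            Dedupe_Key__c = evt.Record_Id__c + ':' + evt.Alert_Type__c,
            Record_Id__c  = evt.Record_Id__c,
            Status__c     = 'New'
        ));
    }
    // Upsert by external id instead of insert: redelivery updates in place.
    upsert alerts Dedupe_Key__c;
}
```

The key design choice is that the dedupe key is derived entirely from the event payload, never from anything stateful in the subscriber.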
Here's a breakdown of both approaches against your requirements and the three trade-offs you named.
Platform Events for Real-Time Updates:
With 5000 daily opportunity updates, Platform Events are well within safe limits. You’d publish events on record triggers, keeping payloads under 1MB (typically 2-5KB for alert data). The real-time nature means alerts fire within seconds of the triggering change. Key implementation pattern: publish lean events with just IDs and change metadata, then enrich in subscribers. This minimizes governor limit impact while maintaining responsiveness. Use Change Data Capture (CDC) for standard object changes to reduce custom event consumption.
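The "lean event" pattern described above might look something like this record trigger; the event object and fields (`Alert_Event__e`, `Record_Id__c`, etc.) are illustrative names, not a standard API:

```apex
// Hypothetical publisher: fires a lean Alert_Event__e carrying only the
// record Id and change metadata. Enrichment (queries, formatting) happens
// in the subscriber, keeping this trigger's governor footprint tiny.
trigger OpportunityAlertPublisher on Opportunity (after update) {
    List<Alert_Event__e> events = new List<Alert_Event__e>();
    for (Opportunity opp : Trigger.new) {
        Opportunity old = Trigger.oldMap.get(opp.Id);
        if (opp.StageName != old.StageName) {
            events.add(new Alert_Event__e(
                Record_Id__c  = opp.Id,
                Alert_Type__c = 'StageChange',
                Old_Value__c  = old.StageName,
                New_Value__c  = opp.StageName
            ));
        }
    }
    if (!events.isEmpty()) {
        // EventBus.publish is not part of the trigger's transaction;
        // check the SaveResults rather than assuming success.
        List<Database.SaveResult> results = EventBus.publish(events);
    }
}
```

A payload like this is a few hundred bytes, comfortably under the 1MB event message limit mentioned above.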
Scheduled Flows for Batch Alerting:
Scheduled flows excel at digest-style notifications and complex multi-object queries that would be inefficient in real time. One caveat on volume: the 250K figure is an org-wide daily allocation, not a per-run limit - schedule-triggered flows can process up to 250K records per 24 hours (or 200 times your user-license count, whichever is greater), assuming proper bulkification. The latency is acceptable for non-urgent alerts like daily summaries or trend analysis. Best practice: partition your scheduled flows by data volume - separate flows for high-volume vs. low-volume alert types to prevent timeouts.
Governor Limits and Scalability:
Your 5000 daily updates translate to roughly 200 events per hour, well under the 250K daily event allocation. Platform Events scale horizontally - multiple subscribers can process events in parallel without interfering. Scheduled flows are bound by per-transaction limits (including the 10-minute maximum transaction execution time) and can become bottlenecks as data grows. Critical consideration: Platform Event subscribers run as Apex transactions, so SOQL, DML, and CPU governor limits apply inside them (SOQL queries consume per-transaction query limits, not API calls), while scheduled flows draw on the separate flow automation limits.
Recommendation for Your Architecture:
Implement a tiered approach based on urgency. Use Platform Events for alerts requiring <5 minute latency (opportunity stage changes, high-value deal updates). These represent maybe 20% of your alerts but 80% of business value. Use scheduled flows for analytical alerts that aggregate data over time periods (weekly trends, monthly summaries). This gives you real-time capability where it matters while keeping complexity manageable.
For maintainability, create a shared Apex class that contains your alert evaluation logic. Both Platform Event triggers and scheduled flows call this shared service, ensuring consistency without code duplication. The hybrid pattern also provides resilience - if Platform Events have issues, scheduled flows serve as backup.
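A sketch of that shared service, assuming a hypothetical `Alert__c` object and an illustrative high-value rule: the event subscriber calls `evaluate()` directly, while scheduled flows reach the same logic through an `@InvocableMethod` wrapper, so the rules live in exactly one place.

```apex
// Hypothetical shared alert-evaluation service. Both automation paths
// (Platform Event trigger and scheduled flow) funnel into evaluate(),
// which is the single source of truth for alert criteria.
public with sharing class AlertEvaluationService {

    public static List<Alert__c> evaluate(List<Id> recordIds) {
        List<Alert__c> alerts = new List<Alert__c>();
        for (Opportunity opp : [SELECT Id, StageName, Amount
                                FROM Opportunity WHERE Id IN :recordIds]) {
            // Illustrative rule only; real criteria would span
            // opportunities, cases, and contracts.
            if (opp.Amount != null && opp.Amount > 100000) {
                alerts.add(new Alert__c(Record_Id__c  = opp.Id,
                                        Alert_Type__c = 'HighValue'));
            }
        }
        return alerts;
    }

    // Entry point for scheduled flows, surfaced as an Apex action.
    // Flow bulkifies the input list automatically.
    @InvocableMethod(label='Evaluate Alerts')
    public static void evaluateFromFlow(List<Id> recordIds) {
        insert evaluate(recordIds);
    }
}
```

With this shape, changing an alert threshold is one edit in `evaluate()`, which directly answers the code-duplication concern with the hybrid pattern.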
Scalability-wise, you have headroom to grow to 50K daily updates before needing architecture changes. Monitor your event delivery metrics and scheduled flow execution times quarterly to identify bottlenecks early.
Have you considered a hybrid approach? Use Platform Events for high-priority alerts that need real-time delivery, and scheduled flows for lower-priority digest-style alerts. We implemented this pattern and it works really well. Critical opportunity stage changes fire Platform Events immediately, while weekly account health summaries run as scheduled flows. This way you optimize for both latency and resource efficiency. The scheduled flows also serve as a safety net to catch anything the event-driven system might have missed due to failures.
Don’t overlook the maintenance aspect. Platform Events require more sophisticated error handling and monitoring infrastructure. You need retry logic, dead letter queues for failed deliveries, and monitoring dashboards. Scheduled flows are much simpler to maintain - they either run successfully or fail, and you get clear error logs. For a team without strong event-driven architecture experience, scheduled flows might be the pragmatic choice even if they’re not the most elegant solution.
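For what it's worth, the retry and dead-letter machinery in a subscriber doesn't have to be huge. A sketch using the real `EventBus.RetryableException` and retry counter, with a hypothetical `Failed_Alert_Event__c` object as the dead-letter store:

```apex
// Hypothetical subscriber with retry + dead-letter handling.
// Throwing EventBus.RetryableException asks the platform to redeliver
// this batch; after a few attempts we park the payloads for manual replay.
trigger AlertEventRetryHandler on Alert_Event__e (after insert) {
    try {
        // Assumed shared handler containing the actual alert logic.
        AlertEvaluationService.handle(Trigger.new);
    } catch (Exception e) {
        if (EventBus.TriggerContext.currentContext().retries < 3) {
            // Platform redelivers the whole batch to this trigger.
            throw new EventBus.RetryableException(e.getMessage());
        }
        // Retries exhausted: capture payloads in a dead-letter object
        // so a monitoring dashboard or admin can replay them later.
        List<Failed_Alert_Event__c> failures = new List<Failed_Alert_Event__c>();
        for (Alert_Event__e evt : Trigger.new) {
            failures.add(new Failed_Alert_Event__c(
                Payload__c = JSON.serialize(evt),
                Error__c   = e.getMessage()));
        }
        insert failures;
    }
}
```

Note the granularity caveat: retries redeliver the whole batch, which is exactly why the idempotent-subscriber advice earlier in the thread matters.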
We went with scheduled flows initially for a similar use case and regretted it. The 15-minute latency became a real problem for our sales team who needed immediate visibility. We also hit issues with the scheduled flow timing out when processing large result sets. The batch nature sounds good in theory but you lose granularity in error handling - if one alert fails, it’s hard to isolate and retry just that one.
The hybrid approach is interesting. How do you handle the complexity of maintaining two different alert mechanisms? I’m worried about code duplication and keeping the logic synchronized between the event subscribers and the scheduled flows.