We’re designing a new integration layer for our financial accounting system on SAP S/4HANA 1809 and debating between event-driven architecture versus traditional polling for the SAL API. The key considerations are real-time monitoring requirements for treasury operations and maintaining a complete compliance audit trail for SOX reporting.
Event-driven would give us instant notification of journal entries and account changes, but I’m concerned about message ordering, guaranteed delivery, and the complexity of event sourcing for audit reconstruction. Traditional polling is simpler and we control the cadence, but it introduces latency that might not meet treasury’s real-time dashboard needs.
What are the trade-offs between event-driven versus polling approaches when SAL API capabilities need to support both operational real-time monitoring and regulatory compliance audit trails? Has anyone implemented event-driven patterns successfully with S/4HANA financial APIs while maintaining audit integrity?
I’ve implemented both patterns for financial systems. Here’s what I’ve learned: SAL API event notifications work but have limitations around delivery guarantees and filtering capabilities. You’ll spend significant effort building reliability layers on top. The hybrid model is pragmatic but creates dual data flows to maintain. Consider vendor lock-in too - heavy investment in SAP-specific event mechanisms makes future migration harder, whereas standard REST polling works with any system.
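To make the "reliability layer" point concrete, here's a minimal sketch of the kind of wrapper you end up writing around notifications that aren't guaranteed: retry a handler a few times, then park the event in a dead-letter store for manual review. The `handler` and `dead_letter` callables are placeholders for your own consumer logic, not anything SAP ships.

```python
def deliver_with_retry(handler, event, max_attempts=3, dead_letter=None):
    """Minimal reliability layer over best-effort event notifications.

    Retries `handler(event)` up to `max_attempts` times; on final failure,
    hands the event to `dead_letter` (any callable standing in for a
    dead-letter queue) so it can be reconciled later instead of lost.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            handler(event)
            return True
        except Exception:
            if attempt == max_attempts:
                if dead_letter is not None:
                    dead_letter(event)
                return False
    return False
```

In production you'd add backoff, logging, and alerting on dead-letter growth - which is exactly the operational surface area that polling avoids.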
Consider the cost-benefit analysis too. Event-driven requires infrastructure for message brokers, event stores, consumer groups, dead-letter queues, and monitoring, plus the operational complexity that comes with all of it. You need skilled engineers who understand distributed systems, eventual consistency, and event sourcing patterns. Polling is operationally simpler - a scheduled job, some API calls, database writes. For 1809 specifically, the SAL API event capabilities are limited compared to newer versions. Unless you have hundreds of thousands of transactions per hour, polling every 30-60 seconds might be sufficient for treasury dashboards without the architectural complexity.
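The "scheduled job, some API calls, database writes" pattern is basically a watermark poller. A sketch, assuming a hypothetical `fetch(since)` callable wrapping whatever delta query your API exposes (e.g. filtering on a last-changed timestamp) - the field names here are illustrative, not real SAL API fields:

```python
from datetime import datetime, timedelta, timezone

def poll_journal_entries(fetch, last_seen):
    """One polling cycle using a timestamp high-water mark.

    `fetch(since)` stands in for a delta query against the API, returning
    dicts with at least 'id' and 'changed_at'. Returns the new entries in
    chronological order plus the advanced watermark, so the next cycle only
    asks for what changed since.
    """
    entries = fetch(last_seen)
    # Sort by change timestamp so downstream writes preserve chronology.
    entries.sort(key=lambda e: e["changed_at"])
    new_watermark = max((e["changed_at"] for e in entries), default=last_seen)
    return entries, new_watermark
```

Run it from any scheduler every 30-60 seconds, persist the watermark, and you have a restartable pipeline with no broker to operate. One caveat worth designing for: if the source timestamp has coarse granularity, overlap the window slightly and deduplicate, rather than risk missing entries that commit within the same second.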
Speaking from the business side, our treasury team needs real-time visibility into cash positions and payment statuses. We can tolerate 1-2 minute latency but not 15-30 minute delays from infrequent polling. However, the real-time requirement is for operational dashboards, not for audit reporting. The audit trail can be batch-reconciled overnight. So maybe the hybrid approach makes sense - events for operational real-time views, polling for compliance audit trail generation.
Event-driven sounds appealing but you’re right to be cautious about audit trail integrity. In my experience, the biggest challenge is ensuring event ordering and handling duplicate events. Financial transactions must maintain strict chronological order for audit purposes, and event-driven systems can deliver events out of sequence during network issues or system restarts. You’ll need sequence numbers, idempotency keys, and event replay capability. That’s a lot of complexity versus simple polling with timestamp-based queries.
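To illustrate what the sequence-number and idempotency-key machinery looks like, here's a minimal consumer sketch: it drops duplicate deliveries by key and buffers out-of-order events until the sequence gap fills, so events are only applied in strict order. This assumes the producer side can stamp a monotonically increasing `seq` and a unique `key` on each event - which is itself part of the work being described.

```python
class OrderedIdempotentConsumer:
    """Releases events strictly in sequence order, dropping duplicates.

    Each event is a dict with a monotonically increasing 'seq' and a unique
    idempotency 'key'. Out-of-order arrivals are buffered until the gap in
    the sequence is filled; redelivered events are ignored.
    """

    def __init__(self, start_seq=1):
        self.next_seq = start_seq
        self.seen_keys = set()
        self.buffer = {}  # seq -> event, held until contiguous

    def accept(self, event):
        """Return the list of events now safe to apply, in order."""
        if event["key"] in self.seen_keys:
            return []  # duplicate delivery; already processed
        self.seen_keys.add(event["key"])
        self.buffer[event["seq"]] = event
        released = []
        while self.next_seq in self.buffer:
            released.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return released
```

A durable version would persist `next_seq` and the seen keys, and alert when a gap stays open too long (a lost event) - that's the replay capability mentioned above, and it's why simple timestamp-based polling queries often win on audit-trail integrity.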