We’re a mid-sized manufacturer running SAP S/4HANA with some custom bolt-ons, and we’re trying to figure out the right architecture for embedding AI into our core finance and supply chain workflows. Our CFO wants to see better forecast accuracy and faster month-end close, while our supply chain team wants smarter demand planning and automated purchase order change handling.
We’ve been evaluating three paths: going all-in on SAP Joule (now that it’s generally available and we’re already on BTP), adding Oracle AI agents for specific functions where we have Oracle Fusion modules, or building custom AI layers on top of our existing stack using external APIs and retrieval-augmented generation. The challenge is that our competitive differentiation comes from a few very specific procurement workflows and custom pricing logic. We’re fine buying commodity functionality from a vendor, but we’re nervous about handing those differentiating processes over.
Has anyone gone through a similar decision process? How did you balance vendor-native co-pilots versus custom-built agents, and what were the unexpected integration or data quality challenges you hit once you moved past pilots?
The data quality piece can’t be overstated. We spent six months trying to get our demand forecasting agent to work, only to realize our item master data had inconsistent descriptions, duplicate SKUs, and missing hierarchies. No amount of fancy LLM magic fixes bad data. Whatever path you choose—vendor or custom—you need a centralized, clean data layer. We ended up building a curated dataset in Snowflake that pulls from S/4HANA, our legacy MES, and external market data, then both our custom agents and vendor tools consume from that. If your data is scattered and messy, neither approach will give you reliable results.
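To make the item-master problems concrete, here’s a rough sketch of the kind of audit pass that would have caught our issues months earlier. The column names (`sku`, `description`, `hierarchy`) are simplified stand-ins, not the real S/4HANA field names, and a production version would run against the curated warehouse tables, not in-memory dicts.

```python
# Hypothetical item-master quality audit. Field names are assumptions;
# adapt to your actual extract schema.
from collections import Counter

def audit_item_master(rows):
    """Return simple data-quality findings for an item master extract."""
    skus = [r["sku"] for r in rows]
    dup_counts = Counter(skus)
    duplicates = sorted(s for s, n in dup_counts.items() if n > 1)
    missing_hierarchy = [r["sku"] for r in rows if not r.get("hierarchy")]

    # Flag "inconsistent descriptions": same SKU, materially different text
    # after whitespace/case normalization.
    desc_by_sku = {}
    inconsistent = set()
    for r in rows:
        norm = " ".join(r["description"].lower().split())
        if r["sku"] in desc_by_sku and desc_by_sku[r["sku"]] != norm:
            inconsistent.add(r["sku"])
        desc_by_sku[r["sku"]] = norm

    return {
        "duplicate_skus": duplicates,
        "missing_hierarchy": missing_hierarchy,
        "inconsistent_descriptions": sorted(inconsistent),
    }

rows = [
    {"sku": "A100", "description": "Hex Bolt M8", "hierarchy": "FASTENERS"},
    {"sku": "A100", "description": "hex  bolt m8", "hierarchy": "FASTENERS"},
    {"sku": "B200", "description": "Bearing 6204", "hierarchy": ""},
    {"sku": "C300", "description": "Gasket", "hierarchy": "SEALS"},
    {"sku": "C300", "description": "Gasket, rubber", "hierarchy": "SEALS"},
]
print(audit_item_master(rows))
# → {'duplicate_skus': ['A100', 'C300'], 'missing_hierarchy': ['B200'],
#    'inconsistent_descriptions': ['C300']}
```

The point isn’t the code itself but the habit: run checks like these before any agent consumes the data, and gate the pipeline on the results.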
If you have legacy or brownfield ERP systems alongside your S/4HANA, the custom API layer approach gives you more flexibility. We run a hybrid landscape—some modules still on ECC, newer ones on S/4HANA, and a few third-party apps for specialized logistics. Vendor agents from SAP or Oracle don’t play nicely across that fragmented setup. We built an orchestration layer using external APIs and a RAG pipeline that sits on top of everything, pulling context from wherever it lives. It’s more work upfront, but it means we’re not locked into one vendor’s roadmap and we can handle the messiness of our real environment. Just be ready to invest in the integration infrastructure and ongoing maintenance.
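For anyone picturing what that orchestration layer looks like, here’s a minimal sketch of the pattern: a registry of per-system fetchers (S/4HANA, ECC, the logistics apps) that each return context snippets, which the orchestrator merges into one prompt for whatever LLM endpoint you use. Every system name and fetcher below is a stub for illustration, not a real connector; the real versions call OData/Graph APIs or direct DB reads.

```python
# Hedged sketch of a multi-system RAG orchestration layer.
# All fetchers are stand-ins; real ones would query live systems.
from typing import Callable

FETCHERS: dict[str, Callable[[str], list[str]]] = {}

def register(system: str):
    """Decorator that registers a fetcher under a source-system name."""
    def wrap(fn):
        FETCHERS[system] = fn
        return fn
    return wrap

@register("s4hana")
def fetch_s4(query: str) -> list[str]:
    # Real version would hit OData/SAP Graph; stubbed here.
    return [f"[s4hana] open POs matching '{query}'"]

@register("ecc")
def fetch_ecc(query: str) -> list[str]:
    return [f"[ecc] legacy material docs for '{query}'"]

def build_prompt(query: str, systems: list[str]) -> str:
    """Gather context from each named system and assemble one prompt."""
    context = []
    for system in systems:
        context.extend(FETCHERS[system](query))
    joined = "\n".join(context)
    return f"Context:\n{joined}\n\nQuestion: {query}"

print(build_prompt("supplier 4711", ["s4hana", "ecc"]))
```

The registry shape is what buys you the vendor independence: adding a new source system is one fetcher, and swapping the LLM endpoint doesn’t touch any of the retrieval code.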
From an integration standpoint, Joule’s architecture is pretty solid if you’re already on BTP and have your SAP systems connected via Graph or CAP APIs. The permission parity model they enforce is a big plus for governance—users can’t do anything through Joule they couldn’t do in the app itself. That said, extensibility through Joule Studio is still maturing. We built a couple of custom skills for our field service team, and the deployment lifecycle was clunky compared to standard BTP workflows. If you go the Joule route, make sure you’ve got people who understand SAP Build and are comfortable with API specs. It’s not plug-and-play for custom scenarios yet.
We had a similar conversation last year. We ended up adopting a hybrid approach: vendor AI for the commodity finance processes (like invoice matching, basic variance analysis, and document ingestion) and custom agents for our specialized revenue recognition workflows that involve multi-party contracts. The key was getting very honest about what actually differentiates us. Most finance close tasks don’t: automated reconciliation and ledger analysis from a vendor worked fine. But our contract lifecycle stuff is unique, so we built that ourselves using RAG over our internal policy docs and integrated it via API. Data quality was the real gatekeeper; we had to spend three months cleaning up master data and fixing duplicate vendor records before either path worked reliably.
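On the duplicate vendor records specifically, a simple fuzzy-match pass got us most of the way. This is a rough sketch of the idea, assuming stdlib `difflib` for the similarity score; the threshold, the legal-suffix list, and the field layout are all assumptions we tuned against a labeled sample, and every flagged pair still went to a human for review.

```python
# Rough sketch of duplicate-vendor detection via normalized name similarity.
# Threshold and suffix list are tuning assumptions, not recommendations.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Strip punctuation and common legal suffixes: 'Acme, Inc.' ~ 'ACME INC'."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    tokens = [t for t in cleaned.split()
              if t not in {"inc", "gmbh", "llc", "ltd", "co"}]
    return " ".join(tokens)

def likely_duplicates(vendors, threshold=0.9):
    """Return pairs of vendor IDs whose normalized names are near-identical."""
    pairs = []
    for i, (id_a, name_a) in enumerate(vendors):
        for id_b, name_b in vendors[i + 1:]:
            score = SequenceMatcher(None, normalize(name_a),
                                    normalize(name_b)).ratio()
            if score >= threshold:
                pairs.append((id_a, id_b))
    return pairs

vendors = [
    ("V001", "Acme Industrial, Inc."),
    ("V002", "ACME Industrial"),
    ("V003", "Globex Bearings GmbH"),
]
print(likely_duplicates(vendors))
# → [('V001', 'V002')]
```

The O(n²) comparison is fine for a few thousand vendors; past that you’d want blocking keys (e.g. first token) before pairwise scoring.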
One thing we learned the hard way: don’t assume the vendor agent will understand your edge cases. We tried Oracle’s confirmed PO change agent and it worked great for standard scenarios, but our suppliers send change confirmations in about fifteen different formats (PDFs, emails, XML feeds, you name it). The out-of-the-box document processing choked on anything that wasn’t a clean structured file. We ended up layering a custom document parser in front of the Oracle agent, which added complexity but gave us the flexibility we needed. If your workflows are clean and standardized, vendor tools are a no-brainer. If you’ve got messy real-world data, budget for custom preprocessing layers.
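To show the shape of that preprocessing layer: detect the inbound format, parse accordingly, and hand the vendor agent one normalized record. The sketch below is deliberately simplified; the field names are hypothetical, and real PDF and email bodies need proper extraction libraries rather than the plain-text fallback shown here.

```python
# Hedged sketch of a format-dispatching PO-change normalizer.
# Field names ('po', 'new_qty') and detection logic are illustrative only.
import json
import xml.etree.ElementTree as ET

def parse_confirmation(raw: str) -> dict:
    """Normalize a PO change confirmation into {'po', 'new_qty'}."""
    text = raw.strip()
    if text.startswith("<"):                       # XML feed
        root = ET.fromstring(text)
        return {"po": root.findtext("po"),
                "new_qty": int(root.findtext("qty"))}
    if text.startswith("{"):                       # JSON payload
        data = json.loads(text)
        return {"po": data["po_number"], "new_qty": int(data["quantity"])}
    # Fallback: plain-text email body like "PO 4500001234 qty 80".
    tokens = text.split()
    return {"po": tokens[tokens.index("PO") + 1],
            "new_qty": int(tokens[tokens.index("qty") + 1])}

samples = [
    "<change><po>4500001234</po><qty>120</qty></change>",
    '{"po_number": "4500005678", "quantity": 75}',
    "PO 4500009999 qty 80",
]
for s in samples:
    print(parse_confirmation(s))
# → {'po': '4500001234', 'new_qty': 120}
#   {'po': '4500005678', 'new_qty': 75}
#   {'po': '4500009999', 'new_qty': 80}
```

The win is that the downstream agent only ever sees one schema, so its accuracy stops depending on which of your fifteen supplier formats happened to arrive.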