Building trust and capability for AI forecasting rollout across finance teams

We started piloting an ML-based forecasting tool in our planning group about eighteen months ago. Early results were strong—accuracy improved, the model surfaced scenario options our analysts hadn’t considered, and leadership got excited. But when we tried to roll it out to the broader finance organization, adoption stalled hard. Controllers in three regions weren’t using it. Analysts were reverting to their old Excel templates within weeks.

The real issue wasn’t the technology. It was trust and capability. Finance teams wanted to understand how the model worked, what data it used, and whether they could override it when something looked wrong. They also needed new skills—not just how to click buttons, but how to interpret model outputs, validate recommendations, and know when human judgment should override the system. We ended up investing about 35% of the project budget into change management: workshops in each region to gather input on local needs, training that focused on transparency and control mechanisms, and a champion network of early adopters who could support their peers.

What actually moved the needle was showing people that they could audit the AI and challenge it, and that it freed them up to do strategic work instead of manual data wrangling. We also learned that one-time training wasn’t enough—people needed ongoing coaching and a feedback loop where they could report friction points and see us respond. Adoption rates finally climbed to around 70% after we embedded those support structures. The lesson for us was that AI adoption in finance is fundamentally a people problem, not a tech problem.

Curious about the champion network—how did you identify and recruit those early adopters? In our organization, the people most excited about new tech are often not the ones with the most credibility among their peers. Did you prioritize enthusiasm or influence when building that group?

The 35% change management budget number is interesting. We’ve been told to keep it under 15%, and honestly it shows. Our procurement AI pilot worked fine in the test group, but when we expanded it to other buyers, usage dropped off fast. Nobody had time to learn it properly, and there was no ongoing support after the initial training session. I’m going to push back on that budget constraint—sounds like you proved that investing in people is what actually determines success.

This resonates. We had a similar experience with intelligent invoice processing. AP teams initially resisted because they didn’t trust the system to handle exceptions correctly, and they were worried about losing the expertise they’d built over years of manual review. What helped was being really transparent about where the AI could fail and giving them clear override paths. We also made sure to celebrate the shift from data entry to exception handling and supplier relationship work, reframing the role as more strategic rather than diminished.

We’re at the pilot stage now with a similar forecasting tool, and your experience is a useful warning. One thing I’m wrestling with is how to balance standardization with local variation. Did you allow regional teams to customize how they used the tool, or did you enforce a common process? We’ve got different business models across regions, and I’m worried a one-size-fits-all approach will create resistance, but I also don’t want ten different implementations.

Good question. We looked for people who had both—credibility with their teams and genuine curiosity about the tool. In a couple of regions, that meant recruiting senior analysts who were initially skeptical but willing to test it if we addressed their concerns. Those turned out to be the most effective champions because when they endorsed it, others listened. We also made sure champions had a direct line to the project team so they could escalate issues and see quick responses. That kept them engaged and gave them real problem-solving authority, not just cheerleading duty.