We’re running pilot tests with AI assistants in Teamcenter for design validation and BOM optimization. The tools work technically, but we’re hitting resistance from senior mechanical engineers who don’t trust recommendations they can’t trace back to first principles or standard design rules. It’s not outright rejection; it’s more like polite ignoring followed by manual rework using traditional methods.
The challenge isn’t the AI capability itself. When we show the team natural language BOM querying or image-based part search, they appreciate the time savings. But when AI suggests a material substitution or flags a compliance issue with a design, the reaction is different. Engineers want to understand why before they’ll act on it. They’re accountable for what gets released, and they’ve been burned before by software that promised intelligence but delivered garbage recommendations.
We’re trying to figure out the right balance between embedding AI into existing workflows versus asking engineers to fundamentally change how they validate designs. What’s worked for others in getting experienced engineers to genuinely engage with AI-assisted design work rather than treat it as another tool to route around?
We’re in automotive and ran into the same issue with certification and traceability. Engineers couldn’t use AI recommendations for safety-critical components because we had no formal verification and validation process for AI outputs. We ended up working with our compliance team to define what counts as acceptable V&V for AI-assisted design work—things like feature importance analysis, testing against representative datasets, and continuous monitoring for drift. Once we had a documented process that aligned with existing quality standards, engineers felt comfortable engaging with the tools. It’s not fast, but it’s necessary if you’re in a regulated domain.
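To make the drift monitoring piece concrete, here’s roughly the shape of check our process calls for. This is a minimal sketch, not our actual pipeline: the thresholds, seeds, and the material-substitution framing are placeholders, and the baseline would really come from whatever dataset the model was accepted on.

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline sample
    (e.g. the validation set the model was accepted on) and
    current production inputs. Rough rule of thumb:
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 drifted."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    c_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; clip so empty bins don't
    # blow up the log term.
    b_frac = np.clip(b_counts / b_counts.sum(), 1e-6, None)
    c_frac = np.clip(c_counts / c_counts.sum(), 1e-6, None)
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

# Illustrative: score distributions for a hypothetical
# material-substitution model, before and after some shift.
baseline_scores = np.random.default_rng(0).beta(2, 5, size=5000)
current_scores = np.random.default_rng(1).beta(2.5, 5, size=5000)
print(f"PSI = {psi(baseline_scores, current_scores):.3f}")
```

The point for the compliance team was that "monitoring for drift" became an auditable number with documented thresholds, not a vague promise.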
One thing that helped us was reframing what tasks we handed to AI. Instead of asking it to recommend design changes directly, we use it for tedious validation work like cross-referencing BOMs against material compliance databases or checking revision histories for conflicting change orders. Engineers trust AI more when it’s doing work that’s high-effort but low-judgment. Once they see value there, they’re more willing to experiment with higher-level recommendations.
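To give a flavor of what I mean by high-effort, low-judgment, here’s a stripped-down sketch of the BOM cross-reference. The field names and data shapes are made up for illustration; they’re not real Teamcenter export attributes or an actual compliance database schema.

```python
# Hypothetical shapes: a BOM exported from the PLM system and a
# restricted-substances list from a compliance database.
bom = [
    {"part": "BRKT-1042", "material": "EN AW-6061", "supplier": "ACME"},
    {"part": "SEAL-0077", "material": "NBR-70", "supplier": "ZenSeal"},
]
restricted = {"NBR-70": "REACH SVHC candidate (illustrative)"}

def flag_restricted(bom, restricted):
    """Return BOM lines whose material appears on the restricted
    list, with the reason attached, for an engineer to review."""
    return [
        {**line, "reason": restricted[line["material"]]}
        for line in bom
        if line["material"] in restricted
    ]

for hit in flag_restricted(bom, restricted):
    print(f'{hit["part"]}: {hit["material"]} -> {hit["reason"]}')
```

Nothing clever, but doing this across a few thousand lines by hand is exactly the work engineers are happy to hand off, and the output is trivially checkable.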
We had almost identical pushback in aerospace until we shifted the framing. Instead of positioning AI as a design assistant, we treat it as a first-pass reviewer that surfaces things humans might miss in complex assemblies. Engineers still do the validation work, but now they’re reviewing AI-flagged items instead of manually checking thousands of components. The key was making it clear that the engineer owns the decision and the AI just highlights areas worth closer inspection.
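If it helps, here’s a toy sketch of how we keep that ownership split explicit in the data model. None of this is a real Teamcenter API; it’s just the shape of a flag versus a decision.

```python
from dataclasses import dataclass

@dataclass
class AIFlag:
    """Something the model thinks is worth a closer look.
    It carries a concern and a score, never a decision."""
    component_id: str
    concern: str
    score: float  # model confidence, used for triage ordering only

@dataclass
class Review:
    """The engineer's disposition. The decision field only exists
    here, so ownership is structural, not just procedural."""
    flag: AIFlag
    reviewer: str
    decision: str  # e.g. "accepted", "rejected", "needs-analysis"
    justification: str

flags = [
    AIFlag("FSTNR-2210", "torque spec inconsistent with mating part", 0.91),
    AIFlag("PANEL-0043", "thickness below drawing note tolerance", 0.62),
]
# Engineers work the queue highest-score first; the AI never
# writes a Review record.
for f in sorted(flags, key=lambda f: f.score, reverse=True):
    print(f"{f.component_id}: {f.concern} (score {f.score:.2f})")
```

Making the decision field something the AI literally cannot populate did more for trust than any amount of messaging about "the engineer stays in the loop."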
Something we learned the hard way is that top-down mandates backfire. When leadership announced AI tools as mandatory, adoption was performative at best. What worked was bottom-up engagement—identifying early adopters who were curious, giving them time to experiment without pressure to deliver immediate results, and then amplifying their stories internally. Psychological safety is huge here. Engineers need to feel safe admitting when they don’t understand AI outputs or when the recommendations don’t make sense. If the culture punishes asking questions, you’ll never build genuine trust.
We’re still figuring this out ourselves, but one pattern emerging is that engineers trust AI more when it operates within existing governance rather than bypassing it. Our Teamcenter copilot respects existing access controls and change order processes, so it can’t accidentally surface restricted information or suggest changes that violate approval workflows. That alone reduced a lot of skepticism because engineers could see the AI wasn’t some rogue agent operating outside normal PLM rules. The downside is it limits what the AI can do, but it’s a trade-off worth making for adoption.
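Roughly what that guardrail looks like, sketched in Python: `user_can_read` is a stand-in for whatever access check your PLM actually exposes, not a real Teamcenter call, and the ACL table is obviously fake.

```python
# Illustrative only: user_can_read stands in for the real
# access-control check in your PLM; deny by default.
def user_can_read(user: str, item_id: str) -> bool:
    acl = {"ITEM-100": {"alice", "bob"}, "ITEM-200": {"alice"}}
    return user in acl.get(item_id, set())

def retrieve_for_copilot(user: str, item_ids: list[str]) -> list[str]:
    """Filter the retrieval set *before* anything reaches the model,
    so the copilot can only reason over items the user could already
    open themselves. Restricted items are silently dropped."""
    return [i for i in item_ids if user_can_read(user, i)]

print(retrieve_for_copilot("bob", ["ITEM-100", "ITEM-200"]))
# -> ['ITEM-100']  (ITEM-200 is restricted for bob)
```

The filtering has to happen on the retrieval side, not via prompt instructions, or you’re back to trusting the model with enforcement, which is exactly what the skeptics were worried about.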