Who owns AI-generated designs? Sorting out IP and approval workflows

We’re rolling out generative design tools for several product lines—mostly mechanical components and some assemblies—and the IP ownership question keeps coming up in every design review. When our engineers use AI to explore hundreds of design alternatives and then select one with modifications, who actually owns that design? Is it the engineer because they set the constraints and validated the output? The company because it happened during work hours on our infrastructure? Or does the AI vendor have some claim because their algorithm generated the solution?

Our legal team is telling us current IP frameworks don’t map cleanly to machine-generated outputs. Patents and copyrights assume human creativity and intentionality. We’re also in aerospace, so we have strict regulatory requirements for demonstrable human oversight in design decisions. We’re trying to update our approval workflows to embed clear decision authority and audit trails—who reviews AI recommendations, who has sign-off power, what gets escalated—but honestly we’re making this up as we go.

Curious how others are handling this. Are you documenting human contribution at every stage? Have you updated design governance policies to explicitly cover AI-generated work? And what does your change management process look like when AI detects dependencies and routes reviews automatically?

One thing we learned the hard way: you can’t graft governance onto AI workflows after the fact. We piloted a generative design tool where engineers were creating dozens of alternatives in minutes, but nobody defined upfront who reviews them, what the approval criteria are, or how we handle designs that look good but violate some unwritten company standard. It turned into chaos. Now we design the approval workflow first—who has authority at each stage, what triggers escalation, what documentation is required—before we turn on any new AI capability. Takes longer upfront but way fewer surprises.
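
For what it’s worth, here’s roughly how we think about encoding that workflow before a tool goes live. This is a toy sketch, not our production system, and every stage name, role, and trigger in it is invented:

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    approver_role: str                  # who has sign-off authority at this stage
    required_docs: list[str]            # documentation that must exist before the gate
    escalation_triggers: list[str] = field(default_factory=list)

# Workflow defined *before* the generative tool is switched on.
WORKFLOW = [
    Stage("screening", "design_engineer",
          required_docs=["design_brief"],
          escalation_triggers=["violates_company_standard"]),
    Stage("technical_review", "lead_engineer",
          required_docs=["ai_output_log", "evaluation_notes"],
          escalation_triggers=["novel_geometry", "new_material"]),
    Stage("release", "chief_engineer",
          required_docs=["validation_report", "signoff_record"]),
]

def route(docs: set[str], flags: set[str]) -> str:
    """Walk the stages in order; stop where documentation is missing or a trigger fires."""
    for stage in WORKFLOW:
        missing = [d for d in stage.required_docs if d not in docs]
        if missing:
            return f"blocked at {stage.name}: missing {missing}"
        fired = [t for t in stage.escalation_triggers if t in flags]
        if fired:
            return f"escalated at {stage.name} to {stage.approver_role}: {fired}"
    return "cleared for release"

print(route({"design_brief", "ai_output_log", "evaluation_notes"}, {"novel_geometry"}))
# escalated at technical_review to lead_engineer: ['novel_geometry']
```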

You also need to think about the data side. If your AI is trained on proprietary historical designs, make sure you own the rights to that training data and the outputs. We had a situation where a vendor wanted to use our design data to improve their model for other customers, which would have been a huge IP leak. Contract language is critical. Also, if you’re pulling data from PLM, ERP, and supplier systems to give AI context for design decisions, you need solid data governance—metadata standards, access controls, audit logs. Otherwise you can’t prove where the AI’s recommendations came from, and that’s a problem both legally and for regulatory compliance.
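
To make the provenance point concrete: every AI recommendation can carry a record of which governed datasets it was built from, so you can later show where a given recommendation came from. A minimal sketch, with invented field names and systems:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(recommendation: dict, sources: list[dict]) -> dict:
    """Tie one AI recommendation to the governed data it was based on."""
    payload = json.dumps(recommendation, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recommendation_hash": hashlib.sha256(payload).hexdigest(),
        "model_version": recommendation.get("model_version", "unknown"),
        # each source names the system of record, dataset version, and access grant
        "input_sources": sources,
    }

entry = provenance_record(
    {"part": "bracket-17", "suggestion": "increase rib thickness to 2.1 mm",
     "model_version": "v3.4"},
    [{"system": "PLM", "dataset": "historical_brackets",
      "version": "2024-06", "access_grant": "ai-svc-readonly"}],
)
print(json.dumps(entry, indent=2))
```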

The regulatory side is tricky. In our industry every design decision has to be auditable and traceable back to a qualified engineer who takes responsibility for it. We can use AI to generate candidates or speed up simulation, but the engineer must be able to explain and defend every choice. Our design review process now includes a specific checklist: did the engineer define the problem? Did they evaluate multiple AI-generated alternatives? Did they modify or refine the selected design? Can they justify why the design meets requirements? If we can’t answer yes to all of those, the design doesn’t move forward. It adds time, but it’s the only way to stay compliant.
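
In practice that checklist lives in our review procedure, not in software, but if you wanted to enforce it in a tool the logic is just a hard gate. A toy version (the question keys are paraphrases, not our actual form):

```python
# The four checklist questions, as a hard gate: any "no" blocks the design.
CHECKLIST = [
    "engineer_defined_problem",
    "evaluated_multiple_ai_alternatives",
    "modified_or_refined_selected_design",
    "justified_against_requirements",
]

def review_gate(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passes, unmet items); a missing answer counts as 'no'."""
    unmet = [q for q in CHECKLIST if not answers.get(q, False)]
    return (not unmet, unmet)

passes, unmet = review_gate({
    "engineer_defined_problem": True,
    "evaluated_multiple_ai_alternatives": True,
    "modified_or_refined_selected_design": False,
    "justified_against_requirements": True,
})
print("proceed" if passes else f"blocked: {unmet}")
# blocked: ['modified_or_refined_selected_design']
```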

We’re still at the stage where AI suggests design improvements within our CAD tool—nothing crosses systems yet. Even at that basic level, we’ve updated our design standard operating procedures to say that any AI recommendation must be reviewed and validated by a qualified engineer before it’s incorporated. The engineer logs what the AI suggested, why they accepted or rejected it, and what modifications they made. It’s lightweight but it creates an audit trail. We’re not ready for autonomous workflows where AI routes change requests or detects dependencies across systems. That’s going to require way more governance infrastructure than we have right now.
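
The log itself doesn’t need to be fancy; conceptually it’s an append-only record, one row per recommendation reviewed. Something in this spirit would do (field names and values are made up for illustration, not our actual schema):

```python
import csv
import os
from datetime import datetime, timezone

FIELDS = ["timestamp", "engineer", "ai_suggestion", "decision", "rationale", "modifications"]

def log_decision(path: str, engineer: str, ai_suggestion: str,
                 decision: str, rationale: str, modifications: str = "") -> None:
    """Append one reviewed AI recommendation to a CSV audit trail."""
    assert decision in {"accepted", "rejected", "accepted_with_changes"}
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "engineer": engineer,
            "ai_suggestion": ai_suggestion,
            "decision": decision,
            "rationale": rationale,
            "modifications": modifications,
        })

log_decision("ai_review_log.csv", "j.doe",
             "replace solid web with lattice infill",
             "accepted_with_changes",
             "meets stiffness target with margin",
             "increased strut diameter to 1.5 mm")
```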

From a legal perspective, the safest path is to assume AI-generated outputs without substantial human creative input won’t be protectable as IP. Recent rulings in the US and UK have been pretty clear that inventorship and authorship require a natural person. So your governance process should emphasize and document the human intellectual contribution—problem formulation, constraint specification, design evaluation, modifications, and validation. Also make sure your contracts with AI vendors explicitly address who owns outputs and training data. If the vendor’s model was trained on third-party design data, you could have licensing issues downstream. Better to clarify upfront than deal with disputes later.

The cross-system orchestration piece is still mostly theoretical for us. We’ve seen demos where AI detects a design change, checks supplier availability, estimates cost impact, and routes approvals automatically, but actually deploying that requires integration infrastructure most companies don’t have. You need APIs connecting PLM, ERP, MES, procurement, and quality systems with real-time data sync. You need consistent data semantics so the AI isn’t comparing apples to oranges. And you need governance rules that define when AI can act autonomously versus when it escalates to a human. We’re probably two years away from having the foundational systems in place to even attempt that.
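
That said, the governance-rules piece can be drafted long before the integration exists. A sketch of the kind of policy we have in mind, with invented thresholds:

```python
from typing import Callable, Optional

# Each rule: (condition, escalate_to). None means the AI may act autonomously
# (and must log what it did). Every threshold here is invented for illustration.
Rule = tuple[Callable[[float, bool], bool], Optional[str]]

RULES: list[Rule] = [
    (lambda cost_delta, safety_critical: cost_delta < 500 and not safety_critical, None),
    (lambda cost_delta, safety_critical: cost_delta < 10_000 and not safety_critical,
     "lead_engineer"),
    (lambda cost_delta, safety_critical: True, "change_board"),  # catch-all
]

def decide(cost_delta: float, touches_safety_critical: bool) -> str:
    for condition, escalate_to in RULES:
        if condition(cost_delta, touches_safety_critical):
            return ("autonomous: apply and log" if escalate_to is None
                    else f"escalate to {escalate_to}")
    return "escalate to change_board"  # unreachable given the catch-all above

print(decide(320.0, False))   # autonomous: apply and log
print(decide(4_200.0, True))  # escalate to change_board
```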

We ran into this exact problem last year. Our approach has been to treat AI as a design assistant tool, not a creator. Engineers document their design brief—objectives, constraints, performance targets—and that becomes the basis for proving human authorship. The AI explores options, but the engineer selects, modifies, validates, and takes responsibility. We updated our PLM workflows to require a mandatory review gate where the engineer explicitly signs off that they’ve evaluated the AI recommendation against company standards. It’s more documentation overhead, but legal is comfortable that we can defend IP ownership if needed.
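
One concrete habit that helped: capture the brief as a structured record before the AI runs, so the human contribution is timestamped and unambiguous. Roughly like this, with fields invented for illustration:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DesignBrief:
    """Record of the human intellectual contribution, captured before the AI runs."""
    engineer: str
    objectives: list[str]
    constraints: list[str]
    performance_targets: dict[str, str]
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

brief = DesignBrief(
    engineer="j.doe",
    objectives=["minimize mass", "maintain stiffness"],
    constraints=["AlSi10Mg only", "fit 120 x 80 x 40 mm envelope"],
    performance_targets={"first_mode": ">= 180 Hz", "safety_factor": ">= 1.5"},
)
print(json.dumps(asdict(brief), indent=2))
```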