We’re piloting generative AI for supplier risk assessment and compliance document processing in our aerospace procurement function. The use case is straightforward: we want procurement and compliance teams to query supplier data, regulatory requirements, and audit histories in natural language instead of manually digging through PDFs and disparate systems.
The problem we’re running into is export control. Our legal team pointed out that if the LLM generates technical data that falls under ITAR or EAR—say, design specs for a controlled component or manufacturing process details—and a foreign national employee or unauthorized user accesses it, we’ve potentially committed a deemed export violation. The public domain exception doesn’t really help because determining whether an AI-generated output qualifies requires expert analysis, and we can’t do that in real time for every query.
We’ve looked at logging all interactions and implementing access controls based on user nationality and clearance, similar to a Technology Control Plan. But I’m curious how others are handling this in practice. Are you restricting LLM use to U.S. persons only? Filtering queries before they hit the model? Using some kind of output classification layer? What’s actually working for organizations managing ITAR/EAR compliance with AI in procurement?
We face the exact same issue. Our approach right now is restrictive but functional: LLM access is limited to U.S. citizens with active security clearances, and we log every query with timestamps and user IDs. It’s not perfect—logging doesn’t prevent a violation, it just gives us an audit trail—but it at least puts us in a defensible position if regulators come asking. We’re also working with our ITAR compliance officer to define which data sources the model can access. Anything classified or explicitly ITAR-controlled stays out of the RAG pipeline entirely.
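For anyone curious what that kind of gating looks like in practice, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not our actual implementation: the user record fields, the `is_authorized` policy, and the CSV audit log are all placeholders for whatever identity and logging infrastructure you already have.

```python
import csv
import datetime

# Append-only audit log file; a real deployment would use tamper-evident
# storage, but a flat file illustrates the idea.
AUDIT_LOG = "llm_audit_log.csv"

def is_authorized(user: dict) -> bool:
    """Policy check sketched above: U.S. citizen with an active clearance.

    The field names here are assumptions; map them to your own
    identity system.
    """
    return user.get("citizenship") == "US" and user.get("clearance_active", False)

def log_interaction(user_id: str, query: str, allowed: bool) -> None:
    """Record every query attempt with a UTC timestamp and user ID."""
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now(datetime.timezone.utc).isoformat(),
             user_id, allowed, query]
        )

def gated_query(user: dict, query: str, llm_call) -> str:
    """Log the attempt, then forward to the model only if authorized."""
    allowed = is_authorized(user)
    log_interaction(user["id"], query, allowed)
    if not allowed:
        return "Access denied: this system is restricted to authorized U.S. persons."
    return llm_call(query)
```

As noted, logging here is an audit trail, not a control: the only preventive piece is the authorization check happening before the model is ever called.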
From a legal perspective, the challenge is that ITAR’s definition of technical data doesn’t care how the information was generated. If the output would qualify as controlled if a human engineer wrote it, it qualifies when an AI generates it. The public domain exception is narrow and context-dependent, so you can’t assume training on public data solves the problem. A Technology Control Plan adapted for AI development is the right framework—comprehensive logging, access restrictions based on nationality, and regular audits. But be aware that even internal development use by foreign nationals can trigger deemed export rules if they elicit controlled outputs from internal models.
We’re still in early pilot phase, but one thing that’s been useful is separating our use cases. For general supplier information queries—contact info, lead times, quality ratings—we use the LLM freely because that’s not technical data. For anything involving component specifications, engineering details, or production processes, we fall back to manual review by authorized personnel. It’s not the seamless AI experience we wanted, but it keeps us compliant while we figure out better controls. The key is being very clear with users about what they can and can’t ask the system.
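A rough sketch of that routing split, assuming a keyword-based first pass: queries that look like requests for technical data get diverted to a manual review queue, and everything else goes to the LLM. The pattern list and function name are made up for illustration, and a real system would need far more robust classification than regex matching; this only shows the shape of the control.

```python
import re

# Illustrative patterns for queries that may elicit ITAR/EAR technical
# data. This list is an assumption, not a vetted taxonomy; expect to
# tune it with your compliance officer and to err toward over-matching.
TECHNICAL_DATA_PATTERNS = [
    r"\bspec(ification)?s?\b",
    r"\btolerance\b",
    r"\bdrawing\b",
    r"\bmanufactur\w*\b",
    r"\bproduction process\b",
    r"\bmaterial composition\b",
]

def route_query(query: str) -> str:
    """Return 'manual_review' for likely technical-data queries, else 'llm'."""
    q = query.lower()
    if any(re.search(p, q) for p in TECHNICAL_DATA_PATTERNS):
        return "manual_review"
    return "llm"
```

The deliberate design choice is to fail closed: an ambiguous query costs you a slower manual review, not a potential deemed export.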