Custom BOM export script fails on large assemblies with memory overflow

We’ve developed a Python automation script using Teamcenter SOA to export BOM structures for manufacturing. The script works fine on smaller assemblies (under 500 parts), but consistently crashes with memory errors on our large product assemblies (2000+ parts). The error occurs during the BOM traversal phase.

We’re currently loading the entire BOM structure into memory before processing. I’m wondering if batch processing approaches would help, or if there’s a better way to handle large datasets. We’ve done basic memory profiling and see the heap growing continuously during traversal.

# Entire child result set is materialized in memory before any processing happens
bom_lines = bom_service.getChildren(root_part)
for line in bom_lines:
    process_bom_line(line)
    export_to_csv(line)

The manufacturing team needs reliable exports for assemblies of any size. Any guidance on memory-efficient BOM automation patterns would be appreciated.

Have you considered the BOM window approach instead of full traversal? You can set up a BOM window with specific configuration rules and then traverse incrementally. This is particularly effective when you don’t need every single variant or alternate.
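To make the incremental idea concrete, here is a rough sketch of a traversal that keeps only the current frontier in memory instead of the whole structure. The `get_children` function is a hypothetical stand-in for the real SOA call (in practice you would open a BOM window via the Structure Management service first); the in-memory dict just makes the sketch runnable:

```python
# Toy structure: parent part number -> child part numbers.
ASSEMBLY = {
    "TOP": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["B1"],
    "A1": [], "A2": [], "B1": [],
}

def get_children(part):
    # Hypothetical stand-in for the SOA child-expansion call.
    return ASSEMBLY.get(part, [])

def traverse_incremental(root, visit):
    """Depth-first traversal with an explicit stack: children are expanded
    on demand, so only the current frontier is held in memory."""
    stack = [root]
    while stack:
        part = stack.pop()
        visit(part)
        stack.extend(reversed(get_children(part)))  # keep child order

visited = []
traverse_incremental("TOP", visited.append)
# visited is ["TOP", "A", "A1", "A2", "B", "B1"]
```

The point is that memory usage tracks the depth and branching of the structure, not its total size.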

For batch processing vs. full load, I strongly recommend batch. We switched from full load to batch processing last year and it made all the difference; our largest assembly is 3500 parts and now exports reliably. The key is determining the optimal batch size through testing - too small and you get overhead from repeated SOA calls, too large and you're back to memory issues. Start with 250 and adjust based on your specific assembly structures.
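A minimal chunking helper makes it easy to experiment with batch sizes; this is pure Python with nothing Teamcenter-specific in it:

```python
def batches(items, size):
    """Yield successive slices of `items` of at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Example: 600 BOM line IDs in batches of 250 -> batch sizes 250, 250, 100
sizes = [len(b) for b in batches(list(range(600)), 250)]
```

You can then time a full export at several sizes (say 100, 250, 500) and pick the knee of the curve for your assemblies.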

Memory profiling is definitely the right first step. In Python with Teamcenter SOA, you're dealing with both Python object overhead and the Java bridge memory. Two specific things to investigate: First, are you calling the proper cleanup methods on SOA objects? Second, what's your batch size strategy? I typically use batches of 200-300 BOM lines for large assemblies. Process one batch, write results, clear references, then load the next batch. This keeps the memory footprint stable regardless of total assembly size.
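The load/write/clear loop looks roughly like this. `load_batch` is a hypothetical stand-in for whatever SOA call resolves a list of BOM line IDs into full objects; here it returns dicts so the sketch runs standalone:

```python
import csv
import io

def load_batch(ids):
    # Hypothetical stand-in for the SOA call that expands BOM line IDs
    # into full line objects; returns simple dicts for this sketch.
    return [{"id": i, "qty": 1} for i in ids]

def export_bom(all_ids, out, batch_size=250):
    """Export BOM lines to CSV one batch at a time, so only one batch
    of line objects is ever alive."""
    writer = csv.writer(out)
    writer.writerow(["id", "qty"])
    for start in range(0, len(all_ids), batch_size):
        batch = load_batch(all_ids[start:start + batch_size])
        for line in batch:
            writer.writerow([line["id"], line["qty"]])
        # Drop the reference so the finished batch can be garbage-collected
        # before the next one is loaded.
        del batch

buf = io.StringIO()
export_bom(list(range(5)), buf)
```

Because results are flushed to the CSV as each batch finishes, a crash also costs you at most one batch of work rather than the whole export.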

I’ve seen this exact pattern cause issues. Loading all BOM lines at once is the problem. For large assemblies, you need a pagination or streaming approach. The SOA supports cursor-based traversal that processes chunks without loading everything into memory. Also check whether you’re properly releasing object references after processing each line.
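The streaming pattern wraps the cursor in a generator, so the rest of the script just iterates line by line while only one page is ever in memory. The cursor class here is simulated; the real SOA pagination calls will look different:

```python
class FakeCursor:
    """Simulates a server-side cursor over BOM lines for this sketch."""
    def __init__(self, lines):
        self._lines = lines
        self._pos = 0

    def next_page(self, n):
        page = self._lines[self._pos:self._pos + n]
        self._pos += n
        return page

def iter_bom_lines(cursor, page_size=100):
    """Yield BOM lines one at a time; only the current page is resident."""
    while True:
        page = cursor.next_page(page_size)
        if not page:
            return
        yield from page

# Consumers never see the paging; they just iterate.
total = sum(1 for _ in iter_bom_lines(FakeCursor(list(range(350))), page_size=100))
```

Swapping the fake cursor for the real paginated SOA call leaves the consuming code unchanged, which is the main appeal of the generator approach.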

Check your Python garbage collection settings too. When working with large object graphs from SOA, the default GC might not be aggressive enough. I’ve had success with explicitly calling gc.collect() after processing each major branch of the BOM tree. Also monitor the Java heap on the Teamcenter server side - if that’s maxing out, you need to work with your admin to tune JVM parameters. The issue might not be entirely client-side.
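Structurally that looks like the following; both helper functions are placeholders for the real SOA traversal and export logic, and only the `gc.collect()` placement is the point:

```python
import gc

def get_top_branches(root):
    # Placeholder for fetching the root's first-level children via SOA.
    return [f"{root}-branch-{i}" for i in range(3)]

def process_branch(branch):
    # Placeholder for traversing and exporting one subtree.
    return len(branch)

def export_by_branch(root):
    results = []
    for branch in get_top_branches(root):
        results.append(process_branch(branch))
        # Collect after each major subtree so objects from the finished
        # branch are reclaimed before the next one is loaded.
        gc.collect()
    return results

counts = export_by_branch("TOP")
```

Collecting once per branch rather than per line keeps the GC overhead negligible while still bounding peak memory to roughly one subtree.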