We’re experiencing consistent failures when importing large simulation result files (>500MB) through ENOVIA’s batch import service. The process completes without issue for smaller datasets but reliably times out on our larger analysis results.
The error we’re seeing:
java.lang.OutOfMemoryError: Java heap space
at SimulationDataImporter.processChunk(line 234)
Batch job terminated after 3600s timeout
Our current JVM heap is set to 4GB, and each file is processed in a single operation. The batch import job configuration is unchanged from the defaults. We need guidance on proper heap sizing, and on whether chunked file processing is supported for simulation data imports. This is blocking our simulation data consolidation project across three engineering teams.
Good questions. We’re processing files sequentially, one at a time. Checked site.xconf and the batch size is set to default (1000 records per transaction). The MethodServer logs show memory climbing steadily until it hits the limit around the 2-hour mark. We haven’t explored chunked processing yet - is that a standard approach for simulation data?
For batch import job tuning, modify the batch service settings in site.xconf to reduce the transaction size (you’re at the default 1000 records) and extend the job timeout past the 3600s limit you’re hitting. Smaller transactions prevent memory buildup during large commits.
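A site.xconf override for those two settings would take roughly this shape. The property keys below are illustrative placeholders, not verified product keys; substitute the batch-service property names documented for your release:

```xml
<!-- Sketch of a site.xconf override; key names are placeholders -->
<Property name="batch.import.transactionSize"
          targetFile="codebase/wt.properties"
          value="250"/>
<!-- smaller than the 1000-record default, so each commit holds fewer objects -->
<Property name="batch.import.jobTimeoutSeconds"
          targetFile="codebase/wt.properties"
          value="7200"/>
<!-- doubles the 3600s timeout the failing job is hitting -->
```

Propagate the change with your site’s usual configuration-management procedure so it survives upgrades rather than editing wt.properties directly.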
For chunked file processing, this is essential for files over 200MB. Implement a custom loader that processes simulation results in 50MB chunks. The approach splits file parsing from object creation, allowing garbage collection between chunks. Each chunk commits independently, so partial failures don’t lose all progress.
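As a rough illustration of the chunk-per-transaction pattern described above — not ENOVIA’s actual loader API; the class and method names here are placeholders, and a real loader would parse records rather than raw bytes:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: read a large result file in fixed-size chunks and
// commit each chunk as its own transaction, so a failure mid-file does not
// discard earlier progress and parsed objects from finished chunks can be
// garbage-collected before the next chunk starts.
class ChunkedImporter {
    private final int chunkSize;

    ChunkedImporter(int chunkSize) {
        this.chunkSize = chunkSize;
    }

    // Returns the number of chunks committed.
    int importFile(Path file) throws IOException {
        int chunks = 0;
        byte[] buffer = new byte[chunkSize];  // reused for every chunk
        try (InputStream in = Files.newInputStream(file)) {
            int read;
            while ((read = readFully(in, buffer)) > 0) {
                commitChunk(buffer, read);  // one independent transaction
                chunks++;
                // objects built for the previous chunk are unreachable here,
                // so the garbage collector can reclaim them between chunks
            }
        }
        return chunks;
    }

    // Fill the buffer as far as the stream allows; returns bytes read.
    private static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) break;  // end of stream
            total += n;
        }
        return total;
    }

    // Placeholder: parse the chunk's records and commit one transaction.
    protected void commitChunk(byte[] data, int length) {
    }
}
```

For a 500MB file with 50MB chunks this yields ten independent commits instead of one monolithic transaction, which is what keeps the heap flat.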
Additionally, enable streaming for binary simulation content by setting wt.load.simulationData.streamBinaryContent=true in wt.properties. This prevents buffering large binary arrays in memory.
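If that property applies in your release, the wt.properties entry would look like this (the key name is quoted from the suggestion above; verify it against your version’s property reference before relying on it):

```properties
# codebase/wt.properties
# Stream binary simulation content instead of buffering it in heap
wt.load.simulationData.streamBinaryContent=true
```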
Schedule imports during off-peak hours and monitor MethodServer memory usage via JMX. If you’re importing multiple files, add 2-minute delays between jobs to allow full GC cycles. We successfully imported 800MB simulation files using this configuration, with processing times under 90 minutes and peak memory usage around 7GB — which also means your current 4GB heap will need to be raised before these settings can help.
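For the JMX monitoring part, the standard java.lang.management API is enough to watch heap utilization from a small sidecar or scripted check; this minimal sketch polls the local JVM (a remote MethodServer would need a JMX connector URL, which is environment-specific):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: read current heap usage via the platform MemoryMXBean,
// the same bean a JMX console (e.g. JConsole) displays for a MethodServer.
class HeapMonitor {
    // Fraction of the maximum heap currently in use (0.0 to 1.0).
    // Note: getMax() can return -1 if no maximum is defined.
    static double heapUtilization() {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax();
    }

    public static void main(String[] args) {
        System.out.printf("Heap utilization: %.1f%%%n", heapUtilization() * 100);
    }
}
```

Logging this at a fixed interval during an import run makes the steady climb described earlier visible long before the OutOfMemoryError fires.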
The key is that heap sizing, batch tuning, and chunking must work together - addressing only one aspect won’t resolve your timeouts.
We faced similar issues last year during a simulation data migration. One thing that helped significantly was pre-validating file structure before import. Large files with structural issues can cause the parser to retry repeatedly, consuming memory. Also, if your simulation files contain embedded binary data, ensure the import service is configured to stream rather than buffer this content. Check the wt.load.simulationData.streamBinaryContent property.
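The pre-validation pass can be done in constant memory with a streaming parser. This sketch assumes the simulation result files are XML-based (an assumption — substitute the appropriate format check for your files); it walks the document once with StAX and reports well-formedness without building a DOM:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical pre-validation pass: stream through an XML-based result file
// before import so structural errors surface up front, instead of causing
// the importer to retry mid-job. Memory use is constant regardless of size.
class ImportPreValidator {
    static boolean isWellFormed(Path file) throws IOException {
        XMLInputFactory factory = XMLInputFactory.newFactory();
        // avoid fetching external DTDs during a validation-only pass
        factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);
        try (InputStream in = Files.newInputStream(file)) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            while (reader.hasNext()) {
                reader.next();  // advancing the cursor checks well-formedness
            }
            reader.close();
            return true;
        } catch (XMLStreamException e) {
            return false;  // structural problem: reject before importing
        }
    }
}
```

Running this over each file before queuing the import job turns the "parser retries until the heap fills" failure mode into a fast, explicit rejection.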