Advanced planning API fails to export scenario data in am-2021.2

The Advanced Planning API is throwing 500 Internal Server Error when attempting to export scenario data in am-2021.2. Small scenarios (under 500 planning entities) export successfully, but anything larger fails with a generic server error. We’ve checked server logs and see OutOfMemoryError exceptions during the export process.


GET /api/advanced-planning/scenarios/SCEN-2024-Q4/export?format=json
Response: 500 Internal Server Error
Server log: java.lang.OutOfMemoryError: Java heap space

This is blocking our scenario sharing workflows between planning teams. We need to export scenarios with 2000+ entities for quarterly planning reviews. Has anyone resolved server resource optimization issues for large scenario exports? Is there an incremental export option available?

There’s no built-in streaming mode in am-2021.2, but you can use the entity-type filtering parameters to export in chunks. Export planning entities by type (materials, resources, constraints) separately and reassemble client-side. Not elegant, but it works around the memory limitation.
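A minimal sketch of that per-type chunking, with the HTTP call abstracted behind a `fetch_entities` callable (an illustrative stand-in for your HTTP client, not part of the API):

```python
# Sketch of the per-type chunked export described above.
# `fetch_entities(entity_type, offset, limit)` is a placeholder for whatever
# client you use against /api/advanced-planning/scenarios/{id}/entities.

def export_by_type(fetch_entities, entity_types, limit=200):
    """Export each entity type in pages and reassemble client-side."""
    scenario = {}
    for entity_type in entity_types:
        items = []
        offset = 0
        while True:
            page = fetch_entities(entity_type, offset, limit)
            items.extend(page)
            if len(page) < limit:  # a short page means we reached the end
                break
            offset += limit
        scenario[entity_type] = items
    return scenario
```

Each type stays small enough to serialize server-side, and the client pays the cost of reassembly.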

I’ll provide a comprehensive solution addressing scenario export, server resource optimization, and incremental export implementation.

Scenario Export Strategy: The full scenario export endpoint loads everything into memory, which fails for large datasets. Use the entity-based export approach instead:


// Pseudocode - Incremental scenario export:
1. Get scenario metadata: GET /api/advanced-planning/scenarios/{id}/metadata
2. Retrieve entity count by type from metadata.entityCounts
3. For each entity type (materials, resources, constraints, dependencies):
   - Calculate batch count = ceil(entityCount / batchSize)
   - Loop through batches with offset parameter
4. Combine all batches into complete scenario structure
5. Export as JSON/XML with full dependency graph preserved
// See Advanced Planning API Guide Section 6.4
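The steps above can be sketched in Python, with the two HTTP calls abstracted as callables (`get_metadata` and `fetch_batch` are hypothetical stand-ins for your HTTP client, and the `entityCounts` field follows the pseudocode):

```python
import math

ENTITY_TYPES = ["materials", "resources", "constraints", "dependencies"]

def export_scenario(get_metadata, fetch_batch, batch_size=200):
    """Incremental export: read entity counts, then page through each type."""
    metadata = get_metadata()            # step 1: scenario metadata
    counts = metadata["entityCounts"]    # step 2: counts per entity type
    scenario = {}
    for entity_type in ENTITY_TYPES:     # step 3: batch through each type
        n_batches = math.ceil(counts.get(entity_type, 0) / batch_size)
        items = []
        for i in range(n_batches):
            items.extend(fetch_batch(entity_type,
                                     offset=i * batch_size,
                                     limit=batch_size))
        scenario[entity_type] = items    # step 4: combine batches
    return scenario
```

Step 5 (serializing with the dependency graph preserved) is covered by the client-side reconstruction discussed later in this answer.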

Server Resource Optimization: Beyond JVM heap size, you need to tune several server-side parameters for large export operations:


planning.export.maxEntitiesPerRequest=200
planning.export.timeout=300000
planning.cache.scenario.enabled=false
api.response.compression=true

The maxEntitiesPerRequest setting limits how many entities the server processes in a single request, preventing memory spikes. Setting planning.cache.scenario.enabled=false is counterintuitive but necessary: scenario caching during export actually increases memory usage because it retains the full object graph, so disable it for export operations.

Also configure connection pooling to handle the multiple batch requests efficiently:


api.connectionPool.maxActive=20
api.connectionPool.maxWait=30000

Incremental Export Implementation: Here’s the correct export sequence that maintains referential integrity:

  1. Export Planning Entities First (materials, resources, work centers):

GET /api/advanced-planning/scenarios/SCEN-2024-Q4/entities?type=MATERIAL&offset=0&limit=200
GET /api/advanced-planning/scenarios/SCEN-2024-Q4/entities?type=RESOURCE&offset=0&limit=200

  2. Export Constraints Second (they reference planning entities):

GET /api/advanced-planning/scenarios/SCEN-2024-Q4/constraints?offset=0&limit=200

  3. Export Dependencies Last (they reference both entities and constraints):

GET /api/advanced-planning/scenarios/SCEN-2024-Q4/dependencies?offset=0&limit=200

Optimal Batch Sizing: Batch size depends on entity complexity. For am-2021.2:

  • Simple entities (materials, resources): 200-250 per batch
  • Complex entities (constraints with multiple conditions): 100-150 per batch
  • Dependencies (relationship-heavy): 150-200 per batch

Monitor the response time for your first few batches and adjust. If responses exceed 10 seconds, reduce batch size by 25%.
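That adjustment rule is easy to encode in a small helper; the 10-second threshold and 25% reduction mirror the guidance above, while the floor value is an arbitrary safety limit of my own:

```python
def adjust_batch_size(current_size, response_seconds,
                      threshold_seconds=10, floor=50):
    """Reduce batch size by 25% when a batch response exceeds the threshold."""
    if response_seconds > threshold_seconds:
        return max(floor, int(current_size * 0.75))
    return current_size
```

Call it after each batch and feed the result into the next request's limit parameter.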

Client-Side Reconstruction: The API returns entities with their ID references intact. Build a dependency map as you import batches:


// Pseudocode - Dependency reconstruction:
1. Create empty maps: entitiesById, constraintsById, dependenciesById
2. As each batch arrives, populate respective map with id as key
3. After all batches loaded:
   - Iterate dependencies and resolve entity references from maps
   - Build final scenario object with resolved references
   - Validate all references resolved (no dangling IDs)
// Handle missing references by logging and optionally re-fetching
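The reconstruction pseudocode, sketched in Python; the payload field names (`id`, `entityRefs`, `constraintRefs`) are assumptions about the response shape, not documented fields:

```python
def reconstruct_scenario(entities, constraints, dependencies):
    """Index batches by id, resolve dependency references, report dangling ids."""
    entities_by_id = {e["id"]: e for e in entities}
    constraints_by_id = {c["id"]: c for c in constraints}
    dangling = []
    for dep in dependencies:
        # Assumed payload shape: each dependency lists the ids it references.
        for ref in dep.get("entityRefs", []):
            if ref not in entities_by_id:
                dangling.append(ref)      # log these, optionally re-fetch
        for ref in dep.get("constraintRefs", []):
            if ref not in constraints_by_id:
                dangling.append(ref)
    scenario = {"entities": entities_by_id,
                "constraints": constraints_by_id,
                "dependencies": dependencies}
    return scenario, dangling
```

An empty `dangling` list confirms the export is internally consistent before you hand it to another planning team.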

Alternative Approach for Very Large Scenarios (5000+ entities): Use the async export API which processes server-side and provides a download link:


POST /api/advanced-planning/scenarios/SCEN-2024-Q4/export-async
Response: {"exportJobId": "EXP-2024-1234", "status": "PROCESSING"}

Poll status:
GET /api/advanced-planning/export-jobs/EXP-2024-1234
When status=COMPLETED:
GET /api/advanced-planning/export-jobs/EXP-2024-1234/download

The async endpoint uses server-side streaming and file generation, avoiding the memory constraints entirely. Processing takes 5-15 minutes for large scenarios but handles unlimited size.
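A simple polling loop for the async job might look like this; the status values follow the example above, while the poll interval and attempt limit are arbitrary choices sized for the 5-15 minute processing window (`poll_job` is a stand-in for the GET on the export-jobs endpoint):

```python
import time

def wait_for_export(poll_job, max_polls=90, interval_seconds=10):
    """Poll the async export job until it completes; return the final status."""
    for _ in range(max_polls):
        status = poll_job()  # GET /api/advanced-planning/export-jobs/{id}
        if status.get("status") == "COMPLETED":
            return status    # caller then fetches .../download
        if status.get("status") == "FAILED":
            raise RuntimeError(f"export job failed: {status}")
        time.sleep(interval_seconds)
    raise TimeoutError("export job did not complete in time")
```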

Performance Monitoring: Enable export metrics to track optimization:


planning.export.metrics.enabled=true
planning.export.metrics.logInterval=1000

This logs progress every 1000 entities, helping you identify bottlenecks in specific entity types.

With this incremental approach, we successfully export scenarios with 3500+ entities that previously failed. The batch export takes 8-12 minutes total versus timing out at 5 minutes with the full export endpoint.

The heap space error indicates your JVM memory allocation is too low for the export operation. We increased our heap size from 4GB to 8GB and that helped with medium scenarios, but really large ones still time out. Check your JVM settings in the server configuration.

We already have an 8GB heap allocation. The problem seems to be that the export tries to load the entire scenario into memory before serialization. For 2000+ entities with dependencies and constraints, that overwhelms the available heap. Is there a streaming export mode?

Susan’s approach helps but misses dependencies. We implemented a custom incremental export using the batch API endpoints: export entities in batches of 200 with dependency metadata, then reconstruct the scenario graph client-side. The API supports batch processing through the /api/advanced-planning/scenarios/{id}/entities/batch endpoint with offset and limit parameters. It requires more client-side logic but eliminates the memory issue entirely.