I’ll provide a comprehensive solution addressing scenario export, server resource optimization, and incremental export implementation.
Scenario Export Strategy:
The full scenario export endpoint loads everything into memory, which fails for large datasets. Use the entity-based export approach instead:
// Pseudocode - Incremental scenario export:
1. Get scenario metadata: GET /api/advanced-planning/scenarios/{id}/metadata
2. Retrieve entity count by type from metadata.entityCounts
3. For each entity type (materials, resources, constraints, dependencies):
- Calculate batch count = ceil(entityCount / batchSize)
- Loop through batches with offset parameter
4. Combine all batches into complete scenario structure
5. Export as JSON/XML with full dependency graph preserved
// See Advanced Planning API Guide Section 6.4
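The steps above can be sketched in Python. Here `fetch_entities` is a hypothetical stand-in for the paged GET calls (stubbed with in-memory data for illustration); the real call shape is shown in the endpoint examples below.

```python
import math

def export_scenario(fetch_entities, entity_counts, batch_size=200):
    """Incrementally export a scenario by paging each entity type.

    fetch_entities(entity_type, offset, limit) stands in for
    GET .../entities?type=...&offset=...&limit=... (assumption).
    entity_counts comes from the scenario metadata's entityCounts.
    """
    scenario = {}
    for entity_type, count in entity_counts.items():
        batches = math.ceil(count / batch_size)
        items = []
        for i in range(batches):
            items.extend(fetch_entities(entity_type, i * batch_size, batch_size))
        scenario[entity_type] = items
    return scenario

# Stubbed fetch for illustration: 450 materials paged 200 at a time.
data = {"MATERIAL": [{"id": f"M-{i}"} for i in range(450)]}
fake_fetch = lambda t, off, lim: data[t][off:off + lim]
result = export_scenario(fake_fetch, {"MATERIAL": 450})  # 3 batches: 200 + 200 + 50
```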
Server Resource Optimization:
Beyond JVM heap size, you need to tune several server-side parameters for large export operations:
planning.export.maxEntitiesPerRequest=200
planning.export.timeout=300000
planning.cache.scenario.enabled=false
api.response.compression=true
The maxEntitiesPerRequest setting limits how many entities the server processes in a single request, preventing memory spikes. Setting planning.cache.scenario.enabled=false is counterintuitive but necessary: scenario caching during export actually increases memory usage because it retains the full object graph, so disable it for export operations.
Also configure connection pooling to handle the multiple batch requests efficiently:
api.connectionPool.maxActive=20
api.connectionPool.maxWait=30000
Incremental Export Implementation:
Here’s the correct export sequence that maintains referential integrity:
- Export Planning Entities First (materials, resources, work centers):
GET /api/advanced-planning/scenarios/SCEN-2024-Q4/entities?type=MATERIAL&offset=0&limit=200
GET /api/advanced-planning/scenarios/SCEN-2024-Q4/entities?type=RESOURCE&offset=0&limit=200
- Export Constraints Second (they reference planning entities):
GET /api/advanced-planning/scenarios/SCEN-2024-Q4/constraints?offset=0&limit=200
- Export Dependencies Last (they reference both entities and constraints):
GET /api/advanced-planning/scenarios/SCEN-2024-Q4/dependencies?offset=0&limit=200
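The three-phase ordering can be enforced with a simple phase list, so each phase only starts once the previous one has fully paged through. `fetch_page` is a hypothetical paging helper (stubbed here); it returns an empty list once the offset runs past the data, which ends the loop for that phase:

```python
# Phase order matters: planning entities before constraints before
# dependencies, so every reference points at something already exported.
PHASES = [
    ("entities?type=MATERIAL", "materials"),
    ("entities?type=RESOURCE", "resources"),
    ("constraints", "constraints"),
    ("dependencies", "dependencies"),
]

def export_in_order(fetch_page, limit=200):
    """fetch_page(path, offset, limit) stands in for the GET calls;
    it returns a list, empty when offset is past the last entity."""
    export = {}
    for path, key in PHASES:
        rows, offset = [], 0
        while True:
            page = fetch_page(path, offset, limit)
            if not page:
                break
            rows.extend(page)
            offset += limit
        export[key] = rows
    return export

# Stubbed data source for illustration.
store = {"entities?type=MATERIAL": list(range(5)),
         "entities?type=RESOURCE": [],
         "constraints": list(range(3)),
         "dependencies": list(range(2))}
fake_page = lambda p, off, lim: store[p][off:off + lim]
out = export_in_order(fake_page, limit=2)
```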
Optimal Batch Sizing:
Batch size depends on entity complexity. For am-2021.2:
- Simple entities (materials, resources): 200-250 per batch
- Complex entities (constraints with multiple conditions): 100-150 per batch
- Dependencies (relationship-heavy): 150-200 per batch
Monitor the response time for your first few batches and adjust. If responses exceed 10 seconds, reduce batch size by 25%.
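That adjustment rule can be written down directly. The 10-second threshold and 25% reduction follow the guidance above; the function name and the lower floor are illustrative assumptions:

```python
def adjust_batch_size(batch_size, response_seconds,
                      threshold=10.0, reduction=0.25, floor=25):
    """Shrink the batch size by 25% whenever a batch response exceeds
    the 10-second threshold; never drop below `floor` (assumed value)."""
    if response_seconds > threshold:
        batch_size = max(floor, int(batch_size * (1 - reduction)))
    return batch_size

size = 200
size = adjust_batch_size(size, 12.4)  # slow batch -> reduced to 150
size = adjust_batch_size(size, 6.1)   # fast batch -> unchanged, 150
```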
Client-Side Reconstruction:
The API returns entities with their ID references intact. Build a dependency map as you import batches:
// Pseudocode - Dependency reconstruction:
1. Create empty maps: entitiesById, constraintsById, dependenciesById
2. As each batch arrives, populate respective map with id as key
3. After all batches loaded:
- Iterate dependencies and resolve entity references from maps
- Build final scenario object with resolved references
- Validate all references resolved (no dangling IDs)
// Handle missing references by logging and optionally re-fetching
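A minimal Python sketch of that reconstruction, assuming each dependency carries `sourceId`/`targetId`/`constraintId` fields (the field names are illustrative, not confirmed API schema):

```python
def reconstruct(entities, constraints, dependencies):
    """Index each batch stream by id, then resolve dependency
    references and collect any dangling ids for logging/re-fetch."""
    entities_by_id = {e["id"]: e for e in entities}
    constraints_by_id = {c["id"]: c for c in constraints}
    resolved, dangling = [], []
    for dep in dependencies:
        src = entities_by_id.get(dep["sourceId"])
        tgt = entities_by_id.get(dep["targetId"])
        con = constraints_by_id.get(dep.get("constraintId"))
        if src is None or tgt is None:
            dangling.append(dep["id"])   # dangling reference, needs follow-up
            continue
        resolved.append({"id": dep["id"], "source": src,
                         "target": tgt, "constraint": con})
    return resolved, dangling

# Illustration: one resolvable dependency, one with a missing target.
ents = [{"id": "M-1"}, {"id": "R-1"}]
cons = [{"id": "C-1"}]
deps = [{"id": "D-1", "sourceId": "M-1", "targetId": "R-1", "constraintId": "C-1"},
        {"id": "D-2", "sourceId": "M-1", "targetId": "X-9"}]
ok, missing = reconstruct(ents, cons, deps)
```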
Alternative Approach for Very Large Scenarios (5000+ entities):
Use the async export API which processes server-side and provides a download link:
POST /api/advanced-planning/scenarios/SCEN-2024-Q4/export-async
Response: {"exportJobId": "EXP-2024-1234", "status": "PROCESSING"}
Poll status:
GET /api/advanced-planning/export-jobs/EXP-2024-1234
When status=COMPLETED:
GET /api/advanced-planning/export-jobs/EXP-2024-1234/download
The async endpoint uses server-side streaming and file generation, avoiding the memory constraints entirely. Processing takes 5-15 minutes for large scenarios but handles unlimited size.
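A polling loop for the async job might look like the following sketch. `get_status` is a hypothetical stand-in for the GET /export-jobs/{id} call, and the poll interval, timeout, and FAILED status are assumptions, not confirmed API behavior:

```python
import time

def wait_for_export(get_status, job_id, poll_seconds=15, timeout=1800):
    """Poll the export-job status until COMPLETED, then return the
    job record; raise on failure or when the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status(job_id)  # stands in for GET /export-jobs/{id}
        if job["status"] == "COMPLETED":
            return job  # caller then fetches /export-jobs/{id}/download
        if job["status"] == "FAILED":
            raise RuntimeError(f"Export job {job_id} failed")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Export job {job_id} did not finish in {timeout}s")
```

Given the 5-15 minute processing window quoted above, a 15-second poll interval keeps request volume low while still noticing completion promptly.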
Performance Monitoring:
Enable export metrics to track optimization:
planning.export.metrics.enabled=true
planning.export.metrics.logInterval=1000
This logs progress every 1000 entities, helping you identify bottlenecks in specific entity types.
With this incremental approach, we successfully exported scenarios with 3500+ entities that previously failed. The batch export takes 8-12 minutes in total, versus timing out at 5 minutes with the full export endpoint.