We’re encountering ‘Request Entity Too Large’ (413) errors when our SOAP connector processes batch operations with large XML payloads. Our workflow integrates with an external vendor system that accepts batch order submissions, but it fails when processing more than 50 orders at once.
The SOAP connector configuration appears standard, but we haven’t tuned any payload size limits. During batch processing, we build a single XML request containing all orders, which can reach 5-10MB. The connector works fine for small batches (10-20 orders) but consistently fails on larger volumes.
This is causing workflow interruptions as batch jobs fail and require manual reprocessing. We’re using Pega 8.6 and the vendor’s SOAP service supposedly supports up to 15MB payloads. The error occurs before the request even reaches the vendor system, suggesting it’s a Pega-side configuration issue.
Here’s a sample of how we’re building the payload:
<BatchOrders>
<Order id="1001">...</Order>
<Order id="1002">...</Order>
<!-- Repeats for 50+ orders -->
</BatchOrders>
Has anyone dealt with large SOAP payloads in Pega workflows? What’s the recommended approach for handling batch processing logic when individual requests exceed default size limits?
You’re right to question the approach. Sending 10MB SOAP requests is risky even when technically possible: network timeouts, memory pressure, and transaction rollback issues all become more likely with large payloads. I’d recommend splitting your batch into smaller chunks, say 20 orders per request, and processing them sequentially or in parallel depending on your volume needs and vendor system capabilities.
I’d implement chunking in a dedicated service activity called from your workflow. That way you can handle retry logic, error aggregation, and partial success scenarios in one place. Your workflow just calls the activity and gets back a summary of successes and failures. Keeps the flow clean and makes the chunking logic reusable.
Here’s a comprehensive solution addressing all three key areas:
SOAP Connector Configuration:
First, fix the immediate 413 error by adjusting size limits at multiple layers:
- Application Server Level (Tomcat):
Edit your tomcat/conf/server.xml:
<Connector port="8080"
maxPostSize="20971520"
maxHttpHeaderSize="16384"/>
This raises the maximum POST size to 20MB and the header size to 16KB. Also check whether a reverse proxy or load balancer sits in front of Tomcat; 413 responses are often generated there before the request ever reaches the application server (for nginx, that limit is client_max_body_size), which would match your observation that the error occurs before the request reaches the vendor.
- Pega Configuration:
Add to prconfig.xml:
<env name="http/client/max_response_size"
value="20971520"/>
<env name="http/client/connection_timeout"
value="120000"/>
- Connector-Specific Settings:
In your SOAP connector configuration (Integration Designer):
- Set timeout to 120 seconds (large payloads take longer)
- Enable connection pooling to reuse connections
- Configure retry logic with exponential backoff
Payload Size Limits:
While increasing limits solves the immediate error, it’s not a sustainable solution. Implement intelligent payload management:
- Calculate Optimal Chunk Size:
Don’t use arbitrary numbers. Create a calculation that considers:
- Average order XML size (measure from sample data)
- Target payload size (aim for 2-3MB max, well under limits)
- Network latency and timeout thresholds
Example calculation:
- If average order = 50KB XML
- Target payload = 2MB
- Chunk size = 2000KB / 50KB = 40 orders per batch
- Implement Dynamic Chunking:
Create a service activity that calculates chunk size at runtime based on actual order complexity, not fixed counts.
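The calculation above can be wrapped in a small helper so the chunk size is computed from measured data rather than hard-coded. This is plain Java for illustration, not a Pega rule; the class and method names are mine:

```java
// Illustrative helper for the chunk-size calculation; not a Pega API.
public class ChunkSizer {

    /** How many orders of avgOrderBytes fit under targetPayloadBytes. */
    public static int chunkSize(long avgOrderBytes, long targetPayloadBytes) {
        if (avgOrderBytes <= 0 || targetPayloadBytes <= 0) {
            throw new IllegalArgumentException("sizes must be positive");
        }
        // Integer division rounds down, keeping the payload under the target.
        long size = targetPayloadBytes / avgOrderBytes;
        return (int) Math.max(1, size); // always send at least one order
    }

    public static void main(String[] args) {
        // 50KB average order, 2MB target payload -> 40 orders per chunk
        System.out.println(chunkSize(50L * 1024, 2L * 1024 * 1024));
    }
}
```

Feed it the running average order size measured at runtime and you get the dynamic chunking described above for free.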
Batch Processing Logic:
Redesign your batch processing architecture for reliability and performance:
- Create Chunking Service Activity:
Build a reusable activity (e.g., ProcessOrderBatchChunked) that:
- Accepts a list of order IDs
- Chunks them into optimal groups
- Processes each chunk via SOAP connector
- Aggregates results and handles partial failures
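Stripped of the Pega plumbing, the chunking core of such an activity reduces to a short loop. A plain-Java sketch (names are illustrative; inside Pega this would live in the activity, with each chunk handed to the SOAP connector in turn):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the chunking step; class/method names are illustrative.
public class OrderBatchChunker {

    /** Split the full order-ID list into chunks of at most chunkSize orders. */
    public static List<List<String>> chunk(List<String> orderIds, int chunkSize) {
        if (chunkSize <= 0) {
            throw new IllegalArgumentException("chunkSize must be positive");
        }
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < orderIds.size(); i += chunkSize) {
            // subList is a view; copy it so each chunk stays valid independently.
            chunks.add(new ArrayList<>(
                orderIds.subList(i, Math.min(i + chunkSize, orderIds.size()))));
        }
        return chunks;
    }
}
```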
- Workflow Integration Pattern:
In your workflow, implement this pattern:
- Batch job starts with full order list
- Call chunking service activity
- Activity returns summary (success count, failure list)
- Workflow branches based on results:
- All success → Complete case
- Partial failure → Route to error handler with failed order list
- Complete failure → Retry entire batch once, then escalate
- Error Handling Strategy:
Implement sophisticated error handling:
- Track which chunks succeeded and which failed
- Don’t reprocess successful chunks on retry
- Log each chunk attempt with payload size and response time
- Set maximum retry attempts per chunk (3 times max)
- After retries exhausted, create manual work items for failed chunks
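The per-chunk retry policy (3 attempts max, exponential backoff as suggested for the connector) can be sketched like this. ChunkCall is a hypothetical stand-in for the SOAP connector invocation, not a Pega API:

```java
// Sketch of per-chunk retry with exponential backoff; ChunkCall is a
// hypothetical stand-in for the SOAP connector call.
public class ChunkRetry {

    @FunctionalInterface
    public interface ChunkCall {
        boolean send() throws Exception; // true on success, false/throw on failure
    }

    /** Retry the chunk up to maxAttempts times, doubling the delay after each failure. */
    public static boolean sendWithRetry(ChunkCall call, int maxAttempts, long initialDelayMs)
            throws InterruptedException {
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (call.send()) {
                    return true; // chunk succeeded; never reprocess it
                }
            } catch (InterruptedException ie) {
                throw ie; // don't swallow interrupts
            } catch (Exception e) {
                // Log the attempt (payload size, response time) and fall through.
            }
            if (attempt < maxAttempts) {
                Thread.sleep(delay);
                delay *= 2; // exponential backoff: delay, 2*delay, 4*delay, ...
            }
        }
        return false; // retries exhausted: caller creates a manual work item
    }
}
```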
- Parallel Processing Option:
For high-volume scenarios, consider parallel chunk processing:
- Split chunks across multiple threads using Pega’s queue processor
- Each thread handles one chunk independently
- Aggregate results using a wait shape in the workflow
- Be cautious of vendor system rate limits
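Outside Pega, the same fan-out/aggregate shape looks like this with a plain thread pool. A queue processor replaces the ExecutorService in practice and a wait shape replaces the blocking collect, but the aggregation logic is the same:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Thread-pool sketch of parallel chunk processing; names are illustrative.
public class ParallelChunks {

    /** Run every chunk job concurrently and collect one result per chunk, in order. */
    public static List<Boolean> processAll(List<Callable<Boolean>> chunkJobs, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Boolean> results = new ArrayList<>();
            // invokeAll blocks until all chunks finish (the "wait shape" equivalent).
            for (Future<Boolean> f : pool.invokeAll(chunkJobs)) {
                results.add(f.get()); // get() rethrows if the chunk job threw
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

Cap the pool size well below the vendor's rate limit; more threads than the vendor can absorb just converts a payload problem into a throttling problem.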
- Alternative: Pagination Pattern:
Instead of batch submission, use pagination if vendor supports it:
<OrderSubmission page="1" totalPages="5">
<Order id="1001">...</Order>
<!-- 20 orders -->
</OrderSubmission>
This allows the vendor to process in chunks while maintaining transaction context.
- Monitoring and Optimization:
Implement monitoring to continuously optimize:
- Log payload sizes for each connector call
- Track success rates by chunk size
- Monitor average response times
- Alert on 413 errors or timeouts
- Adjust chunk size based on performance metrics
- Consider Asynchronous Pattern:
For truly large batches (1000+ orders), consider:
- Submit batch request to vendor asynchronously
- Vendor returns a batch ID immediately
- Poll vendor API for batch completion status
- Process results when batch completes
- This prevents timeout issues entirely
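The poll step of that pattern can be sketched as follows; StatusCheck and the status strings are assumptions about the vendor's API, since the actual operations aren't known:

```java
// Polling sketch for the asynchronous pattern. StatusCheck and the status
// strings are assumed stand-ins for the vendor's batch-status operation.
public class AsyncBatchPoller {

    @FunctionalInterface
    public interface StatusCheck {
        String check(String batchId) throws Exception; // e.g. "PENDING" or "COMPLETE"
    }

    /** Poll until the batch completes, giving up after maxPolls attempts. */
    public static boolean waitForCompletion(String batchId, StatusCheck status,
                                            int maxPolls, long intervalMs) throws Exception {
        for (int i = 0; i < maxPolls; i++) {
            if ("COMPLETE".equals(status.check(batchId))) {
                return true; // results are ready to fetch and process
            }
            Thread.sleep(intervalMs); // wait between polls
        }
        return false; // batch did not finish in time: escalate
    }
}
```

In Pega you'd typically drive this from a standard agent or queue processor rather than sleeping in an activity, so the node isn't holding a requestor thread for the whole wait.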
The key is moving from a “send everything at once” approach to a controlled, chunked processing strategy that’s resilient to failures and optimized for both network efficiency and system reliability. Start with 20-40 orders per chunk, monitor performance, and adjust based on your specific vendor system behavior and network conditions.