We built a custom adapter in Integration Hub to sync customer data from our legacy ERP to SAP CX. Works fine for small batches (under 500 records), but times out consistently when processing larger datasets (2000+ records). The adapter uses standard REST API calls and we’ve set the timeout to 120 seconds in the adapter configuration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

HttpClient client = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(120))  // connection timeout only, not a read timeout
        .build();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(endpoint))
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build();
We’re seeing failures in the integration monitor with “Socket timeout exception” after exactly 120 seconds. The payload size for failed batches is around 8-12 MB. We need to handle these large syncs reliably without breaking them into tiny batches. Has anyone dealt with timeout configuration or payload chunking strategies for custom adapters in Integration Hub?
Adding to Marcus’s point - when you implement chunking, use a streaming approach for reading the source data too. Don’t load all 2000 records into memory and then chunk them. Read and process in chunks from the start. This reduces memory footprint and improves overall performance.
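To illustrate the point about not materializing all 2000 records: here's a minimal sketch of reading from a source iterator and flushing fixed-size chunks as they fill, so only one chunk is in memory at a time. The class and method names (StreamingChunker, processInChunks) are illustrative, not anything from Integration Hub.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class StreamingChunker {
    // Fill a buffer up to chunkSize, hand it to the sink, clear, repeat.
    // Returns the number of chunks emitted.
    static <T> int processInChunks(Iterator<T> source, int chunkSize, Consumer<List<T>> sink) {
        List<T> buffer = new ArrayList<>(chunkSize);
        int chunks = 0;
        while (source.hasNext()) {
            buffer.add(source.next());
            if (buffer.size() == chunkSize) {
                sink.accept(List.copyOf(buffer));  // flush a full chunk
                buffer.clear();
                chunks++;
            }
        }
        if (!buffer.isEmpty()) {                   // flush the final partial chunk
            sink.accept(List.copyOf(buffer));
            chunks++;
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 2000; i++) records.add(i);
        int chunks = processInChunks(records.iterator(), 250, c ->
                System.out.println("chunk of " + c.size()));
        System.out.println("total chunks: " + chunks); // 2000 records / 250 = 8 chunks
    }
}
```

In a real adapter the Iterator would wrap a paged or cursor-based read from the ERP rather than an in-memory list.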
Yes, currently sending all records in one request. The adapter logs show the timeout happening during the POST call itself. I thought about chunking but wasn’t sure if that’s the right approach or if there’s a way to increase the timeout threshold further.
Beyond chunking, you should also look at the adapter’s execution context timeout settings. In Integration Hub, there’s a separate configuration for adapter execution timeout that’s independent of the HTTP client timeout. Check your adapter descriptor XML - there should be an executionTimeout parameter. Also, make sure you’re using async processing patterns if the data doesn’t need to be processed synchronously. For large datasets, consider using the batch API endpoints if SAP CX exposes them for your entity type.
I had this exact issue last year. The problem isn’t just timeout configuration - it’s about adapter optimization. Large payloads cause memory pressure on the Integration Hub runtime, which compounds the timeout issue. You need to stream the data rather than loading it all into memory at once.
Let me address all three key areas systematically:
Timeout Configuration:
You need to configure timeouts at multiple levels. First, increase the adapter execution timeout in your adapter descriptor:
<adapter executionTimeout="300000" name="CustomERPAdapter">
This sets a 5-minute execution window. Also configure the HTTP read timeout separately from connection timeout:
HttpClient client = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(30))  // connection setup only
        .build();
// The read (response) timeout is set per request, not on the client:
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(endpoint))
        .timeout(Duration.ofSeconds(300))  // response timeout, matches the 5-minute execution window
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build();
Payload Chunking:
Implement intelligent chunking with these guidelines:
- Chunk size: 250-300 records per batch (test to find optimal size)
- Calculate chunk count upfront with ceiling division: `int chunks = (totalRecords + chunkSize - 1) / chunkSize;` (note that `(int) Math.ceil(totalRecords / chunkSize)` silently floors first because of integer division)
- Process chunks sequentially with delay between batches (100-200ms) to avoid overwhelming the target system
- Implement checkpoint mechanism - store last successfully processed chunk ID
- Use transaction boundaries per chunk, not per record
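The guidelines above can be sketched as a sequential chunk loop. This is a minimal outline, not Integration Hub API: sendChunk and saveCheckpoint are hypothetical stand-ins for your adapter's HTTP call and checkpoint store.

```java
import java.util.List;

public class ChunkProcessor {
    // Integer ceiling division; avoids the float round-trip of Math.ceil.
    static int chunkCount(int totalRecords, int chunkSize) {
        return (totalRecords + chunkSize - 1) / chunkSize;
    }

    static void processAll(List<String> records, int chunkSize) throws InterruptedException {
        int chunks = chunkCount(records.size(), chunkSize);
        for (int i = 0; i < chunks; i++) {
            int from = i * chunkSize;
            int to = Math.min(from + chunkSize, records.size());
            sendChunk(records.subList(from, to)); // one transaction per chunk, not per record
            saveCheckpoint(i);                    // resume point if a later chunk fails
            Thread.sleep(150);                    // brief delay to avoid overwhelming the target
        }
    }

    // Hypothetical stubs for the adapter's POST call and checkpoint persistence.
    static void sendChunk(List<String> chunk) { }
    static void saveCheckpoint(int chunkId) { }

    public static void main(String[] args) {
        System.out.println(chunkCount(2000, 250)); // prints 8
        System.out.println(chunkCount(2001, 250)); // prints 9
    }
}
```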
Adapter Optimization:
Optimize your adapter implementation:
- Use streaming for source data reading - don’t load everything into memory
- Implement connection pooling for HTTP clients (reuse connections across chunks)
- Add circuit breaker pattern - if 3 consecutive chunks fail, stop processing and alert
- Log chunk-level metrics (processing time, record count, payload size) for monitoring
- Consider parallel processing for independent chunks, but limit concurrency to 2-3 threads max
- Implement exponential backoff for retry logic: first retry after 2s, then 4s, then 8s
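For the retry point, here's a small sketch of the 2s/4s/8s backoff schedule using a generic helper; the helper name and the maxRetries/baseDelayMs parameters are assumptions to tune per environment.

```java
import java.util.concurrent.Callable;

public class BackoffRetry {
    // Retries call up to maxRetries times, doubling the delay each attempt:
    // baseDelayMs, 2x, 4x, ... (2000ms base gives the 2s/4s/8s schedule).
    static <T> T withRetry(Callable<T> call, int maxRetries, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt == maxRetries) break;
                Thread.sleep(baseDelayMs << attempt); // exponential backoff
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        String result = withRetry(() -> {
            attempts[0]++;
            if (attempts[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 3, 10); // 10ms base for the demo; use 2000ms in the adapter
        System.out.println(result + " after " + attempts[0] + " attempts"); // ok after 3 attempts
    }
}
```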
For your specific case with 8-12 MB payloads, I'd recommend 200-record chunks, which should come out to roughly 800 KB-1.2 MB per request. This keeps you well under typical gateway limits and processes fast enough to avoid timeouts. Monitor the Integration Hub metrics dashboard after implementation to verify chunk processing times stay under 30 seconds per batch.
One more critical point: ensure your custom adapter properly implements the AdapterLifecycle interface methods for graceful shutdown. If processing is interrupted, you want to be able to resume from the last successful checkpoint rather than reprocessing everything.
You definitely need payload chunking here. Sending 2000+ records in a single request is asking for trouble. I’ve implemented similar adapters and found that chunking to 200-300 records per request works well. You also need to implement retry logic for failed chunks. The key is to process chunks sequentially with proper error handling, so if one chunk fails, you can retry just that chunk without reprocessing everything. Also consider implementing a progress tracking mechanism to monitor which chunks have been successfully processed.