Integration Hub REST API fails on large payloads with 413 Request Entity Too Large

We’re migrating master data from our legacy system to Appian using the Integration Hub REST API. Small batches work fine, but when we try to sync our full product catalog (8,500 records), the API returns 413 Request Entity Too Large.

Error response:


HTTP/1.1 413 Request Entity Too Large
Server: nginx/1.18.0
Content-Length: 178

The payload is about 15MB of JSON. We’re on Appian 22.4 with NGINX as our reverse proxy. I know there are payload limits somewhere, but I’m not sure whether they’re in NGINX, Appian, or both. The migration deadline is next week, and we need to get this master data loaded. What’s the best approach: increase the limits or restructure how we’re sending the data?

While increasing the NGINX limit will fix the immediate error, sending 15MB payloads is not a good practice. That’s a massive amount of data to process in a single API call. If anything fails during processing, you lose the entire batch. Break it into smaller chunks - maybe 500-1000 records per API call. This also helps with error handling and retry logic.

I understand the batching recommendation, but we’re under time pressure. If I increase the NGINX limit, will that definitely work? Or are there other limits in Appian itself that might still block large payloads? I don’t want to change NGINX config only to hit another wall.

Let me take this in three parts: the gateway limits themselves, a batching strategy, and the overall migration approach.

API Gateway Payload Limits: Your 413 error is from NGINX, which sits in front of Appian as a reverse proxy. NGINX has a default client_max_body_size of 1MB. To fix this specific error, update your nginx.conf:


http {
    client_max_body_size 20M;
    client_body_timeout 300s;
}
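
client_max_body_size can also be set per server or location block if you only want to raise the limit for the Appian endpoints. Reload NGINX afterwards (for example, nginx -s reload) so the change takes effect.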

However, this is just the first layer. Appian’s application server (Tomcat) also has a maxPostSize limit, 2MB by default. Check your server.xml configuration. Even if you raise both limits, a single 15MB request is fragile: if anything fails partway through processing, the entire payload has to be resent.
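
If you do need to raise the Tomcat side as well, the limit is an attribute on the Connector element in server.xml. The snippet below is a generic Tomcat example, not an Appian-specific one; the port, protocol, and timeout are placeholders, so verify them against the Connector already defined in your installation. maxPostSize is in bytes, and -1 disables the check entirely:


<!-- 20971520 bytes = 20MB -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxPostSize="20971520" />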

Batch Processing Strategy: Instead of one 15MB payload with 8,500 records, implement proper batching. Here’s a reliable approach for master data migration:

  1. Split into batches of 500 records (~880KB each)
  2. Process batches sequentially with error tracking
  3. Implement retry logic for failed batches
  4. Log success/failure for each batch

This gives you granular control. If batch 7 fails, you retry just that batch instead of reprocessing all 8,500 records. End-to-end time is usually no worse, because each request stays well inside proxy and server timeouts and a failure never forces a full re-run.
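
As a sketch of the splitting step (TypeScript purely for illustration; the records can be whatever shape your legacy export produces):


const BATCH_SIZE = 500;
// Split the exported records into fixed-size batches (8,500 records -> 17 batches).
function chunk<T>(items: T[], size: number = BATCH_SIZE): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}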

Master Data Migration Best Practices: For a migration of this scale, use a staged approach:

Phase 1: Validate data quality on first 100 records

Phase 2: Migrate in batches of 500 during off-peak hours

Phase 3: Verify data integrity after each batch

Phase 4: Run reconciliation report comparing source vs target counts

Implement a migration tracking table in Appian with columns: batchNumber, recordCount, status, startTime, endTime, errorDetails. This provides full visibility into migration progress and makes troubleshooting much easier.
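
If the batch processor runs as an external script, it can help to mirror that table in the script itself. A minimal shape for one tracking row might look like this (field names follow the columns above; the types and status values are assumptions):


// One row of the migration tracking table; adjust to match your actual table or CDT.
interface BatchLogEntry {
  batchNumber: number;
  recordCount: number;
  status: "PENDING" | "SUCCESS" | "FAILED";
  startTime: Date;
  endTime?: Date;
  errorDetails?: string;
}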

Create a simple loop in your migration script:


for (const batch of batches) {
  const response = callIntegrationAPI(batch);   // one POST per batch of records
  logBatchResult(batch.id, response.status);    // one tracking-table row per batch
  if (response.failed) retryQueue.add(batch);   // failed batches are retried later
}

For your immediate deadline: Don’t increase payload limits. Instead, write a quick batch processor that splits your 8,500 records into 17 batches of 500. Run it tonight during low-traffic hours. With proper error handling the entire migration completes in under 30 minutes, and you have a full audit trail of what succeeded or failed. This is far more reliable than hoping a single 15MB payload succeeds.
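
For completeness, here is a rough sketch of that batch processor in TypeScript (Node 18+, so fetch is built in). The endpoint URL is a placeholder for your own Appian web API, the Appian-API-Key header assumes you’re using API-key authentication, and chunk is the helper sketched earlier:


// Sends the records in batches and returns the batch numbers that need a retry pass.
async function migrate(records: unknown[], apiKey: string): Promise<number[]> {
  const batches = chunk(records);                // 17 batches of 500 for 8,500 records
  const failed: number[] = [];
  for (const [index, batch] of batches.entries()) {
    const started = new Date();
    const res = await fetch("https://your-site.appiancloud.com/suite/webapi/product-sync", {
      method: "POST",
      headers: { "Content-Type": "application/json", "Appian-API-Key": apiKey },
      body: JSON.stringify({ records: batch }),
    });
    // One line of audit trail per batch: number, size, status, start time.
    console.log(`batch ${index + 1}/${batches.length}: HTTP ${res.status} ` +
                `(${batch.length} records, started ${started.toISOString()})`);
    if (!res.ok) failed.push(index + 1);
  }
  return failed;
}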