Batch upload of quality records via REST API fails with timeout and payload too large errors for large JSON files

We’re trying to upload batches of 500+ quality inspection records via the REST API and consistently hit 413 Payload Too Large errors. When we reduce the batch size to 50 records, uploads succeed but throughput drops sharply because of per-call overhead.

Our current approach:


POST /Windchill/servlet/odata/QualityMgmt/InspectionRecords
Content-Type: application/json
Content-Length: 8473621

{"records": [{...}, {...}, ...]}  // 500 records

Server configuration shows a max request size of 10MB, and our 500-record payload is only 8.4MB. We’ve tried adjusting batch sizes and compressing the payload, but we either hit size limits or see performance degrade. We’d appreciate advice on optimal batch processing strategies for large quality data imports.

For batch uploads, I recommend the async processing pattern. Instead of POSTing all records in one call, submit a batch job request that returns immediately with a job ID, then poll for completion. This avoids timeout issues and lets the server process records in manageable chunks on the backend. The QualityMgmt API should expose a /BatchJobs endpoint for this pattern.
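If your instance does expose such an endpoint, the submit-then-poll loop might look like this. This is only a sketch: the /BatchJobs URL and the status strings are assumptions, and the actual HTTP requests are left to caller-supplied callables so you can plug in whatever client you use.

```python
import time

def submit_and_poll(submit, get_status, poll_interval=5.0, timeout=600.0):
    """Submit a batch job and poll until it finishes or the deadline passes.

    `submit` and `get_status` wrap the real HTTP calls, e.g.
    POST .../QualityMgmt/BatchJobs and GET .../BatchJobs('<id>')
    (endpoint names are assumptions -- check your API docs).
    """
    job_id = submit()                       # server returns a job ID immediately
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)         # e.g. 'QUEUED', 'RUNNING', 'COMPLETED', 'FAILED'
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(poll_interval)           # back off between polls
    raise TimeoutError(f"batch job {job_id} did not finish within {timeout}s")
```

The point of passing callables is that the polling logic stays testable without a live server.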

Here’s a comprehensive approach covering three areas: payload limits, batch processing strategy, and server/proxy configuration.

REST API Payload Limits: Configure both proxy and application server limits. For Apache httpd.conf:


LimitRequestBody 15728640
Timeout 300
ProxyTimeout 300

For the embedded Tomcat’s server.xml, increase maxPostSize:

<Connector port="80" maxPostSize="15728640"/>

Batch Processing Strategies: Implement adaptive chunking based on payload size, not record count. Quality records with attachments vary significantly in size:


// Pseudocode - Adaptive batch processing:
1. Calculate average record size from first 10 records
2. Set chunk_size = min(50, target_payload_size / avg_record_size)
3. For each chunk:
   - POST records to /InspectionRecords endpoint
   - Check response status and retry on 5xx errors
   - Add 100ms delay between chunks
4. Track failed records and retry separately
5. Log batch statistics (success rate, timing)

Use 4MB as target payload size to stay well under limits. This typically yields 25-75 records per batch depending on attachment sizes.
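The chunking steps above can be sketched in Python. This is a sketch only: `sizer` is a placeholder for however you serialize a record (e.g. `len(json.dumps(rec).encode())`), and the POST/retry loop from the pseudocode is left out so the sizing logic stays self-contained.

```python
def adaptive_chunks(records, sizer, target_bytes=4 * 1024 * 1024, max_records=50):
    """Split records into chunks sized by estimated payload bytes, not count.

    Averages the first 10 records to estimate per-record size, then caps
    the chunk at whichever is smaller: `max_records` or the number of
    records that fit in `target_bytes` (4MB by default, per the advice above).
    """
    sample = records[:10]
    avg = max(1, sum(sizer(r) for r in sample) // max(1, len(sample)))
    chunk_size = max(1, min(max_records, target_bytes // avg))
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
```

Each returned chunk would then be POSTed to the /InspectionRecords endpoint with the retry and delay handling described in the pseudocode.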

Server/Proxy Configuration: Adjust connection pooling to handle concurrent batch uploads. In wt.properties:


wt.pom.dbcp.maxActive=100
wt.pom.dbcp.maxWait=180000
wt.method.server.maxThreads=75

Set up monitoring to track batch performance. Log upload timing, payload sizes, and server response codes. After a few runs, you’ll identify the optimal batch size for your specific quality record structure.

Implement exponential backoff for retries: 1s, 2s, 4s, 8s. Gateway timeouts often resolve themselves as server load decreases. Also consider scheduling large imports during off-peak hours to reduce contention with interactive users.
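A minimal retry helper implementing that 1s/2s/4s/8s schedule might look like this. The `post` callable is a placeholder for your actual HTTP client, and the status handling is deliberately simplified (anything below 500 is returned to the caller rather than retried).

```python
import time

def post_with_backoff(post, payload, delays=(1, 2, 4, 8)):
    """Retry a POST on 5xx responses with exponential backoff.

    `post` wraps the real HTTP call and returns the status code.
    504s often clear on retry as server load decreases.
    """
    for delay in delays:
        status = post(payload)
        if status < 500:          # success, or a client error worth surfacing as-is
            return status
        time.sleep(delay)         # 1s, 2s, 4s, 8s between attempts
    return post(payload)          # final attempt after the last delay
```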

The BatchJobs endpoint was introduced in 12.0. For 11.2, you’ll need to implement chunking logic client-side. Process records in batches of 25-50 with a small delay between requests to avoid overwhelming the server. Also consider using parallel threads (2-3 concurrent uploads) to speed things up without triggering rate limits. Monitor server CPU and memory during uploads to find the sweet spot.
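For the 2-3 concurrent uploads suggested above, a small fixed-size thread pool is enough. A sketch, where `post_chunk` stands in for your real POST to the /InspectionRecords endpoint:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunks(chunks, post_chunk, workers=3):
    """Upload chunks with a small, fixed level of parallelism.

    `post_chunk` takes one chunk and returns its result (e.g. status code).
    Keeping `workers` at 2-3 limits server load; results come back in
    the same order as the input chunks.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(post_chunk, chunks))
```

Combine this with per-chunk retries rather than retrying the whole run, so one failed chunk doesn’t force re-uploading everything.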

The 413 error isn’t always enforced by Windchill itself: it’s often the Apache/IIS proxy in front of it that applies stricter limits than the application server. Check your web server configuration for LimitRequestBody (Apache) or maxAllowedContentLength (IIS). Even if Windchill allows 10MB, your proxy might be capped at 5MB.
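For IIS, the limit lives in web.config under system.webServer/security/requestFiltering; a fragment raising it to 15MB (the value is just an example, in bytes) would look like:

```xml
<!-- web.config: raise the IIS request size cap (bytes); example value -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="15728640" />
    </requestFiltering>
  </security>
</system.webServer>
```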

Don’t forget about database connection pool limits. Quality record creation involves multiple database transactions, and large batches can exhaust available connections. Check wt.pom.dbcp.maxActive in wt.properties - if it’s set too low, you’ll get intermittent failures even with proper batch sizing. We increased ours from 50 to 100 for similar bulk import scenarios.

Good points. I checked the Apache config and found LimitRequestBody was indeed set to 5MB. Increased it to 15MB, but now we’re hitting a different failure: the server returns 504 Gateway Timeout after about 60 seconds when processing 300+ records. The async batch job approach sounds promising, but our Windchill 11.2 instance doesn’t seem to expose a BatchJobs endpoint for QualityMgmt. Is this a custom extension, or is it only available in later versions?