We’re experiencing API timeouts when triggering bulk payment runs through our payroll automation process. The API call works fine for small batches (under 100 employees) but times out consistently when processing our full monthly payroll of 2,500+ employees.
The timeout occurs after exactly 60 seconds:
```
java.net.SocketTimeoutException: Read timed out
    at a!startProcess() execution
```
Payload size: 2.8MB (2547 employee records)
I’ve checked our API timeout settings in the integration object, and they’re set to the default 60 seconds. Should I just increase the timeout, or is there a better approach? Our payroll deadline is in 3 days, and we need this working. The batch processing takes about 90 seconds to complete on the backend, so the 60-second timeout cuts it off prematurely.
Good point about the payload. I just checked and we’re sending the full employee record including address, department, manager hierarchy, and historical payment data. That’s definitely bloating the payload unnecessarily. For the batch approach, what’s a recommended batch size? Would 250 employees per batch be reasonable?
Here’s a comprehensive solution that addresses all three key areas:
API Timeout Settings: While you could increase the timeout to 120 seconds, that’s a band-aid fix. Instead, size the timeout to your batch size. If the backend processes 2,500 records in about 90 seconds, a batch of 250 should finish in roughly 9-10 seconds, so a 45-second timeout leaves generous headroom. Set this in your integration object:
```
timeout: 45000      /* milliseconds */
retryAttempts: 2
retryDelay: 5000    /* milliseconds */
```
Batch Processing Implementation: Restructure your process to handle batches. Split the 2,500 employees into chunks of 250 and process each batch sequentially, or in parallel if your backend supports it. Keep in mind that 45 seconds is a per-batch ceiling, not the expected duration: the worst case is 10 batches × 45 sec = 7.5 min, but since the backend completes the full run in about 90 seconds, the typical sequential total should stay close to that plus per-call overhead. Either way, reliability improves dramatically because one slow batch can no longer time out the whole run.
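The chunking step itself is straightforward. Here’s a minimal sketch in Python (illustrative only, not Appian expression code):

```python
def chunk(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Placeholder records standing in for the 2,547-employee payroll run.
employees = [{"id": n} for n in range(2547)]
batches = chunk(employees, 250)

print(len(batches))      # → 11 batches
print(len(batches[-1]))  # → 47 records in the final partial batch
```

Note that the final batch is smaller than the rest, so the processing loop should read the actual batch size rather than assume 250.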
Payload Optimization: This is critical. Strip your payload to essential fields only:
- Employee ID (required for identification)
- Payment amount (the actual payment value)
- Bank account number and routing info
- Payment date
Remove all these unnecessary fields that are bloating your payload:
- Full address details
- Department and org hierarchy
- Historical payment records
- Manager information
- Employee demographics
This should cut the full 2.8MB payload to roughly 400-600KB in total, which works out to about 40-60KB per batch of 250 employees. The smaller payload not only prevents timeouts but also reduces network latency and backend processing time.
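As a sketch of the field-stripping idea (the field names below are assumptions, not your actual payroll schema):

```python
# Keep only what the payment run needs; drop everything else.
# Field names here are illustrative assumptions, not a real schema.
ESSENTIAL_FIELDS = {
    "employeeId", "paymentAmount",
    "bankAccount", "routingNumber", "paymentDate",
}

def strip_record(record):
    """Return a copy of the record containing only essential fields."""
    return {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}

full = {
    "employeeId": "E1042",
    "paymentAmount": 3150.00,
    "bankAccount": "12345678",
    "routingNumber": "021000021",
    "paymentDate": "2024-06-28",
    # The fields below bloat the payload and get dropped:
    "address": "1 Main St, Springfield",
    "department": "Finance",
    "managerChain": ["M-17", "M-3"],
    "paymentHistory": [2900.00, 3000.00, 3150.00],
}

slim = strip_record(full)
print(sorted(slim))  # only the five essential fields remain
```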
Implementation approach: Create a loop in your process model that chunks the employee list. For each chunk, call your optimized API with reduced payload. Store batch results in a record type with fields: batchNumber, employeeCount, status, timestamp, errorMessage. This gives you full visibility into the payroll run and allows selective retry of failed batches.
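A sketch of that loop with per-batch status logging; `submit_batch` stands in for your integration call, and the dict-based log mirrors the suggested record type fields (both are illustrative, not Appian APIs):

```python
import datetime

def run_payroll_in_batches(batches, submit_batch):
    """Process each batch and log its outcome.

    `submit_batch` is a placeholder for the actual integration call;
    each log entry mirrors the suggested record type fields.
    """
    results = []
    for n, batch in enumerate(batches, start=1):
        entry = {
            "batchNumber": n,
            "employeeCount": len(batch),
            "status": "PENDING",
            "timestamp": datetime.datetime.now().isoformat(),
            "errorMessage": None,
        }
        try:
            submit_batch(batch)
            entry["status"] = "SUCCESS"
        except Exception as exc:
            # A failed batch is recorded but does not stop the run.
            entry["status"] = "FAILED"
            entry["errorMessage"] = str(exc)
        results.append(entry)
    return results
```

The resulting log gives you full visibility into the run and drives selective retries of just the failed batches afterwards.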
One final recommendation: consider making this asynchronous. Instead of waiting for each batch to complete, submit all batches and poll for completion status. This prevents the process from holding resources while waiting for backend processing.
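The asynchronous variant could look like this sketch, where `submit_async` and `get_status` are hypothetical backend endpoints (submit returns a job ID immediately; status is polled until every job finishes or the deadline passes):

```python
import time

def submit_all_then_poll(batches, submit_async, get_status,
                         interval=5, timeout=600):
    """Submit every batch up front, then poll for completion.

    `submit_async` and `get_status` are hypothetical backend
    endpoints, assumed to return a unique job ID and a status
    string ("SUCCESS", "FAILED", or anything else for in-progress).
    """
    job_ids = [submit_async(b) for b in batches]  # returns immediately
    deadline = time.monotonic() + timeout
    pending = set(job_ids)
    while pending and time.monotonic() < deadline:
        for job in list(pending):
            if get_status(job) in ("SUCCESS", "FAILED"):
                pending.discard(job)
        if pending:
            time.sleep(interval)  # avoid hammering the status endpoint
    return {job: get_status(job) for job in job_ids}
```

Because nothing blocks on a single long call, the process no longer holds resources for the full backend run, and the 60-second read timeout stops being a factor.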
250 per batch sounds reasonable, but test it with your actual data. The key is to balance the number of API calls (per-call overhead) against payload size (timeout risk). I’d recommend starting with 200-300 records per batch and monitoring the response times. Also implement retry logic for failed batches; you don’t want one failed batch to stop the entire payroll run.
Don’t forget to implement proper error handling. When you split into batches, you need to track which batches succeeded and which failed. Use a database table or an Appian record type to log the status of each batch. That way, if batch 5 of 10 fails, you can retry just that batch instead of reprocessing everything.
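A selective retry can read that status log and resubmit only the failures. Here’s a hedged sketch; the log fields and `submit_batch` are illustrative assumptions, not Appian APIs:

```python
def retry_failed(status_log, batches, submit_batch):
    """Resubmit only the batches whose logged status is FAILED.

    `status_log` entries carry a 1-based `batchNumber` matching
    positions in `batches`; `submit_batch` is a placeholder for
    the actual integration call.
    """
    for entry in status_log:
        if entry["status"] != "FAILED":
            continue  # succeeded batches are never reprocessed
        batch = batches[entry["batchNumber"] - 1]
        try:
            submit_batch(batch)
            entry["status"] = "SUCCESS"
            entry["errorMessage"] = None
        except Exception as exc:
            entry["errorMessage"] = str(exc)  # stays FAILED for next pass
    return status_log
```

Skipping successful batches is what makes this safe for payroll: no employee is paid twice because batch 5 had to be rerun.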
I agree with breaking it into batches. But also look at your payload size: 2.8MB is quite large for a single REST API call. Are you sending unnecessary data? Payroll records often include fields that aren’t needed for the payment run itself. Optimize your payload by sending only the essential fields: employee ID, payment amount, bank account details. You might cut that 2.8MB down to under 500KB.