We’re running into consistent timeout errors when importing bulk asset records via Workday Web Services in our R1 2024 environment. The import process works fine for small batches (under 100 records), but anything above 500 assets triggers timeout errors after about 90 seconds.
Our current web service configuration:
connection.timeout=90000
read.timeout=90000
batch.size=500
The error message we’re seeing:
HTTPError: 504 Gateway Timeout
at AssetImport.processRecords
Connection closed after 90s
We’ve tried adjusting batch sizes and timeout values, but performance tuning hasn’t resolved the underlying issue. Has anyone successfully handled high-volume asset imports through web services? What timeout thresholds and batch configurations work best for bulk asset onboarding?
I’ve seen similar timeout issues with bulk imports. The 90-second timeout is actually hitting Workday’s load balancer limits, not your configuration. Try reducing your batch size to 200-250 records per call and implement parallel processing instead of increasing timeouts. Also, are you using synchronous or asynchronous web service calls?
I want to expand on the async approach since it’s clearly the right direction here. Let me address all three aspects of your challenge systematically.
Web Service Timeout Resolution:
Your timeout errors stem from Workday’s infrastructure limits, not your configuration. The 504 Gateway Timeout indicates the load balancer is terminating connections after 90 seconds regardless of your client settings. Switch to asynchronous web service calls using the Import_Process web service operation. This returns immediately with a process ID while Workday handles the import in the background.
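To make the async pattern concrete, here's a minimal Python sketch of the submit side. The `submit_batch` callable is a stand-in for your actual Import_Process SOAP call (its request/response shape is tenant-specific), and the correlation-ID scheme is illustrative, not a Workday requirement:

```python
import uuid

def submit_async_import(records, batch_size, submit_batch):
    """Split records into batches and submit each via an async
    Import_Process-style call that returns a process ID immediately.

    submit_batch(batch, correlation_id) -> process_id is a placeholder
    for the real SOAP invocation against your tenant.
    """
    process_ids = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        # Unique ID per batch so you can correlate status events later
        correlation_id = str(uuid.uuid4())
        process_ids.append(submit_batch(batch, correlation_id))
    return process_ids
```

The key point is that each call returns a process ID right away, so no single HTTP request ever approaches the 90-second load balancer limit.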
Bulk Asset Import Optimization:
For high-volume asset imports, implement these configurations:
# Optimal async configuration
batch.size=250
max.parallel.batches=4
polling.interval=30000
max.retry.attempts=3
This lets you process 1,000 assets per submission cycle (4 parallel batches of 250) without hitting timeout limits. Use the Get_Integration_Events web service to poll for completion status, and include a unique correlation ID in each batch for tracking.
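A hedged sketch of the polling loop, assuming `get_status` wraps a Get_Integration_Events lookup (the actual status strings and response fields depend on your tenant; 'Completed'/'Failed' here are illustrative):

```python
import time

def wait_for_completion(process_ids, get_status, poll_interval=30, max_polls=120):
    """Poll background import processes until each reaches a terminal
    state or the polling budget is exhausted.

    get_status(process_id) -> 'Processing' | 'Completed' | 'Failed'
    stands in for a Get_Integration_Events call.
    """
    pending = set(process_ids)
    results = {}
    for _ in range(max_polls):
        for pid in list(pending):
            status = get_status(pid)
            if status in ('Completed', 'Failed'):
                results[pid] = status
                pending.discard(pid)
        if not pending:
            break
        time.sleep(poll_interval)
    # A non-empty pending set means some imports never reached a
    # terminal state within the polling budget.
    return results, pending
```

With `polling.interval=30000` (30s) and `max.retry.attempts=3` from the config above, you would map those values onto `poll_interval` and your retry wrapper respectively.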
Performance Tuning Strategy:
Implement a three-tier approach:
- Pre-processing optimization: Validate data quality before submission. Strip unnecessary fields from your payload; include only required fields plus critical custom fields in the initial import.
- Parallel processing: Use a job scheduler to submit multiple batches concurrently. We run 4-6 parallel imports during off-peak hours without issues. Monitor your tenant's concurrent integration limit (typically 10-15 for production).
- Post-processing cleanup: Handle any failed records in a separate remediation process. Don't retry failed records immediately; batch them for a second pass after investigating root causes.
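The second and third tiers can be combined in one driver. This is a sketch, not a definitive implementation: `submit_batch` is again a placeholder for your actual import call, and the concurrency cap should sit below your tenant's concurrent integration limit:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_parallel_imports(batches, submit_batch, max_parallel=4):
    """Submit batches concurrently with bounded parallelism.

    Failed batches are collected for a separate remediation pass
    rather than retried immediately, matching the three-tier approach.
    """
    succeeded, failed = [], []
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(submit_batch, b): b for b in batches}
        for fut in as_completed(futures):
            batch = futures[fut]
            try:
                succeeded.append(fut.result())
            except Exception:
                # Queue for the second-pass remediation process
                failed.append(batch)
    return succeeded, failed
```

Keeping `max_parallel` at 4-6 stays well under a typical 10-15 concurrent integration ceiling and leaves headroom for other scheduled integrations.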
Additional considerations: Check if your asset lifecycle has business process definitions that trigger on creation. These can add 200-500ms per record. If you’re importing historical assets, consider using an import-specific lifecycle state that bypasses unnecessary workflows, then transition to active state in bulk afterward.
Implementing async web services with optimized batch sizing should reduce your total import time by 60-70% while completely eliminating timeout errors. The trade-off is additional complexity in status tracking, but the integration event framework makes this manageable.
Another consideration beyond async processing is your asset data complexity. If you’re importing assets with extensive custom fields, attachments, or complex relationships, each record takes longer to process. We found that splitting our import into two phases helped significantly: first import basic asset data with minimal fields, then update with additional details in a second pass. This reduced our per-record processing time from 800ms to about 300ms. Also check if you have any business process triggers or calculated fields firing on asset creation that might be adding overhead.
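The two-phase split described above can be as simple as partitioning each record's fields. The field names below are hypothetical; substitute whatever your asset import actually requires:

```python
# Illustrative required-field set; your tenant's import template
# defines the real minimum.
REQUIRED_FIELDS = {"asset_id", "asset_type", "acquisition_date"}

def split_record(record, required=REQUIRED_FIELDS):
    """Phase 1 carries only the minimal fields so asset creation is
    fast; everything else is deferred to a phase-2 update once the
    asset exists."""
    phase1 = {k: v for k, v in record.items() if k in required}
    phase2 = {k: v for k, v in record.items() if k not in required}
    return phase1, phase2
```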
We’re using synchronous calls currently. Would switching to asynchronous help with the timeout issue? We need to track import status for each batch, so we’ve avoided async up to this point.