We’re implementing automated document ingestion into ETQ Reliance 2023 using the REST API file upload endpoint for document control. Small files (under 20MB) upload successfully, but larger PDF attachments consistently time out after about 60 seconds.
Our current implementation uses standard multipart/form-data POST:
POST /api/v2/document-control/{id}/attachments
Content-Type: multipart/form-data
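In Python requests terms, the call looks roughly like this (base URL, token, and the `file` form-field name are placeholders, not confirmed against the ETQ docs):

```python
import os

def upload_attachment(base_url, doc_id, path, token):
    """Single-shot multipart POST to the attachments endpoint above.

    base_url/token are placeholders; "file" as the form field name is an
    assumption. The whole file is handed to requests in one request body.
    """
    import requests  # local import keeps the module importable without requests

    with open(path, "rb") as f:
        r = requests.post(
            f"{base_url}/api/v2/document-control/{doc_id}/attachments",
            headers={"Authorization": f"Bearer {token}"},
            files={"file": (os.path.basename(path), f, "application/pdf")},
        )
    r.raise_for_status()
    return r.json()
```

This is the single-request pattern that starts timing out once the body takes longer than the gateway timeout to transfer.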
Files over 50MB never complete; we get gateway timeout errors. The API documentation mentions file size limits but doesn’t specify what they are or how to handle large files. Is there a chunked upload implementation we should be using instead, or does the API gateway timeout configuration need adjustment? Has anyone successfully uploaded large technical documents through the API?
I found the chunked upload endpoints but I’m unclear on the implementation. Do I need to manually split the file into chunks on the client side, or does the API handle that? Also, what’s the recommended chunk size? I don’t want to make too many API calls if the chunks are too small.
Don’t forget about file size limits at the document control module level. Even with chunked uploads, ETQ has maximum file size limits configured per document type. Check your document control configuration to see what the limit is for your specific document types. For technical documents, we had to increase the limit from the default 100MB to 500MB. This is a separate setting from the API gateway timeout.
Beyond chunking, check your API gateway configuration. ETQ’s default gateway timeout might be set too low for your use case. If you have admin access, you can adjust the timeout in the API gateway settings. However, chunked upload is still the better approach because it’s more resilient: if one chunk fails, you retry only that chunk instead of the entire file. It also provides progress tracking, which is useful for user feedback in your integration.
You handle the chunking client-side. Recommended chunk size is 5-10MB for optimal performance. The workflow is: 1) POST to initiate upload and get an upload_id, 2) PUT each chunk with sequence numbers, 3) POST to finalize when all chunks are uploaded. Make sure to include proper Content-Range headers with each chunk so the API knows how to reassemble them. Also, there’s usually a timeout on completing the upload - if you don’t finalize within a certain timeframe, the chunks get cleaned up.
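A minimal sketch of that three-step workflow in Python. The initiate/chunk/complete endpoint paths and the `upload_id` response key are my guesses, so substitute the actual chunked-upload endpoints you found; the 8MB chunk size sits inside the 5-10MB range suggested above:

```python
import os

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB, within the suggested 5-10 MB range

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield (sequence, byte_offset, data) for each client-side chunk."""
    with open(path, "rb") as f:
        seq = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield seq, seq * chunk_size, data
            seq += 1

def content_range(offset, length, total):
    """Build a Content-Range value, e.g. 'bytes 0-8388607/104857600'."""
    return f"bytes {offset}-{offset + length - 1}/{total}"

def upload_chunked(base_url, doc_id, path, token):
    """Initiate, upload chunks, finalize. Endpoint paths are hypothetical."""
    import requests  # local import keeps the helpers above stdlib-only

    headers = {"Authorization": f"Bearer {token}"}
    total = os.path.getsize(path)

    # 1) POST to initiate and get an upload_id (path and key are guesses)
    r = requests.post(
        f"{base_url}/api/v2/document-control/{doc_id}/attachments/uploads",
        headers=headers,
        json={"fileName": os.path.basename(path), "fileSize": total},
    )
    r.raise_for_status()
    upload_id = r.json()["upload_id"]

    # 2) PUT each chunk with its sequence number and Content-Range header
    for seq, offset, data in iter_chunks(path):
        r = requests.put(
            f"{base_url}/api/v2/document-control/{doc_id}"
            f"/attachments/uploads/{upload_id}/chunks/{seq}",
            headers={**headers, "Content-Range": content_range(offset, len(data), total)},
            data=data,
        )
        r.raise_for_status()

    # 3) POST to finalize before the server cleans up unfinished chunks
    r = requests.post(
        f"{base_url}/api/v2/document-control/{doc_id}"
        f"/attachments/uploads/{upload_id}/complete",
        headers=headers,
    )
    r.raise_for_status()
    return r.json()
```

Retry logic per chunk (the resilience benefit mentioned elsewhere in this thread) would wrap just the step-2 PUT, not the whole function.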
Also verify your multipart form data handling. Some API clients don’t stream multipart uploads efficiently; they load the entire file into memory before sending, which can cause problems with large files even before any timeout occurs. Use streaming multipart upload in your HTTP client library. In Python, requests builds multipart bodies in memory, so use a streaming encoder such as requests-toolbelt’s MultipartEncoder. If using Java Apache HttpClient, enable chunked transfer encoding.
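For the Python requests case specifically, the usual streaming approach is requests-toolbelt’s MultipartEncoder, which feeds the body to requests incrementally instead of buffering the whole file. A sketch (the package is real; the endpoint, auth, and field name are placeholders as before):

```python
import os

def upload_streaming(base_url, doc_id, path, token):
    """Stream a multipart upload instead of buffering the file in RAM."""
    # requests-toolbelt (pip install requests-toolbelt) wraps the open file
    # so requests reads and sends it incrementally.
    import requests
    from requests_toolbelt import MultipartEncoder

    encoder = MultipartEncoder(
        fields={"file": (os.path.basename(path), open(path, "rb"), "application/pdf")}
    )
    r = requests.post(
        f"{base_url}/api/v2/document-control/{doc_id}/attachments",
        data=encoder,
        headers={
            "Authorization": f"Bearer {token}",
            # encoder.content_type carries the multipart boundary
            "Content-Type": encoder.content_type,
        },
    )
    r.raise_for_status()
    return r.json()
```

This keeps memory flat regardless of file size, but it is still one long request, so it helps with memory pressure rather than with the gateway timeout itself; chunked upload remains the fix for that.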