Vendor invoice attachment upload API fails for large files in accounts payable module

We’re automating vendor invoice processing in the D365 10.0.38 accounts payable module. Our integration uploads invoice PDF attachments via the document attachment API, but uploads fail for files larger than 5MB with a “Payload too large” error.

The invoice processing workflow requires attaching the original PDF invoice to the vendor invoice record for audit purposes. Many of our vendor invoices are scanned documents that exceed the 5MB API limit. When we attempt to upload these files, we get an HTTP 413 (Payload Too Large) error.

Example upload call:


POST /api/data/v9.0/documentattachments
Content-Type: application/json
{
  "fileName": "invoice_12345.pdf",
  "fileContent": "{base64_encoded_pdf}",
  "relatedEntity": "vendorinvoice",
  "relatedEntityId": "{invoice_guid}"
}

Smaller files (under 5MB) upload successfully, but anything larger fails immediately. We also occasionally see timeout errors on files around 4-5MB during peak hours. I need to understand the API file size limits and whether there’s support for chunked upload or an alternative approach for handling large invoice attachments. This is causing delays in our invoice processing workflow.

The 5MB limit is a hard constraint on the standard document attachment API. However, D365 does support chunked uploads for large files through the Azure Blob Storage integration. Instead of uploading directly via the API, you need to use the file upload protocol that breaks large files into smaller chunks. This involves requesting an upload URL, uploading chunks sequentially, and then finalizing the attachment. Check the documentation for the InitiateFileUpload and CompleteFileUpload actions.

Adding to the previous reply - the chunked upload process is more complex than the simple POST you’re using. You’ll need to implement a multi-step workflow: first call InitiateFileUpload to get a temporary upload URL and session token, then upload the file in chunks (typically 4MB per chunk), and finally call CompleteFileUpload to commit the attachment. The timeout errors you’re seeing on 4-5MB files are likely due to network latency during peak hours, so chunked uploads will help with that too since each chunk is smaller and faster to transmit.

Let me provide a comprehensive solution for handling large file uploads in your vendor invoice processing workflow. You need to address three key areas:

1. API File Size Limits - Understanding and Working Within Constraints: The D365 document attachment API has a 5MB limit for single-request uploads, but this is only one upload method. For files exceeding this limit, use the chunked upload protocol:


// Pseudocode - Chunked upload workflow:
1. Initiate upload session:
   POST /api/data/v9.0/InitiateFileUpload
   Body: {"fileName": "invoice_12345.pdf", "fileSize": 8388608}
   Response: {"uploadUrl": "{blob_url}", "sessionToken": "{token}", "chunkSize": 4194304}

2. Upload file in chunks (loop until complete):
   PUT {uploadUrl}?sessionToken={token}&chunkIndex=0
   Body: {first_4MB_of_file}

   PUT {uploadUrl}?sessionToken={token}&chunkIndex=1
   Body: {next_4MB_of_file}

3. Finalize attachment:
   POST /api/data/v9.0/CompleteFileUpload
   Body: {"sessionToken": "{token}", "relatedEntity": "vendorinvoice", "relatedEntityId": "{guid}"}

This protocol supports files up to 100MB, which should cover your invoice attachment needs.
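The three-step flow above can be sketched in Python with `requests`. Note this is a sketch built from the action names and payload shapes given in this thread (InitiateFileUpload, CompleteFileUpload, the sessionToken/chunkIndex query parameters); they are assumptions to verify against your environment, not a confirmed D365 contract, and the org URL is a placeholder.

```python
# Sketch of the chunked upload workflow described above. Action names and
# payload shapes are taken from this thread, not verified documentation --
# adjust to what your tenant actually exposes.
import requests

BASE = "https://example.crm.dynamics.com/api/data/v9.0"  # hypothetical org URL


def split_into_chunks(data: bytes, chunk_size: int):
    """Yield successive chunk_size slices of the file bytes."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]


def upload_large_attachment(session: requests.Session, file_name: str,
                            data: bytes, invoice_guid: str) -> None:
    # 1. Initiate the upload session; server returns URL, token, chunk size.
    init = session.post(f"{BASE}/InitiateFileUpload",
                        json={"fileName": file_name, "fileSize": len(data)})
    init.raise_for_status()
    info = init.json()  # expected keys: uploadUrl, sessionToken, chunkSize

    # 2. Upload the file in sequential chunks.
    for index, chunk in enumerate(split_into_chunks(data, info["chunkSize"])):
        resp = session.put(info["uploadUrl"],
                           params={"sessionToken": info["sessionToken"],
                                   "chunkIndex": index},
                           data=chunk)
        resp.raise_for_status()

    # 3. Finalize the attachment against the invoice record.
    done = session.post(f"{BASE}/CompleteFileUpload",
                        json={"sessionToken": info["sessionToken"],
                              "relatedEntity": "vendorinvoice",
                              "relatedEntityId": invoice_guid})
    done.raise_for_status()
```

The chunk iterator is the piece worth getting right: the last slice is simply shorter than chunkSize, and the server is assumed to reassemble by chunkIndex order.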

2. Chunked Upload Support - Implementation Best Practices:

A. Dynamic Upload Strategy: Implement logic that chooses the upload method based on file size:


// Pseudocode - Smart upload routing:
1. Check file size
2. If size <= 4MB: Use direct POST to documentattachments (simple, faster)
3. If size > 4MB: Use chunked upload protocol (complex, handles large files)
4. If size > 50MB: Add compression step before upload
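As a minimal sketch, the routing logic above reduces to one size check; the thresholds and strategy names simply mirror the pseudocode and are tuning knobs, not fixed limits.

```python
# Size-based upload routing, per the strategy above. Thresholds are the
# ones suggested in this answer and should be tuned for your workload.
MB = 1024 * 1024


def choose_upload_strategy(file_size: int) -> str:
    if file_size <= 4 * MB:
        return "direct"             # single POST to documentattachments
    if file_size <= 50 * MB:
        return "chunked"            # InitiateFileUpload / CompleteFileUpload flow
    return "compress-then-chunked"  # shrink first, then chunk
```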

B. Chunk Management:

  • Use 4MB chunks (optimal balance between number of requests and individual request size)
  • Implement retry logic per chunk (don’t restart entire upload if one chunk fails)
  • Track upload progress in your application database for resume capability
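The resume-capability point can be sketched as a small progress ledger; here the "database" is just an in-memory dict keyed by session, standing in for whatever persistence your application uses.

```python
# Sketch of per-chunk progress tracking for resume support. The dict stands
# in for your application database; persist it keyed by upload session so a
# restarted job skips chunks that already succeeded.
def record_chunk_done(progress: dict, session_id: str, index: int) -> None:
    progress.setdefault(session_id, set()).add(index)


def pending_chunks(total_chunks: int, completed: set) -> list:
    """Return chunk indices still to upload, in order."""
    return [i for i in range(total_chunks) if i not in completed]
```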

C. Session Management:


// Pseudocode - Handle session expiration:
1. Start upload with 15-minute timer
2. If timer reaches 12 minutes and upload incomplete:
   - Pause current upload
   - Call CompleteFileUpload with partial flag
   - Immediately initiate new session
   - Resume from last successful chunk
3. For parallel processing, limit concurrent uploads to avoid overwhelming the API
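In code, the session-expiry guard is a single comparison, and the concurrency cap is a semaphore. The 15-minute window, the 12-minute rotation point, and the cap of 4 concurrent uploads are the values assumed in this answer, not documented platform limits.

```python
# Session-expiry guard and concurrency cap for the workflow above.
# All three constants are assumptions from this thread; measure your own
# session lifetime and throughput before relying on them.
import threading

SESSION_LIMIT_S = 15 * 60   # assumed upload session lifetime
ROTATE_AT_S = 12 * 60       # pause and rotate well before expiry


def should_rotate_session(elapsed_s: float) -> bool:
    """True once the upload should pause, commit the partial session,
    and resume remaining chunks under a fresh token."""
    return elapsed_s >= ROTATE_AT_S


# Cap concurrent uploads so parallel invoice batches don't overwhelm the API.
upload_slots = threading.BoundedSemaphore(value=4)  # assumed cap
```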

3. Timeout Configuration - Preventing Upload Failures:

A. HTTP Client Configuration: Adjust your HTTP client timeouts for file uploads:


// Pseudocode - Timeout settings:
ConnectionTimeout: 30 seconds (establish connection)
ReadTimeout: 120 seconds (per chunk upload)
SessionTimeout: 15 minutes (manage at application level)
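With `requests`, the first two values map directly onto the library's `(connect, read)` timeout tuple; the session window has no client-side knob, so the caller tracks it (as in the rotation logic above).

```python
# The timeout settings above expressed for the requests library, which
# accepts a (connect_timeout, read_timeout) tuple per call.
CONNECT_TIMEOUT_S = 30       # establish connection
READ_TIMEOUT_S = 120         # generous enough for one 4MB chunk
SESSION_LIMIT_S = 15 * 60    # no client knob; enforce at application level

REQUEST_TIMEOUT = (CONNECT_TIMEOUT_S, READ_TIMEOUT_S)
# usage: requests.put(upload_url, data=chunk, timeout=REQUEST_TIMEOUT)
```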

B. Network Optimization:

  • Implement upload during off-peak hours for large batches
  • Use connection pooling to reduce overhead for multiple file uploads
  • Enable HTTP/2 if supported by your client library for better multiplexing

C. Error Handling and Recovery:


// Pseudocode - Robust error handling:
1. On chunk upload failure:
   - Retry the same chunk up to 3 times with exponential backoff (1s, 3s, 9s)
   - If the chunk continues to fail, save progress and flag for manual review
   - Don't fail the entire invoice processing run due to an attachment upload issue

2. On session timeout:
   - Save current chunk index and uploaded chunks list
   - Initiate new session with resume parameter
   - Continue from last successful chunk

3. On network timeout (4-5MB files during peak hours):
   - Reduce chunk size to 2MB for retry attempt
   - Queue upload for retry during off-peak hours
   - Process invoice without attachment and link attachment asynchronously
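The per-chunk retry with the 1s/3s/9s schedule can be sketched as a small wrapper; the injectable `sleep` makes the backoff testable without waiting.

```python
# Per-chunk retry implementing the 1s, 3s, 9s backoff from item 1 above.
import time


def backoff_delays(attempts: int = 3, base: float = 1.0, factor: float = 3.0):
    """Exponential backoff schedule: [1.0, 3.0, 9.0] for the defaults."""
    return [base * factor ** i for i in range(attempts)]


def upload_chunk_with_retry(upload_fn, chunk, attempts: int = 3,
                            sleep=time.sleep) -> bool:
    """Try one chunk up to `attempts` times. Returns True on success,
    False when the chunk should be saved and flagged for manual review."""
    for delay in backoff_delays(attempts):
        try:
            upload_fn(chunk)
            return True
        except Exception:
            sleep(delay)  # back off, then retry just this chunk
    return False
```

Only the failing chunk is retried; the rest of the upload session stays intact, which is the whole point of chunking.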

Additional Optimizations:

PDF Compression: Implement pre-upload compression:


// Pseudocode - Compression workflow:
1. Analyze PDF structure (scanned vs. native)
2. For scanned PDFs:
   - Reduce image DPI to 200 (sufficient for reading, reduces size)
   - Apply JPEG compression to embedded images
   - Remove duplicate embedded resources
3. Store original file hash for integrity verification
4. Add metadata indicating file was compressed

Typical compression results: 8MB scanned invoice → 3.5MB compressed (no chunking needed).
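One way to realize the downsampling step is to shell out to Ghostscript (assumed installed) and keep a hash of the original for the integrity check in step 3. The gs flags shown are standard pdfwrite/distiller parameters, but verify output legibility on your own scanned invoices before adopting this.

```python
# Compression step sketched via Ghostscript: downsample embedded color
# images to 200 DPI, and hash the original for integrity verification.
import hashlib


def gs_compress_command(src: str, dst: str, dpi: int = 200) -> list:
    """Build a Ghostscript invocation; run it with subprocess.run()."""
    return [
        "gs", "-q", "-dBATCH", "-dNOPAUSE",
        "-sDEVICE=pdfwrite",
        "-dDownsampleColorImages=true",
        f"-dColorImageResolution={dpi}",
        "-o", dst, src,
    ]


def file_sha256(data: bytes) -> str:
    """Hash of the original file, stored alongside the compressed upload."""
    return hashlib.sha256(data).hexdigest()
```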

Async Processing Pattern: Decouple invoice processing from attachment upload:


// Pseudocode - Async attachment workflow:
1. Create vendor invoice record in D365 (without attachment)
2. Queue attachment upload as background job
3. Update invoice record with attachment reference when upload completes
4. Send notification if upload fails after retries

This prevents attachment upload issues from blocking invoice approval workflows.
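A minimal sketch of that queue-based decoupling, assuming `upload_attachment` and `notify_failure` are your own integration callbacks (they are placeholders here, not D365 APIs):

```python
# Async attachment pattern: invoice creation returns immediately and a
# background worker drains the attachment queue. The two callbacks are
# placeholders for your integration code.
import queue

attachment_jobs = queue.Queue()


def enqueue_attachment(invoice_id: str, pdf_path: str) -> None:
    """Called right after the invoice record is created (step 2 above)."""
    attachment_jobs.put((invoice_id, pdf_path))


def drain_jobs(upload_attachment, notify_failure) -> int:
    """Background worker body; returns the number of jobs handled."""
    handled = 0
    while True:
        try:
            invoice_id, pdf_path = attachment_jobs.get_nowait()
        except queue.Empty:
            return handled
        try:
            upload_attachment(invoice_id, pdf_path)   # step 3: link attachment
        except Exception:
            notify_failure(invoice_id, pdf_path)      # step 4: alert on failure
        handled += 1
```

In production the worker would loop on a blocking `get()` in its own thread or scheduled batch job; `drain_jobs` keeps the sketch synchronous and testable.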

Monitoring and Alerting: Implement metrics tracking:

  • Upload success rate by file size range
  • Average upload time per MB
  • Session timeout frequency
  • Chunk retry rate

Set alerts for:

  • Upload failure rate >5%
  • Average upload time >30s per MB (indicates network issues)
  • Session timeout rate >10% (indicates need for smaller chunks or better parallelization)
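The alert thresholds above condense into one evaluation function over the tracked metrics:

```python
# Evaluate the alert thresholds listed above against collected metrics.
def upload_alerts(failures: int, total: int, total_seconds: float,
                  total_mb: float, session_timeouts: int) -> list:
    alerts = []
    if total and failures / total > 0.05:
        alerts.append("failure rate above 5%")
    if total_mb and total_seconds / total_mb > 30:
        alerts.append("slower than 30s per MB")
    if total and session_timeouts / total > 0.10:
        alerts.append("session timeout rate above 10%")
    return alerts
```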

With this implementation, you should successfully handle invoice attachments up to 100MB with minimal failures, and the timeout issues during peak hours will be resolved through chunked uploads and retry logic.