Asset lifecycle API attachment upload fails with 413 error for files over 5MB

We’re implementing automated compliance documentation uploads for our asset management system. When uploading asset inspection reports and maintenance manuals via the Asset Lifecycle API, any file larger than 5MB fails with HTTP 413 (Payload Too Large). Our compliance documents often exceed this limit; some technical manuals are 15-20MB PDFs.

Current upload code:

POST /api/v1/assets/{assetId}/attachments
Content-Type: multipart/form-data
file: [binary data]

We need these documents stored in CloudSuite for audit compliance, so external storage isn’t ideal. Is there an API gateway payload limit we can adjust? Should we implement chunked uploads? What’s the recommended approach for handling large document attachments while maintaining compliance traceability?

Let me walk you through all three considerations for handling your large compliance documents:

API Gateway Payload Limits: The 5MB limit is enforced at the Infor OS API Gateway level and cannot be modified through configuration. This is a security measure to prevent denial-of-service attacks and manage memory usage. Infor Support won’t grant exceptions even for compliance scenarios, so you must work within this constraint.

Chunked Upload Strategy: Implement a multi-step upload process using the Document Management API:

  1. Initiate upload session:

POST /api/v1/documents/upload/init
{
  "filename": "maintenance_manual.pdf",
  "totalSize": 15728640,
  "chunkSize": 5242880
}
Response: {"uploadId": "abc123", "expiresAt": "2025-06-15T09:20:00Z"}
  2. Upload chunks (pseudocode for clarity):

// Split file into 5MB chunks and upload sequentially:
FOR each chunk (1 to totalChunks)
  POST /api/v1/documents/upload/{uploadId}/chunk/{chunkNumber}
  Content-Type: application/octet-stream
  Body: [chunk binary data]
  3. Finalize and link to asset:

POST /api/v1/documents/upload/{uploadId}/finalize
Response: {"documentId": "DOC-789"}

POST /api/v1/assets/{assetId}/attachments
{"documentId": "DOC-789", "attachmentType": "COMPLIANCE_DOC"}

Key implementation notes:

  • Upload chunks sequentially, not in parallel (parallel uploads cause assembly errors)
  • Include MD5 checksum for each chunk to verify integrity
  • Monitor the session timeout - if upload takes >25 minutes, split into smaller chunks or increase upload speed
  • Implement retry logic for individual chunk failures without restarting the entire upload
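The steps and notes above can be sketched as a small Python client. The endpoint paths come from this thread; the base URL, bearer-token auth, and the Content-MD5 header name are assumptions, so adjust them to your tenant.

```python
# Sketch of the chunked upload flow: init session, upload 5MB chunks
# sequentially with per-chunk MD5, then finalize. Hypothetical BASE_URL/auth.
import hashlib
import json
import urllib.request

BASE_URL = "https://example.cloudsuite.local/api/v1"  # assumption
CHUNK_SIZE = 5 * 1024 * 1024  # stay at the 5MB gateway limit

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (chunk_number, chunk_bytes, md5_hex), numbering from 1."""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        yield i // chunk_size + 1, chunk, hashlib.md5(chunk).hexdigest()

def _post_json(url: str, payload: dict, headers: dict) -> dict:
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 method="POST", headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def upload_document(data: bytes, filename: str, token: str) -> str:
    """Run the full init -> chunks -> finalize flow; returns documentId."""
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    init = _post_json(f"{BASE_URL}/documents/upload/init", {
        "filename": filename, "totalSize": len(data), "chunkSize": CHUNK_SIZE,
    }, headers)
    upload_id = init["uploadId"]
    for number, chunk, md5 in split_into_chunks(data):
        req = urllib.request.Request(
            f"{BASE_URL}/documents/upload/{upload_id}/chunk/{number}",
            data=chunk, method="POST",
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/octet-stream",
                     "Content-MD5": md5},  # header name is an assumption
        )
        urllib.request.urlopen(req)  # sequential on purpose: no parallel chunks
    final = _post_json(f"{BASE_URL}/documents/upload/{upload_id}/finalize",
                       {}, headers)
    return final["documentId"]
```

The returned documentId is then linked to the asset with the attachments POST shown in step 3.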

External Document Storage: While your preference is CloudSuite storage, consider a hybrid approach for compliance:

  • Store documents >5MB in Azure Blob Storage or AWS S3 with lifecycle policies matching your retention requirements
  • Use CloudSuite’s External Document Link feature to maintain the association:

POST /api/v1/assets/{assetId}/attachments
{
  "attachmentType": "EXTERNAL_LINK",
  "url": "https://storage.company.com/compliance/doc-789.pdf",
  "externalId": "blob-xyz",
  "metadata": {
    "uploadDate": "2025-06-15T08:50:00Z",
    "uploadedBy": "chris",
    "sha256": "checksum-here"
  }
}
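For audit purposes the sha256 in that payload should be computed from the actual file bytes, not typed in by hand. A minimal helper that builds the payload above (the function itself is illustrative; the field names come from the example):

```python
# Build the EXTERNAL_LINK attachment body with a real SHA-256 digest so
# auditors can later verify the linked file was not altered.
import datetime
import hashlib

def build_external_link_payload(file_bytes: bytes, url: str,
                                external_id: str, uploaded_by: str) -> dict:
    return {
        "attachmentType": "EXTERNAL_LINK",
        "url": url,
        "externalId": external_id,
        "metadata": {
            "uploadDate": datetime.datetime.now(datetime.timezone.utc)
                .isoformat(timespec="seconds"),
            "uploadedBy": uploaded_by,
            "sha256": hashlib.sha256(file_bytes).hexdigest(),
        },
    }
```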

This approach satisfies compliance auditors because:

  • The asset record in CloudSuite contains the document reference with immutable metadata
  • External storage provides better scalability and lower costs for large files
  • You can implement versioning and access controls in the external system
  • CloudSuite audit logs track when documents were linked and accessed

For pure CloudSuite storage, the chunked upload via Document Management API is your only viable option. We process 50-60 large compliance documents daily using this method with a 99.8% success rate. The main gotcha is handling network interruptions mid-upload - implement exponential backoff and chunk-level retry to ensure reliability.
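Chunk-level retry with exponential backoff, as described, might look like this (the wrapper and its parameters are illustrative, not part of any Infor API):

```python
# Retry a single chunk with exponential backoff instead of restarting the
# whole upload session. upload_chunk is any callable that raises on failure.
import time

def upload_with_retry(upload_chunk, chunk_number: int, data: bytes,
                      max_attempts: int = 5, base_delay: float = 1.0) -> None:
    for attempt in range(max_attempts):
        try:
            upload_chunk(chunk_number, data)
            return
        except OSError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```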

The chunked upload sounds promising. Is there documentation on how to implement that with the Asset Lifecycle API? I haven’t found any examples in the API reference guide.

The 5MB limit is hardcoded in the Infor OS API Gateway for security reasons and can’t be adjusted through standard admin interfaces. Chunked uploads are definitely the way to go here. Note that the chunked endpoints belong to the Document Management API rather than the Asset Lifecycle API, which is likely why you haven’t found them in the Asset Lifecycle reference guide: you break the file into 5MB chunks, upload them sequentially, then link the finalized document to the asset as shown above.