Object Storage API upload fails with 'Entity Too Large' error for files over 5GB in storage module

I’m uploading large database backup files (8-12GB) to OCI Object Storage using the REST API and hitting ‘Entity Too Large’ errors consistently. Files under 5GB upload fine with standard PUT requests, but anything larger fails immediately. This is breaking our automated backup jobs that run nightly.

Current upload approach:


PUT /n/{namespace}/b/{bucket}/o/{object}
Content-Length: 8589934592
<binary data>

The error appears instantly, so it’s not a timeout issue. I know there’s a 5GB limit somewhere, but the documentation mentions multipart uploads for larger files. Do I need to completely change my approach, or is there a header I’m missing to enable large file support?

I went through this last year. The multipart upload API has three steps: CreateMultipartUpload to get an upload ID, UploadPart for each chunk (you can do these in parallel), and CommitMultipartUpload to finalize. Each part must be at least 10MB except the last one. It’s more complex than single PUT but necessary for large files. The OCI SDK handles this automatically if you use that instead of raw REST calls.
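
The chunking side of that three-step flow can be sketched as a simple generator (byte ranges and part numbers only; the actual UploadPart HTTP calls and auth are omitted, so this is an illustration, not the SDK's implementation):

```python
def iter_parts(file_size: int, part_size: int):
    """Yield (part_num, offset, length) tuples covering the file.

    Part numbers start at 1, as the API requires; only the last
    part may be smaller than part_size.
    """
    part_num = 1
    offset = 0
    while offset < file_size:
        length = min(part_size, file_size - offset)
        yield part_num, offset, length
        part_num += 1
        offset += length

# a 25 MiB file at 10 MiB parts splits into 10 + 10 + 5 MiB
parts = list(iter_parts(file_size=25 * 2**20, part_size=10 * 2**20))
```

Each tuple tells you which slice of the file to read and which uploadPartNum to send it under.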

Here’s the complete multipart upload workflow you need:

Multipart upload API usage: files over 5GB require a three-phase approach:

Phase 1 - Initialize:


POST /n/{namespace}/b/{bucket}/u
{
  "object": "backup-20250428.tar.gz"
}

Response contains uploadId - save this for all subsequent calls.

Phase 2 - Upload parts (repeat for each part):


PUT /n/{namespace}/b/{bucket}/u/{object}?uploadId={uploadId}&uploadPartNum={partNum}
Content-Length: {partSize}
<binary chunk>

Phase 3 - Commit:


POST /n/{namespace}/b/{bucket}/u/{object}?uploadId={uploadId}
{
  "partsToCommit": [
    {"partNum": 1, "etag": "abc123..."},
    {"partNum": 2, "etag": "def456..."}
  ]
}

Part size limits: Minimum 10MB per part (except the last part, which can be smaller), maximum 50GB per part. For 8-12GB backup files, 128MB parts are a good balance for network efficiency. That gives you 64-96 parts per file - well within the 10,000-part limit. Larger parts (256MB-512MB) reduce API calls but increase retry overhead if a part fails.
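
The 64-96 part figure follows directly from the arithmetic; a quick sanity check:

```python
import math

GiB = 1024 ** 3
MiB = 1024 ** 2

def part_count(file_size: int, part_size: int = 128 * MiB) -> int:
    """Number of parts needed to cover file_size at the given part size."""
    return math.ceil(file_size / part_size)

print(part_count(8 * GiB))   # 64 parts for an 8GB backup
print(part_count(12 * GiB))  # 96 parts for a 12GB backup
```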

Required headers: Each part upload needs:

  • Content-Length: exact byte size of this part
  • Content-MD5: base64-encoded MD5 hash (optional but recommended for integrity)
  • Authorization: standard OCI signature

The uploadId must be included as a query parameter, not a header. Each part upload returns an ETag header - you MUST capture and store this for the commit phase. Missing or incorrect ETags cause commit failures.
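
Building the commit body from the captured ETags is straightforward; a minimal sketch (the etags_by_part mapping is assumed to be populated from each part's ETag response header):

```python
import json

def build_commit_body(etags_by_part: dict[int, str]) -> str:
    """Build the CommitMultipartUpload JSON body from captured ETags.

    etags_by_part maps part number -> the ETag header returned by
    that part's upload; parts are emitted in ascending order.
    """
    parts = [{"partNum": n, "etag": etags_by_part[n]}
             for n in sorted(etags_by_part)]
    return json.dumps({"partsToCommit": parts})

body = build_commit_body({2: "def456", 1: "abc123"})
```

Sorting by part number means it doesn't matter in what order parallel uploads finished.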

For backup automation, implement these safeguards:

  1. Split files into 128MB chunks before starting upload
  2. Upload parts in parallel (max 10 concurrent) for speed
  3. Retry failed parts up to 3 times before aborting
  4. Store uploadId and part ETags in a tracking file
  5. Set a 7-day cleanup job to abort incomplete uploads (costs accumulate)

If your upload is interrupted, you can resume by querying ListMultipartUploadParts to see which parts completed, then upload only the missing parts before committing. The standard PUT /o/{object} endpoint will never work for files over 5GB - this is a hard limit in the Object Storage service architecture.
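
Once you have the completed part numbers back from ListMultipartUploadParts, working out what's left is a set difference (assuming you planned contiguous parts 1..N):

```python
def missing_parts(total_parts: int, listed_parts: set[int]) -> list[int]:
    """Given part numbers the service reports as uploaded, return
    the parts that still need uploading, in order."""
    return [n for n in range(1, total_parts + 1) if n not in listed_parts]

# 5 parts planned, server reports 1, 2, and 4 already uploaded:
print(missing_parts(5, {1, 2, 4}))  # [3, 5]
```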

Yes, you need multipart upload for anything over 5GB. Standard PUT is limited to 5GB maximum. You’ll need to split your file into parts and upload them separately, then commit the multipart upload.

Make sure you understand the part numbering. Parts are numbered 1 to 10,000, and you must track the part number and ETag for each upload. The commit operation requires a manifest with all part numbers and their ETags in order. If you miss one or get an ETag wrong, the commit fails - but the uploaded parts aren't discarded, so you can correct the manifest and retry the commit rather than re-uploading everything.
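
A cheap pre-flight check catches manifest gaps before you call CommitMultipartUpload (this assumes you numbered your parts contiguously from 1, per the advice above):

```python
def validate_manifest(parts: list[dict]) -> list[dict]:
    """Sort the commit manifest by part number and fail fast if
    any part number in 1..N is missing."""
    ordered = sorted(parts, key=lambda p: p["partNum"])
    nums = [p["partNum"] for p in ordered]
    if nums != list(range(1, len(nums) + 1)):
        missing = sorted(set(range(1, max(nums) + 1)) - set(nums))
        raise ValueError(f"manifest has gaps at parts {missing}")
    return ordered
```

Failing locally here is much cheaper than a rejected commit against the service.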

For backup files, I recommend 128MB parts as a good balance: larger parts mean fewer API calls but longer retry times if a part fails. Include the Content-Length header for each part and the uploadId (as a query parameter) from the initial CreateMultipartUpload call. Each part upload returns an ETag in the response that you must save for the final commit step. Don't forget to handle part upload failures with retries - network issues can cause individual parts to fail even when the overall upload would otherwise succeed.
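
A per-part retry wrapper along those lines might look like this (upload_fn is a placeholder for your real UploadPart call and is assumed to return the part's ETag; the demo stub simulates two network failures):

```python
import time

def upload_with_retry(upload_fn, part_num: int,
                      max_attempts: int = 3, base_delay: float = 1.0):
    """Call upload_fn(part_num) up to max_attempts times with
    exponential backoff, re-raising on the final failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_fn(part_num)
        except IOError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# stub that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky(part_num):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("simulated network failure")
    return "etag-xyz"

print(upload_with_retry(flaky, 1, base_delay=0))  # etag-xyz
```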