Service case API attachment upload fails for files over 5MB with REQUEST_ENTITY_TOO_LARGE

We’re experiencing consistent failures when uploading attachments larger than 5MB to service cases through our REST API integration. Our support team needs to attach diagnostic files and screenshots that often exceed this size.

The API returns REQUEST_ENTITY_TOO_LARGE errors for anything over 5MB. We’re currently using the ContentVersion object with a direct POST request:

ContentVersion cv = new ContentVersion();
cv.Title = fileName;
cv.PathOnClient = fileName;
cv.VersionData = Blob.valueOf(fileContent);
insert cv;

I’ve read about multipart upload capabilities but haven’t found clear documentation on implementing this for service case attachments. Our current approach works fine for smaller files but blocks our workflow for larger diagnostic files. Has anyone successfully implemented large file uploads using the ContentVersion API with proper chunking or multipart strategies?

We’re using Java with the Force.com REST API client library. I see references to chunked uploads but the examples I’ve found are mostly for bulk data operations, not file attachments. Do I need to modify the ContentVersion approach or switch to a different API endpoint entirely?

Actually, for service case attachments specifically, you want to leverage the ContentVersion object with a proper multipart implementation. Here’s the complete approach that addresses your attachment size limits, ContentVersion handling, and multipart upload requirements:

First, implement file chunking in your Java client:

int chunkSize = 4 * 1024 * 1024; // 4MB chunks, safely under the 5MB per-request limit
byte[] fileBytes = Files.readAllBytes(filePath); // java.nio.file.Files / java.nio.file.Path
int chunks = (int) Math.ceil(fileBytes.length / (double) chunkSize);
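
Fleshed out, the chunking step might look like this. It's a minimal sketch; `ChunkPlanner` and its method names are illustrative helpers, not part of any Salesforce library:

```java
import java.util.Arrays;

public class ChunkPlanner {
    static final int CHUNK_SIZE = 4 * 1024 * 1024; // 4MB, under the 5MB per-request limit

    // Number of chunks needed for a file of the given length.
    static int chunkCount(int fileLength, int chunkSize) {
        return (int) Math.ceil(fileLength / (double) chunkSize);
    }

    // Extract the i-th chunk (the last one may be shorter than chunkSize).
    static byte[] chunk(byte[] fileBytes, int i, int chunkSize) {
        int start = i * chunkSize;
        int end = Math.min(start + chunkSize, fileBytes.length);
        return Arrays.copyOfRange(fileBytes, start, end);
    }

    public static void main(String[] args) {
        byte[] file = new byte[10 * 1024 * 1024]; // stand-in for a 10MB diagnostic file
        int n = chunkCount(file.length, CHUNK_SIZE);
        System.out.println(n + " chunks");                      // prints "3 chunks"
        System.out.println(chunk(file, n - 1, CHUNK_SIZE).length); // 2097152 (the 2MB remainder)
    }
}
```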

For each chunk, create a multipart request with proper headers:

String boundary = "----Boundary" + System.currentTimeMillis();
HttpPost post = new HttpPost(instanceUrl + "/services/data/v59.0/sobjects/ContentVersion"); // org.apache.http.client.methods.HttpPost
post.setHeader("Content-Type", "multipart/form-data; boundary=" + boundary);
post.setHeader("Authorization", "Bearer " + accessToken);
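
Building the matching multipart body by hand looks roughly like this (plain-JDK sketch, no Apache HttpClient needed). The `entity_content` / `VersionData` part names follow Salesforce's documented blob-insert examples, but double-check them against your API version:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class MultipartBodyBuilder {
    // Builds a multipart/form-data body with a JSON metadata part and a binary
    // file part: the shape Salesforce's ContentVersion blob-insert examples use.
    static byte[] build(String boundary, String metadataJson, String fileName, byte[] fileData) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String head = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"entity_content\"\r\n"
                + "Content-Type: application/json\r\n\r\n"
                + metadataJson + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"VersionData\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n";
        out.writeBytes(head.getBytes(StandardCharsets.UTF_8));
        out.writeBytes(fileData); // raw bytes, not base64, in this request style
        out.writeBytes(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }

    public static void main(String[] args) {
        String boundary = "----Boundary" + System.currentTimeMillis();
        String json = "{\"Title\":\"diag.log\",\"PathOnClient\":\"diag.log\"}";
        byte[] body = build(boundary, json, "diag.log", "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(body.length);
    }
}
```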

The critical part is the ContentVersion creation strategy. Instead of trying to upload everything at once, use a two-phase approach:

Phase 1 - Create the ContentVersion record with minimal data:

ContentVersion cv = new ContentVersion();
cv.Title = fileName;
cv.PathOnClient = fileName;
cv.FirstPublishLocationId = caseId; // Links to service case
cv.ContentLocation = 'S'; // Salesforce storage
// Don't set VersionData yet
insert cv;

Phase 2 - Upload file data in chunks using the REST API PATCH method:

for (int i = 0; i < chunks; i++) {
    int start = i * chunkSize;
    int end = Math.min(start + chunkSize, fileBytes.length);
    byte[] chunk = Arrays.copyOfRange(fileBytes, start, end);

    String rangeHeader = String.format("bytes %d-%d/%d", start, end-1, fileBytes.length);
    // PATCH /services/data/v59.0/sobjects/ContentVersion/{cvId}
    // Header: Content-Range: {rangeHeader}
    // Body: base64 encoded chunk
}
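
Wired up with `java.net.http` (plain JDK, no extra dependencies), that loop body might look like the following. The request construction itself is standard; the PATCH-with-Content-Range endpoint behavior is the scheme described above, which I haven't verified against official docs, so treat it as an assumption and confirm against your org:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ChunkRequestBuilder {
    // Formats an HTTP Content-Range value for a chunk covering [start, end).
    static String contentRange(long start, long end, long total) {
        return String.format("bytes %d-%d/%d", start, end - 1, total);
    }

    // Builds (but does not send) the PATCH request for one chunk.
    static HttpRequest chunkRequest(String instanceUrl, String accessToken,
                                    String contentVersionId, byte[] chunk,
                                    long start, long end, long total) {
        String body = Base64.getEncoder().encodeToString(chunk); // base64 chunk body, per the scheme above
        return HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v59.0/sobjects/ContentVersion/" + contentVersionId))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Range", contentRange(start, end, total))
                .method("PATCH", HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest r = chunkRequest("https://example.my.salesforce.com", "TOKEN",
                "068xx0000000001", new byte[]{1, 2, 3}, 0, 3, 10);
        System.out.println(r.headers().firstValue("Content-Range").orElse("?")); // bytes 0-2/10
    }
}
```

Sending is then `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())` per chunk, sequentially, as noted below.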

Key points for multipart upload success:

  1. Attachment Size Limits: The 5MB limit applies per request, not per file. Chunking at 4MB keeps each request under the threshold, with a buffer for headers and encoding overhead.

  2. ContentVersion Object Management: Create the ContentVersion record first without VersionData, then use its ID for subsequent chunk uploads. The FirstPublishLocationId links it directly to your service case, so it appears as an attachment immediately (though not fully uploaded until all chunks complete).

  3. Multipart Upload Implementation: Use Content-Range headers for each chunk. Salesforce automatically reassembles chunks based on the range information. Critical: send chunks sequentially, not in parallel, to avoid race conditions. Include a checksum in your final chunk to verify integrity.
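
For the checksum in point 3, the client-side half is straightforward; how you verify it server-side is up to your own implementation (nothing below is a Salesforce API):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class FileChecksum {
    // SHA-256 of the full file, computed before chunking, so the client can
    // compare it against the reassembled file after the final chunk completes.
    static String sha256Hex(byte[] fileBytes) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(fileBytes);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available in the JDK
        }
    }

    public static void main(String[] args) {
        System.out.println(sha256Hex("diagnostic data".getBytes()));
    }
}
```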

Error handling considerations:

  • Implement retry logic for individual chunk failures (don’t restart entire upload)
  • Store chunk upload state in case of connection interruption
  • Verify the ContentVersion.ContentSize matches your original file size after completion
  • Use ContentDocument.LatestPublishedVersionId to confirm the file is fully available
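
A minimal retry wrapper for the first bullet could look like this; the attempt count and backoff schedule are made-up defaults, and `uploadChunk` stands in for whatever actually sends the chunk:

```java
import java.util.function.IntPredicate;

public class ChunkRetry {
    // Retries a single chunk upload rather than restarting the whole file.
    // uploadChunk returns true on success for the given chunk index.
    static boolean uploadWithRetry(IntPredicate uploadChunk, int chunkIndex, int maxAttempts) {
        long backoffMs = 10; // illustrative base delay; tune for your network
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (uploadChunk.test(chunkIndex)) return true;
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(backoffMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
                backoffMs *= 2; // exponential backoff between retries
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulate a chunk that fails twice, then succeeds on the third try.
        boolean ok = uploadWithRetry(i -> ++calls[0] >= 3, 0, 5);
        System.out.println(ok + " after " + calls[0] + " attempts"); // true after 3 attempts
    }
}
```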

For service cases specifically, you can verify the attachment is properly linked by querying ContentDocumentLink:

SELECT ContentDocumentId, LinkedEntityId
FROM ContentDocumentLink
WHERE LinkedEntityId = :caseId
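
From the Java client, that check is just a query request against the standard `/query/?q=` endpoint (built here with the JDK HTTP client; the case ID is a placeholder):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class CaseAttachmentQuery {
    // Builds the REST query request for the ContentDocumentLink check above.
    static HttpRequest queryRequest(String instanceUrl, String accessToken, String caseId) {
        String soql = "SELECT ContentDocumentId, LinkedEntityId "
                + "FROM ContentDocumentLink WHERE LinkedEntityId = '" + caseId + "'";
        String url = instanceUrl + "/services/data/v59.0/query/?q="
                + URLEncoder.encode(soql, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest r = queryRequest("https://example.my.salesforce.com", "TOKEN", "500xx0000000001");
        System.out.println(r.uri());
    }
}
```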

This approach handles files up to 2GB (Salesforce’s absolute limit for ContentVersion). For your diagnostic files and screenshots, this should cover all use cases. The multipart strategy also provides better progress tracking - you can update users on upload percentage as each chunk completes.

Have you considered using the Connect REST API’s multipart upload endpoint instead? It’s specifically designed for large file handling and manages the chunking automatically. The endpoint is /services/data/v59.0/connect/files/users/{userId} and accepts multipart/form-data. You still create a ContentVersion in the background, but the API handles size limits more gracefully. Worth testing if your use case fits.
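
For anyone trying that route, the endpoint resolves like this ("me" is the Connect REST API's alias for the current user; confirm the alias works for your resource before relying on it):

```java
import java.net.URI;

public class ConnectFilesEndpoint {
    // Resolves the Connect REST API files upload endpoint mentioned above.
    static URI uploadUri(String instanceUrl, String userId) {
        return URI.create(instanceUrl + "/services/data/v59.0/connect/files/users/" + userId);
    }

    public static void main(String[] args) {
        System.out.println(uploadUri("https://example.my.salesforce.com", "me"));
    }
}
```

The request itself is a POST with multipart/form-data, so the same body-building approach from the earlier snippets applies.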

For files over 5MB, you need to implement chunked upload using the multipart/form-data approach. Break your file into smaller chunks (I recommend 4MB chunks to stay safely under the limit) and upload them sequentially. Each chunk needs proper Content-Range headers. The ContentVersion object will reassemble them automatically once all chunks are received. Are you using a specific programming language for your integration? The implementation varies slightly between Java, Python, and Node.js clients.