Let me provide a comprehensive solution addressing all three aspects of your attachment upload challenge.
Understanding File Size Limits:
The 10MB limit you’re hitting is the default maximum request size configured in the SAP CX API gateway for SCX 2105. This is intentional - large single-request uploads can cause memory issues and timeout problems. While you can increase this limit to 25-50MB via gateway configuration, this isn’t recommended for production environments because:
- Large uploads tie up API worker threads for extended periods
- Network interruptions force complete re-uploads
- No progress tracking capability for end users
- Higher memory consumption on API nodes
The hard system limit for attachments in service cases is actually 100MB per file, but reaching it via single-request uploads is problematic.
Troubleshooting 413 Errors:
Your error indicates request rejection at the API gateway level. To confirm the exact rejection point:
- Check response headers for X-Gateway-Error or similar identifiers
- Review API gateway logs (typically in /var/log/api-gateway/) for detailed rejection reasons
- Verify no intermediate proxies are enforcing stricter limits
- Test with curl to isolate client-side vs server-side issues
Common causes beyond gateway limits: reverse proxy configuration, cloud provider request limits (if using cloud deployment), or corporate network proxies.
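To narrow down where the 413 originates, a small probe script can send payloads just below and just above the suspected limit and inspect the response headers. This sketch uses only the Python standard library; the header names and the header-to-layer classification rules below are illustrative assumptions to adapt to your environment, not confirmed SAP CX behavior:

```python
import urllib.error
import urllib.request

def probe(url: str, size_bytes: int, timeout: int = 30) -> dict:
    """POST a dummy payload of size_bytes and return status + headers.
    A 413 just above a given size confirms a server-side cap there."""
    req = urllib.request.Request(
        url, data=b"\0" * size_bytes, method="POST",
        headers={"Content-Type": "application/octet-stream"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return {"status": resp.status, "headers": dict(resp.headers)}
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses land here; headers still identify the layer
        return {"status": err.code, "headers": dict(err.headers)}

def rejection_layer(headers: dict) -> str:
    """Guess which layer produced the 413 from its response headers.
    The mapping below is a heuristic, not a definitive rule."""
    h = {k.lower(): v for k, v in headers.items()}
    if "x-gateway-error" in h:
        return "api-gateway"
    if "nginx" in h.get("server", "").lower() or h.get("via", ""):
        return "reverse-proxy-or-intermediary"
    return "unknown"
```

Running `probe` against a test endpoint with 9MB and 11MB payloads quickly confirms whether the cutoff sits at the documented 10MB, or lower because of an intermediate proxy.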
Implementing Chunked Upload Logic:
The recommended approach is implementing multipart chunked uploads. Here’s the proper implementation pattern:
Step 1 - Initialize upload session:
POST /sap/c4c/api/v1/service-cases/{caseId}/attachments/init
{"fileName": "logs.zip", "fileSize": 15728640, "chunkSize": 5242880}
Response: {"uploadId": "abc123", "totalChunks": 3}
Step 2 - Upload chunks sequentially:
PUT /sap/c4c/api/v1/attachments/upload/{uploadId}/chunk/{chunkNumber}
Content-Range: bytes 0-5242879/15728640
[Binary chunk data]
Step 3 - Complete upload:
POST /sap/c4c/api/v1/attachments/upload/{uploadId}/complete
{"md5Checksum": "calculated_hash"}
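The chunk-planning math behind Steps 1-3 can be sketched as pure helper functions: splitting the file into Content-Range spans for Step 2 and computing the checksum for the Step 3 body. The helper names are illustrative; the byte math follows the example values above (15728640 bytes at a 5242880-byte chunk size gives 3 chunks):

```python
import hashlib

def plan_chunks(file_size: int, chunk_size: int = 5242880) -> list:
    """Split a file into (chunk_number, start, end, content_range) tuples
    matching the Content-Range form used in Step 2 (end byte inclusive)."""
    chunks, start, number = [], 0, 1
    while start < file_size:
        end = min(start + chunk_size, file_size) - 1
        chunks.append((number, start, end, f"bytes {start}-{end}/{file_size}"))
        start, number = end + 1, number + 1
    return chunks

def completion_body(data: bytes) -> dict:
    """Payload for the Step 3 completion request."""
    return {"md5Checksum": hashlib.md5(data).hexdigest()}
```

Each tuple from `plan_chunks` maps directly onto one PUT in Step 2: the chunk number goes into the URL path, the content_range string into the Content-Range header, and the byte span selects the slice of the file to send.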
Implementation Best Practices:
- Use a 5MB chunk size (5242880 bytes): large enough to keep per-request overhead low, small enough that a failed chunk is cheap to retry
- Implement retry logic with exponential backoff for failed chunks
- Store upload session state (uploadId, completed chunks) for resume capability
- Calculate and verify MD5 checksums to ensure data integrity
- Set a per-chunk upload timeout of 30 seconds - long enough for slower connections, short enough to fail fast and retry
- Implement progress tracking by monitoring completed chunks
- Clean up abandoned upload sessions after 24 hours
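The retry and resume points in the list above can be combined into a small driver. The backoff schedule (1s, 2s, 4s, ... capped at 30s) and the shape of the persisted session state are assumptions; `upload_fn` stands in for whatever performs the actual chunk PUT:

```python
import time

def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at cap."""
    return [min(cap, base * 2 ** attempt) for attempt in range(max_retries)]

def upload_with_retry(upload_fn, chunk_number: int, data: bytes,
                      max_retries: int = 5, sleep=time.sleep):
    """Retry one chunk upload with exponential backoff.
    upload_fn(chunk_number, data) is expected to raise on failure."""
    delays = backoff_delays(max_retries)
    for attempt, delay in enumerate(delays):
        try:
            return upload_fn(chunk_number, data)
        except Exception:
            if attempt == len(delays) - 1:
                raise  # retries exhausted; persist state and resume later
            sleep(delay)

def remaining_chunks(total_chunks: int, completed: set) -> list:
    """Chunks still to send, given the persisted set of completed chunk
    numbers; this is the resume path after an interruption."""
    return [n for n in range(1, total_chunks + 1) if n not in completed]
```

Persisting the uploadId together with the `completed` set (for example as a small JSON file) is enough to survive a process restart: on resume, iterate `remaining_chunks` instead of starting from chunk 1.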
This approach handles files up to 100MB reliably, provides progress feedback, and allows resuming interrupted uploads. The chunked pattern is also more efficient for the API layer since it processes smaller requests that don’t block worker threads.
For immediate resolution while implementing chunked uploads, you can temporarily increase the gateway limit to 25MB by updating the api-gateway.properties configuration, but plan to migrate to chunked uploads within your next sprint.