Let me provide a comprehensive solution that addresses the multipart upload configuration, network bandwidth optimization, and CLI options for your large file backup scenario.
Complete Solution for Large File Uploads:
1. Multipart Upload Configuration:
The key issue is that default CLI settings aren’t optimized for your 50GB+ files. Configure explicit multipart parameters:
# --part-size is in MiB; --parallel-upload-count controls concurrent part uploads
oci os object put \
  --bucket-name backup-bucket \
  --file /data/backup_20241208.tar.gz \
  --part-size 128 \
  --parallel-upload-count 5
Part Size Guidelines:
- 50-100GB files: use 128MB part size
- 100GB+ files: use 256MB part size
- Object Storage allows at most 10,000 parts per object, so choose a part size such that file size ÷ part size stays at or below 10,000
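The part-count check can be scripted before an upload. This is a minimal sketch; FILE_BYTES and PART_MIB are example values, so substitute your actual backup size and planned part size:

```shell
#!/bin/bash
# Sketch: check a planned part size against the 10,000-part-per-object limit.
# FILE_BYTES and PART_MIB below are example values.
FILE_BYTES=$((60 * 1024 * 1024 * 1024))   # e.g. a 60 GiB backup
PART_MIB=128
PART_BYTES=$((PART_MIB * 1024 * 1024))
PARTS=$(( (FILE_BYTES + PART_BYTES - 1) / PART_BYTES ))   # ceiling division
echo "parts: $PARTS"                                      # prints "parts: 480"
if [ "$PARTS" -gt 10000 ]; then
  echo "increase --part-size: $PARTS parts exceeds the 10,000-part limit" >&2
fi
```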
2. Network Bandwidth Optimization:
First, verify your network configuration is optimized:
- Service Gateway: Ensure your VCN has a service gateway for Object Storage
- Add route rule: Destination = Object Storage service CIDR, Target = Service Gateway
- This keeps traffic on OCI backbone (no internet egress)
Check current bandwidth utilization:
# Monitor instance network throughput during the upload
oci monitoring metric-data summarize-metrics-data \
  --compartment-id <compartment-ocid> \
  --namespace oci_computeagent \
  --query-text 'NetworksBytesOut[1m].mean()'
VM.Standard2.4 provides up to 4.1 Gbps network bandwidth. If consistently hitting limits, consider:
- Upgrading to VM.Standard3 or E4 shapes (higher bandwidth)
- Scheduling uploads during off-peak hours
- Splitting very large files if possible
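To decide whether bandwidth is actually the bottleneck, it helps to compare upload duration against the shape's rated bandwidth. A back-of-envelope sketch (50 GB is an example size; real throughput will be lower than the rated 4.1 Gbps, so treat the result as a floor):

```shell
#!/bin/bash
# Sketch: ideal transfer time at the shape's rated 4.1 Gbps.
SIZE_GB=50                       # backup size in GB (example value)
BITS_X10=$((SIZE_GB * 8 * 10))   # size in gigabits, scaled x10 for integer math
RATE_X10=41                      # 4.1 Gbps, scaled x10
echo "ideal transfer time: ~$((BITS_X10 / RATE_X10))s"   # prints "~97s" for 50 GB
```

If uploads take many times longer than this floor, look at the service gateway routing and instance-level limits before tuning the CLI further.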
3. Enhanced CLI Options and Configuration:
Create an OCI CLI configuration profile for backups:
# ~/.oci/config
[BACKUP_PROFILE]
user=ocid1.user...
fingerprint=xx:xx:...
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy...
region=us-phoenix-1
Note: ~/.oci/config only accepts authentication-related keys, and the CLI warns about unrecognized entries, so timeout and retry settings do not belong here. Command-level defaults (for example a default part size) can instead be set in the CLI's rc file, ~/.oci/oci_cli_rc.
Use the profile and have the CLI verify the uploaded object's checksum:
oci os object put \
  --profile BACKUP_PROFILE \
  --bucket-name backup-bucket \
  --file /data/backup_20241208.tar.gz \
  --part-size 128 \
  --parallel-upload-count 5 \
  --verify-checksum
4. Production-Grade Backup Script:
For nightly automated backups, implement proper error handling:
#!/bin/bash
# Backup upload with retry logic
set -u

BACKUP_FILE="/data/backup_$(date +%Y%m%d).tar.gz"
MAX_ATTEMPTS=3
ATTEMPT=1

while [ "$ATTEMPT" -le "$MAX_ATTEMPTS" ]; do
  if oci os object put \
      --bucket-name backup-bucket \
      --file "$BACKUP_FILE" \
      --part-size 128 \
      --parallel-upload-count 5; then
    echo "Upload successful"
    exit 0
  fi
  echo "Attempt $ATTEMPT failed; retrying in 5 minutes" >&2
  ATTEMPT=$((ATTEMPT + 1))
  sleep 300
done

echo "Upload failed after $MAX_ATTEMPTS attempts" >&2
exit 1
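For the nightly schedule itself, the retry script can be driven by cron. The path, user, and log file below are hypothetical placeholders:

```shell
# /etc/cron.d/backup-upload (hypothetical path, user, and log file):
# run the upload script at 02:00 daily and append output to a log
0 2 * * * opc /usr/local/bin/backup_upload.sh >> /var/log/backup_upload.log 2>&1
```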
5. Monitoring and Validation:
- Enable Object Storage logging to track upload operations
- Set up alarms for failed uploads using OCI Monitoring
- Verify upload integrity with MD5 checksums
- Monitor Object Storage metrics for your bucket
Additional Recommendations:
- Compression: If not already compressed, use compression before upload to reduce transfer size
- Lifecycle Policies: Configure Object Storage lifecycle policies for automated backup retention
- Archive Storage: Consider Archive Storage tier for long-term backups (lower cost)
- Parallel Uploads: If you have multiple backup files, upload them in parallel to maximize throughput
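The parallel-uploads point can be sketched with xargs. The bucket name and file paths below are examples; DRY_RUN defaults to printing each command rather than running it, so clear it only when you are ready to upload for real:

```shell
#!/bin/bash
# Sketch: upload several backup files two at a time with xargs.
# DRY_RUN=echo (the default) prints the commands; set DRY_RUN= to run them.
DRY_RUN=${DRY_RUN:-echo}
printf '%s\0' /data/backup_db.tar.gz /data/backup_app.tar.gz |
  xargs -0 -P 2 -I {} $DRY_RUN oci os object put \
    --bucket-name backup-bucket \
    --file {} \
    --part-size 128 \
    --parallel-upload-count 5
```

Keep the total concurrency in mind: two files at 5 parallel parts each means 10 concurrent streams competing for the instance's bandwidth.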
For Your Immediate Issue:
- Verify service gateway is configured in your VCN
- Use the multipart command with 128MB part size and 5 parallel uploads
- Test with one backup file first before automating
- Monitor network metrics during the upload
- If still failing, check OCI service limits for Object Storage in your tenancy
This comprehensive approach addresses all three focus areas: proper multipart upload configuration for large files, network bandwidth optimization through service gateway, and optimal CLI options with retry logic. Your nightly backup uploads should now complete reliably.