Let me provide you with a comprehensive solution that addresses all three key areas:
1. Attachment Size Configuration (Multiple Layers)
Your 10MB failure indicates a configuration mismatch across the stack. Here’s what to check and fix:
Agile PLM Layer:
- Admin > Settings > System Settings > File Manager
- Set ‘Maximum Attachment Size’ to 52428800 (50MB in bytes)
- Restart Agile application server after change
WebLogic Layer:
- Login to WebLogic Console
- Navigate to: Environment > Servers > [AgileServer] > Protocols > HTTP
- Set ‘Max Post Size’ to 52428800 bytes (50MB)
- Set ‘Post Timeout’ to 600 seconds (10 minutes for large uploads)
- Navigate to: Deployments > agile > Configuration > Web App
- Add or modify the multipart-config element (a Servlet 3.0 element) in the application's web.xml:
  <multipart-config>
    <max-file-size>52428800</max-file-size>
    <max-request-size>52428800</max-request-size>
  </multipart-config>
Database Layer (Oracle):
Verify that the LOB tablespace can grow to hold the migrated files (it is also worth checking datafile AUTOEXTEND settings in DBA_DATA_FILES):
SELECT tablespace_name, max_size
FROM dba_tablespaces
WHERE tablespace_name = 'AGILE_DATA';
2. SDK-Based Chunked Upload Implementation
The standard SDK uploadFile() method is problematic for large files because it loads everything into memory. Implement chunked upload instead:
// Chunked upload for large attachments (sketch)
public void uploadLargeAttachment(IQualityRecord record, File file) {
    int chunkSize = 5242880; // 5MB chunks
    IAttachment att = record.addAttachment();
    att.setChunkedUpload(true);
    // Process the file in chunks with progress tracking
}
Key implementation points:
- Break files into 5MB chunks maximum
- Use the IAttachment.setChunkedUpload(true) method
- Implement retry logic for failed chunks
- Track upload progress for audit purposes
- Validate file integrity after complete upload using checksum
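As a sketch of the integrity check in the last point, a SHA-256 digest can be computed with the JDK's built-in MessageDigest (no Agile SDK dependency; the stream is read in small buffers so large attachments never load fully into memory):

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChecksumUtil {
    /** Computes a lowercase hex SHA-256 digest, reading the stream in 8KB buffers. */
    public static String sha256(InputStream in) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

Run it once on the source file before upload and once on the downloaded attachment afterward; matching digests prove byte-for-byte integrity for the audit log.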
Detailed Migration Script Pattern:
public class QualityAttachmentMigrator {
    private static final int CHUNK_SIZE = 5 * 1024 * 1024; // 5MB

    public void migrateAttachment(IQualityRecord record,
                                  File sourceFile,
                                  Map<String, String> metadata) {
        try {
            if (sourceFile.length() > 10 * 1024 * 1024) {
                // Use chunked upload for files > 10MB
                uploadInChunks(record, sourceFile, metadata);
            } else {
                // Standard upload for smaller files
                standardUpload(record, sourceFile, metadata);
            }
            // Verify upload and log for compliance
            logComplianceRecord(record, sourceFile, metadata);
        } catch (APIException e) {
            handleUploadFailure(record, sourceFile, e);
        }
    }
}
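The uploadInChunks helper above is left abstract. A minimal sketch of the chunking-with-retry loop follows, using a hypothetical ChunkSink callback in place of the actual SDK upload call (the SDK-side method is an assumption, not a confirmed Agile API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ChunkedUploader {
    private static final int CHUNK_SIZE = 5 * 1024 * 1024; // 5MB
    private static final int MAX_RETRIES = 3;

    /** Hypothetical destination for one chunk; in practice this wraps the SDK upload call. */
    public interface ChunkSink {
        void write(int index, byte[] chunk) throws IOException;
    }

    /** Reads the stream in CHUNK_SIZE pieces, retrying each failed chunk up to MAX_RETRIES times. */
    public static int upload(InputStream in, ChunkSink sink) throws IOException {
        byte[] buf = new byte[CHUNK_SIZE];
        int index = 0;
        int read;
        while ((read = readFully(in, buf)) > 0) {
            sendWithRetry(sink, index, Arrays.copyOf(buf, read));
            index++;
        }
        return index; // number of chunks sent
    }

    private static void sendWithRetry(ChunkSink sink, int index, byte[] chunk) throws IOException {
        IOException last = null;
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                sink.write(index, chunk);
                return;
            } catch (IOException e) {
                last = e; // log and retry this chunk
            }
        }
        throw last; // all retries exhausted
    }

    /** Fills buf as far as possible; returns bytes read (0 at end of stream). */
    private static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n == -1) break;
            total += n;
        }
        return total;
    }
}
```

The per-chunk retry is what makes this pattern resilient to the transient network failures that kill single-shot uploads of 10-25MB files.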
3. Compliance Documentation Requirements
For quality records, you need to maintain complete audit trail:
Document Metadata Preservation:
- Original creation date and author
- Approval status and approval chain
- Document version history
- File checksum (MD5 or SHA-256) for integrity verification
Migration Audit Log:
Create a detailed log for each attachment:
Record ID    | Original File    | Size   | Upload Date | Checksum | Status  | Validator
NCR-2024-156 | Audit_Report.pdf | 15.2MB | 2025-05-22  | a3f5...  | SUCCESS | jennifer_q
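One way to emit those rows from the migration script (a sketch; the field order mirrors the columns above, and the one-decimal MB formatting is an assumption):

```java
import java.time.LocalDate;
import java.util.Locale;

public class AuditLog {
    /** Formats one pipe-delimited audit row matching the layout above. */
    public static String row(String recordId, String fileName, long sizeBytes,
                             LocalDate uploadDate, String checksum,
                             String status, String validator) {
        String size = String.format(Locale.ROOT, "%.1fMB", sizeBytes / (1024.0 * 1024.0));
        return String.join(" | ", recordId, fileName, size,
                uploadDate.toString(), checksum, status, validator);
    }
}
```

Appending one such line per attachment, flushed immediately after each upload, gives auditors a replayable record even if the migration aborts midway.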
Implementation Steps:

1. Pre-Migration Setup (do this first):
- Update WebLogic Max Post Size to 50MB
- Verify the Agile file size setting is 50MB
- Test with a single large file before bulk migration
- Restart all services after configuration changes

2. Update Migration Script:
- Implement chunked upload for files > 10MB
- Add file integrity verification (checksum comparison)
- Include metadata preservation logic
- Add detailed error logging with retry capability

3. Compliance Validation:
- Generate a migration report with all attachment details
- Include before/after checksums to prove file integrity
- Document any failed uploads with root cause
- Create an attestation that all required documents migrated successfully

4. Post-Migration Verification:
- Randomly sample 10% of large attachments
- Verify each file opens correctly and matches the original
- Confirm metadata (dates, authors) was preserved
- Run a compliance report showing 100% document migration
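The 10% sample in the verification step can be drawn reproducibly (a sketch; seeding the Random makes the sample auditable, and Math.ceil guarantees at least one record is always checked):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SamplePicker {
    /** Returns ceil(10%) of the record ids, chosen without replacement using a fixed seed. */
    public static List<String> tenPercent(List<String> ids, long seed) {
        List<String> copy = new ArrayList<>(ids);
        Collections.shuffle(copy, new Random(seed));
        int n = (int) Math.ceil(copy.size() * 0.10);
        return copy.subList(0, n);
    }
}
```

Recording the seed alongside the sample in the compliance report lets a validator regenerate exactly the same sample later.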
Specific Fix for Your 200 Records:
Since you have 200 quality records with 10-25MB attachments:
- First, apply the WebLogic configuration change (highest priority)
- Restart WebLogic managed server
- Re-run your migration script for the 200 failed records
- If you still see failures, implement the chunked upload approach
- For immediate compliance needs, you can temporarily upload large files manually through the Agile UI while you fix the SDK script
The chunked upload approach is more robust long-term and handles network interruptions better. It’s worth implementing even after fixing the configuration, especially for future migrations or integrations that need to handle large compliance documents.