Data storage SDK blob upload fails with timeout errors on aziotc for files over 50MB

We’re losing critical sensor data due to blob upload failures. Using the Azure Storage SDK with aziotc, large file uploads (50MB+) consistently time out after 90 seconds. Our data ingestion pipeline collects aggregated telemetry files hourly from edge devices, but the chunked upload configuration seems to be broken.

Timeout errors occur during the commit phase:


Error: OperationTimedOut at BlobClient.upload()
Timeout: 90000ms exceeded
File size: 67MB, Chunks uploaded: 512/512

All 512 chunks upload successfully, but the final commit operation times out. This causes data loss as we can’t retry without re-uploading everything. Blob upload timeout settings are at SDK defaults. Is there an SDK version compatibility issue between aziotc and the latest Azure Storage libraries?

There’s definitely an SDK version compatibility issue here. Aziotc was built against Azure Storage SDK 12.8, but if you’re using 12.14+ there are breaking changes in how block blob commits are handled. The newer SDK versions use HTTP/2 multiplexing which can cause timeout issues if your IoT Hub configuration doesn’t support it. Check your SDK versions and consider pinning to Azure Storage SDK 12.12 for compatibility with aziotc.

You’re hitting a perfect storm of configuration issues. Here’s the complete solution addressing all three focus areas:

1. Blob Upload Timeout Settings: The default 90-second timeout is insufficient for commit operations on files over 50MB. Extend the per-attempt timeout in the client’s retry options, and pass the chunking parameters to the upload call itself (the SDK expects them there, not in the constructor):

const blobClient = new BlockBlobClient(url, credential, {
  retryOptions: {
    maxTries: 5,
    retryDelayInMs: 2000,
    maxRetryDelayInMs: 30000,
    tryTimeoutInMs: 300000 // allow each attempt, including the commit, up to 5 minutes
  }
});

await blobClient.uploadFile(filePath, {
  blockSize: 4 * 1024 * 1024,          // 4MB blocks
  concurrency: 4,                      // parallel block uploads
  maxSingleShotSize: 32 * 1024 * 1024  // files under 32MB skip chunking entirely
});

2. Chunked Upload Configuration: Your current 128KB chunk size creates excessive overhead. With 512 blocks for 67MB, the commit operation must process a large block list. Optimal configuration:

  • Block size: 4MB (reduces 67MB file to ~17 blocks)
  • Concurrency: 4 parallel uploads (balance speed vs network stability)
  • Max single-shot: 32MB (files under this size skip chunking entirely)

This reduces commit payload size by 97% and dramatically speeds up the final commit phase.
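The block-count arithmetic behind that claim is easy to verify with a quick helper (blockCount is illustrative, not an SDK function):

```javascript
// Illustrative helper (not part of the SDK): number of blocks a chunked
// upload produces for a given file size and block size.
function blockCount(fileSizeBytes, blockSizeBytes) {
  return Math.ceil(fileSizeBytes / blockSizeBytes);
}

const MB = 1024 * 1024;
// At 4MB blocks, a 67MB file needs only 17 blocks to commit:
console.log(blockCount(67 * MB, 4 * MB)); // 17
```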

3. SDK Version Compatibility: Aziotc has known issues with Azure Storage SDK versions 12.14+. The problem is HTTP/2 multiplexing incompatibility:

"dependencies": {
  "@azure/storage-blob": "12.12.0",
  "@azure/iot-device": "1.18.1"
}

Pin to these versions in your package.json. Versions 12.13+ introduced HTTP/2 by default, which conflicts with aziotc’s connection pooling.

Additional Recommendations:

  • Progress monitoring: Implement upload progress callbacks to detect stalls early
  • Conditional retry: If all blocks uploaded successfully, retry only the commit operation instead of re-uploading everything
  • Network diagnostics: Enable SDK debug logging to identify if timeouts are network or service-side
  • Memory management: Ensure edge devices have at least 50MB free RAM for upload buffers
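For the progress-monitoring bullet, a small stall detector can be fed from the SDK’s progress callback. This is a sketch; makeStallDetector is not an SDK API, only the wiring into onProgress is:

```javascript
// Hypothetical stall detector: flags an upload as stalled when no new bytes
// have been reported within stallMs. The injectable clock makes it testable.
function makeStallDetector(stallMs, now = Date.now) {
  let lastBytes = 0;
  let lastChange = now();
  return {
    // Call with the cumulative loadedBytes from each progress event.
    // Returns true once stallMs has elapsed with no byte-count increase.
    record(loadedBytes) {
      const t = now();
      if (loadedBytes > lastBytes) {
        lastBytes = loadedBytes;
        lastChange = t;
      }
      return t - lastChange >= stallMs;
    }
  };
}
```

Assuming the usual @azure/storage-blob upload options (onProgress reporting loadedBytes, plus abortSignal), you would call detector.record(p.loadedBytes) inside onProgress and abort via an AbortController when it returns true, rather than waiting the full 90 seconds for the SDK timeout.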

Testing approach:

  1. Test with single 67MB file first using new configuration
  2. Monitor commit operation latency (should drop from 90s+ to under 10s)
  3. Gradually increase concurrency if network is stable
  4. Implement telemetry to track upload success rates
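Step 4’s telemetry can start as simple as an in-memory counter before wiring it into a real metrics pipeline (names here are illustrative):

```javascript
// Illustrative success-rate tracker for upload telemetry.
function makeUploadStats() {
  let ok = 0;
  let failed = 0;
  return {
    record(success) { success ? ok++ : failed++; },
    successRate() {
      const total = ok + failed;
      return total === 0 ? 1 : ok / total; // no uploads yet counts as healthy
    }
  };
}
```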

With these changes, your 67MB files should upload reliably in under 60 seconds total, with commit operations completing in 5-10 seconds. The reduced block count is the key to solving your timeout issue.

For unstable connectivity, implement retry logic at the chunk level rather than reducing chunk size. The Azure Storage SDK supports automatic retry with exponential backoff: configure maxTries and retryDelayInMs in the client’s retryOptions. You can safely use 4MB chunks with proper retry handling; it’s more efficient than tiny chunks.
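If the SDK’s built-in retry isn’t flexible enough, per-block retry can be hand-rolled. This sketch (withBackoff is not an SDK function) mirrors the exponential-backoff semantics described above:

```javascript
// Sketch of manual retry with exponential backoff, capped at maxDelayMs.
// Wrap a single block upload in this rather than re-uploading the whole file.
async function withBackoff(fn, { maxRetries = 5, baseDelayMs = 2000, maxDelayMs = 30000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // out of retries: surface the error
      const delayMs = Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

A staged-block upload would then be wrapped as, for example, `await withBackoff(() => blobClient.stageBlock(blockId, chunk, chunk.length))`, so a transient failure retries only that block.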

I’ve dealt with this before. The issue isn’t just the timeout; it’s how aziotc handles the block commit list. With 512 blocks, the commit XML payload itself becomes large and takes time to process. Azure Storage has a 50,000 block limit per blob, but performance degrades significantly above 1000 blocks. Increase your chunk size from 128KB to 4MB to reduce the block count to around 17 blocks for a 67MB file. This will drastically speed up commit operations.
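The commit-only retry mentioned above works because BlockBlobClient splits staging from committing: stageBlock uploads each block, and commitBlockList assembles them into the blob. A sketch of that flow, with the helper name and chunking assumed (the client is passed in, so this is testable without a storage account):

```javascript
// Illustrative helper around the real BlockBlobClient.stageBlock /
// commitBlockList methods. chunks is assumed to be an array of Buffers.
async function uploadInBlocks(blockBlobClient, chunks) {
  const blockIds = [];
  for (let i = 0; i < chunks.length; i++) {
    // Block IDs must be base64 strings of equal length within one blob.
    const blockId = Buffer.from(String(i).padStart(6, "0")).toString("base64");
    await blockBlobClient.stageBlock(blockId, chunks[i], chunks[i].length);
    blockIds.push(blockId);
  }
  // If only this call times out, retry it alone: the staged blocks remain
  // available server-side (uncommitted blocks are retained for about a week).
  return blockBlobClient.commitBlockList(blockIds);
}
```

Keeping the commit as its own retryable step is what avoids the re-upload-everything failure mode described in the question.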