Attempting to bulk import 350 new IoT devices using the registry import tool. I exported our device list from our asset management system as CSV and converted it to JSON for the bulk import API, but the import consistently fails with an 'invalid JSON format' error. The batch processor rejects the payload structure. Here's a sample of the error:
Bulk import failed: Invalid JSON format
Line 47: Unexpected token in device metadata
Expected schema version: 2.1
I followed the CSV-to-JSON conversion process from the documentation, but the bulk import schema validation keeps rejecting the payload. Has anyone successfully imported large device batches and can share the correct JSON structure?
Yes, field names are case-sensitive! The API expects 'manufacturer' (lowercase), not 'Manufacturer' or 'MANUFACTURER'. Also verify that your field values don't have leading/trailing whitespace; that can cause validation failures. I recommend using the bulk import validator endpoint first to test your JSON before submitting the actual import. It's at /iot/api/v2/devices/bulk/validate and will give you detailed error messages for each invalid device entry.
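To illustrate, here is a minimal sketch of calling that validator with Python's standard library. Only the /iot/api/v2/devices/bulk/validate path comes from this thread; the base URL and the top-level "devices" wrapper key are assumptions, so check your deployment's API docs for the exact envelope.

```python
import json
import urllib.request

def build_validate_request(devices, base_url="https://registry.example.com"):
    """Build a POST to the bulk-import validator endpoint.

    base_url is a placeholder; wrapping the list in a top-level
    "devices" key is an assumption beyond what the thread confirms.
    """
    body = json.dumps({"devices": devices}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/iot/api/v2/devices/bulk/validate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_validate_request([{"deviceId": "d-001", "manufacturer": "acme"}])
# urllib.request.urlopen(req) would submit it; the JSON response should
# contain the per-device error details mentioned above.
```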
The 'unexpected token in device metadata' error usually means your JSON has special characters or formatting issues that aren't properly escaped. Common culprits are device names with quotes, commas in description fields, or newline characters in metadata. Make sure you're escaping all special characters during CSV-to-JSON conversion.
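For what it's worth, if you build the payload with a real JSON serializer rather than string concatenation, the escaping is handled for you. A quick Python illustration:

```python
import json

# A device name containing all three common culprits: quotes,
# a comma, and a newline.
raw_name = 'Temp sensor "A-1", lab\nfloor 2'

encoded = json.dumps({"deviceName": raw_name})
# json.dumps escapes the quotes as \" and the newline as \n, so the
# result is valid JSON; hand-assembled strings often are not.
```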
I validated the JSON structure and fixed the escaping issues. Now I'm getting a different error about missing required fields. The error message says 'manufacturer' is required, but I included it for all devices. Could this be a case sensitivity issue?
Also check that your JSON adheres to the schema version specified in the error. Schema 2.1 in oiot-pm has stricter validation than earlier versions. Each device object must have the required fields: deviceId, deviceType, model, manufacturer, and metadata. Optional fields like location, tags, and custom attributes must follow the exact structure defined in the API documentation. Use a JSON validator to check your payload before submitting.
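A quick pre-flight check against that field list can catch omissions before you ever hit the API. A small sketch (the required-field names are taken from the post above):

```python
# Required fields for a device entry under schema 2.1, per the thread.
REQUIRED_FIELDS = {"deviceId", "deviceType", "model", "manufacturer", "metadata"}

def missing_fields(device):
    # Return the required fields absent from one device entry.
    return REQUIRED_FIELDS - device.keys()

device = {"deviceId": "d-001", "deviceType": "sensor", "metadata": {}}
# missing_fields(device) reports {"model", "manufacturer"}
```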
For large imports, I always split the batch into smaller chunks of 50-100 devices. This makes debugging easier if something fails, and it reduces the risk of timeout errors during processing. The bulk import API has a maximum batch size of 500 devices, but I’ve found that batches over 100 often have reliability issues. You can parallelize multiple smaller batches to speed up the overall import process.
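Splitting the list that way is a one-liner in most languages. A Python sketch of the 50-per-batch approach:

```python
def chunk(devices, size=50):
    # Split the full device list into batches of at most `size` entries.
    return [devices[i:i + size] for i in range(0, len(devices), size)]

batches = chunk(list(range(350)))
# 350 devices -> 7 batches of 50 each
```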
Successfully imported all 350 devices! The solution required careful attention to CSV-to-JSON conversion, bulk import schema validation, and batch processing strategy.
CSV-to-JSON Conversion:
The main issue was improper escaping during conversion. I rebuilt the conversion script to properly handle special characters:
# Python conversion script snippet
import csv
import json

with open('devices.csv', newline='') as f:
    devices = [{
        "deviceId": row['device_id'].strip(),
        "deviceType": row['type'].strip(),
        "manufacturer": row['mfr'].strip(),
    } for row in csv.DictReader(f)]

# json.dumps escapes quotes and newlines in field values automatically
payload = json.dumps(devices)
Key fixes: strip whitespace, escape quotes in string fields, and represent missing values properly (use null, not empty strings).
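The null-handling fix can be as simple as mapping empty or whitespace-only CSV cells to Python None, which json.dumps serializes as null:

```python
import json

def clean(value):
    # Strip whitespace; turn empty cells into None so they serialize
    # as JSON null rather than "".
    v = value.strip()
    return v if v else None

print(json.dumps({"model": clean("   ")}))   # {"model": null}
print(json.dumps({"model": clean(" X100 ")}))   # {"model": "X100"}
```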
Bulk Import Schema Validation:
I used the validation endpoint before submitting actual imports. This caught several issues:
- Case sensitivity: Changed 'Manufacturer' to 'manufacturer' throughout
- Missing required fields: Added default values for 'model' where the CSV had blanks
- Invalid enum values: Fixed deviceType values to match the allowed types in schema 2.1
- Metadata structure: Nested custom attributes under the 'metadata' object, not at the root level
The validator provided line-by-line error details, which made fixing the payload much faster than trial-and-error imports.
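For reference, this is the overall entry shape that passed validation: the five required fields at the root and custom attributes nested under 'metadata'. The attribute names and values here are illustrative only, not part of the schema:

```python
# One device entry in the shape schema 2.1 accepts: required fields
# at the root, custom attributes nested under "metadata" (the
# attribute names "firmware" and "rack" are just examples).
device = {
    "deviceId": "d-001",
    "deviceType": "sensor",
    "model": "X100",
    "manufacturer": "acme",
    "metadata": {
        "firmware": "1.0.2",   # example custom attribute
        "rack": "B7",          # example custom attribute
    },
}
```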
Batch Processing Strategy:
Following Tom's advice, I split the 350 devices into 7 batches of 50 devices each. I processed the batches sequentially with 2-minute delays between submissions to avoid API rate limiting. This approach also made it easy to identify and fix issues: batch 3 had malformed location data, which I corrected before proceeding with batches 4-7.
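The sequential submission loop is straightforward; here is a sketch with the delay made configurable. The `submit` callable is a stand-in for whatever HTTP client you use to post one batch:

```python
import time

def submit_batches(batches, submit, delay_seconds=120):
    # Submit batches one at a time, pausing between submissions to
    # stay under the API's rate limits. `submit` is any callable that
    # posts a single batch and returns its result.
    results = []
    for i, batch in enumerate(batches):
        results.append(submit(batch))
        if i < len(batches) - 1:   # no pause after the last batch
            time.sleep(delay_seconds)
    return results
```

With 7 batches, the 6 two-minute pauses alone account for 12 of the 45 minutes of total import time quoted below.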
Total import time: 45 minutes including validation and batch delays. All devices now appear in the registry with correct metadata and are ready for provisioning. For future imports, I’ve documented the conversion script and validation process to streamline onboarding of new device batches.