Bulk import of training records fails due to JSON schema validation errors

We’re attempting to bulk import approximately 2,500 training records into our controlled master data system using the ETQ import utility. The process consistently fails with JSON schema validation errors around line 847 in our import file.

The error messages reference field naming issues and null value handling, but the documentation doesn’t clearly specify the exact schema requirements. We’ve verified our JSON structure matches the sample templates, but we’re still hitting validation failures.

{
  "employeeId": "EMP-1234",
  "trainingTitle": null,
  "completionDate": "2024-12-15"
}

The schema appears very strict about field casing and empty values. Has anyone successfully completed large training record imports and can share insights on the exact schema requirements?

I’ve encountered this exact issue before. ETQ’s JSON schema validation is extremely strict about field naming conventions. The schema uses camelCase consistently, and any deviation causes an immediate validation failure. Null values are also generally not accepted: either omit the field entirely or provide an empty string, depending on the field type. Check your field mappings carefully.

The null value handling is particularly tricky. For training records specifically, fields like trainingTitle, instructorName, and certificationNumber cannot be null. If you don’t have data for optional fields, completely remove them from the JSON rather than setting them to null. Also, date fields must follow ISO 8601 format strictly (YYYY-MM-DD). Any variation causes schema validation to fail. I recommend preprocessing your data to remove all null values before import.
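The null-stripping step above can be sketched in a few lines of Python. This is a minimal preprocessing pass, assuming your import file parses into a list of flat record dicts; the field names are illustrative, not the official ETQ data dictionary:

```python
import json

def strip_nulls(record):
    """Return a copy of the record with all null-valued fields removed."""
    return {k: v for k, v in record.items() if v is not None}

# Illustrative record mirroring the sample above
records = [
    {"employeeId": "EMP-1234", "trainingTitle": None, "completionDate": "2024-12-15"},
]
cleaned = [strip_nulls(r) for r in records]
# trainingTitle is now omitted entirely instead of being null
print(json.dumps(cleaned, indent=2))
```

Omitting the key (rather than writing an empty string) matters here because, as noted above, the schema treats a missing field differently from an explicitly null one.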

I can provide a comprehensive solution based on your specific error. The JSON schema validation in ETQ 2021 for training records has three critical requirements that address all the issues you’re experiencing:

Field Naming and Casing Sensitivity: The schema enforces strict camelCase for all field names. Your import must use the exact field names defined in the ETQ data dictionary. Common mistakes include trainingTitle vs TrainingTitle and employeeId vs EmployeeID. A correctly cased record looks like this:

{
  "employeeId": "EMP-1234",
  "trainingTitle": "Safety Protocol Training",
  "completionDate": "2024-12-15",
  "instructorName": "John Smith"
}

Null and Empty Value Handling: The schema strictly rejects null values for most fields. For required fields, you must provide valid data. For optional fields, omit them entirely rather than setting them to null. Empty strings are acceptable only for non-required text fields. Preprocess your data to remove all null values:

  • Required fields: Must have valid values
  • Optional string fields: Use “” or omit
  • Optional reference fields: Omit entirely
  • Date fields: Must be valid ISO 8601 or omit

JSON Schema Strictness: ETQ’s import utility validates against a stricter schema than the API. Additional validation rules include:

  1. All dates must be ISO 8601 format (YYYY-MM-DD)
  2. Reference fields (employeeId, departmentId) must exist in the system
  3. Enumerated fields must match predefined picklist values exactly
  4. String length limits are enforced (check field definitions)
  5. Special characters must be properly escaped

Recommended Import Process:

  1. Export a small sample of existing training records to get the exact schema
  2. Create a preprocessing script to:
    • Remove all null values
    • Standardize field naming to camelCase
    • Validate date formats
    • Verify reference IDs exist
  3. Split large imports into batches of 500 records
  4. Test with a small batch first (10-20 records)
  5. Monitor the import log for specific validation errors

Audit Trail Completeness: Note that the import utility creates full audit trails for each record. Ensure your import includes all required audit fields (createdBy, createdDate) if you want to preserve historical data accurately.

This approach should resolve your validation errors and allow successful bulk import of your 2,500 training records.

It depends on the field type and whether it’s required or optional. For required string fields, use empty strings. For optional fields, omit them completely from the JSON object. The schema treats missing fields differently from explicitly null or empty values. Also, be aware that certain controlled master data fields have additional validation rules beyond the basic schema - they may require values from predefined picklists or reference existing records in the system.
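That rule can be expressed as a small normalization pass: required string fields keep an empty string, empty optional fields are dropped. The REQUIRED_STRING_FIELDS set below is illustrative, and the real list should come from the ETQ data dictionary:

```python
# Assumption: which fields are required comes from your ETQ field definitions
REQUIRED_STRING_FIELDS = {"employeeId", "trainingTitle", "completionDate"}

def normalize(record):
    """Apply the required-vs-optional rule for empty/null values."""
    out = {}
    for field, value in record.items():
        if value in (None, ""):
            if field in REQUIRED_STRING_FIELDS:
                out[field] = ""   # required string field: keep as empty string
            # optional field: omit it entirely (no key in the output)
        else:
            out[field] = value
    return out
```

The distinction matters because, as noted above, the schema treats a missing key differently from an explicitly null or empty value.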

Thanks for the suggestions. I exported a sample and compared it to our import file. You’re right about the null handling being the main issue. I’m now working on preprocessing the data to strip out null values and ensure consistent field naming. One question though - for fields that are genuinely empty in our source system, should I use empty strings “” or just omit them entirely?

We had similar problems last quarter. The import utility validates against a very specific schema that’s more restrictive than the API. One thing that helped us was exporting a small sample dataset first to see the exact format ETQ expects. The exported JSON shows you the precise field names, casing, and how ETQ handles optional fields. This saved us hours of trial and error.