Bulk incident import fails with duplicate key violation during migration

We’re migrating historical incidents from our legacy system into Trackwise 9.0 using the Bulk Import Utility. The import consistently fails around record 450-500 with a duplicate key violation error. Our source data has alphanumeric incident IDs (like INC-2024-00123) but Trackwise expects numeric IDs. We’ve been preprocessing to strip prefixes and convert to integers, but the error persists.
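
For reference, our preprocessing boils down to roughly this (simplified Python sketch, not the actual script; we read the IDs from the legacy export):

```python
def normalize_incident_id(legacy_id: str) -> int:
    """Strip the 'INC-YYYY-' prefix and parse the remaining digits as an integer."""
    numeric_part = legacy_id.rsplit("-", 1)[-1]
    return int(numeric_part)

# e.g. normalize_incident_id("INC-2024-00123") -> 123
```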

The error message shows:

ERROR: duplicate key value violates unique constraint
KEY (incident_number)=(456) already exists
Batch terminated at record 487

We’re importing in batches of 500 records. The strange part is that record 456 doesn’t exist in our target system when we query directly. We need to understand if this is an ID normalization issue in our preprocessing, hidden duplicates in the source data, or something with the batch processing itself. The import logging is minimal and doesn’t show which source record maps to the conflicting ID. Has anyone dealt with similar duplicate key issues during bulk incident imports?

I’ve seen this before. The Bulk Import Utility generates internal sequence numbers that can conflict with your explicit ID assignments. Check if you’re setting both the incident_number field AND letting Trackwise auto-generate IDs. Also, your batch size of 500 might be too large for error isolation - try reducing to 100 records to pinpoint the exact problematic record.
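
If re-running smaller batches by hand gets tedious, you can bisect instead. Rough Python sketch - `import_batch` here is a hypothetical wrapper around whatever call drives the Bulk Import Utility, assumed to raise on a duplicate key error and to roll back (or dry-run) so repeated attempts are safe:

```python
def find_failing_record(records, import_batch):
    """Bisect a batch to isolate the first record that triggers the error.

    `import_batch` is a hypothetical callable wrapping the import; it must
    raise an exception on failure and leave no partial state behind
    (rollback or dry-run mode), so that halves can be retried safely.
    """
    lo, hi = 0, len(records)  # invariant: the failing record is in records[lo:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        try:
            import_batch(records[lo:mid])
        except Exception:
            hi = mid  # failure is in the first half
        else:
            lo = mid  # first half is clean; look in the second half
    return records[lo]
```

With batches of 500 this finds the culprit in about nine trial imports instead of one-by-one retries.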

The alphanumeric to numeric conversion is definitely a red flag. When you strip ‘INC-2024-’ from ‘INC-2024-00123’, you get ‘00123’, which converts to integer 123 - the leading zeros are gone. If your source has both ‘INC-2024-00456’ and ‘INC-2024-456’, they’ll both normalize to 456 - that’s your duplicate. Run a deduplication query on your preprocessed data before import. Also agree with Mike on batch size reduction for better error tracking.

Good catch on the leading zeros issue! I ran a duplicate check on our preprocessed numeric IDs and found 23 collisions across 2,847 records. That’s exactly what’s happening. Now I need to figure out the best approach - should I maintain a mapping table between legacy IDs and new Trackwise IDs, or is there a way to preserve the original alphanumeric format in a custom field while using sequential numbers for the primary key?
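
A quick way to surface collisions like that before importing (Python sketch, assuming the legacy IDs are already loaded into a list):

```python
from collections import defaultdict

def find_collisions(legacy_ids):
    """Group legacy IDs by their normalized numeric value and report clashes."""
    groups = defaultdict(list)
    for legacy_id in legacy_ids:
        # int() silently drops leading zeros, so '00456' and '456' both map to 456
        numeric = int(legacy_id.rsplit("-", 1)[-1])
        groups[numeric].append(legacy_id)
    return {n: ids for n, ids in groups.items() if len(ids) > 1}
```

Any non-empty result means the preprocessed data will violate the unique constraint on import.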

Use both approaches. Let Trackwise auto-generate the incident_number as the primary key, and store your legacy ID (‘INC-2024-00456’) in a custom text field like ‘Legacy_Incident_ID’. This field should be indexed and marked as unique if you want to prevent future duplicates. Create a mapping table externally for reference during the transition period. This gives you clean data in Trackwise while maintaining traceability to the source system.
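
For the external mapping table, something as simple as a CSV works during the transition. Minimal Python sketch - the file name is illustrative, and `assignments` stands in for whatever source gives you the (legacy ID, auto-generated Trackwise ID) pairs after import:

```python
import csv

def write_id_mapping(assignments, path="legacy_id_mapping.csv"):
    """Persist legacy -> Trackwise ID pairs for reference during the transition.

    `assignments` is an iterable of (legacy_id, trackwise_id) tuples, e.g.
    collected from the import utility's output after auto-generation.
    """
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Legacy_Incident_ID", "incident_number"])
        writer.writerows(assignments)
```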