Account sync to external ERP fails with Salesforce Bulk API upsert errors

We’re running nightly account synchronization from our ERP system to Salesforce using the Bulk API 2.0, and we’re encountering persistent upsert failures. The job processes about 15,000 account records, but roughly 2,000 fail each night with ‘UNABLE_TO_LOCK_ROW’ and ‘INVALID_FIELD’ errors.

Our integration uses an external ID field (ERP_Account_ID__c) to match records, but the error logs show mismatches where some records aren’t being found even though we know they exist in Salesforce. The Bulk API upsert requirements seem straightforward, but we’re clearly missing something in our external ID field mapping.

Here’s a sample from our failed records CSV:


Id,ERP_Account_ID__c,Error
,ERP-00456,INVALID_FIELD: No such column 'ERP_Account_ID__c'
,ERP-00789,UNABLE_TO_LOCK_ROW: unable to obtain exclusive access

We’re using Data Loader with the Bulk API enabled. The external ID field exists and is marked as External ID in Salesforce, but something in our mapping or the bulk operation itself is failing. Has anyone successfully implemented large-scale account upserts using Bulk API with external IDs?

The ‘No such column’ error is telling: your CSV header might not exactly match the API name of your external ID field. Bulk API 2.0 is case-sensitive and requires exact API names. Double-check that your field is actually named ‘ERP_Account_ID__c’ with double underscores and the __c suffix. Also, verify the field is set as External ID AND Unique in the field definition. The UNABLE_TO_LOCK_ROW errors suggest you have parallel processes trying to update the same records simultaneously.

Let me provide a comprehensive solution addressing all three focus areas: Bulk API upsert requirements, external ID field mapping, and Data Loader troubleshooting.

Bulk API Upsert Requirements:

First, verify your external ID field meets all Bulk API requirements:

  1. Field must be marked as ‘External ID’ in field definition
  2. Field should be marked as ‘Unique’ to prevent duplicates
  3. Field must be indexed (automatic for External ID fields)
  4. Field API name must match exactly in your CSV header (case-sensitive)

For your ERP_Account_ID__c field, go to Setup > Object Manager > Account > Fields & Relationships and verify these settings are enabled.

External ID Field Mapping Configuration:

Your CSV structure needs to follow this exact format for upserts:


ERP_Account_ID__c,Name,Industry,Phone
ERP-00456,Acme Corp,Manufacturing,555-0100
ERP-00789,Global Tech,Technology,555-0200

Critical mapping rules:

  • Do NOT include the Salesforce Id column in upsert CSVs (only needed for updates)
  • The external ID column doesn’t need to be first, but it must be explicitly mapped to the external ID field
  • All field names must match API names exactly (case-sensitive)
  • Save the file as UTF-8 without a BOM (Byte Order Mark); a BOM silently corrupts the first column header, which then fails to match the field name
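These header checks are easy to automate before each nightly run. A minimal Python sketch, assuming the column set from the sample upsert CSV above:

```python
import csv

# Expected column headers: exact API names. This list is an assumption
# based on the sample upsert CSV shown above; adjust to your extract.
EXPECTED = ["ERP_Account_ID__c", "Name", "Industry", "Phone"]

def validate_headers(path):
    """Return a list of problems found in a CSV header row."""
    problems = []
    # utf-8-sig transparently strips a leading BOM if one is present
    with open(path, newline="", encoding="utf-8-sig") as f:
        headers = next(csv.reader(f))
    for h in headers:
        if h != h.strip():
            problems.append(f"whitespace around column name: {h!r}")
    stripped = [h.strip() for h in headers]
    if len(set(stripped)) != len(stripped):
        problems.append("duplicate column names")
    for name in EXPECTED:
        if name not in stripped:  # exact, case-sensitive comparison
            problems.append(f"missing or misnamed column: {name}")
    return problems
```

An empty result means the header row is safe to hand to Data Loader; anything else is a likely INVALID_FIELD waiting to happen.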

Data Loader Configuration Fix:

Here’s the step-by-step Data Loader configuration for reliable bulk upserts:

  1. Operation Setup:

    • Select ‘Upsert’ operation (not Insert or Update)
    • When prompted, explicitly select ‘ERP_Account_ID__c’ as your External ID field
    • Don’t rely on auto-detection
  2. Bulk API Settings (Settings > Settings):


Bulk API enabled: true
Batch size: 10000
API version: 59.0 or higher
Compression: true
  3. Field Mapping:
    • Map each CSV column to the exact Salesforce API field name
    • Verify ERP_Account_ID__c is mapped to itself (not auto-mapped to Id)
    • Remove any unmapped fields from the CSV

Resolving UNABLE_TO_LOCK_ROW Errors:

Your concurrent processes are causing record locking conflicts. Implement these solutions:

  1. Serialize Batch Processing: Split your 15K records into smaller batches of 2,500 records each with 5-minute delays:

// Pseudocode for batch processing:
1. Split CSV into 6 batches of 2,500 records each
2. Process batch 1, wait for completion
3. Sleep 300 seconds (5 minutes)
4. Process batch 2, repeat until complete
  2. Schedule Separation:

    • ERP Sync: 3:00 AM - 4:30 AM
    • Enrichment Process: 6:00 AM - 7:00 AM
    • No overlapping windows
  3. Disable Competing Automation: Temporarily disable workflows/process builders during bulk sync:

    • Create a custom setting ‘Bulk_Sync_In_Progress__c’
    • Set to true before sync starts
    • Add entry criteria to workflows: Bulk_Sync_In_Progress__c = false
    • Set back to false after sync completes
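The serialization loop from step 1 can be sketched in Python. Here `submit_batch` is a hypothetical callable standing in for whatever actually submits a chunk (Data Loader CLI, a Bulk API client) and blocks until the job completes; the 2,500-record and 300-second figures match the plan above and are tunable:

```python
import csv
import time

def split_batches(rows, size=2500):
    """Split a list of record dicts into fixed-size batches."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def run_serialized(csv_path, submit_batch, size=2500, delay_seconds=300):
    """Process one batch at a time, pausing between batches so that no
    two bulk jobs contend for the same account rows."""
    # utf-8-sig strips a BOM if the export tool added one
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        rows = list(csv.DictReader(f))
    for n, batch in enumerate(split_batches(rows, size), start=1):
        submit_batch(batch)        # hypothetical: submit and wait for completion
        if n * size < len(rows):   # no need to sleep after the last batch
            time.sleep(delay_seconds)
```

For the 15,000-record nightly run this yields six batches of 2,500, with a five-minute quiet period between each, which is exactly the pattern described in the pseudocode above.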

Resolving INVALID_FIELD Errors:

The ‘No such column’ error indicates a field name mismatch, or that your integration user lacks field-level security access to the field; Salesforce reports fields the running user can’t see as if they don’t exist. Confirm the integration user’s profile has read/edit access on ERP_Account_ID__c, then fix these issues:

  1. Verify Field API Names: Run this query to get exact field names:

SELECT Id, ERP_Account_ID__c FROM Account LIMIT 1

If the query fails, your field name is incorrect.

  2. CSV Header Validation:

    • Remove any spaces around field names
    • Ensure no special characters in headers
    • Verify no duplicate column names
    • Check for hidden characters (copy to plain text editor)
  3. Data Loader Mapping File: Export your working mapping file (.sdl) and verify the external ID mapping:


ERP_Account_ID__c=ERP_Account_ID__c
Name=Name
Industry=Industry
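A quick way to catch mapping drift is to diff the .sdl file against your CSV header. A sketch, assuming the simple `CSVColumn=SalesforceField` property-style format shown above (with `#` comment lines as in Java properties files):

```python
def parse_sdl(text):
    """Parse Data Loader .sdl mapping lines (CSVColumn=SalesforceField)."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        csv_col, _, sf_field = line.partition("=")
        mapping[csv_col.strip()] = sf_field.strip()
    return mapping

def unmapped_columns(csv_headers, sdl_text):
    """Return CSV columns with no mapping entry; these would be dropped
    or rejected by the bulk job instead of loaded."""
    mapping = parse_sdl(sdl_text)
    return [h for h in csv_headers if h not in mapping]
```

Running `unmapped_columns` against each night’s extract makes it obvious when the ERP team adds or renames a column without the mapping file being updated.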

Testing and Validation:

  1. Test with Small Batch:

    • Extract 100 records that previously failed
    • Run upsert with verbose logging enabled
    • Examine success/error files in detail
  2. Monitor Bulk Jobs:

    • Setup > Bulk Data Load Jobs
    • Check job status and detailed error messages
    • Look for patterns in failed records
  3. Enable Debug Logs:

    • Setup > Debug Logs
    • Create trace flag for your integration user
    • Set Workflow and Validation to FINEST
    • Review logs for lock contention details

Long-term Optimization:

  1. Implement Bulk API 2.0 directly instead of Data Loader for better error handling and performance
  2. Add retry logic for UNABLE_TO_LOCK_ROW errors with exponential backoff
  3. Monitor API usage to ensure you’re not hitting org limits
  4. Set up error notifications to alert when failure rate exceeds 5%
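The retry logic from point 2 can be as small as the sketch below. The assumption (not from the original post) is that your integration surfaces lock failures as exceptions whose message contains the Salesforce error code; `submit` is a hypothetical callable that runs one bulk job:

```python
import time

def retry_on_lock(submit, max_attempts=5, base_delay=2.0, sleep=time.sleep):
    """Call submit(); on UNABLE_TO_LOCK_ROW, retry with exponential
    backoff (2s, 4s, 8s, ...) up to max_attempts total tries."""
    for attempt in range(max_attempts):
        try:
            return submit()
        except RuntimeError as err:  # assumption: lock errors raise RuntimeError
            if "UNABLE_TO_LOCK_ROW" not in str(err) or attempt == max_attempts - 1:
                raise  # non-lock errors, or out of attempts: propagate
            sleep(base_delay * 2 ** attempt)
```

The `sleep` parameter exists only so the backoff is testable without real waiting; in production the default `time.sleep` is used.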

After implementing these changes, your nightly sync should process all 15K records successfully with minimal failures. The key is proper external ID configuration, serialized processing to avoid locks, and precise field mapping in Data Loader.

External ID matching in Salesforce is case-insensitive by default, so that shouldn’t be your issue. However, I’ve seen problems when the external ID field contains special characters or leading/trailing spaces in the source data. Make sure your ERP export trims whitespace and handles null values properly. For the locking issues, definitely serialize your processes or schedule them during different time windows. You could run the enrichment process at 2 AM and the ERP sync at 4 AM to avoid conflicts.