Bulk update API vs individual record calls for account management at scale

We’re designing an integration that updates 50,000+ account records daily from our ERP system. The debate is whether to use Bulk API 2.0 with large batches or individual REST API calls with parallel processing.

Bulk API seems ideal for volume, but we’ve heard concerns about error handling complexity and batch processing delays. Individual REST API calls give us immediate feedback and granular retry logic, but we’re worried about hitting rate limits and overall performance.

Looking for real-world experiences with Bulk API batch sizing strategies, governor limit considerations, and error handling patterns. What’s the practical performance difference for account updates at this scale?

I’ve implemented both approaches across multiple high-volume integrations; here’s how they compare for your 50k+ daily account updates:

Bulk API Batch Size Strategy: Optimal batch size is 5,000-7,500 records per batch for account updates. This balances several factors:

  • Processing time: Batches complete in 5-15 minutes typically
  • Error isolation: If a batch fails, you’re reprocessing a manageable subset
  • Lock contention: Smaller than 10k reduces likelihood of record locking issues
  • Governor limits: Stays well within heap size and CPU time limits

For your 50k daily volume, that’s 7-10 batches. Total processing time including job submission, execution, and result retrieval: 60-90 minutes. With individual REST calls at 25 concurrent requests, you’d need 2,000 sequential rounds of 25 calls each, which works out to 6-8 hours once throttling and retries are factored in.
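As a minimal sketch of that batching arithmetic (plain Python; the `records` list stands in for your ERP extract):

```python
def chunk(records, batch_size=7500):
    """Split the day's records into Bulk API-sized batches."""
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# 50,000 records at 7,500 per batch -> six full batches plus one of 5,000
batches = chunk(list(range(50_000)))
```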

Rate Limits Reality: This is where individual REST API calls become impractical at scale:

  • API call limit: 15,000/24hrs (Enterprise), 100,000/24hrs (Unlimited)
  • 50,000 updates = 50,000 API calls, consuming your entire daily allocation
  • No room for other integrations, user API calls, or retries
  • Bulk API 2.0 jobs count as 1 API call per batch submission + 1 per status check
  • Your 50k updates use approximately 20 API calls total with Bulk API

Concurrent request limits (25 long-running requests per org) create additional bottlenecks. Even with perfect parallelization, processing 25 records every 2-3 seconds works out to roughly 500-750 records per minute, or 67-100 minutes for 50k records even if API calls were unlimited. In reality, with rate limiting and retries, expect 6-8 hours.
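For contrast, the Bulk API side of that ledger is a handful of calls per job. Here is a sketch of the Bulk API 2.0 ingest flow, shown as request descriptions rather than live calls (the routes under `/jobs/ingest` are the standard Bulk API 2.0 endpoints; the instance URL, API version constant, and auth handling are assumptions you’d supply):

```python
import json

API_VERSION = "v59.0"  # assumption: set to your org's API version

def create_job_request(instance_url):
    """Call 1: create a Bulk API 2.0 ingest job for Account updates."""
    return {
        "method": "POST",
        "url": f"{instance_url}/services/data/{API_VERSION}/jobs/ingest",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"object": "Account", "operation": "update",
                            "contentType": "CSV", "lineEnding": "LF"}),
    }

def upload_request(instance_url, job_id, csv_payload):
    """Call 2: upload the CSV data for the job."""
    return {
        "method": "PUT",
        "url": f"{instance_url}/services/data/{API_VERSION}"
               f"/jobs/ingest/{job_id}/batches",
        "headers": {"Content-Type": "text/csv"},
        "body": csv_payload,
    }

def close_job_request(instance_url, job_id):
    """Call 3: mark the upload complete so Salesforce starts processing."""
    return {
        "method": "PATCH",
        "url": f"{instance_url}/services/data/{API_VERSION}"
               f"/jobs/ingest/{job_id}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"state": "UploadComplete"}),
    }
```

Three calls submit the whole job; everything after that is status polling and result retrieval, which is where the ~20-calls-total figure comes from.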

Error Handling Patterns: Bulk API error handling is actually superior for large volumes:

  1. Batch-level errors: Job fails before processing (CSV format issues, auth failures). Entire batch can be resubmitted immediately.

  2. Record-level errors: Individual records fail within successful batch. You get detailed error results:

    • Record ID or external ID
    • Specific error message
    • Fields that caused the failure

Implement a three-tier retry strategy:

  • Immediate retry queue: Records that failed due to locks or temporary issues (10-15% of failures)
  • Delayed retry queue: Records needing data correction (50-60% of failures)
  • Manual review queue: Records with business logic issues (30-35% of failures)
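A minimal sketch of that triage step, assuming the job’s failed-results CSV has been parsed into dicts (`sf__Error` is the error column Bulk API 2.0 returns; which Salesforce error code lands in which queue is an assumption to tune per org):

```python
# Illustrative routing rules: the error codes are real Salesforce
# StatusCode values, but the queue assignments are assumptions.
IMMEDIATE = {"UNABLE_TO_LOCK_ROW"}                      # transient lock issues
DELAYED = {"REQUIRED_FIELD_MISSING", "INVALID_FIELD",
           "MALFORMED_ID"}                              # need data correction

def route_failure(error_code):
    """Map a failed record's error code to one of the three retry queues."""
    if error_code in IMMEDIATE:
        return "immediate"
    if error_code in DELAYED:
        return "delayed"
    return "manual"  # business-logic failures go to human review

def triage(failed_results):
    """Group failed-result rows (dicts with an sf__Error field) into queues."""
    queues = {"immediate": [], "delayed": [], "manual": []}
    for row in failed_results:
        code = row.get("sf__Error", "").split(":", 1)[0]
        queues[route_failure(code)].append(row)
    return queues
```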

With individual REST calls, you need distributed transaction management across parallel threads. One thread’s failure requires complex state management to prevent duplicate processing. Bulk API handles this inherently - each record’s success/failure is independent.

Performance Comparison - Real Numbers: From our production environment processing 75k account updates nightly:

Bulk API 2.0 (10 batches of 7,500 records):

  • Job submission: 2 minutes
  • Processing time: 45-55 minutes
  • Result retrieval: 3 minutes
  • Total: ~60 minutes
  • API calls consumed: 20
  • Retry processing for ~2% failures: 10 minutes

REST API individual calls (theoretical, we don’t use this):

  • Sequential processing at 25 concurrent: 400+ minutes
  • API calls consumed: 75,000 (exceeds daily limit)
  • Retry complexity: Requires distributed transaction management
  • Total: Impossible without multiple orgs or extended processing windows

Recommendation: Use Bulk API 2.0 with 5,000-7,500 record batches. Implement asynchronous job monitoring with exponential backoff polling (start at 30s intervals, increase to 2min intervals). Build a dedicated error processing service that categorizes failures and routes them to appropriate retry queues. This architecture scales to millions of records without hitting governor limits or rate restrictions.
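The polling loop described above might look like this (the `fetch_state` callable is a hypothetical wrapper around the job-status endpoint, not shown):

```python
import time

def backoff_intervals(start=30, cap=120, factor=2):
    """Yield polling delays in seconds: 30, 60, 120, 120, ..."""
    delay = start
    while True:
        yield delay
        delay = min(delay * factor, cap)

def wait_for_job(fetch_state, sleep=time.sleep, max_polls=60):
    """Poll a Bulk API job until it leaves the queued/in-progress states.

    `fetch_state` returns the job's current state string,
    e.g. "InProgress" or "JobComplete".
    """
    delays = backoff_intervals()
    for _ in range(max_polls):
        state = fetch_state()
        if state not in ("Open", "UploadComplete", "InProgress"):
            return state  # JobComplete, Failed, or Aborted
        sleep(next(delays))
    raise TimeoutError("job did not finish within the polling budget")
```

Injecting `sleep` keeps the loop testable and lets a scheduler substitute its own delay mechanism.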

For 50k records daily, Bulk API 2.0 is the only viable option. Individual REST calls would consume your API limits rapidly - you get 15,000 API calls per 24 hours in Enterprise edition. Even with parallel processing, you’d hit limits before completing the job. Bulk API doesn’t count against these limits the same way. The batch size sweet spot we’ve found is 5,000-10,000 records per batch, balancing processing time with error isolation.

Batch size selection depends on your error tolerance and processing window. Smaller batches (2-3k records) give faster feedback and easier error isolation but require more job management overhead. Larger batches (8-10k) are more efficient, but if a batch fails due to lock contention or governor limits, you’re reprocessing more records. We use 5k as the default and adjust based on historical error rates for specific record types.
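That tuning rule can be sketched as a simple heuristic (the thresholds and sizes here are illustrative assumptions, not measured values):

```python
def pick_batch_size(historical_error_rate,
                    default=5000, small=2500, large=10000):
    """Choose a batch size from the historical failure rate for a record type."""
    if historical_error_rate > 0.05:   # noisy data: favor error isolation
        return small
    if historical_error_rate < 0.005:  # clean data: favor throughput
        return large
    return default
```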