Performance management API returns 429 Too Many Requests error during bulk review sync

Our bulk synchronization script for performance reviews is hitting rate limits. We’re syncing about 800 employee reviews and the API starts returning 429 errors after processing around 150-200 records.


HTTP/1.1 429 Too Many Requests
Retry-After: 60

{"error": "Rate limit exceeded", "limit": "100/minute"}

The script runs every night to sync reviews from our internal system to ADP. We’re not implementing any retry logic currently, so failed records just get skipped. Need guidance on API rate limits and how to handle bulk operations properly without hitting these thresholds constantly.

For bulk operations, you should implement batching with delays between batches. Process 50 records, wait 30 seconds, process the next 50. This keeps you under the rate limit threshold. Also consider running your sync during off-peak hours when API traffic is lower - we’ve noticed rate limits are more forgiving during nights and weekends.

The 429 response includes a Retry-After header telling you how long to wait. You need to implement exponential backoff in your script. When you hit the limit, wait the specified time, then retry. Don’t just skip the records.

We had the same issue. The solution was implementing a queue-based approach. Instead of processing all 800 at once, we queue them and process with controlled throughput. Our script now processes about 80 records per minute, with roughly 750ms delays between each API call. Takes longer but never hits rate limits.

Let me provide a comprehensive solution for handling API rate limits in bulk synchronization scenarios.

Understanding ADP API Rate Limits: The performance management API enforces a 100 requests per minute limit per tenant. This is a sliding window, not a fixed minute boundary. Your current approach of processing 800 records without throttling will always hit this limit around the 150-200 record mark you're seeing (the exact point depends on request pacing and any other API calls your system makes).

Bulk Operation Thresholds: For optimal bulk processing, implement these threshold strategies:

  1. Batch Size: Process in batches of 50 records maximum
  2. Inter-batch Delay: 35-40 seconds between batches
  3. Intra-batch Delay: 600ms between individual API calls within a batch

With 600ms spacing, a batch of 50 requests takes about 30 seconds; combined with the 35-40 second inter-batch pause, no sliding 60-second window ever contains more than 50 requests, leaving generous headroom for rate limit variance and other system operations.
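Those three thresholds can be sketched as a simple throttled loop; the `send_review` callable here is a placeholder for your actual API call, not an ADP SDK function:

```python
import time

BATCH_SIZE = 50          # records per batch
INTER_BATCH_DELAY = 35   # seconds between batches
INTRA_CALL_DELAY = 0.6   # seconds between calls within a batch

def throttled_sync(records, send_review):
    """Process records in batches, honoring both delay thresholds."""
    for start in range(0, len(records), BATCH_SIZE):
        batch = records[start:start + BATCH_SIZE]
        for record in batch:
            send_review(record)
            time.sleep(INTRA_CALL_DELAY)
        if start + BATCH_SIZE < len(records):
            time.sleep(INTER_BATCH_DELAY)
```

Keeping the two delays as named constants makes it easy to tune them later if your tenant's observed limits differ.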

Implementing Robust Retry Logic: Here’s the pattern you need to implement:

import time

max_retries = 3
for attempt in range(max_retries):
    response = api.post(review_data)
    if response.status == 429:
        # Honor Retry-After, doubling the wait on each successive attempt
        retry_after = int(response.headers.get('Retry-After', 60))
        time.sleep(retry_after * (2 ** attempt))
    else:
        break
else:
    log_failure(review_data)  # retries exhausted; flag for manual review

Key components of proper retry logic:

1. Exponential Backoff: Start with the Retry-After header value (typically 60 seconds). If you hit 429 again, double the wait time: 60s → 120s → 240s. Maximum 3 retry attempts before logging the failure for manual review.

2. Rate Limit Header Monitoring: Track these response headers proactively:

  • X-RateLimit-Limit: Your total limit (100/minute)
  • X-RateLimit-Remaining: Requests left in current window
  • X-RateLimit-Reset: Unix timestamp when limit resets

When X-RateLimit-Remaining drops below 10, pause processing until the reset time. This prevents hitting 429 entirely.
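A sketch of that proactive check, assuming the response object exposes headers as a dict; the header names follow the list above, and the `now`/`sleep` parameters are just injection points for testing, not part of any ADP API:

```python
import time

REMAINING_FLOOR = 10  # pause when fewer than this many requests remain

def pause_if_near_limit(response, now=time.time, sleep=time.sleep):
    """Pause until the rate limit window resets when headroom runs low."""
    remaining = int(response.headers.get('X-RateLimit-Remaining', REMAINING_FLOOR))
    if remaining < REMAINING_FLOOR:
        reset_at = float(response.headers.get('X-RateLimit-Reset', 0))
        wait = max(0.0, reset_at - now())
        sleep(wait)
```

Call this after every response so the script throttles itself before the server ever has to.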

3. Queue-Based Processing: Implement a processing queue that respects rate limits:

  • Load all 800 reviews into a queue
  • Process with controlled dequeue rate (85 per minute)
  • Failed records go to a retry queue with exponential backoff
  • Maintain processing state to resume after failures
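The queue above can be sketched as follows; `send_review` again stands in for your actual API call, and `RATE_PER_MINUTE` is an assumed target below the 100/minute limit:

```python
import time
from collections import deque

RATE_PER_MINUTE = 85
CALL_INTERVAL = 60.0 / RATE_PER_MINUTE  # ~0.7 s between dequeues
MAX_ATTEMPTS = 3

def process_queue(records, send_review, sleep=time.sleep):
    """Dequeue at a controlled rate; requeue failures with a retry budget."""
    queue = deque((record, 0) for record in records)
    dead_letters = []
    while queue:
        record, attempts = queue.popleft()
        try:
            send_review(record)
        except Exception:
            if attempts + 1 < MAX_ATTEMPTS:
                queue.append((record, attempts + 1))  # retry at end of queue
            else:
                dead_letters.append(record)  # needs manual review
        sleep(CALL_INTERVAL)
    return dead_letters
```

A production version would also apply the exponential backoff from point 1 before each retry and persist the queue state so the job can resume after a crash.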

4. Optimal Scheduling: Run your bulk sync during 2-6 AM in your tenant’s timezone when API traffic is lowest. We’ve observed rate limit enforcement is more lenient during these windows, and you’re less likely to compete with interactive user sessions.

5. Error Handling Strategy: Don’t skip failed records. Implement:

  • Immediate retry with Retry-After delay for 429 errors
  • Dead letter queue for records failing after 3 retries
  • Daily reconciliation report of failed syncs
  • Alert threshold when failure rate exceeds 5%
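The reconciliation and alert pieces can be sketched like this; the `alert` callback is a placeholder for whatever notification channel you use:

```python
ALERT_THRESHOLD = 0.05  # alert when more than 5% of syncs fail

def reconcile(succeeded, dead_letters, alert):
    """Summarize the nightly run and alert when the failure rate is high."""
    total = succeeded + len(dead_letters)
    failure_rate = len(dead_letters) / total if total else 0.0
    report = {
        'succeeded': succeeded,
        'failed': len(dead_letters),
        'failure_rate': failure_rate,
    }
    if failure_rate > ALERT_THRESHOLD:
        alert(f"Review sync failure rate {failure_rate:.1%} exceeds 5%")
    return report
```

Writing the returned report to a file or dashboard gives you the daily reconciliation artifact.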

Implementation Timeline: For 800 records with this approach:

  • 16 batches of 50 records each
  • ~30 seconds per batch (50 × 600ms)
  • 35 seconds between batches
  • Total time: ~17 minutes
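The arithmetic behind that estimate:

```python
batches = 800 // 50                  # 16 batches
intra_delays = 50 * 0.6              # 30 s of 600 ms delays per batch
inter_delays = (batches - 1) * 35    # 525 s of pauses between batches
total_seconds = batches * intra_delays + inter_delays
print(total_seconds / 60)            # ~17 minutes (16.75)
```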

This is slower than your current approach but effectively eliminates rate limit errors. The predictable timing also makes it easier to schedule around other system operations.

Monitor your actual rate limit consumption for the first week and adjust the inter-batch delay if needed. Some tenants have slightly different limits based on their ADP subscription tier.