Client-side API calls for loyalty points redemption hit rate limits

We built a custom loyalty points redemption interface using client-side JavaScript that calls HubSpot’s API to update contact point balances when users redeem rewards. During our beta testing with a small group, everything worked fine. But now that we’ve rolled out to all customers, we’re hitting API rate limits constantly during peak hours. Users are getting failed redemptions and error messages, which is terrible for the experience.

The client-side code makes a POST request to update the contact’s loyalty_points property whenever they redeem a reward. With hundreds of users redeeming simultaneously, we’re exceeding HubSpot’s API rate limits (we’re seeing 429 responses). We tried adding basic retry logic, but that just makes the rate limiting worse because failed requests keep retrying immediately. We need a better approach for client-side throttling that handles high concurrent usage without overwhelming the API. Has anyone implemented exponential backoff or other rate limit handling strategies for client-side HubSpot API calls?

Here’s a basic pattern: wrap your API call in an async function with a retry loop. Start with a base delay (1000ms), double it on each retry, add random jitter (multiply by 0.5 to 1.5), and cap the max delay at 32 seconds. Also set a max retry count (like 5 attempts) to prevent infinite loops. Return a clear error to the user if all retries fail.
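The delay schedule described above can be sketched as a small helper (the name `backoffDelay` is just illustrative):

```javascript
// Illustrative helper for the schedule above: base 1000ms, doubled per
// attempt, capped at 32 seconds, with random jitter (0.5x to 1.5x).
function backoffDelay(attempt, baseMs = 1000, capMs = 32000) {
  const exponential = Math.min(baseMs * 2 ** attempt, capMs);
  const jitter = 0.5 + Math.random(); // 0.5 to 1.5
  return exponential * jitter;
}
```

Your retry loop would call this with the current attempt number before each retry, and bail out with a user-facing error once the max attempt count is reached.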

You’re dealing with three interconnected challenges: API rate limiting fundamentals, client-side throttling implementation, and exponential backoff with jitter. Let me address each systematically.

For API rate limiting, HubSpot enforces limits at multiple levels: per-second burst limits, per-10-second rolling windows, and daily quotas. When you hit a limit, the API returns HTTP 429 with a Retry-After header indicating when you can retry. The key insight: client-side calls from multiple users count against your portal’s shared rate limit, so concurrent redemptions quickly exhaust your quota. You need both immediate retry handling and longer-term rate management.

For client-side throttling, implement a request queue with controlled concurrency:

const requestQueue = [];
let activeRequests = 0;
const MAX_CONCURRENT = 3;

function queueAPICall(fn) {
  return new Promise((resolve, reject) => {
    requestQueue.push({ fn, resolve, reject });
    processQueue();
  });
}

function processQueue() {
  if (activeRequests >= MAX_CONCURRENT || requestQueue.length === 0) return;
  activeRequests++;
  const { fn, resolve, reject } = requestQueue.shift();
  fn().then(resolve, reject).finally(() => {
    activeRequests--;
    processQueue(); // start the next queued request, if any
  });
}

For exponential backoff, here’s a production-ready implementation:

async function redeemWithBackoff(contactId, points, maxRetries = 5) {
  let delay = 1000; // base delay in ms

  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch('/api/redeem', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ contactId, points })
      });

      if (response.status === 429) {
        // Prefer the server's Retry-After hint (seconds); otherwise double the delay
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10);
        delay = Number.isFinite(retryAfter) ? retryAfter * 1000 : delay * 2;
        // Jitter (0.5x to 1.5x) spreads retries across clients
        await new Promise(r => setTimeout(r, delay * (0.5 + Math.random())));
        continue;
      }

      if (!response.ok) throw new Error(`Redemption failed: ${response.status}`);
      return await response.json();
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      // Back off on network errors too, rather than retrying immediately
      await new Promise(r => setTimeout(r, delay));
      delay *= 2;
    }
  }

  throw new Error('Redemption failed: still rate limited after all retries');
}

The jitter (0.5 + Math.random()) prevents all clients from retrying simultaneously after a rate limit. This is crucial for distributed systems.

However, I strongly echo the earlier advice: move to a server-side architecture. Client-side API calls for write operations create security, rate limiting, and reliability issues. Your server can implement proper queuing, batch processing, and rate limit management that’s impossible to achieve reliably from client code. Consider it a priority architectural improvement rather than an optional refactor.

In the interim, add optimistic UI updates - show the redemption as successful immediately, then reconcile with the API response asynchronously. This improves perceived performance and user experience even when dealing with rate limit delays.
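As a rough sketch of that pattern (the `state` shape and the injected `redeemPoints` call are assumptions, standing in for whatever API wrapper you already have):

```javascript
// Sketch of optimistic redemption: deduct points in the UI immediately,
// then reconcile with (or roll back from) the actual API response.
// `redeemPoints` is a hypothetical async API call that resolves to
// an object like { newBalance }.
async function redeemOptimistically(state, contactId, cost, redeemPoints) {
  const previousBalance = state.balance;
  state.balance -= cost; // optimistic update, rendered immediately
  try {
    const result = await redeemPoints(contactId, cost);
    state.balance = result.newBalance; // reconcile with the server's value
  } catch (err) {
    state.balance = previousBalance; // roll back on failure
    throw err; // let the caller show the error message
  }
  return state.balance;
}
```

The rollback branch matters here: if the redemption ultimately fails after retries, the user's displayed balance must revert, or you'll show points as spent that were never deducted.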

We’re using OAuth for the API calls, not exposing API keys directly. But I see your point about control. Server-side would be ideal but requires significant infrastructure we don’t have yet. For now, we need to make the client-side approach more resilient. How do we implement the exponential backoff properly?

Derek’s right about the architecture, but if you must do client-side calls, implement exponential backoff with jitter. When you get a 429, wait 1 second, then 2, then 4, etc., adding random jitter to prevent thundering herd. Also check the Retry-After header in the 429 response - HubSpot tells you exactly how long to wait.

Client-side API calls for write operations are generally a bad pattern. You’re exposing your API key and can’t control the request rate. Move this to a server-side endpoint that you control, then call that from your client code. Your server can implement proper rate limiting and queuing.

Don’t forget about the user experience during retries. Show a loading state with a message like ‘Processing your redemption…’ rather than making them think the app is frozen. If you’re hitting rate limits consistently, you might also need request coalescing - batch multiple redemptions or queue them locally before sending to the API. That reduces overall API calls significantly during peak times.
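A minimal sketch of that coalescing idea, assuming a batch endpoint exists (the `/api/redeem-batch` path is hypothetical — your server, or a server-side call to HubSpot's batch update API, would need to support it):

```javascript
// Sketch of local request coalescing: buffer redemptions briefly and
// send them as one batch instead of one API call per redemption.
const pending = [];
let flushTimer = null;
const FLUSH_INTERVAL_MS = 2000;

function queueRedemption(contactId, points) {
  pending.push({ contactId, points });
  // Schedule a flush only if one isn't already pending
  if (!flushTimer) flushTimer = setTimeout(flushRedemptions, FLUSH_INTERVAL_MS);
}

async function flushRedemptions() {
  const batch = pending.splice(0, pending.length); // drain everything queued
  flushTimer = null;
  if (batch.length === 0) return;
  await fetch('/api/redeem-batch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ redemptions: batch })
  });
}
```

Even a two-second window collapses a burst of simultaneous redemptions into a single request, which directly reduces pressure on the shared portal rate limit.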