The 429 response with the Retry-After header is your friend here. You need to implement three key components to handle this properly:
1. Batch Processing with Dynamic Sizing:
Don’t use fixed batch sizes. Start with 100 records per batch and dynamically adjust based on API response times. If you’re getting close to rate limits, reduce batch size. Here’s the pattern:
async function syncBatch(records, batchSize) {
  // Walk through the records in fixed-size slices
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    const response = await sendToERP(batch);
    // Back off when the ERP API says we're over the limit
    if (response.status === 429) {
      await handleRateLimit(response);
    }
  }
}
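If you want the batch size itself to adapt as described, here's a minimal sketch. The MIN_BATCH/MAX_BATCH bounds and the 2-second "fast response" threshold are illustrative values I've picked, not Zoho defaults, and it assumes the same sendToERP and handleRateLimit helpers as above:

// Sketch: shrink the batch after a 429, grow it again after fast, successful calls
const MIN_BATCH = 10;
const MAX_BATCH = 200;

async function syncWithDynamicSizing(records) {
  let batchSize = 100; // starting point suggested above
  let i = 0;
  while (i < records.length) {
    const batch = records.slice(i, i + batchSize);
    const started = Date.now();
    const response = await sendToERP(batch);
    if (response.status === 429) {
      await handleRateLimit(response);
      batchSize = Math.max(MIN_BATCH, Math.floor(batchSize / 2)); // back off hard
      continue; // re-send the same slice with the smaller batch
    }
    if (Date.now() - started < 2000) {
      batchSize = Math.min(MAX_BATCH, batchSize + 10); // responses are fast, grow gently
    }
    i += batch.length;
  }
}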
2. Automatic Retry Logic with Exponential Backoff:
When you hit a 429, don’t just wait the minimum time. Implement exponential backoff to avoid immediately hitting the limit again. The Retry-After header gives you the minimum wait time, but add a buffer:
// Promise-based sleep helper used throughout these examples
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function handleRateLimit(response) {
  // Honor the server's minimum wait; default to 60 seconds if the header is missing
  const retryAfter = parseInt(response.headers['retry-after'], 10) || 60;
  // Multiply by 1.5-2.5x so the retry lands comfortably past the minimum
  const backoffTime = retryAfter * 1000 * (1.5 + Math.random());
  await sleep(backoffTime);
  return retryBatch(); // retryBatch() stands in for re-sending the rejected batch
}
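To make the backoff actually exponential across repeated 429s, you can wrap the send in a retry loop that doubles the wait on each attempt and gives up after a cap. This is a sketch, not Zoho's built-in behavior; the 5-attempt limit mirrors the Max Retries setting further down, and sendToERP and sleep are the same assumed helpers:

// Sketch: retry one batch with exponentially growing waits, capped at maxRetries
async function sendWithRetry(batch, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await sendToERP(batch);
    if (response.status !== 429) {
      return response;
    }
    const retryAfter = parseInt(response.headers['retry-after'], 10) || 60;
    // Start from the server's minimum, double each attempt, and add jitter
    const waitMs = retryAfter * 1000 * Math.pow(2, attempt) * (1 + Math.random() * 0.5);
    await sleep(waitMs);
  }
  throw new Error('Batch still rate-limited after ' + maxRetries + ' attempts');
}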
3. Request Throttling:
Implement a token bucket algorithm to proactively throttle requests before hitting limits. Track your request rate and slow down when approaching 80% of the limit:
class RateLimiter {
  constructor(maxRequests, perMinute) {
    this.tokens = maxRequests;
    this.maxTokens = maxRequests;
    // Tokens restored per millisecond: maxRequests spread over a perMinute-minute window
    this.refillRate = maxRequests / (perMinute * 60 * 1000);
    this.lastRefill = Date.now();
  }

  refill() {
    // Top the bucket back up based on elapsed time, capped at maxTokens
    const now = Date.now();
    this.tokens = Math.min(this.maxTokens, this.tokens + (now - this.lastRefill) * this.refillRate);
    this.lastRefill = now;
  }

  async acquireToken() {
    this.refill();
    // Wait until at least one token is available, then consume it
    while (this.tokens < 1) {
      await sleep(100);
      this.refill();
    }
    this.tokens--;
  }
}
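In use, you acquire a token before every request. A hedged sketch of the 80% slowdown mentioned above could look like this; the 100-requests-per-minute limit is a placeholder, not your actual ERP or Zoho quota:

// Sketch: acquire a token before each call and slow down once 80% of the bucket is spent
const limiter = new RateLimiter(100, 1); // placeholder: 100 requests per 1-minute window

async function throttledSend(batch) {
  await limiter.acquireToken();
  // If fewer than 20% of tokens remain, add a small pause to stay under the limit
  if (limiter.tokens < limiter.maxTokens * 0.2) {
    await sleep(500);
  }
  return sendToERP(batch);
}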
Configuration in Integration Hub:
In your Zoho Integration Hub connector settings, enable these options:
- Rate Limit Handling: Automatic
- Retry Strategy: Exponential Backoff
- Max Retries: 5
- Initial Retry Delay: 60 seconds
- Batch Size: 75 records
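If part of your sync runs in a custom script rather than through the connector, you can mirror the same settings as constants so both paths stay in step. The property names here are illustrative, not actual Zoho configuration keys:

// Illustrative constants mirroring the connector settings above
const SYNC_CONFIG = {
  rateLimitHandling: 'automatic',
  retryStrategy: 'exponential-backoff',
  maxRetries: 5,
  initialRetryDelayMs: 60 * 1000,
  batchSize: 75,
};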
Monitoring and Optimization:
Add logging to track:
- Number of 429 responses per sync job
- Average batch processing time
- Total sync duration
- Records processed per minute
This data helps you tune batch sizes and delays. In our implementation, we went from a 60% sync failure rate to a 99.8% success rate, with average sync times only 15% longer than the original unthrottled approach. The key is balancing throughput against staying under the rate limits.
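A lightweight way to capture those four metrics is to keep counters in the sync job and log a summary at the end. This is a minimal sketch and the field names are my own:

// Sketch: collect the metrics listed above during a sync run and log a summary
const metrics = { rateLimited429s: 0, batchTimesMs: [], startedAt: Date.now(), recordsProcessed: 0 };

function recordBatch(durationMs, recordCount, was429) {
  metrics.batchTimesMs.push(durationMs);
  metrics.recordsProcessed += recordCount;
  if (was429) metrics.rateLimited429s++;
}

function logSummary() {
  const totalMinutes = (Date.now() - metrics.startedAt) / 60000;
  const avgBatchMs = metrics.batchTimesMs.reduce((a, b) => a + b, 0) / (metrics.batchTimesMs.length || 1);
  console.log({
    rateLimited429s: metrics.rateLimited429s,
    avgBatchMs: Math.round(avgBatchMs),
    totalSyncMinutes: totalMinutes.toFixed(1),
    recordsPerMinute: Math.round(metrics.recordsProcessed / (totalMinutes || 1)),
  });
}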
One final tip: if you’re syncing bidirectionally, coordinate your sync schedules to avoid both systems trying to sync simultaneously and competing for API quota.