
Rate Limits

Understanding and working with CoderspaE API rate limits to ensure optimal performance and fair usage across all applications.

Overview

Rate limiting protects the CoderspaE API from abuse and ensures reliable service for all users. Limits are applied per API key and are based on a sliding window algorithm.

Key Points

  • Limits use a one-hour sliding window, so capacity is restored continuously rather than at a fixed reset time
  • Different endpoints have different rate limits
  • Rate limit information is included in response headers
  • Exceeded limits return an HTTP 429 status code
  • Premium plans have higher limits

Rate Limit Tiers

Plan       | Requests/Hour | Burst Limit | Concurrent
-----------|---------------|-------------|-----------
Free       | 1,000         | 100/min     | 5
Developer  | 10,000        | 500/min     | 20
Pro        | 100,000       | 2,000/min   | 50
Enterprise | Custom        | Custom      | Custom

Endpoint-Specific Limits

Battle Operations

Endpoint             | Free    | Developer | Pro
---------------------|---------|-----------|----------
POST /battles/join   | 10/min  | 50/min    | 200/min
POST /battles/submit | 100/min | 500/min   | 2,000/min
GET /battles/history | 60/min  | 300/min   | 1,000/min

Code Execution

Endpoint      | Free   | Developer | Pro
--------------|--------|-----------|----------
POST /execute | 20/min | 100/min   | 500/min
POST /test    | 50/min | 250/min   | 1,000/min

Data Retrieval

Endpoint           | Free    | Developer | Pro
-------------------|---------|-----------|----------
GET /problems      | 300/min | 1,000/min | 5,000/min
GET /leaderboard   | 60/min  | 300/min   | 1,000/min
GET /users/profile | 200/min | 1,000/min | 3,000/min

Response Headers

Every API response includes rate limit information in the headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1642680000
X-RateLimit-Window: 3600

Header                  | Description
------------------------|---------------------------------------------------------
X-RateLimit-Limit       | Maximum requests allowed in the current window
X-RateLimit-Remaining   | Number of requests remaining in the current window
X-RateLimit-Reset       | Unix timestamp when the rate limit resets
X-RateLimit-Window      | Length of the rate limit window in seconds
X-RateLimit-Retry-After | Seconds to wait before making another request (429 only)
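
These headers can be read from any response before deciding what to do next. A small sketch (the helper name is illustrative, not part of any SDK):

```javascript
// Parse the rate limit headers listed above from a fetch-style response's
// headers object (anything exposing .get(name)). Assumes the headers are
// present, as they are on every CoderspaE API response.
function parseRateLimitHeaders(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    // Reset is a Unix timestamp in seconds; convert to a Date.
    reset: new Date(parseInt(headers.get('X-RateLimit-Reset'), 10) * 1000),
    window: parseInt(headers.get('X-RateLimit-Window'), 10),
  };
}
```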

Handling Rate Limits

429 Response

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1642680000
X-RateLimit-Retry-After: 3600

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "API rate limit exceeded",
    "details": {
      "limit": 1000,
      "window": 3600,
      "retry_after": 3600
    }
  }
}

Exponential Backoff

Implement exponential backoff when you receive 429 responses:

async function makeRequestWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429 && attempt < maxRetries) {
        // Prefer the server-provided delay; fall back to exponential backoff.
        const retryAfter = response.headers.get('X-RateLimit-Retry-After');
        const delay = retryAfter
          ? parseInt(retryAfter, 10) * 1000
          : Math.pow(2, attempt) * 1000;

        console.log(`Rate limited. Retrying after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }

      // Success, a non-429 error, or the final 429 after all retries.
      return response;
    } catch (error) {
      // Network failure: rethrow on the last attempt, otherwise back off.
      if (attempt === maxRetries) throw error;

      const delay = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

Rate Limit Monitoring

class RateLimitMonitor {
  constructor() {
    this.limits = new Map();
  }

  // Record the latest limit values from a response's headers.
  updateLimits(response) {
    const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
    const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

    this.limits.set('current', {
      limit,
      remaining,
      reset: new Date(reset * 1000),
      usage: ((limit - remaining) / limit * 100).toFixed(1)
    });
  }

  // True once usage exceeds the threshold percentage of the window's budget.
  shouldThrottle(threshold = 80) {
    const current = this.limits.get('current');
    return Boolean(current) && parseFloat(current.usage) > threshold;
  }

  // Milliseconds until the current window resets (0 if no data recorded yet).
  getTimeToReset() {
    const current = this.limits.get('current');
    return current ? current.reset.getTime() - Date.now() : 0;
  }
}
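
Beyond reactive backoff, the same header values can drive a pre-emptive throttle: slow down before hitting the limit instead of after. A minimal sketch; the function name and the even-pacing strategy are our own, not part of the API:

```javascript
// Decide how long to pause before the next request, given the latest
// rate limit header values. Below the usage threshold, send immediately;
// above it, spread the remaining budget evenly over the time left in the
// window; with no budget left, wait for the reset.
function throttleDelayMs(limit, remaining, resetMs, nowMs, threshold = 0.8) {
  const timeLeft = Math.max(resetMs - nowMs, 0);
  if (remaining <= 0) return timeLeft;   // budget exhausted: wait for reset
  const used = (limit - remaining) / limit;
  if (used < threshold) return 0;        // plenty of budget: no delay
  return timeLeft / remaining;           // pace remaining requests evenly
}
```

Feed it the values parsed from the response headers and `setTimeout` for the returned duration before the next call.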

Optimization Strategies

Caching

Cache responses to reduce API calls:

// Cache frequently accessed data
const cache = new Map();

// apiCall stands in for your own fetch wrapper
async function getCachedData(endpoint, ttl = 300000) { // 5-minute TTL
  const cached = cache.get(endpoint);

  // Serve from cache while the entry is still fresh
  if (cached && Date.now() - cached.timestamp < ttl) {
    return cached.data;
  }

  const data = await apiCall(endpoint);
  cache.set(endpoint, { data, timestamp: Date.now() });

  return data;
}

Batch Requests

Combine multiple operations into single requests:

// Instead of multiple single requests
// GET /users/123, GET /users/456, GET /users/789

// Use batch endpoint
POST /users/batch
{
  "user_ids": ["123", "456", "789"]
}
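
If the ID list can grow large, splitting it client-side keeps each batch request a manageable size. A sketch, assuming the `POST /users/batch` endpoint above and a hypothetical per-request cap:

```javascript
// Split a long ID list into batch-sized chunks, each destined for one
// POST /users/batch call. The default cap of 100 is an assumption; check
// the endpoint's documented maximum payload size.
function chunkIds(ids, batchSize = 100) {
  const batches = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    batches.push(ids.slice(i, i + batchSize));
  }
  return batches;
}
```

Three single-user requests collapse into one batch call, so the example above costs one request against your limit instead of three.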

Request Queuing

Queue requests to stay within limits:

class RequestQueue {
  constructor(rateLimit = 10, interval = 60000) {
    this.queue = [];            // pending requests with their resolvers
    this.processing = false;
    this.rateLimit = rateLimit; // max requests per interval
    this.interval = interval;   // window length in milliseconds
    this.requests = [];         // timestamps of recently sent requests
  }

  async add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;

    this.processing = true;

    while (this.queue.length > 0) {
      // Drop timestamps that have aged out of the sliding window
      const now = Date.now();
      this.requests = this.requests.filter(time => now - time < this.interval);

      // At capacity: wait until the oldest request leaves the window
      if (this.requests.length >= this.rateLimit) {
        const waitTime = this.interval - (now - this.requests[0]);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      const { request, resolve, reject } = this.queue.shift();
      this.requests.push(now);

      try {
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }
}

Pagination Optimization

Use efficient pagination to reduce total requests:

// Use cursor-based pagination for better performance
GET /problems?limit=100&cursor=eyJpZCI6MTIzfQ==

// Instead of offset-based pagination
GET /problems?limit=100&offset=1000
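
A loop that follows the cursor until the API stops returning one might look like this sketch. The `{ items, next_cursor }` response shape is an assumption — check the endpoint's actual schema. The page fetcher is injected so it can wrap `fetch` in production:

```javascript
// Collect every page of a cursor-paginated endpoint. `fetchPage` receives
// { limit, cursor } and must resolve to { items, next_cursor }, with
// next_cursor null on the last page (an assumed shape, not the documented one).
async function fetchAllPages(fetchPage, limit = 100) {
  const all = [];
  let cursor = null;
  do {
    const page = await fetchPage({ limit, cursor });
    all.push(...page.items);
    // The cursor is an opaque token; pass it back exactly as received.
    cursor = page.next_cursor;
  } while (cursor);
  return all;
}
```

Using the largest `limit` the endpoint allows minimizes the number of requests counted against your quota.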

Webhooks vs Polling

Use Webhooks For

  • Real-time event notifications
  • Battle completion updates
  • User activity tracking
  • System alerts

Avoid Polling For

  • Frequently changing data
  • Real-time updates
  • Event-driven workflows
  • Push notifications

Monitoring & Analytics

Usage Dashboard

Monitor your API usage in the developer dashboard:

GET /api/v1/usage/stats

{
  "current_period": {
    "requests_made": 8450,
    "requests_limit": 10000,
    "requests_remaining": 1550,
    "percentage_used": 84.5
  },
  "endpoints": [
    {
      "endpoint": "/problems",
      "requests": 3200,
      "percentage": 37.9
    },
    {
      "endpoint": "/battles/join",
      "requests": 1800,
      "percentage": 21.3
    }
  ]
}
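
Given a payload like the one above, a small helper can flag which endpoints dominate your budget and are the best candidates for caching or batching (the helper name is illustrative):

```javascript
// From a usage-stats payload with an `endpoints` array of
// { endpoint, requests, percentage } entries, return the endpoints that
// account for at least `minShare` percent of total requests.
function topEndpoints(usage, minShare = 20) {
  return usage.endpoints
    .filter((e) => e.percentage >= minShare)
    .map((e) => e.endpoint);
}
```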

Alerts

Set up alerts for rate limit thresholds:

POST /api/v1/alerts

{
  "type": "rate_limit_threshold",
  "threshold": 80,
  "webhook_url": "https://your-app.com/alerts/rate-limit",
  "email": "alerts@your-app.com"
}

Best Practices

Request Management

  • Always check rate limit headers
  • Implement exponential backoff
  • Use connection pooling
  • Cache responses when possible

Error Handling

  • Handle 429 responses gracefully
  • Log rate limit violations
  • Implement circuit breakers
  • Monitor error rates
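
A circuit breaker stops sending requests after repeated failures, so the client does not burn its remaining quota hammering an API that is already erroring. A minimal sketch (not an official client feature; the thresholds are illustrative):

```javascript
// After `maxFailures` consecutive failures the breaker opens and rejects
// calls immediately for `cooldownMs`, giving the API time to recover.
// A single success closes the circuit again.
class CircuitBreaker {
  constructor(maxFailures = 5, cooldownMs = 30000) {
    this.maxFailures = maxFailures;
    this.cooldownMs = cooldownMs;
    this.failures = 0;   // consecutive failure count
    this.openedAt = 0;   // timestamp when the circuit opened
  }

  async call(fn) {
    if (this.failures >= this.maxFailures &&
        Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error('circuit open');
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Wrap each API call in `breaker.call(() => fetch(...))`; while the circuit is open, calls fail fast without touching the network or the rate limit.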

Optimization

  • Use webhooks instead of polling
  • Batch multiple operations
  • Optimize pagination parameters
  • Monitor usage patterns