Rate Limits
Understanding and working with CoderspaE API rate limits to ensure optimal performance and fair usage across all applications.
Overview
Rate limiting protects the CoderspaE API from abuse and ensures reliable service for all users. Limits are applied per API key and are based on a sliding window algorithm.
Key Points
- Limits are enforced over a sliding one-hour window, not reset at a fixed time
- Different endpoints have different rate limits
- Rate limit information is included in response headers
- Exceeded limits return an HTTP 429 status code
- Premium plans have higher limits
Rate Limit Tiers
| Plan | Requests/Hour | Burst Limit | Concurrent |
|---|---|---|---|
| Free | 1,000 | 100/min | 5 |
| Developer | 10,000 | 500/min | 20 |
| Pro | 100,000 | 2,000/min | 50 |
| Enterprise | Custom | Custom | Custom |
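For client-side throttling, the tier table can be mirrored in code. A minimal sketch (the constant and helper names are illustrative, not part of any SDK; Enterprise limits are custom and therefore omitted):

```javascript
// Plan limits from the tier table above, for client-side throttling.
const RATE_LIMIT_TIERS = {
  free:      { requestsPerHour: 1000,   burstPerMin: 100,  concurrent: 5 },
  developer: { requestsPerHour: 10000,  burstPerMin: 500,  concurrent: 20 },
  pro:       { requestsPerHour: 100000, burstPerMin: 2000, concurrent: 50 },
};

// Look up the limits for a plan, falling back to the most conservative tier.
function limitsForPlan(plan) {
  return RATE_LIMIT_TIERS[plan.toLowerCase()] ?? RATE_LIMIT_TIERS.free;
}
```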
Endpoint-Specific Limits
Battle Operations
| Endpoint | Free | Developer | Pro |
|---|---|---|---|
| POST /battles/join | 10/min | 50/min | 200/min |
| POST /battles/submit | 100/min | 500/min | 2000/min |
| GET /battles/history | 60/min | 300/min | 1000/min |
Code Execution
| Endpoint | Free | Developer | Pro |
|---|---|---|---|
| POST /execute | 20/min | 100/min | 500/min |
| POST /test | 50/min | 250/min | 1000/min |
Data Retrieval
| Endpoint | Free | Developer | Pro |
|---|---|---|---|
| GET /problems | 300/min | 1000/min | 5000/min |
| GET /leaderboard | 60/min | 300/min | 1000/min |
| GET /users/profile | 200/min | 1000/min | 3000/min |
Response Headers
Every API response includes rate limit information in the headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1642680000
X-RateLimit-Window: 3600
```
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Number of requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
| X-RateLimit-Window | Length of the rate limit window in seconds |
| X-RateLimit-Retry-After | Seconds to wait before making another request (429 only) |
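These headers can be read directly off a response object. A minimal sketch using only the header names documented above (`parseRateLimitHeaders` is a hypothetical helper, not an SDK function):

```javascript
// Parse the documented rate-limit headers into numbers.
// Fields are null when a header is absent (X-RateLimit-Retry-After
// is only sent on 429 responses).
function parseRateLimitHeaders(response) {
  const num = (name) => {
    const value = response.headers.get(name);
    return value == null ? null : parseInt(value, 10);
  };
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    resetAt: num('X-RateLimit-Reset'),                 // Unix timestamp (seconds)
    windowSeconds: num('X-RateLimit-Window'),
    retryAfterSeconds: num('X-RateLimit-Retry-After'), // 429 responses only
  };
}
```

The helper works with any object exposing `headers.get(name)`, such as a `fetch()` Response.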
Handling Rate Limits
429 Response
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1642680000
X-RateLimit-Retry-After: 3600
```

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "API rate limit exceeded",
    "details": {
      "limit": 1000,
      "window": 3600,
      "retry_after": 3600
    }
  }
}
```

Exponential Backoff
Implement exponential backoff when you receive 429 responses:
```javascript
async function makeRequestWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);
      if (response.status === 429) {
        // Prefer the server-provided delay; fall back to exponential backoff.
        const retryAfter = response.headers.get('X-RateLimit-Retry-After');
        const delay = retryAfter
          ? parseInt(retryAfter, 10) * 1000
          : Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Retrying after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      return response;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      const delay = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw new Error('Rate limit retries exhausted');
}
```

Rate Limit Monitoring
```javascript
class RateLimitMonitor {
  constructor() {
    this.limits = new Map();
  }

  updateLimits(response) {
    const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
    const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
    this.limits.set('current', {
      limit,
      remaining,
      reset: new Date(reset * 1000),
      usage: ((limit - remaining) / limit * 100).toFixed(1)
    });
  }

  // True once usage crosses the given percentage threshold.
  shouldThrottle(threshold = 80) {
    const current = this.limits.get('current');
    return Boolean(current) && parseFloat(current.usage) > threshold;
  }

  // Milliseconds until the current window resets.
  getTimeToReset() {
    const current = this.limits.get('current');
    return current ? current.reset.getTime() - Date.now() : 0;
  }
}
```

Optimization Strategies
Caching
Cache responses to reduce API calls:
```javascript
// Cache frequently accessed data
const cache = new Map();

async function getCachedData(endpoint, ttl = 300000) { // TTL: 5 minutes
  const cached = cache.get(endpoint);
  if (cached && Date.now() - cached.timestamp < ttl) {
    return cached.data;
  }
  const data = await apiCall(endpoint); // apiCall: your API client function
  cache.set(endpoint, { data, timestamp: Date.now() });
  return data;
}
```

Batch Requests
Combine multiple operations into single requests:
```
// Instead of multiple single requests:
// GET /users/123, GET /users/456, GET /users/789
// Use the batch endpoint:
POST /users/batch
{
  "user_ids": ["123", "456", "789"]
}
```

Request Queuing
Queue requests to stay within limits:
```javascript
class RequestQueue {
  constructor(rateLimit = 10, interval = 60000) {
    this.queue = [];
    this.processing = false;
    this.rateLimit = rateLimit;
    this.interval = interval;
    this.requests = [];
  }

  async add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;
    while (this.queue.length > 0) {
      const now = Date.now();
      // Drop timestamps that have aged out of the sliding window.
      this.requests = this.requests.filter(time => now - time < this.interval);
      if (this.requests.length >= this.rateLimit) {
        const waitTime = this.interval - (now - this.requests[0]);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }
      const { request, resolve, reject } = this.queue.shift();
      this.requests.push(now);
      try {
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }
    }
    this.processing = false;
  }
}
```

Pagination Optimization
Use efficient pagination to reduce total requests:
```
// Use cursor-based pagination for better performance
GET /problems?limit=100&cursor=eyJpZCI6MTIzfQ==

// Instead of offset-based pagination
GET /problems?limit=100&offset=1000
```
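Cursor pagination composes naturally into a fetch-all loop. A minimal sketch, assuming the endpoint returns `{ items, next_cursor }` with `next_cursor` omitted on the last page (this response shape is an assumption, not documented here):

```javascript
// Walk all pages of /problems by following next_cursor until it is absent.
// apiCall is any function that takes a path and returns the parsed JSON body.
async function fetchAllProblems(apiCall) {
  const all = [];
  let cursor = null;
  do {
    const query = cursor ? `?limit=100&cursor=${cursor}` : '?limit=100';
    const page = await apiCall(`/problems${query}`);
    all.push(...page.items);
    cursor = page.next_cursor ?? null; // null on the final page
  } while (cursor !== null);
  return all;
}
```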
Webhooks vs Polling
Use Webhooks For

- Real-time event notifications
- Battle completion updates
- User activity tracking
- System alerts

Avoid Polling For

- Frequently changing data
- Real-time updates
- Event-driven workflows
- Push notifications
Monitoring & Analytics
Usage Dashboard
Monitor your API usage in the developer dashboard:
```http
GET /api/v1/usage/stats
```

```json
{
  "current_period": {
    "requests_made": 8450,
    "requests_limit": 10000,
    "requests_remaining": 1550,
    "percentage_used": 84.5
  },
  "endpoints": [
    {
      "endpoint": "/problems",
      "requests": 3200,
      "percentage": 37.9
    },
    {
      "endpoint": "/battles/join",
      "requests": 1800,
      "percentage": 21.3
    }
  ]
}
```

Alerts
Set up alerts for rate limit thresholds:
```http
POST /api/v1/alerts
```

```json
{
  "type": "rate_limit_threshold",
  "threshold": 80,
  "webhook_url": "https://your-app.com/alerts/rate-limit",
  "email": "alerts@your-app.com"
}
```

Best Practices
Request Management
- Always check rate limit headers
- Implement exponential backoff
- Use connection pooling
- Cache responses when possible

Error Handling

- Handle 429 responses gracefully
- Log rate limit violations
- Implement circuit breakers
- Monitor error rates

Optimization

- Use webhooks instead of polling
- Batch multiple operations
- Optimize pagination parameters
- Monitor usage patterns
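The circuit breakers mentioned under Error Handling can be sketched as a small wrapper around any request function (the class and its thresholds are illustrative, not an SDK API):

```javascript
// After maxFailures consecutive failures the breaker opens and rejects calls
// immediately; once resetMs has elapsed it allows one trial request.
class CircuitBreaker {
  constructor(maxFailures = 5, resetMs = 30000) {
    this.maxFailures = maxFailures;
    this.resetMs = resetMs;
    this.failures = 0;
    this.openedAt = null;
  }

  async call(fn) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error('Circuit open: skipping request');
      }
      this.openedAt = null; // Half-open: allow one trial request.
    }
    try {
      const result = await fn();
      this.failures = 0; // Success closes the circuit.
      return result;
    } catch (error) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw error;
    }
  }
}
```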