Overview
The OpenFX v1 API enforces rate limits to protect platform stability and ensure fair access for all consumers. Every API response includes rate limit headers so you can monitor your usage proactively and avoid hitting limits.

Response Headers
Every successful API response includes these rate limit headers:

| Header | Type | Description |
|---|---|---|
| X-RateLimit-Limit | integer | Maximum number of requests allowed in the current rate-limit window. |
| X-RateLimit-Remaining | integer | Number of requests remaining in the current window. |
| X-RateLimit-Reset | integer | Seconds until the current rate-limit window resets. |
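As a minimal sketch, a client can pull these three headers off any response before deciding whether to keep sending requests (the header values shown are illustrative):

```python
def parse_rate_limit_headers(headers: dict) -> tuple[int, int, int]:
    """Extract (limit, remaining, reset_seconds) from response headers."""
    return (
        int(headers["X-RateLimit-Limit"]),
        int(headers["X-RateLimit-Remaining"]),
        int(headers["X-RateLimit-Reset"]),  # seconds until reset, not a Unix timestamp
    )

# Example values as they might appear on a successful response.
limit, remaining, reset_seconds = parse_rate_limit_headers({
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "42",
    "X-RateLimit-Reset": "30",
})
print(f"{remaining}/{limit} requests left; window resets in {reset_seconds}s")
```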
X-RateLimit-Reset is the number of seconds until the window resets, not a Unix timestamp. When X-RateLimit-Remaining reaches 0, wait X-RateLimit-Reset seconds before making additional requests.

Rate Limit Exceeded (429)
When the rate limit is exceeded, the API returns a 429 Too Many Requests response with a Retry-After header:
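An illustrative 429 response is shown below. The exact body shape is an assumption here; the rate_limit_error type is described under Error Handling.

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 30
Content-Type: application/json

{
  "error": {
    "type": "rate_limit_error",
    "message": "Rate limit exceeded. Retry after 30 seconds."
  }
}
```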
The Retry-After header value is in seconds; wait at least that many seconds before retrying.
Handling Rate Limits
Proactive Monitoring
The best strategy is to avoid hitting the limit by monitoring the remaining request count.

Reactive Backoff

If you receive a 429, always respect the Retry-After header:
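Both patterns, proactive throttling and Retry-After-driven backoff, can be sketched in one request wrapper. This is illustrative, not an official client: the function name, the low-water-mark threshold, and the `send` callable (any function returning an object with `.status_code` and `.headers`) are all placeholders.

```python
import time

LOW_WATER_MARK = 5  # start throttling when this few requests remain (illustrative)

def call_with_rate_limiting(send, max_retries=3, sleep=time.sleep):
    """Call send() with proactive throttling and Retry-After-driven backoff."""
    for _ in range(max_retries + 1):
        response = send()

        if response.status_code == 429:
            # Reactive: wait exactly the number of seconds the server specifies.
            sleep(int(response.headers.get("Retry-After", "1")))
            continue

        # Proactive: if the window is nearly exhausted, pause until it resets.
        remaining = int(response.headers.get("X-RateLimit-Remaining", "1"))
        if remaining <= LOW_WATER_MARK:
            sleep(int(response.headers.get("X-RateLimit-Reset", "0")))

        return response

    raise RuntimeError("rate limited: retries exhausted")
```

Injecting `sleep` as a parameter keeps the wrapper testable without real delays; in production the default `time.sleep` applies.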
Best Practices
- Monitor X-RateLimit-Remaining proactively. Track the remaining count and slow down before you hit zero. This avoids the latency penalty of a 429 response and retry cycle.
- Always respect the Retry-After header. When you receive a 429, wait the exact number of seconds specified. Do not use a shorter delay.
- Use pagination efficiently. When fetching large datasets, use limit=200 (the maximum) to minimize the number of API calls. See Pagination for details.
- Batch related operations. If your workflow creates multiple related resources (e.g., a counterparty and a payment method), structure your code to minimize unnecessary list or get calls between mutations.
- Cache responses when appropriate. For data that changes infrequently (e.g., asset pairs from GET /fx/asset-pairs, rail details from GET /rails), cache responses locally and refresh on a schedule rather than fetching on every request.

Related
- Error Handling — understanding the rate_limit_error type
- Pagination — reducing API calls with efficient pagination
- Idempotency — safely retrying requests after rate limit delays