X (Twitter) API Rate Limits in 2026: Every Endpoint, Explained
Key Takeaway: X API v2 rate limits are per-endpoint, measured in 15-minute or 24-hour windows depending on the operation. Limits split into per-app (Bearer Token) and per-user (OAuth). Recent search allows 300 requests per 15 minutes per user. Tweet lookup allows 900. Posting is capped at 100 per 15 minutes per user and 10,000 per day per app. Exceeding any limit returns a 429 error; the x-rate-limit-reset header tells you when the window resets.
If you are hitting rate limits on the official X API and looking for alternatives, providers like Sorsa API use a simpler model: a single rate limit (20 requests/second) across all endpoints, no 15-minute windows, no per-endpoint caps. The migration from the official API is straightforward.
Last updated: March 24, 2026
Table of Contents
- How X API Rate Limits Work
- Rate Limits for the Most-Used Endpoints
- How Many Tweets Can You Actually Pull Per Day?
- Standard vs. Enterprise Rate Limits
- What Happens When You Hit a Rate Limit
- How to Stay Under Rate Limits
- Frequently Asked Questions
How X API Rate Limits Work
Every X API v2 endpoint has its own rate limit. There is no single global number. The search endpoint has one limit, the tweet lookup endpoint has another, and the posting endpoint has a third. They are tracked independently.
Most limits reset on a 15-minute rolling window. A few, like posting tweets and media uploads, use 24-hour windows instead. The window starts from your first request to that endpoint, not from a fixed clock time.
Per-App vs. Per-User Limits
Rate limits come in two flavors, and the distinction matters.
Per-app limits apply when you authenticate with a Bearer Token (app-only auth). Every request your application makes, regardless of which user triggered it, counts against a single shared pool. If you have 10,000 users hitting your app simultaneously, all of their requests draw from the same per-app bucket.
Per-user limits apply when you authenticate with OAuth 1.0a or OAuth 2.0 user tokens. Each authenticated user gets their own separate limit. If you have 100 users, each one has their own 300 requests/15 minutes for search.
Some endpoints have both limits. Some have only one. The official rate limit tables list both columns for every endpoint.
Checking Your Limits in Real Time
Every API response includes three headers:
x-rate-limit-limit: 900
x-rate-limit-remaining: 847
x-rate-limit-reset: 1705420800
x-rate-limit-limit is the maximum allowed in the current window. x-rate-limit-remaining is how many you have left. x-rate-limit-reset is a Unix timestamp for when the window resets. Parse that timestamp, compare it to the current time, and you know exactly how long to wait if you are running low.
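The header math above can be wrapped in a couple of small helpers. This is an illustrative sketch (the function names are not from any official SDK); it only assumes the three headers documented above:

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds to wait before the rate limit window resets (0 if already reset)."""
    now = int(time.time()) if now is None else now
    reset = int(headers.get("x-rate-limit-reset", now))
    return max(reset - now, 0)

def remaining_requests(headers):
    """Requests left in the current window (0 if the header is absent)."""
    return int(headers.get("x-rate-limit-remaining", 0))

# With the example headers above:
headers = {
    "x-rate-limit-limit": "900",
    "x-rate-limit-remaining": "847",
    "x-rate-limit-reset": "1705420800",
}
print(remaining_requests(headers))                   # 847
print(seconds_until_reset(headers, now=1705420700))  # 100
```

Passing `now` explicitly makes the wait calculation testable; in production you would omit it and let the helper read the clock.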
Rate Limits Are Not the Same as Usage Caps
This confuses a lot of developers. Rate limits control how fast you can make requests. Usage caps control how much data you can consume per billing cycle.
Since X moved to pay-per-use pricing in 2026, every resource you fetch costs money. And there is a hard cap of 2 million post reads per month on standard accounts. You can be well within your rate limit and still hit your billing cap. The two systems are completely independent.
For a full breakdown of what each API call actually costs, see our X API pricing guide.
Rate Limits for the Most-Used Endpoints
The official documentation lists 80+ endpoints. The table below covers the ones developers actually use day-to-day. All data is sourced directly from X's rate limit documentation.
Read Endpoints
| Endpoint | Method | Per App | Per User | Window | Notes |
|---|---|---|---|---|---|
| /2/tweets (batch lookup) | GET | 3,500 | 5,000 | 15 min | Up to 100 tweet IDs per call |
| /2/tweets/:id (single tweet) | GET | 450 | 900 | 15 min | |
| /2/tweets/search/recent | GET | 450 | 300 | 15 min | 100 max results, 512-char query limit |
| /2/tweets/search/all (full-archive) | GET | 300 + 1/sec | 1/sec | 15 min | 500 max results, 1,024-char query limit |
| /2/users/:id/tweets (user timeline) | GET | 10,000 | 900 | 15 min | |
| /2/users/:id/mentions | GET | 450 | 300 | 15 min | |
| /2/users (batch user lookup) | GET | 300 | 900 | 15 min | |
| /2/users/:id (single user) | GET | 300 | 900 | 15 min | |
| /2/users/search | GET | 300 | 900 | 15 min | |
| /2/users/:id/followers | GET | 300 | 300 | 15 min | |
| /2/users/:id/following | GET | 300 | 300 | 15 min | |
| /2/lists/:id/tweets | GET | 900 | 900 | 15 min | |
| /2/lists/:id/members | GET | 900 | 900 | 15 min | |
Write Endpoints
| Endpoint | Method | Per App | Per User | Window | Notes |
|---|---|---|---|---|---|
| /2/tweets (post) | POST | 10,000 | 100 | 24 hrs (app), 15 min (user) | |
| /2/tweets/:id (delete) | DELETE | -- | 50 | 15 min | |
| /2/users/:id/likes | POST | -- | 50 + 1,000/24hrs | 15 min + 24 hrs | Dual limit |
| /2/users/:id/retweets | POST | -- | 50 | 15 min | |
| /2/users/:id/following (follow) | POST | -- | 50 | 15 min | |
Streaming Endpoints
| Endpoint | Method | Per App | Notes |
|---|---|---|---|
| /2/tweets/search/stream (filtered) | GET | 50 | 1 connection, 1,000 rules, 1,024-char rule length, 250 posts/sec |
| /2/tweets/search/stream/rules (read) | GET | 450 | |
| /2/tweets/search/stream/rules (add/delete) | POST | 100 | |
A few things to note here.
Search has two endpoints with very different limits. Recent search (/search/recent) gives you 300 requests per 15 minutes per user but only searches the last 7 days. Full-archive search (/search/all) searches all tweets back to 2006 but caps you at 1 request per second with a 15-minute ceiling of 300. If you need historical depth, full-archive is the only option, but the 1/sec hard cap means you cannot burst requests.
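The 1/sec cap on full-archive search means requests must be paced, not bursted. A minimal pacer that enforces a minimum gap between calls is enough; this is a sketch (the class and its interval are illustrative, not part of any official client):

```python
import time

class Pacer:
    """Enforce a minimum interval between successive requests,
    e.g. min_interval=1.0 for the /2/tweets/search/all 1 req/sec cap."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = 0.0  # monotonic timestamp of the previous call

    def wait(self):
        """Sleep just long enough so calls are at least min_interval apart."""
        now = time.monotonic()
        elapsed = now - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()

# Usage: call pacer.wait() immediately before each full-archive request.
pacer = Pacer(min_interval=1.0)
```

Using `time.monotonic()` rather than wall-clock time avoids surprises when the system clock jumps.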
Posting limits are split across two windows. Per-user: 100 posts every 15 minutes. Per-app: 10,000 posts per 24 hours. Your bot might post fine for the first few hours, then hit the daily app-level cap even though the per-user limit shows requests remaining.
Follower and following endpoints share the same limit: 300 per 15 minutes, per app and per user. For accounts with millions of followers, paginating through the full list takes time. At 1,000 results per page and 300 requests per 15 minutes, pulling 1 million followers takes 1,000 requests, about 4 windows, or roughly an hour; a 5-million-follower account needs about 17 windows, or just over 4 hours.
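That pagination math generalizes to any account size. A small helper (illustrative, using the limits from the table above) makes the estimate explicit:

```python
import math

def follower_pull_estimate(total_followers, per_page=1000, per_window=300, window_min=15):
    """Estimate (requests, 15-minute windows, wall-clock hours) needed to
    page through a follower list at the documented limits."""
    requests_needed = math.ceil(total_followers / per_page)
    windows = math.ceil(requests_needed / per_window)
    hours = windows * window_min / 60
    return requests_needed, windows, hours

print(follower_pull_estimate(1_000_000))  # (1000, 4, 1.0)
print(follower_pull_estimate(5_000_000))  # (5000, 17, 4.25)
```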
How Many Tweets Can You Actually Pull Per Day?
Rate limit numbers on their own do not tell you much. What matters is throughput: how many tweets, profiles, or follower records can you realistically collect in a day?
Here is the math for the most common collection scenarios, assuming per-user authentication.
Tweet Search (Recent)
- Rate limit: 300 requests / 15 minutes = 1,200/hour = 28,800/day
- Max results per request: 100
- Theoretical ceiling: 2,880,000 tweets/day
Sounds generous. But the pay-per-use billing cap is 2 million post reads per month. You would burn through your entire monthly allowance in less than one day of continuous searching. The rate limit is not your bottleneck here. The billing cap is.
User Timelines
- Rate limit: 900 requests / 15 minutes = 3,600/hour
- Results per request: ~20 (varies)
- Theoretical ceiling: ~72,000 tweets/hour per account
For scraping individual timelines, rate limits are rarely the constraint. The real limit is how many tweets a user has posted. Most accounts have a few thousand tweets total.
Followers
- Rate limit: 300 requests / 15 minutes = 1,200/hour
- Max results per page: 1,000
- Theoretical ceiling: 1,200,000 follower records/hour
The X API returns up to 1,000 followers per page, making this one of the higher-throughput endpoints. For comparison, Sorsa API returns up to 200 per request but has no 15-minute window, just a flat 20 req/sec limit, which works out to roughly 4,000 profiles per second or 14.4 million per hour. The throughput difference is significant for large follower graphs.
The Real Bottleneck: A Summary
| Scenario | Rate Limit Ceiling (per day) | Billing Cap (per month) | Which Hits First? |
|---|---|---|---|
| Recent search (100 results/req) | ~2.88M tweets | 2M post reads | Billing cap |
| User timelines (20 results/req) | ~1.7M tweets | 2M post reads | Depends on scale |
| Follower collection (1K/page) | ~28.8M records | No specific cap, but $0.01/record | Cost |
| Posting tweets | 10K (app) or 9,600 (user) | N/A | Rate limit |
For read-heavy workloads at any meaningful scale, the monthly billing cap is the constraint you hit first. Rate limits only become the practical bottleneck for write operations or for high-concurrency pipelines using app-only auth with its lower per-app limits.
For details on how the billing cap works and what each post read actually costs, see our complete X API pricing breakdown.
Standard vs. Enterprise Rate Limits
Everything above describes standard (pay-per-use) rate limits. Enterprise is a different world.
Enterprise customers negotiate custom rate limits directly with X's sales team. The specifics are not public and vary by contract, but here is what is generally known:
- Custom per-endpoint limits, typically significantly higher than standard
- Higher or no monthly usage caps (the 2M post read ceiling can be lifted)
- Dedicated infrastructure and support
- Full-archive search with higher concurrency
- Activity API and webhook access with custom limits
The minimum Enterprise price has historically started around $42,000/month, though this may have changed with the pay-per-use transition. Approval is selective and the onboarding process can take weeks.
Who Actually Needs Enterprise?
Enterprise makes sense for large SaaS platforms that redistribute X data to their own customers, financial institutions running real-time sentiment models, and research organizations processing millions of posts monthly. If you need more than 2 million post reads per month and you need write access, Enterprise is your only option on the official API.
For teams that need high throughput on read operations without the Enterprise price tag, third-party providers fill the gap. For a detailed comparison of providers, see our X API alternatives guide. Sorsa API, for example, offers 20 requests per second across all endpoints on every plan (starting at $49/month), with no per-endpoint limits and no 15-minute windows. The rate limit can be raised further by contacting support at contacts@sorsa.io. The tradeoff is that Sorsa and similar providers are read-only: no posting, no DMs, no like/follow actions.
What Happens When You Hit a Rate Limit
When you exceed a rate limit, X returns HTTP 429 with this response body:
{
  "errors": [{
    "code": 88,
    "message": "Rate limit exceeded"
  }]
}
The response still includes rate limit headers, which is the important part. x-rate-limit-reset tells you exactly when the window resets.
A Proper Recovery Strategy
The naive approach is to sleep for 15 minutes and retry. That works but wastes time. A better approach checks the reset timestamp and waits only as long as necessary.
import time
import requests

def request_with_rate_limit_handling(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        reset_timestamp = int(response.headers.get("x-rate-limit-reset", 0))
        wait_seconds = max(reset_timestamp - int(time.time()), 1)
        print(f"Rate limited. Waiting {wait_seconds}s until window resets.")
        time.sleep(wait_seconds + 1)  # +1s buffer
    raise Exception("Rate limit retries exhausted")
A few things to watch for.
Do not retry instantly. Some developers wrap API calls in a retry loop with no delay. This burns through your remaining requests across other endpoints (if using app-level auth) and can trigger more aggressive throttling.
Do not ignore 429s in bulk pipelines. If you are collecting data in a loop, a single 429 can mean hundreds of failed requests before your code notices. Check x-rate-limit-remaining before each request and pause proactively when it drops below 10-20.
Log your remaining requests. When debugging rate limit issues, the single most useful data point is a time-series log of x-rate-limit-remaining values. It shows exactly which call pattern is burning through your budget.
How to Stay Under Rate Limits
The standard advice ("cache responses" and "use exponential backoff") is not wrong, but it is incomplete. Here are the strategies that actually move the needle, based on patterns I have seen across dozens of API pipeline migrations.
1. Use Batch Endpoints Instead of Individual Lookups
The /2/tweets endpoint accepts up to 100 tweet IDs in a single request. One request, one rate limit hit, 100 tweets returned. Making 100 individual /2/tweets/:id calls instead burns 100 hits from a tighter limit (450/15min vs. 3,500/15min for batch).
The same logic applies to user lookups. /2/users accepts up to 100 usernames or IDs.
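In practice, batching is just chunking your IDs into groups of 100 and joining each group into the ids query parameter. A sketch (the helper names are illustrative; the 100-ID maximum and comma-separated ids parameter are per the v2 docs):

```python
def chunk(items, size=100):
    """Split a list into batches of at most `size` (the v2 batch-lookup maximum)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def batch_lookup_params(tweet_ids):
    """Build the query params for each /2/tweets batch call: comma-separated ids."""
    return [{"ids": ",".join(batch)} for batch in chunk(tweet_ids)]

# 250 IDs -> 3 requests instead of 250 individual lookups
ids = [str(n) for n in range(250)]
print(len(batch_lookup_params(ids)))  # 3
```

The same chunking works for /2/users with usernames or user IDs.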
Third-party APIs take batching further. [Sorsa API's /tweet-info-bulk](https://docs.sorsa.io/api-reference/tweets/tweet-data-batch) accepts 100 tweet URLs or IDs per request, and [/info-batch](https://docs.sorsa.io/api-reference/users-data/user-profile-batch) accepts 100 user profiles, each counting as a single request from your quota. For more strategies on reducing request volume, see our guide to optimizing API usage.
2. Use Streaming Instead of Polling
If you are polling /2/tweets/search/recent every 30 seconds to catch new tweets, you are making 2,880 requests per day to that endpoint alone. X's filtered stream endpoint (/2/tweets/search/stream) pushes matching tweets to you in real time over a single persistent connection. One connection, one rate limit hit, unlimited matching tweets (up to 250/sec).
The catch: filtered stream allows 50 connection requests per 15 minutes per app, and you can only maintain 1 active connection at a time. But for monitoring keywords or accounts, it is dramatically more efficient than polling.
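Consuming the stream is mostly a matter of reading newline-delimited JSON and skipping the blank keep-alive lines the stream sends. A sketch of the parsing side, separated from the networking so it can be tested; the commented connection code and URL are assumptions based on the v2 docs:

```python
import json

def parse_stream_lines(lines):
    """Yield decoded payloads from a filtered-stream line iterator,
    skipping the empty keep-alive heartbeats."""
    for raw in lines:
        if not raw.strip():
            continue  # keep-alive heartbeat, not a tweet
        yield json.loads(raw)

# With a live connection this would look roughly like (illustrative):
#   resp = requests.get("https://api.x.com/2/tweets/search/stream",
#                       headers={"Authorization": f"Bearer {token}"}, stream=True)
#   for tweet in parse_stream_lines(resp.iter_lines()):
#       handle(tweet)

fake = [b'{"data": {"id": "1", "text": "hello"}}', b"", b'{"data": {"id": "2", "text": "world"}}']
print([t["data"]["id"] for t in parse_stream_lines(fake)])  # ['1', '2']
```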
3. Cache User Profiles Aggressively
User profiles change rarely. Display names, bios, and follower counts shift over days, not minutes. If your application fetches user data alongside tweets, cache profiles with a 12-24 hour TTL and skip the API call on cache hit.
Tweet engagement metrics (likes, retweets, replies) change faster. For analytics pipelines, a 1-6 hour cache TTL is usually acceptable. For real-time dashboards, there is no good substitute for fresh API calls.
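A minimal profile cache can be a dict keyed by user ID with a per-entry expiry. This is a sketch under the TTLs suggested above; production code would more likely use Redis or a library like cachetools:

```python
import time

class TTLCache:
    """Tiny in-memory cache with per-entry expiry, e.g. ttl_seconds=86400 for profiles."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key, now=None):
        """Return the cached value, or None on a miss/expiry (caller then hits the API)."""
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is None or now >= entry[0]:
            return None
        return entry[1]

    def set(self, key, value, now=None):
        now = time.time() if now is None else now
        self._store[key] = (now + self.ttl, value)

profiles = TTLCache(ttl_seconds=86400)  # 24h TTL for user profiles
profiles.set("12345", {"name": "Example"})
print(profiles.get("12345"))  # {'name': 'Example'}
```

The optional `now` parameter exists only to make expiry testable; normal callers omit it.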
4. Monitor Headers and Throttle Proactively
Do not wait for a 429 to slow down. Track x-rate-limit-remaining on every response and implement a soft threshold. When remaining requests drop below 10% of the limit, add a delay between calls. This prevents the hard stop of a 429 and keeps your pipeline running smoothly.
remaining = int(response.headers.get("x-rate-limit-remaining", 100))
if remaining < 30:  # soft threshold
    time.sleep(2)  # gentle slowdown before hitting the wall
5. Distribute Across Authentication Methods
Per-app and per-user limits are independent pools. If you authenticate with both a Bearer Token (per-app) and a user OAuth token (per-user), you effectively get two separate rate limit buckets for endpoints that support both. The tweet lookup endpoint, for example, allows 3,500/15min per app and 5,000/15min per user. Using both gives you 8,500 requests per 15 minutes.
This only helps if your architecture supports dual auth paths. For server-side pipelines, it is straightforward. For client-side apps, you are usually limited to user tokens.
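One way to exploit both pools on the server side is to track remaining requests per bucket and route each call to whichever has the most headroom. An illustrative sketch, not an official pattern:

```python
class TokenPool:
    """Route each request to whichever auth bucket (app vs. user) has the most headroom."""

    def __init__(self, limits):
        # e.g. {"app": 3500, "user": 5000} for tweet lookup
        self.remaining = dict(limits)

    def pick(self):
        """Return the bucket with the most remaining requests, or None if all are empty."""
        name = max(self.remaining, key=self.remaining.get)
        if self.remaining[name] <= 0:
            return None  # all buckets exhausted; wait for a window reset
        self.remaining[name] -= 1
        return name

    def sync(self, name, remaining_header):
        """Correct a bucket from the x-rate-limit-remaining header after a response."""
        self.remaining[name] = int(remaining_header)

pool = TokenPool({"app": 3500, "user": 5000})
print(pool.pick())  # 'user' (more headroom)
```

Calling `sync()` after every response keeps the local counters honest, since the headers are the source of truth.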
Frequently Asked Questions
Are X API rate limits per user or per app?
Both, but it depends on the endpoint and your authentication method.
If you use a Bearer Token (app-only auth), per-app limits apply. All requests from your application share a single pool regardless of which user triggered them.
If you use an OAuth user token, per-user limits apply. Each authenticated user gets their own independent bucket.
Some endpoints list both a per-app and a per-user limit. Some list only one. The official rate limit tables specify which columns apply to each endpoint.
Is there a free tier for the X API in 2026?
No. X eliminated the free tier as part of the transition to pay-per-use pricing. There is no free credit allowance, no trial period, and no limited free access. You must purchase credits before making any API call.
For zero-cost access to X data, third-party providers that offer free tiers or trials are the only remaining option. If you are considering scraping as an alternative, see our guide to scraping X.com for a breakdown of methods and costs. More on the pricing changes in our X API pricing breakdown.
What is the rate limit for Twitter search API?
Recent search (/2/tweets/search/recent): 450 requests per 15 minutes per app, 300 per 15 minutes per user. Each request returns up to 100 results. Query length is capped at 512 characters.
Full-archive search (/2/tweets/search/all): 300 requests per 15 minutes per app with a hard cap of 1 request per second. Each request returns up to 500 results. Query length is capped at 1,024 characters. This endpoint is only available on paid plans and delivers results going back to 2006. For a reference on available query operators, see our search operators guide.
How long do X API rate limit windows last?
Most endpoints use 15-minute rolling windows. A few use 24-hour windows, including the posting endpoint (10,000/24hrs per app), the like endpoint (1,000/24hrs per user), and media upload endpoints. The window starts from your first request, not from a fixed clock time.
Do rate limits reset at the same time for all endpoints?
No. Each endpoint has its own independent timer. Hitting the limit on /2/tweets/search/recent does not affect your remaining requests on /2/users/:id/tweets. This is important for pipeline design: if one endpoint is rate-limited, your code can continue collecting data from other endpoints while waiting for the blocked one to reset.
Can I increase my X API rate limits without Enterprise?
Not on the official API. Standard rate limits are fixed and cannot be raised outside of an Enterprise agreement. The only workarounds are batching (to get more data per request), streaming (to avoid polling), and distributing requests across multiple auth methods.
Third-party providers offer more flexibility here. Sorsa API's rate limit is 20 requests per second on all plans, and the team will raise it on request for high-volume use cases. There are no per-endpoint limits or 15-minute windows to manage.
What is the difference between rate limits and usage caps?
Rate limits control request frequency (how fast you can call the API). Usage caps control total consumption (how much data you can read per billing cycle). On the X API, the standard pay-per-use account has a hard cap of 2 million post reads per month. You can be well within your rate limit and still hit the usage cap, or vice versa. They are tracked independently.
Do third-party X API providers have rate limits?
Yes, but they tend to be simpler. Most use a single universal rate limit instead of per-endpoint tables. Sorsa API, for example, applies one limit (20 requests/second) to all 38 endpoints regardless of plan tier. There are no 15-minute windows, no per-app vs. per-user split, and no separate limits per endpoint. If you exceed 20 req/s, you get a 429 and wait one second. That is the entire rate limit model.
Daniel Kolbassen is a data engineer and API infrastructure consultant. He has worked with the Twitter/X API since the v1.1 era and has helped over 40 companies restructure their data pipelines after the 2023 pricing overhaul. Follow him on Twitter/X or connect on LinkedIn.