How to Download All Tweets From a Twitter/X User: Complete Guide for 2026
Key Takeaway: You can download all tweets from any public Twitter/X account using four methods: your own Twitter archive (own account only), browser-based export tools, open-source scrapers, or a dedicated data API. The right choice depends on whether you need your own tweets or someone else's, how many tweets you need, and whether you want raw structured data or a formatted report.
Every week, another researcher, marketer, or developer runs into the same wall: they need a complete tweet history from a specific account, and Twitter/X makes it surprisingly difficult. The platform's own archive feature only works for your own account. Most third-party tools cap out at 3,200 tweets. Open-source scrapers break every few months when X changes its frontend.
I have spent over a decade building data pipelines around Twitter's APIs -- first through the v1.1 era, then through the 2023 pricing upheaval, and now through whatever X decides to call its current setup. In this guide, I will walk through every working method for downloading a user's complete tweet history in 2026, with honest tradeoffs for each. Where it makes sense, I will show how Sorsa API handles this with no tweet count limits and full pagination through a user's entire timeline.
Disclosure: Sorsa API is our product. I have aimed to keep this comparison balanced, but I recommend testing any solution against your own workload before committing.
Table of Contents
- Method 1: Download Your Own Twitter Archive
- Method 2: Browser-Based Export Tools
- Method 3: Open-Source Scrapers
- Method 4: Using a Twitter Data API
- Step-by-Step: Download All Tweets via API with Code
- The 3,200-Tweet Limit Explained
- Estimating Cost and API Usage
- FAQ
- Getting Started
Method 1: Download Your Own Twitter Archive (Free, Own Account Only)
If you only need tweets from your own account, X offers a built-in archive export at no cost. Here is how to request it:
- Go to Settings and Privacy > Your Account > Download an archive of your data.
- Verify your identity (X may send an email or SMS code).
- Click Request archive. X will notify you when it is ready -- typically 24 to 48 hours.
- Download the .zip file. Inside, you will find an HTML viewer and a data/ folder with JSON files containing every tweet you have ever posted, plus media files, DMs, and account metadata.
This method gives you everything -- no tweet count limit, full media, complete history back to your first post. The catch is that it only works for accounts you own and control. You cannot request another user's archive.
The data format is also awkward for analysis. The JSON structure is nested and Twitter-specific, not a clean CSV you can drop into a spreadsheet. If you need the data in a usable format, you will need to write a parser or use a conversion tool.
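If you do end up parsing the archive yourself, the conversion is straightforward. The sketch below assumes the archive's data/tweets.js layout (a JSON array behind a short window.YTD... JavaScript prefix, with each entry wrapped in a "tweet" key) -- the exact prefix and field names have varied between archive versions, so verify against your own export:

```python
import csv
import json

def archive_to_csv(tweets_js_path, out_csv="my_tweets.csv"):
    """Flatten the archive's data/tweets.js into a CSV of basic fields."""
    with open(tweets_js_path, encoding="utf-8") as f:
        raw = f.read()
    # Drop the "window.YTD.tweets.part0 = " JS prefix to get plain JSON
    records = json.loads(raw[raw.index("["):])
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "full_text"])
        for item in records:
            t = item.get("tweet", item)  # entries are wrapped in a "tweet" key
            writer.writerow([
                t.get("id_str", ""),
                t.get("created_at", ""),
                t.get("full_text", "").replace("\n", " "),
            ])
    return out_csv
```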
Best for: Personal backup, archiving your own account before deactivation, compliance record-keeping.
Method 2: Browser-Based Export Tools (No Code Required)
A category of GUI tools lets you export tweets from any public account without writing code. You enter a username, wait, and download a CSV or Excel file. The most well-known options include Circleboom, Tweet Binder, FollowersAnalysis, Lobstr, and XTractor.
How they work
Most of these tools operate on top of the official X API or use browser-session-based scraping. You typically create an account on the tool's website, enter a Twitter handle, and receive a downloadable file with tweet text, timestamps, engagement metrics, and sometimes media URLs.
Limitations to watch for
The biggest issue across this entire category is tweet count limits. Because most GUI tools rely on the official X API's user timeline endpoint under the hood, they inherit its hard cap of 3,200 tweets. Some tools return even fewer.
| Tool | Tweet limit | Cost | Code required | Full engagement data |
|---|---|---|---|---|
| Circleboom | 3,200 | Paid plans from ~$15/mo | No | Yes |
| FollowersAnalysis | 3,200 (more on request) | Paid per report | No | Yes + PDF analytics |
| Lobstr | 800-1,000 | From $0 (100 credits free) | No | Yes |
| Tweet Binder | Varies by plan | Paid plans | No | Yes |
| XTractor | Varies (Chrome extension) | Free tier limited | No | Yes |
These tools are a reasonable choice if you need a quick export of recent tweets from a single account and do not want to touch code. But if you need a complete timeline from a prolific user -- say, an account with 50,000+ tweets -- most of these tools will only deliver the latest 3,200 at best.
Other friction points: paid plans add up if you are pulling data from multiple accounts regularly, rate limits can make large exports slow, and some tools require you to sync your own Twitter session cookies (which introduces security considerations).
Best for: One-off exports of recent tweets, non-technical users, quick competitive snapshots.
Method 3: Open-Source Scrapers (Free, but Fragile)
Open-source Python libraries offer a free, code-based alternative. They scrape Twitter's web interface or internal APIs directly, bypassing the official API and its pricing.
Which scrapers still work in 2026
The landscape has thinned considerably. Here is the current status of the most commonly recommended tools:
Still active: Twikit, TweeterPy, and XActions all have recent commits and active maintainers as of early 2026.
Dead or broken: snscrape has not been updated in over three years and no longer works reliably. twint is fully dead -- the project is archived. twscrape has had no meaningful updates in roughly 11 months and is likely broken against current X endpoints.
If you have seen older guides recommending snscrape or twint, those guides are outdated. Neither tool functions in 2026.
For a deeper comparison of what is still working, see our full breakdown of Twitter scrapers.
The reliability problem
Even the active scrapers carry a fundamental risk: they depend on reverse-engineering X's internal endpoints, which change without notice. When X updates its frontend or rotates authentication tokens, every scraper relying on that mechanism breaks simultaneously. There is no SLA, no deprecation warning, no migration path.
A research team I consulted for in late 2024 built their entire political sentiment pipeline on snscrape. When it stopped working overnight, they lost three weeks of a funded study scrambling for alternatives. Their dataset had a gap they could not backfill.
If you need reliable, repeatable access for production workflows or research with deadlines, scrapers introduce a dependency you cannot control.
Best for: Ad-hoc exploration, budget-constrained projects with flexible timelines, developers comfortable fixing breakage.
Method 4: Using a Twitter Data API (The Developer Approach)
APIs provide the most reliable path to downloading a user's complete tweet history programmatically. You send HTTP requests, receive structured JSON, paginate through results, and process the data however you need.
The official X API
The official X API works, but two things make it painful for this specific use case. First, the user timeline endpoint is capped at 3,200 of a user's most recent tweets -- a documented hard limit that has existed for over a decade. Second, pricing is steep: under the current pay-per-use model, large-scale tweet collection gets expensive quickly.
Third-party Twitter APIs
Third-party APIs access the same public data but often with simpler authentication, more predictable pricing, and fewer restrictions. Sorsa API is one such alternative -- a managed REST API that returns full tweet data with a single API key header, flat per-request pricing, and critically, no 3,200-tweet cap on the /user-tweets endpoint. You can paginate through a user's entire tweet history until you reach their very first post.
For background on how third-party APIs compare to the official X API, see Twitter API Alternatives and our pricing comparison for 2026.
Best for: Developers, data engineers, production pipelines, research projects, anyone who needs structured data at scale.
How to Download All Tweets via API (Step-by-Step with Code)
This section walks through the complete process using Sorsa API, from the first request to a finished CSV file.
Prerequisites
- An API key from the Sorsa dashboard (Starter plan or above)
- Python 3.7+ with the requests library, or Node.js 18+
- The target account's username (e.g., stripe)
New to the API? The quickstart guide covers authentication and your first request in under five minutes.
Fetch the complete tweet timeline
The /user-tweets endpoint returns 20 tweets per page with cursor-based pagination. Loop through pages until next_cursor is absent or null -- that means you have reached the end.
Python:
import requests
import time

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.sorsa.io/v3"

def download_all_tweets(username, max_pages=None):
    """Download every tweet from a public account."""
    all_tweets = []
    cursor = None
    page = 0
    while True:
        body = {"username": username}
        if cursor:
            body["next_cursor"] = cursor
        resp = requests.post(
            f"{BASE_URL}/user-tweets",
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()
        tweets = data.get("tweets", [])
        all_tweets.extend(tweets)
        page += 1
        print(f"Page {page}: {len(tweets)} tweets (total: {len(all_tweets)})")
        cursor = data.get("next_cursor")
        if not cursor:
            print("Done. Reached end of timeline.")
            break
        if max_pages and page >= max_pages:
            print(f"Stopped at {max_pages} pages.")
            break
        time.sleep(0.05)  # Stay well within the 20 req/s limit
    return all_tweets

tweets = download_all_tweets("stripe")
print(f"\nCollected {len(tweets)} tweets from @stripe")
JavaScript:
const API_KEY = "YOUR_API_KEY";
const BASE_URL = "https://api.sorsa.io/v3";

async function downloadAllTweets(username, maxPages = Infinity) {
  const allTweets = [];
  let cursor = null;
  let page = 0;
  while (page < maxPages) {
    const body = { username };
    if (cursor) body.next_cursor = cursor;
    const resp = await fetch(`${BASE_URL}/user-tweets`, {
      method: "POST",
      headers: {
        "ApiKey": API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    });
    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
    const data = await resp.json();
    allTweets.push(...(data.tweets || []));
    page++;
    console.log(`Page ${page}: ${data.tweets?.length || 0} tweets (total: ${allTweets.length})`);
    cursor = data.next_cursor;
    if (!cursor) break;
    await new Promise((r) => setTimeout(r, 50));
  }
  return allTweets;
}

const tweets = await downloadAllTweets("stripe");
console.log(`Collected ${tweets.length} tweets from @stripe`);
Each tweet object includes the full text, publication date, engagement counts (likes, retweets, replies, quotes, views, bookmarks), language, media entities, and the complete author profile. The author profile is embedded in every response at no extra API cost -- you do not need a separate call to /info unless you specifically need the account's bio URLs or pinned tweet IDs before starting.
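Before exporting, it can be worth a quick sanity check on what you collected. These helpers are a minimal sketch that assumes the engagement field names used throughout this guide (likes_count, retweet_count, reply_count):

```python
def top_tweets(tweets, by="likes_count", n=5):
    """Return the n highest-engagement tweets from a collected list."""
    return sorted(tweets, key=lambda t: t.get(by, 0), reverse=True)[:n]

def engagement_totals(tweets):
    """Sum the main engagement counters across a collected timeline."""
    totals = {"likes_count": 0, "retweet_count": 0, "reply_count": 0}
    for t in tweets:
        for key in totals:
            totals[key] += t.get(key, 0)
    return totals
```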
Export tweets to CSV
Once you have collected the tweets, write them to CSV for analysis in Excel, Google Sheets, Pandas, or a database import:
import csv

def export_tweets_csv(tweets, filename="tweets.csv"):
    fields = [
        "id", "created_at", "full_text", "lang",
        "likes_count", "retweet_count", "reply_count",
        "quote_count", "view_count", "bookmark_count",
        "is_reply", "is_quote_status", "username",
    ]
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for t in tweets:
            writer.writerow({
                "id": t["id"],
                "created_at": t["created_at"],
                "full_text": t["full_text"].replace("\n", " "),
                "lang": t.get("lang", ""),
                "likes_count": t.get("likes_count", 0),
                "retweet_count": t.get("retweet_count", 0),
                "reply_count": t.get("reply_count", 0),
                "quote_count": t.get("quote_count", 0),
                "view_count": t.get("view_count", 0),
                "bookmark_count": t.get("bookmark_count", 0),
                "is_reply": t.get("is_reply", False),
                "is_quote_status": t.get("is_quote_status", False),
                "username": t.get("user", {}).get("username", ""),
            })
    print(f"Exported {len(tweets)} tweets to {filename}")

export_tweets_csv(tweets, "stripe_tweets.csv")
Going beyond the timeline: the search method
The /user-tweets endpoint gives you the raw chronological timeline. But sometimes you want something more targeted: only tweets with images, only tweets with 100+ likes, or tweets from a specific date range.
The /search-tweets endpoint accepts the full set of Twitter/X search operators, including from:username. This lets you run queries like:
# Only high-engagement tweets from a specific user
resp = requests.post(
    f"{BASE_URL}/search-tweets",
    headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
    json={
        "query": "from:stripe min_faves:50 -filter:replies",
        "order": "latest",
    },
)

# Tweets from a specific date range
resp = requests.post(
    f"{BASE_URL}/search-tweets",
    headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
    json={
        "query": "from:stripe since:2025-01-01 until:2025-07-01",
        "order": "latest",
    },
)
You can combine from: with engagement filters (min_faves:, min_retweets:), media filters (filter:images, filter:videos), language codes (lang:en), and date ranges (since: / until:). The Search Builder is a free visual tool that lets you assemble these queries without memorizing operator syntax.
Pagination works the same way -- pass next_cursor from each response into the next request until results are exhausted.
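The loop is the same cursor-threading pattern shown earlier, just pointed at /search-tweets. In this sketch, the fetch_page hook is an illustrative addition (not part of the API) so the pagination logic can be exercised without a live key:

```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.sorsa.io/v3"

def search_all(query, fetch_page=None):
    """Exhaust a search query by threading next_cursor through requests."""
    if fetch_page is None:
        def fetch_page(body):
            resp = requests.post(
                f"{BASE_URL}/search-tweets",
                headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
                json=body,
            )
            resp.raise_for_status()
            return resp.json()

    results, cursor = [], None
    while True:
        body = {"query": query, "order": "latest"}
        if cursor:
            body["next_cursor"] = cursor
        data = fetch_page(body)
        results.extend(data.get("tweets", []))
        cursor = data.get("next_cursor")
        if not cursor:
            return results
```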
The 3,200-Tweet Limit: What It Is and Who It Affects
If you have tried downloading tweets before, you have probably hit an invisible ceiling around 3,200 tweets. This is not a bug -- it is a documented limitation of the official Twitter/X API's user timeline endpoint that has existed for over a decade.
The official X API (both legacy v1.1 and current v2) hard-caps the user timeline at 3,200 of a user's most recent posts. Any tool built on top of the official API inherits this limit. That includes most GUI export tools and many code libraries.
Here is how each method stacks up:
| Method | Max tweets retrievable | Notes |
|---|---|---|
| Twitter/X Archive (own account) | All | Free, but 24-48h wait, own account only |
| GUI tools (Circleboom, Lobstr, etc.) | 800 - 3,200 | Depend on official API; paid plans required |
| Open-source scrapers | Varies | Unreliable; may retrieve more or fewer depending on breakage |
| Official X API (timeline endpoint) | 3,200 | Hard limit, documented by X |
| Sorsa API /user-tweets | No limit | Paginate through complete history |
| Sorsa API /search-tweets with from: | No limit | Add date ranges, engagement filters, media filters |
For most casual use cases -- pulling the last few hundred tweets for a content audit or competitor snapshot -- the 3,200 limit does not matter. But for researchers archiving political commentary, analysts building historical engagement datasets, or anyone working with prolific accounts (journalists, politicians, and media brands routinely have 50,000+ tweets), the limit is a dealbreaker.
If you need to go deeper, see our guide on working with historical Twitter data.
Estimating Cost and API Usage
Each Sorsa API request to /user-tweets returns up to 20 tweets per page. Here is what full timeline extraction looks like at different scales:
| Account size | Pages needed | Sorsa API requests | Cost (Pro plan, $0.00199/req) | Time at 20 req/s |
|---|---|---|---|---|
| 1,000 tweets | 50 | 50 | ~$0.10 | ~3 seconds |
| 5,000 tweets | 250 | 250 | ~$0.50 | ~13 seconds |
| 10,000 tweets | 500 | 500 | ~$1.00 | ~25 seconds |
| 50,000 tweets | 2,500 | 2,500 | ~$4.98 | ~2 minutes |
| 100,000 tweets | 5,000 | 5,000 | ~$9.95 | ~4 minutes |
For context: pulling the complete timeline of a 50,000-tweet account costs under $5 on the Pro plan and takes about two minutes. A GUI tool at $15-50/month would give you only the latest 3,200 tweets from that same account.
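The table's arithmetic is easy to reproduce for any account size. Here is a small helper with the page size, rate limit, and Pro-plan price taken from the table above (verify the price against current plan pricing before budgeting):

```python
import math

TWEETS_PER_PAGE = 20          # /user-tweets page size
PRICE_PER_REQUEST = 0.00199   # Pro plan, per the table above
MAX_REQ_PER_SEC = 20

def estimate_extraction(tweet_count):
    """Estimate requests, cost, and wall-clock time for a full timeline pull."""
    pages = math.ceil(tweet_count / TWEETS_PER_PAGE)
    return {
        "requests": pages,
        "cost_usd": round(pages * PRICE_PER_REQUEST, 2),
        "seconds": math.ceil(pages / MAX_REQ_PER_SEC),
    }
```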
If you are pulling timelines from multiple accounts regularly, batch your work to maximize your monthly request quota. The Pro plan includes 100,000 requests per month -- enough to fully archive roughly 40 accounts with 50,000 tweets each, every month. For tips on reducing unnecessary API calls, see Optimizing API Usage.
Frequently Asked Questions
Can I download tweets from a private (protected) account?
No. Private accounts restrict their tweets to approved followers only. No API, scraper, or export tool can access a protected account's tweets unless you are an authenticated follower with an active session. This applies equally to the official X API, Sorsa API, and every other method listed here.
How far back can I go?
With the official X API's timeline endpoint, you are limited to the most recent 3,200 tweets regardless of their age. With Sorsa API's /user-tweets endpoint, you can paginate all the way back to a user's first tweet -- there is no hard limit. For search-based approaches, the /search-tweets endpoint with from:username and since: / until: date operators gives you fine-grained control over the time window.
Do downloaded tweets include images and videos?
Yes. Each tweet object contains an entities array with direct URLs for photos, videos, and GIFs attached to the tweet, plus preview/thumbnail URLs. You can use these URLs to download the actual media files programmatically. For a no-code approach to saving media from individual tweets, try the free Media Downloader tool.
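As a rough sketch of that programmatic step -- note that the entities / media / media_url key names here are illustrative assumptions, so check the actual response schema for the exact fields:

```python
import os
import requests

def extract_media_urls(tweets):
    """Collect direct media URLs from tweet objects (key names illustrative)."""
    urls = []
    for t in tweets:
        for m in t.get("entities", {}).get("media", []):
            if m.get("media_url"):
                urls.append(m["media_url"])
    return urls

def download_media(tweets, out_dir="media"):
    """Fetch each media URL and save it under out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for url in extract_media_urls(tweets):
        path = os.path.join(out_dir, url.rsplit("/", 1)[-1])
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        with open(path, "wb") as f:
            f.write(resp.content)
        saved.append(path)
    return saved
```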
Is it legal to download someone's tweets?
Downloading publicly available tweets for personal analysis, research, or business intelligence is generally permissible. Tweets posted to public accounts are visible to anyone on the internet. That said, how you use the data matters -- redistribution, republishing at scale, or using data for harassment could violate platform terms of service or local regulations (such as GDPR in the EU). If your use case involves large-scale data collection or commercial use, consult legal counsel familiar with your jurisdiction.
Can I download my liked tweets or bookmarks?
Not through any third-party tool or API. Likes and bookmarks are private data tied to your authenticated session. The only way to export your own likes is through the official Twitter archive (Settings > Download an archive of your data), which includes your like history. No external API has access to another user's liked or bookmarked tweets.
What format is best for analysis -- CSV, JSON, or Excel?
It depends on your workflow. CSV is the most universal choice -- it opens in Excel, Google Sheets, Pandas, R, and virtually every data tool. JSON preserves the full nested structure (media entities, quoted tweets, author objects) and is better for programmatic processing or database imports. Excel (.xlsx) is convenient for non-technical stakeholders but adds a layer of conversion. For most use cases, start with CSV. If you need media URLs or nested reply data, use JSON.
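If you go the JSON route, the standard library handles it directly -- this writes whatever the API returned, nesting intact:

```python
import json

def export_tweets_json(tweets, filename="tweets.json"):
    """Write collected tweet objects to a JSON file, preserving nesting."""
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(tweets, f, ensure_ascii=False, indent=2)
    return filename
```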
How do I download tweets containing specific hashtags or keywords?
Use the search-based approach instead of the timeline endpoint. Pass a query with the hashtag or keyword (and optionally a from:username filter) to the /search-tweets endpoint. For example, from:stripe #payments since:2025-01-01 returns only tweets from @stripe that contain the #payments hashtag posted since January 2025. The full list of available search filters is in our search operators reference.
Getting Started
If you want to test the /user-tweets endpoint before writing any code, the API Playground lets you run requests against any public account directly in your browser -- no API key needed for the playground.
When you are ready to build, grab an API key from the dashboard and follow the quickstart guide to make your first authenticated request. The Starter plan (10,000 requests) is enough to fully download the timeline of most accounts, and you can scale from there.
Last verified: April 2026. If you spot anything outdated, reach out at contacts@sorsa.io.