Extract Twitter Followers: A Practical Guide to Every Method in 2026
Key Takeaway: You can extract a full Twitter/X follower list using four methods: a third-party API (fastest, richest data, fully automatable), a browser extension (no-code but slow and limited), a DIY headless scraper (free but fragile and risky), or a manual data export from X (incomplete, no follower details). The right choice depends on how many followers you need, what data fields matter, and whether you plan to automate.
Twitter does not offer a "Download Followers" button. The platform's web interface lets you scroll through followers one screen at a time, but there is no native way to export a structured list with bios, locations, follower counts, or any other metadata. For anyone doing audience research, lead generation, or competitive analysis, this is a dead end.
The official X API technically supports follower lookups, but since the 2023 pricing overhaul, it operates on a pay-per-use model with no free tier and aggressive rate limits that make large extractions either slow or expensive. That gap created the market for alternative tools. At Sorsa API, we built one of them: a REST API that returns up to 200 full follower profiles per request, with flat per-request pricing and no OAuth setup. But Sorsa is not the only option, and different situations call for different tools.
This guide covers every extraction method that works right now, with honest trade-offs for each. I have been working with Twitter's data layer since the v1.1 era, and I have helped dozens of teams set up follower extraction pipelines after X locked down its API. The goal here is to help you pick the right approach for your specific use case.
Table of Contents
- Why extract followers in the first place?
- Method comparison: which approach fits your use case?
- Method 1: Third-party API (recommended for scale)
- Method 2: Browser extensions (no-code, small lists)
- Method 3: DIY scrapers with Selenium or Playwright
- Method 4: Manual export via X's data download
- How much does it cost to extract 50,000 followers?
- What to do with the data once you have it
- Risks and safety by method
- FAQ
Why Extract Followers in the First Place?
A follower list is not a vanity metric. It is a dataset of people who actively opted in to hear from a specific account. That makes it one of the most targeted audience signals available on any social platform.
Lead generation. Export followers from a competitor's account, filter by bio keywords like "founder" or "VP Marketing," and you have a warm prospect list that took minutes to build instead of weeks.
Audience research. Understanding who follows an account tells you about their content strategy's actual reach. A SaaS brand with 50K followers that are mostly bots is in a different position than one with 50K followers who are active developers with 1K+ followers of their own.
Influencer vetting. Before paying for a sponsored post, extract the influencer's follower list and check what percentage are real, active accounts with relevant bios. I helped a DTC brand avoid a $15,000 influencer deal after we found that 60% of the influencer's followers were inactive accounts created within the same two-week window.
Competitive intelligence. Pull follower lists from two or three competitors, find the overlap (users who follow all of them), and you have identified the most engaged people in your niche. These are the accounts most likely to convert because they are already paying attention to your space.
Method Comparison: Which Approach Fits Your Use Case?
Before diving into each method, here is how they stack up on the dimensions that actually matter:
| | Third-Party API | Browser Extension | DIY Scraper (Selenium) | Manual X Export |
|---|---|---|---|---|
| Max followers per run | Unlimited (paginated) | 200 - 50,000 depending on tool | Unlimited in theory, ~5K - 10K practical | Your own followers only, no details |
| Data fields returned | 15 - 20+ (bio, location, counts, verified status, URLs) | 10 - 26 depending on extension | Varies, often just name + handle | User IDs only (no bios, no counts) |
| Coding required | Yes (basic HTTP requests) | No | Yes (Python + browser automation) | No |
| Account risk | None (uses API key, not your X session) | Low to moderate (uses your session cookies) | High (mimics browser behavior, detectable) | None |
| Speed (10K followers) | ~50 requests, under 30 seconds | 30 - 90 minutes | 1 - 4 hours | N/A |
| Works on any public account | Yes | Yes | Yes | No (your own account only) |
| Automatable / schedulable | Yes | Limited | Yes, but fragile | No |
| Cost | Pay per request (from ~$0.10 for 10K followers) | Free to $9.99/mo | Free (but costs dev time) | Free |
The short version: if you need more than a few hundred followers, need rich profile data, or plan to run extractions regularly, an API is the most reliable path. Extensions work for quick one-off exports of small lists. DIY scrapers are a false economy for most teams. Manual export is a last resort.
Method 1: Third-Party API
An API-based approach sends HTTP requests to a service that returns structured JSON with follower data. No browser sessions, no cookies, no risk to your X account. You authenticate with an API key, not your Twitter login.
Sorsa API's /followers endpoint returns up to 200 full user profiles per request. Each profile includes username, display name, bio, location, follower count, following count, tweet count, verified status, profile image URL, bio URLs, and more. You paginate through the full list using cursors.
Quick start: fetch the first page
```shell
curl "https://api.sorsa.io/v3/followers?username=stripe" \
  -H "ApiKey: YOUR_API_KEY"
```
That single request returns up to 200 follower profiles with full metadata. No OAuth flow, no app approval, no bearer tokens. One header, one parameter.
Python: extract and save a complete follower list
```python
import requests
import csv
import time

API_KEY = "YOUR_API_KEY"
TARGET = "stripe"

def extract_followers(username, max_pages=500):
    followers = []
    cursor = None
    for page in range(max_pages):
        params = {"username": username}
        if cursor:
            params["next_cursor"] = cursor
        resp = requests.get(
            "https://api.sorsa.io/v3/followers",
            headers={"ApiKey": API_KEY},
            params=params,
        )
        resp.raise_for_status()
        data = resp.json()
        batch = data.get("users", [])
        followers.extend(batch)
        print(f"Page {page + 1}: fetched {len(batch)} profiles "
              f"(total: {len(followers)})")
        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.05)  # stay within the 20 req/s limit
    return followers

# Extract
followers = extract_followers(TARGET)
print(f"\nDone. {len(followers)} followers extracted.")

# Save to CSV
fields = ["username", "display_name", "description", "location",
          "followers_count", "followings_count", "tweets_count",
          "verified", "created_at"]
with open(f"{TARGET}_followers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for u in followers:
        u["description"] = (u.get("description") or "").replace("\n", " ")
        writer.writerow(u)

print(f"Saved to {TARGET}_followers.csv")
```
This script handles pagination automatically, respects the rate limit of 20 requests per second, and outputs a clean CSV ready for Google Sheets, Excel, or a CRM import.
How fast is this? An account with 10,000 followers requires 50 API requests. At Sorsa's rate limit, that completes in under 30 seconds. An account with 100,000 followers takes 500 requests and finishes in about a minute. Even a million-follower account is only 5,000 requests, which runs in roughly four to five minutes.
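That throughput math is easy to sanity-check. A minimal sketch of the rate-limit floor (this excludes per-request network latency, so real runs land somewhat above these minimums, which is why the prose above quotes looser numbers):

```python
import math

def extraction_floor(follower_count, per_page=200, rate_limit_rps=20):
    """Requests needed, and the minimum seconds at the rate limit."""
    requests_needed = math.ceil(follower_count / per_page)
    return requests_needed, requests_needed / rate_limit_rps

print(extraction_floor(10_000))     # (50, 2.5)
print(extraction_floor(1_000_000))  # (5000, 250.0)
```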
JavaScript: same extraction in Node.js
```javascript
const API_KEY = "YOUR_API_KEY";

async function extractFollowers(username, maxPages = 500) {
  const followers = [];
  let cursor = null;
  for (let page = 0; page < maxPages; page++) {
    const params = new URLSearchParams({ username });
    if (cursor) params.set("next_cursor", cursor);
    const resp = await fetch(
      `https://api.sorsa.io/v3/followers?${params}`,
      { headers: { ApiKey: API_KEY } }
    );
    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
    const data = await resp.json();
    followers.push(...(data.users || []));
    cursor = data.next_cursor;
    if (!cursor) break;
    await new Promise(r => setTimeout(r, 50));
  }
  return followers;
}
```
Getting the following list (who an account follows)
Swap the endpoint from /followers to /follows. Everything else stays identical. The /follows endpoint returns the same user profile structure with the same 200-per-page pagination.
Following lists are often more revealing than follower lists. A startup founder's following list shows which investors, competitors, and thought leaders they track. An influencer's following list reveals their information sources.
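Because the two endpoints share a request and response shape, the pagination logic is worth factoring out. A sketch under that assumption, splitting the cursor loop from the HTTP call so the loop itself is testable (`paginate` and `sorsa_page` are illustrative helper names, not part of the API):

```python
import time

def paginate(fetch_page, max_pages=500):
    """Collect users across cursor-paginated pages.

    fetch_page(cursor) must return (list_of_users, next_cursor_or_None).
    """
    users, cursor = [], None
    for _ in range(max_pages):
        batch, cursor = fetch_page(cursor)
        users.extend(batch)
        if not cursor:
            break
    return users

def sorsa_page(endpoint, username, api_key):
    """Build a fetch_page for /followers or /follows (identical shapes)."""
    def fetch(cursor):
        import requests  # imported here so paginate() stays dependency-free
        params = {"username": username}
        if cursor:
            params["next_cursor"] = cursor
        resp = requests.get(
            f"https://api.sorsa.io/v3/{endpoint}",
            headers={"ApiKey": api_key},
            params=params,
        )
        resp.raise_for_status()
        time.sleep(0.05)  # stay within the 20 req/s limit
        data = resp.json()
        return data.get("users", []), data.get("next_cursor")
    return fetch

# following = paginate(sorsa_page("follows", "stripe", "YOUR_API_KEY"))
```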
Verified followers only
If you only care about verified accounts (Blue, Gold, or Gray checkmarks), use the /verified-followers endpoint. Same request format, same response structure, but pre-filtered to verified users only. Useful for isolating high-profile followers without post-processing.
Method 2: Browser Extensions
Browser extensions like X Follow Exporter and XExporter run inside Chrome, using your active X session to scroll through the follower list and capture data. No coding required.
How they work: You visit a profile page, click the extension icon, and it automates the scrolling process, extracting profile data as it loads. Results export to CSV, JSON, or Excel.
Practical limits:
- Speed. Extensions simulate browser scrolling with built-in delays to avoid triggering X's rate limits. Extracting 10,000 followers takes 30 to 90 minutes, compared to under 30 seconds via API.
- Volume caps. Free tiers typically limit exports to 200 records. Paid tiers (around $9.99/month) cap at 50,000 per export. For accounts with hundreds of thousands of followers, you hit a wall.
- Session dependency. Extensions use your logged-in X session. If X detects unusual scrolling patterns, it may temporarily throttle your account or serve CAPTCHAs. Your account is the one at risk, not a separate API key.
- No automation. You cannot schedule recurring extractions or integrate with a data pipeline. Each export is a manual, browser-dependent process.
When extensions make sense: You need a one-time export of a few thousand followers, you do not write code, and you do not need to automate. For a marketing manager who needs to pull 500 followers from a niche account once a quarter, an extension is perfectly fine.
When they do not: Recurring extractions, lists over 50K, integration with CRMs or databases, or any workflow that runs without manual intervention.
Method 3: DIY Scrapers with Selenium or Playwright
Open-source scrapers like the ones on GitHub (Chemsse57/TwitterScraper, aimadnet/X-Twitter-Followers-Scraper) use browser automation to log into X, navigate to a follower page, and scroll through the list while parsing HTML.
The appeal is obvious: it is free, open-source, and gives you full control.
The reality is less appealing:
- Fragile by design. These scrapers parse X's HTML structure, which changes without notice. Most GitHub repos in this space have not been updated in one to three years. If X changes a CSS class name or DOM structure, the scraper breaks silently or returns garbage data.
- Authentication risk. Every scraper requires your X session cookie or login credentials. You are handing your account access to a script that automates behavior X explicitly prohibits in its Terms of Service. Account suspensions are not theoretical; they happen.
- Slow. Selenium-based scrapers load full browser pages, wait for JavaScript to render, then parse the DOM. Extracting 10,000 followers can take one to four hours depending on scroll speed and delays.
- Maintenance cost. Even if a scraper works today, someone on your team needs to fix it when it breaks next month. The ongoing engineering time often exceeds the cost of a paid API.
I have seen teams spend 40+ hours building and maintaining a Selenium-based Twitter scraper only to abandon it after X's third DOM update in six months. The upfront cost is zero, but the total cost is rarely zero.
When DIY scrapers make sense: You are a developer learning web scraping as a skill, you need a tiny one-off extraction, or you are working under constraints where no paid tool is an option.
Method 4: Manual Export via X's Data Download
X lets you request an archive of your own account data via Settings > Your Account > Download an archive of your data. This archive includes a followers.js file with the user IDs of your followers, and a following.js file for accounts you follow.
What you get: User IDs in JSON format. No usernames, no bios, no follower counts, no profile data. Just numeric IDs.
What you do not get: Any information about other accounts' followers. This method only works for your own account. And even for your own followers, you would need to resolve each user ID to a username and profile, which requires additional API calls anyway. The ID conversion endpoint can help with that step if you go this route.
The conversion workaround: Several Reddit users have shared workflows for converting the followers.js file to CSV using online JSON-to-CSV converters after stripping the leading window.YTD.follower.part0 = wrapper. It works, but the output is still just IDs.
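That stripping step is simple enough to script instead of pasting into an online converter. A sketch, assuming the commonly reported archive format (a `window.YTD.follower.part0 =` assignment wrapping a JSON array of `{"follower": {"accountId": ...}}` records; check your own export, since the nested field names may differ between archive versions):

```python
import csv
import json

def follower_ids_from_archive(js_text):
    """Parse followers.js from an X data archive into a list of user IDs."""
    # Everything after the first "=" is a plain JSON array
    _, _, payload = js_text.partition("=")
    records = json.loads(payload)
    return [r["follower"]["accountId"] for r in records]

def ids_to_csv(ids, path):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id"])
        writer.writerows([i] for i in ids)

# with open("followers.js", encoding="utf-8") as f:
#     ids_to_csv(follower_ids_from_archive(f.read()), "follower_ids.csv")
```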
When this makes sense: You are leaving X entirely and want a personal backup of who followed you. That is about it.
How Much Does It Cost to Extract 50,000 Followers?
Cost is where the methods diverge most dramatically. Here is what 50,000 followers actually costs with each approach:
| Method | Cost for 50K followers | Notes |
|---|---|---|
| Sorsa API (Pro plan) | ~$0.50 | 250 requests at $0.00199 each. Pro plan is $199/mo for 100K requests, so this uses 0.25% of the monthly quota. |
| Sorsa API (Starter plan) | ~$1.23 | 250 requests at $0.0049 each. Starter is $49/mo for 10K requests. |
| Chrome extension (paid tier) | $9.99/mo | Flat monthly fee regardless of volume, but capped at 50K per export. |
| DIY Selenium scraper | $0 + dev time | Free in direct cost, but 4 - 10 hours of setup, testing, and ongoing maintenance. At $50/hr for a developer, that is $200 - $500 in labor. |
| Manual X export | N/A | Cannot extract other accounts' followers. Own followers come without profile data. |
For a single extraction of 50,000 followers, an API call through Sorsa costs less than a cup of coffee. A browser extension costs a monthly subscription. A DIY scraper costs developer hours.
The math gets more decisive at scale. Extracting followers from 20 competitor accounts, each with 25,000 followers, means 500,000 profiles. With Sorsa API on the Enterprise plan ($899/mo), that is 2,500 requests at $0.0018 each: about $4.50 total. Trying that with a browser extension would take days of manual work. Trying it with Selenium would require a multi-account rotation setup to avoid detection.
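The per-extraction math is mechanical enough to script for your own volumes. A sketch reproducing the table's figures (the per-request prices are the plan rates quoted above; `per_page=200` matches the API's page size):

```python
import math

def extraction_cost_usd(total_followers, price_per_request, per_page=200):
    """Requests needed (200 profiles per page) times the per-request price."""
    return math.ceil(total_followers / per_page) * price_per_request

print(extraction_cost_usd(50_000, 0.00199))   # Pro plan: ~$0.50
print(extraction_cost_usd(500_000, 0.0018))   # Enterprise: ~$4.50
```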
For a full breakdown of how Twitter API pricing works across providers, including the official X API's pay-per-use model, see our dedicated pricing guide.
What to Do with the Data Once You Have It
Extracting a follower list is step one. The value comes from what you do with it.
Filter by profile criteria
Every follower profile from an API extraction includes bio text, location, follower count, and tweet count. You can segment the raw list immediately:
```python
# Find high-value accounts: 1K+ followers, active, with a website in bio
qualified = [
    u for u in followers
    if u.get("followers_count", 0) >= 1000
    and u.get("tweets_count", 0) >= 100
    and u.get("bio_urls")
]
print(f"Qualified leads: {len(qualified)} / {len(followers)}")
```
Common filters for lead generation: bio keywords matching job titles ("founder," "CTO," "head of growth"), location strings matching target markets, minimum follower count to filter out bots, and presence of a website URL in the bio.
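Those filters compose naturally into a single predicate. A sketch using the field names from the extraction script above (the keyword and location lists are hypothetical placeholders, and the threshold is an arbitrary choice; tune all three to your market):

```python
TITLE_KEYWORDS = ("founder", "cto", "head of growth")  # hypothetical targets
TARGET_LOCATIONS = ("london", "berlin", "amsterdam")   # hypothetical markets

def is_qualified_lead(profile, min_followers=200):
    bio = (profile.get("description") or "").lower()
    loc = (profile.get("location") or "").lower()
    return (
        any(k in bio for k in TITLE_KEYWORDS)
        and any(city in loc for city in TARGET_LOCATIONS)
        and profile.get("followers_count", 0) >= min_followers
        and bool(profile.get("bio_urls"))  # has a website in the bio
    )

# leads = [u for u in followers if is_qualified_lead(u)]
```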
Find audience overlap between competitors
Pull follower lists from multiple competitors and find users who follow two or more of them. These people are the most engaged audience in your space:
```python
from collections import Counter

competitors = ["competitor_a", "competitor_b", "competitor_c"]
all_ids = []
for handle in competitors:
    handle_followers = extract_followers(handle, max_pages=25)
    all_ids.extend(u["id"] for u in handle_followers)

counts = Counter(all_ids)
overlap = {uid: c for uid, c in counts.items() if c >= 2}
print(f"Users following 2+ competitors: {len(overlap)}")
For a deeper dive into this workflow, see the competitor analysis and target audience discovery guides in our docs.
Feed into CRMs and ad platforms
A CSV of filtered followers imports directly into HubSpot, Pipedrive, or Salesforce as a lead list. The same data can build Custom Audiences on ad platforms for targeted campaigns. One fintech client I worked with extracted 8,000 followers from a competitor's account, filtered down to 400 profiles with "CFO" or "controller" in their bios and European location hints, and imported the result into Pipedrive. Their outbound reply rate on that segment was 14%, roughly three times their usual cold outreach numbers.
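If your CRM importer expects its own column names, a small mapping layer keeps the export reusable across tools. A sketch (the CRM column names below are hypothetical; match them to your importer's template):

```python
import csv

CRM_COLUMNS = {  # extracted field -> CRM column (hypothetical names)
    "username": "Twitter Handle",
    "display_name": "Full Name",
    "description": "Notes",
    "location": "City/Region",
}

def write_crm_csv(followers, path):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(CRM_COLUMNS.values())
        for u in followers:
            writer.writerow([u.get(field, "") for field in CRM_COLUMNS])
```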
Map follower geography
If you need country-level data beyond what the location field provides, the audience geography workflow uses the /about endpoint to resolve each follower's country.
Risks and Safety by Method
Not all extraction methods carry the same risk. Here is what you are actually exposing:
API-based extraction (Sorsa, other providers): Zero risk to your X account. The API authenticates via its own key, completely separate from your Twitter login. You never share cookies or credentials. If you exceed the API's rate limit, you get a 429 response and retry after a second. No account flags, no CAPTCHAs, no suspensions.
Browser extensions: Moderate risk. Extensions operate through your logged-in X session. Most reputable extensions add delays between requests to mimic human scrolling, but unusual activity patterns can still trigger rate limiting on your account. Your session cookie is accessible to the extension code.
DIY scrapers (Selenium/Playwright): High risk. These tools automate a browser using your credentials or session cookies. X actively detects automated browser behavior and can restrict or suspend accounts that trigger anti-scraping defenses. Running a scraper on a server 24/7, as one Reddit user described wanting to do, is almost certain to result in account action.
Manual X export: No risk. You are using X's own data download feature with your own account.
Legal context. Public follower data is, by definition, publicly visible on X. Extracting and analyzing it for research, marketing, or competitive intelligence is standard practice. The key lines not to cross: do not resell raw scraped data, do not send unsolicited bulk messages to extracted followers, and comply with GDPR or equivalent regulations if handling data from EU users.
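For the API path, the 429-and-retry behavior described above takes only a few lines to make robust. A sketch (retry counts and backoff times are arbitrary choices, not provider recommendations; `send` is injected so the logic works with any HTTP client):

```python
import time

def get_with_retry(send, max_retries=5):
    """Call send() until it returns a non-429 response.

    send is any zero-argument callable returning an object with
    .status_code and .headers -- e.g. a requests.get closure.
    """
    for attempt in range(max_retries):
        resp = send()
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if the server sends it; else back off exponentially
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"still rate-limited after {max_retries} attempts")

# Usage (assuming the requests library):
# resp = get_with_retry(
#     lambda: requests.get("https://api.sorsa.io/v3/followers",
#                          headers={"ApiKey": API_KEY},
#                          params={"username": "stripe"}))
```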
FAQ
Can I extract followers from any public account, or only my own?
Any public account. If a profile's follower list is visible when you visit it on x.com, an API or extension can extract it. The only exception is protected (private) accounts, where follower lists are hidden from everyone except approved followers.
How many data fields do I get per follower?
With an API like Sorsa, each follower profile includes over 15 fields: user ID, username, display name, bio, location, follower count, following count, tweet count, media count, verified status, account creation date, profile image URL, banner URL, bio URLs, pinned tweet IDs, and flags like protected and can_dm. See the full response format in the docs.
How do I extract the following list (accounts someone follows) instead of followers?
With the Sorsa API, swap /followers for /follows in the endpoint URL. The request parameters, response format, and pagination work identically. Following lists are often smaller than follower lists and extract faster. Full details in the /follows endpoint reference.
What about extracting only verified followers?
Sorsa API has a dedicated /verified-followers endpoint that returns only followers with Blue, Gold, or Gray verification badges. Same request format, pre-filtered results. Useful for identifying high-profile or premium followers without filtering the full list yourself.
Can I extract followers without writing code?
Yes, in two ways. Browser extensions like X Follow Exporter work entirely through a Chrome UI with no coding. Alternatively, Sorsa API offers a free API Playground and a Recent Followers tool where you can preview follower data through a web interface before writing any code.
How often should I re-extract a follower list?
Depends on the use case. For competitive intelligence, monthly extractions capture meaningful audience shifts. For active lead generation campaigns, every two to four weeks keeps the pipeline fresh. For one-time research projects, a single extraction is enough. Avoid daily full re-extractions unless you have a specific monitoring need; incremental checks are more efficient.
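Because the endpoint generally returns newest followers first, an incremental check can stop paginating as soon as a page overlaps with followers captured on the previous run. A sketch over already-fetched pages (`new_followers_since` is an illustrative helper, not an API feature):

```python
def new_followers_since(pages, known_ids):
    """Collect followers from newest-first pages until hitting known IDs."""
    fresh = []
    for page in pages:
        unseen = [u for u in page if u["id"] not in known_ids]
        fresh.extend(unseen)
        if len(unseen) < len(page):
            break  # the rest of the list was already captured last run
    return fresh
```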
Is the follower order consistent? Will I get the same results each time?
The /followers endpoint returns followers in the order X provides them, which is generally newest followers first (reverse chronological). This means the first pages of results always contain the most recently acquired followers. The order is consistent across requests, but the actual list changes as people follow and unfollow.
What happens with suspended or deactivated follower accounts?
The follower count shown on a profile is a real-time counter maintained by X. The extractable list may differ slightly because suspended, deactivated, or recently removed accounts may still be counted in the total but not appear in the list. This is platform-level behavior, not specific to any extraction tool.
Can I use extracted follower data for Twitter ad targeting?
Yes. Export your filtered follower list to CSV, then upload it as a Custom Audience in X's ad platform (or any other ad platform that accepts email/username-based audiences). Pair this with engagement data from the engagement calculator to prioritize high-engagement segments.
Getting Started
If you want to test follower extraction before committing to a method:
- Preview without code. Open the Recent Followers tool, enter any public handle, and see the last 20 followers with full profile data. No API key needed.
- Try the API playground. The Sorsa API Playground lets you test the /followers and /follows endpoints through a browser UI. Useful for seeing the exact JSON response format before writing code.
- Get an API key. When you are ready to scale, grab a key from the dashboard. The quickstart guide walks through authentication and your first request in under five minutes.
- Read the docs. The full followers and following documentation covers pagination patterns, filtering strategies, and production-scale extraction workflows.
Disclosure: Sorsa API is our product. We have aimed to cover all methods fairly in this guide, but we obviously think the API approach is the strongest for most use cases. We recommend testing any solution against your own requirements before committing.
Last verified: April 2026