Imagine your social media feed is a crowded room. Everyone’s talking. But only 10 people are shouting - and everyone else is repeating what they say. That’s not a metaphor. That’s what’s happening on Twitter, Facebook, and YouTube right now. A tiny fraction of accounts - less than 0.1% of users - are responsible for nearly 80% of the false or misleading news that goes viral. These are the news superspreaders, and they’re not just sharing rumors. They’re setting the agenda for elections, public health, and cultural debates across the globe.
Who Are These Superspreaders?
News superspreaders aren’t bots. They’re real people. And according to research from Indiana University’s Observatory on Social Media (OSoMe), they look a lot like your neighbor: middle-aged, politically engaged, and deeply distrustful of mainstream media. A 2022 study in Science found that 56% of these accounts belonged to women, with a median age of 52. Nearly 8 out of 10 identified as Republican. They don’t post once a day. They post 17 times a day - manually, consistently, and with the same handful of low-credibility sources. These aren’t fringe trolls. They’re regular users who’ve built large followings by feeding their audiences content that confirms existing beliefs. Their go-to sources? Websites like InfoWars, The Gateway Pundit, and Natural News - all flagged by Media Bias/Fact Check as “conspiracy” or “fake news.” When they share a link, it doesn’t just disappear into the void. It gets picked up by friends, family, and strangers who trust them. Then it spreads again. And again. Until it’s trending.
The FIB Index: How Researchers Track the Untrackable
How do you find someone who’s spreading misinformation when they’re not breaking any rules? That’s where the False Information Broadcaster (FIB) index comes in. Developed by OSoMe, the FIB index doesn’t look at how many likes a post gets. It looks at who shares what, and how often. The system tracks accounts that repeatedly share links to sources rated as low-credibility. It doesn’t care if the post is funny or angry. It cares whether the same 10 accounts are sharing the same 5 fake news sites every single day. Then it measures how much those posts get reshared. One account might post 10 low-credibility links. But if those links get retweeted by 10,000 others, that account gets a high FIB score. That’s how researchers identified the “disinformation dozen” - 12 accounts that generated 65% of all anti-vaccine content on social media during the pandemic. The FIB index isn’t perfect. It takes 48 hours of computing power on a 32-core server just to generate one monthly report. But it’s the most accurate tool we have. And it’s showing something terrifying: the same small group of people are driving the narrative across continents.
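To make the resharing logic concrete, here is a minimal Python sketch of an h-index-style score in the spirit of the FIB index as the paragraph above describes it: given the reshare counts of one account’s low-credibility posts, the score is the largest f such that at least f of those posts were each reshared at least f times. This is a simplified reading, not OSoMe’s actual implementation; the function name and example numbers are illustrative.

```python
def fib_score(reshare_counts: list[int]) -> int:
    """Illustrative h-index-style score in the spirit of the FIB index.

    reshare_counts holds, for one account, the number of reshares of
    each low-credibility post it made. The score is the largest f such
    that at least f posts were each reshared at least f times, so it
    rewards accounts whose low-credibility posts keep getting amplified.
    """
    score = 0
    for rank, count in enumerate(sorted(reshare_counts, reverse=True), start=1):
        if count >= rank:
            score = rank  # the rank-th best post still has >= rank reshares
        else:
            break
    return score

# A handful of heavily reshared low-credibility posts beats a large
# number of posts that nobody amplifies.
print(fib_score([120, 45, 9, 3, 3, 1]))   # 3
print(fib_score([1, 1, 1, 1, 1, 1, 1]))   # 1
```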
Platform Failures and Political Reversals
In 2020, Twitter started cracking down. They suspended 30% of the accounts flagged by OSoMe as superspreaders. For a while, misinformation dropped. Then Elon Musk bought the company in October 2022. Within months, 7 of the “disinformation dozen” were reinstated. Robert F. Kennedy Jr.’s account, once suspended for spreading vaccine lies, started generating over 2.3 million shares of anti-vaccine content in just the first quarter of 2024. Twitter’s policy changes didn’t stop there. In November 2023, they eliminated third-party fact-checking. That meant no more warning labels on false claims. No more links to credible sources. Just raw, unchecked content. OSoMe’s December 2023 report showed a 37% spike in low-credibility sharing after that change. Facebook and Instagram aren’t much better. While Twitter’s superspreaders are mostly political, Facebook’s are often tied to verified pages - local influencers, church groups, or conspiracy forums that look legitimate. YouTube’s algorithm doesn’t care about truth. It cares about watch time. So if a video titled “The Truth About 5G and Cancer” keeps people watching for 12 minutes, it gets pushed to millions - even if every claim is false.
Why Do People Believe Them?
Here’s the uncomfortable truth: most people who follow superspreaders don’t think they’re consuming misinformation. They think they’re getting the “real story.” A 2023 OSoMe user study found that 68% of viewers engaging with superspreader content didn’t recognize it as false. Comments like “They tell the truth mainstream media won’t” appeared over and over. This isn’t about ignorance. It’s about identity. These accounts don’t just share news. They offer belonging. They say: “You’re not crazy. The system is lying to you. I’m the one who sees through it.” That’s powerful. And it’s why blocking one account doesn’t work. If you block one, five more pop up with the same content, just slightly rewritten. Reddit users have tried to fight back. In r/MediaBias, people report blocking 50+ accounts - only to find they all repost from the same five sources. Trustpilot reviews of fact-checking tools are full of complaints: “They never catch the big ones before it goes viral.”
What’s Being Done - and Why It’s Not Enough
Fact-checkers like Snopes and FactCheck.org process about 12,000 claims a month. Superspreaders share 2.4 million low-credibility items every day. It’s like trying to mop up a flood with a sponge. Some platforms have tried labeling content. MIT found that adding warning labels reduced sharing by 29%. But it also lowered engagement with real news by 18%. People started avoiding anything with a label - even accurate stories - because they saw it as “censorship.” Other solutions are more technical. Graphika and Logically sell superspreader detection tools to corporations for up to $500,000 a year. But their false positive rate? 22%. That means more than one in five legitimate accounts gets flagged as dangerous. The most promising fix? Reduce algorithmic amplification. In test environments, when platforms stopped pushing low-credibility content to large audiences, superspreader reach dropped by 42%. But that hurts engagement - and engagement is what platforms sell to advertisers. So they don’t do it.
The Bigger Picture: Who Benefits?
The global misinformation economy is worth $7.3 billion a year. Political misinformation makes up 58% of it. That means billions are being spent to manipulate voters, scare people about vaccines, and erode trust in institutions. The superspreaders aren’t just random actors. They’re part of a system - one that includes media outlets, political operatives, and even foreign actors. In 2022, Media Matters found that mainstream news outlets were also superspreaders - not by creating lies, but by repeating them without correction. When Trump tweeted false claims about election fraud, news accounts passed them along without correction two times out of three. That gave those lies legitimacy. The media didn’t create the virus - but they became its carrier.
What You Can Do
You can’t shut down the superspreaders. But you can stop being their amplifier.
- Check who’s sharing the content before you retweet. Are they one of the same 10 accounts that keep popping up?
- Don’t engage with outrage. Superspreaders thrive on comments, shares, and replies. Silence kills their reach.
- Use tools like Media Bias/Fact Check to vet sources before sharing (a minimal vetting sketch follows this list).
- Report accounts that repeatedly share known low-credibility sites - even if they’re “just sharing their opinion.”
- Teach others. A Stanford study found it takes 12 hours of training to help people recognize superspreader patterns with 75% accuracy. That’s not a lot of time - but it’s a start.
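For the source-vetting step above, here is a minimal Python sketch that checks a link’s domain against a locally maintained list of low-credibility domains before you share it. The hard-coded domain list and the is_low_credibility helper are illustrative assumptions; a real workflow would draw on a maintained ratings source such as Media Bias/Fact Check rather than a static set.

```python
from urllib.parse import urlparse

# Illustrative, hand-maintained list of domains rated low-credibility.
# A real workflow would refresh this from a maintained ratings source
# such as Media Bias/Fact Check instead of hard-coding it.
LOW_CREDIBILITY_DOMAINS = {
    "infowars.com",
    "thegatewaypundit.com",
    "naturalnews.com",
}

def is_low_credibility(url: str) -> bool:
    """Return True if the URL points at a listed domain or a subdomain of one."""
    host = urlparse(url).netloc.lower().split(":")[0]  # strip any port
    if host.startswith("www."):
        host = host[4:]  # normalize a leading "www."
    return any(host == domain or host.endswith("." + domain)
               for domain in LOW_CREDIBILITY_DOMAINS)

# Vet a link before resharing it.
print(is_low_credibility("https://www.infowars.com/some-story"))  # True
print(is_low_credibility("https://example.org/news/article"))     # False
```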
What’s Next?
By 2027, AI-generated content will make up 65% of superspreader material, according to Gartner. That means fake videos, deepfake audio, and automated text that looks human. Detection will get harder. The EU’s Digital Services Act, which took effect in November 2023, requires platforms to report on systemic risks from misinformation. By January 2026, they’ll have to publish “superspreader transparency reports.” That’s a start. But without enforcement, it’s just paperwork. Researchers at Indiana University are building real-time superspreader detection tools expected to launch in late 2025. If they work, they could flag dangerous accounts before they go viral. But they’ll need political will to act on the results. The truth is simple: we’re not fighting misinformation. We’re fighting a network. And networks only break when the people in them decide to stop feeding them.
What exactly is a news superspreader?
A news superspreader is a social media account that repeatedly shares low-credibility information - like fake news or conspiracy theories - and causes it to spread widely. Unlike bots, these are often real people who manually post content multiple times a day. Research shows that a tiny fraction of accounts (as few as 10) can be responsible for one-third of all false information circulating online.
How do researchers identify superspreaders?
Researchers use the False Information Broadcaster (FIB) index, developed by Indiana University’s Observatory on Social Media. The FIB index tracks accounts that consistently share links to sources rated as low-credibility by Media Bias/Fact Check. It doesn’t measure likes or shares directly - it measures how often an account shares these sources and how much those posts get amplified by others. High FIB scores mean the account is a key node in the misinformation network.
Are superspreaders mostly bots or real people?
Most are real people. Studies show they’re not automated. They manually post 17 times a day on average - far more than the typical user, who shares less than one low-credibility post per day. They’re often older adults, women, and politically conservative individuals who distrust mainstream media. Their power comes from trust, not technology.
Why did Twitter stop suspending superspreaders after Elon Musk took over?
After Elon Musk acquired Twitter (now X) in October 2022, he reinstated many previously banned accounts - including 7 of the 12 “disinformation dozen” accounts. Musk claimed he wanted to promote free speech, but the result was a sharp rise in misinformation. The platform also ended third-party fact-checking in November 2023, removing a key tool for identifying false content.
Can fact-checking organizations keep up with superspreaders?
No. Fact-checkers like Snopes and FactCheck.org process about 12,000 claims per month. Meanwhile, superspreaders share over 2.4 million low-credibility items every single day. The scale is impossible to match. Even the best fact-checking tools can’t catch everything before it goes viral.
What’s the most effective way to reduce superspreader impact?
The most effective method is reducing algorithmic amplification. Tests show that when platforms stop pushing low-credibility content to large audiences, superspreader reach drops by 42%. But platforms avoid this because it lowers engagement - and engagement drives ad revenue. Education helps too: teaching people to recognize superspreader patterns can bring detection accuracy to 75% after 12 hours of training.
Are news outlets also superspreaders?
Yes. A 2022 Media Matters study found that two-thirds of media retweets of Donald Trump’s false claims passed on misinformation without correction. Even reputable outlets became carriers - not creators - of false narratives. This gave those lies legitimacy and made them harder to challenge.
Will AI make superspreaders harder to stop?
Absolutely. By 2027, AI-generated content - like fake videos, synthetic voices, and automated text - is expected to make up 65% of all superspreader material. These AI-generated posts are harder to detect because they look human and can be produced at scale. This means current detection tools will become obsolete unless they evolve with the technology.