
Two-thirds of links on Twitter come from bots. The good news? They’re mostly bland.

Twitter bots produce 66% of all links on Twitter. They’re mostly just aggregation, sports, and porn.

Aja Romano
Aja Romano wrote about pop culture, media, and ethics. Before joining Vox in 2016, they were a staff reporter at the Daily Dot. A 2019 fellow of the National Critics Institute, they’re considered an authority on fandom, the internet, and the culture wars.

If you’ve ever wondered whether social media is just a vast network of bots interacting with other bots, congratulations: Your worldview may be dystopian, but it’s not exactly wrong.

A new study has determined that on Twitter, at least, two-thirds of all links shared on the platform are posted by automated accounts, or bots.

The Pew Research Center studied over a million English-language tweets posted over a 47-day period in 2017 to determine the nature of Twitter’s link-sharing infrastructure — and to glean more information about the behavior and scope of bots on the platform.

The study analyzed 1.2 million tweeted links, which were all generated by just 140,545 Twitter accounts. Of those tweets, 66 percent were generated by bots. The most frequently linked subjects: porn and sports. Ninety percent of all links to adult content and 76 percent of all links to sports-related content were generated by fake humans.

(Chart: Pew Research Center)

Certain types of content also got more play than others, notably content published through aggregation websites: an estimated 89 percent of links to aggregation sites were shared by bots.

Somewhat surprisingly in the era of the fabled Russian bot fake news army, the Pew study found that most of these tweeted bot links seemed relatively benign, indicating that our divisive political climate doesn’t yet seem to have perceptibly warped Twitter’s natural bot-based link-sharing infrastructure.

Bots are highly efficient, but not highly partisan

The Pew study highlights the reason for bots’ popularity on social media platforms: They’re extremely efficient.

Researchers found that a small number of highly active bots could fuel massive amounts of link-sharing. The report noted that 22 percent of all links to news and current events generated during the study originated from just 500 accounts — all suspected bots. “By comparison,” the report also observed, “the 500 most-active human users are responsible for a much smaller share (an estimated 6%) of tweeted links to these outlets.”
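That concentration measure is easy to sketch: rank accounts by how many links they tweeted, then compute the share produced by the top n. A minimal illustration in Python; the account names and counts below are invented, not Pew's data:

```python
from collections import Counter

def top_n_share(link_counts, n):
    """Fraction of all tweeted links produced by the n most active accounts."""
    total = sum(link_counts.values())
    top = sum(count for _, count in Counter(link_counts).most_common(n))
    return top / total

# Toy data (invented): three prolific accounts alongside 100 low-volume users.
counts = {"bot_a": 50, "bot_b": 30, "bot_c": 20, **{f"user{i}": 1 for i in range(100)}}
share = top_n_share(counts, n=3)  # 100 of 200 links -> 0.5
```

In the study, the same calculation with n = 500 over the news-and-current-events links yielded a 22 percent share for the suspected-bot accounts versus 6 percent for the most active humans.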

The study didn’t assess whether links shared by bots are more prone to go viral, or how humans engage with bot-tweeted links. When taken in conjunction with other recent research detailing the speed at which false information spreads on Twitter, it’s natural to assume that the swift efficiency of bots plays a big part in the spread of fake news.

But in fact, bots don’t seem to have a major role in making fake news go viral. As Vox science reporter Brian Resnick notes, “bots seemed to spread false stories and true stories at equal rates.” Fake news actually spreads faster and further after it’s landed in human hands.

Most of the bots observed in the Pew study had no demonstrable political bias and seemed to be sharing links targeted at centrist audiences rather than polarized ones. In fact, among popular news and current events sites, links with overtly political content were among the least frequently shared by bots.

Sorting the bots from the crowd

To figure out which Twitter accounts were bots, the study worked with a tool called the Botometer, developed by scientists at Indiana University. This is an algorithm that closely analyzes Twitter accounts based on over 1,000 pieces of information gleaned from a user’s public profile, public tweets, and mentions.

These include the user’s general mood, the time of day the user typically posts, how much they post, who they follow, and who follows them back, among many other factors. The Botometer then rates how likely that user is to be a bot. (This reporter is 31 percent likely to be a bot.)

After performing several experiments with the Botometer, the Pew researchers settled on a threshold Botometer score of 43 to decide whether a Twitter user was human or automated. In other words, for the purposes of the study, if the Botometer ranked an account at 43 percent or higher, that account was considered to be a bot.
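The decision rule itself is a simple cutoff. A minimal sketch, assuming scores on the 0 to 100 scale the article describes (the function name is invented for illustration; only the 43 cutoff and the reporter's score of 31 come from the article):

```python
def classify_account(botometer_score, threshold=43):
    """Label an account as automated if its score meets the study's cutoff.

    For the Pew study, any Botometer score of 43 or higher (on a
    0-100 scale) was treated as a bot.
    """
    return "bot" if botometer_score >= threshold else "human"

print(classify_account(31))  # the reporter's score falls below the cutoff
print(classify_account(87))  # a hypothetical high-scoring account
```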

The Pew researchers also checked their work, following up to see how many of the accounts identified as bots in their research were ultimately suspended by Twitter — presumably for being bots. Of the 140,545 accounts they surveyed, a total of 7,226 were ultimately suspended. Accounts that were flagged as bots by the Botometer for the study were 4.6 times more likely to have been suspended than accounts flagged as human.
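The 4.6-times figure is a ratio of suspension rates between the two groups. A sketch of that arithmetic; the per-group counts below are invented to reproduce the ratio, since the article reports only the totals (140,545 accounts, 7,226 suspensions) and the ratio itself:

```python
def relative_suspension_rate(bot_suspended, bot_total, human_suspended, human_total):
    """Ratio of the suspension rate among bot-flagged accounts
    to the rate among human-flagged accounts."""
    return (bot_suspended / bot_total) / (human_suspended / human_total)

# Hypothetical split: 9.2% of bot-flagged vs. 2.0% of human-flagged suspended.
ratio = relative_suspension_rate(
    bot_suspended=920, bot_total=10_000,
    human_suspended=2_000, human_total=100_000,
)
# ratio ≈ 4.6
```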

What does all this tell us about bots?

In a press release, Pew’s associate research director, Aaron Smith, pointed out that the study helps quantify “the extent to which bots play a prominent and pervasive role in the social media environment.” Because bots are so efficient, and because they can often go undetected, it’s important for users to be aware of not only the volume of bots fueling the web of link-sharing on social media, but also what their focus and agendas are. Twitter bots seem to be primarily dedicated to sharing link aggregator websites, drawing from a centrist information pool, and targeting centrist audiences.

That all seems relatively harmless, but knowing the sheer number of links being shared by automated accounts throws into sharp relief just how surreal the landscape of digital and social media has become. After all, this study alone involved a neural network-trained algorithm using an automated process to detect whether automated Twitter bots were real. At best, that’s a bit Blade Runner-y; at worst, it’s another sign our robot overlord takeover is proceeding apace.
