
Here’s why Mark Zuckerberg doesn’t think Infowars should be banned

Facebook prefers to slow the spread of conspiracy theories. That’s better than outright banning them, said the social media kingpin.

Facebook wants to rid itself of so-called “fake news,” and Infowars, the far-right site that promotes appalling conspiracy theories, is one of its most egregious purveyors. Among other falsehoods, the site has insisted that the Sandy Hook shooting of school children was staged.

That’s why a lot of people were confused last week when Facebook said it wouldn’t ban a site like Infowars, even though it acknowledged the service often shares “conspiracy theories or false news.”

What gives? The company’s official stance is that it will use its algorithms to minimize the spread of false news, but it won’t take those posts down. In a tweet, Facebook described it as a “free speech” issue. Now, CEO Mark Zuckerberg has weighed in with his explanation for how the company thinks about its role policing the news online.

Here’s how Zuckerberg described the company’s thinking to Recode Editor at Large Kara Swisher on this week’s Recode Decode podcast:

“Let’s take this a little closer to home. So I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong — I don’t think that they’re intentionally getting it wrong. It’s hard to impugn intent and to understand the intent. I just think as important as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public leaders who we respect do, too. I just don’t think that it is the right thing to say we are going to take someone off the platform if they get things wrong, even multiple times.”

The solution Zuckerberg considers fairer is the one Facebook currently employs: An offensive or deliberately inaccurate post can stay up, but Facebook may downgrade it so that its algorithms show it to fewer people. “You can put up that content on your page even if people might disagree with it or find it offensive, but that doesn’t mean that we have a responsibility to make it widely distributed in News Feed,” Zuckerberg said.

That policy has attracted a lot of criticism. Facebook’s content-filtering practices are so controversial, in fact, that they were the subject of a nearly three-hour-long congressional hearing yesterday. But Zuckerberg continued to insist that he doesn’t want to be the one to decide what’s right or wrong online, even if his desire to curb fake news has put the company in a position where it needs to do just that. And inevitably, he said, mistakes will be made.

“You can either look at this and say we should have predicted all these issues ahead of time, and some people think that,” he said. “I tend to think that it is very difficult to predict every single thing.”

“There are going to be challenges that come up that are things that we did not foresee.”

Facebook’s policy is still evolving, though. Zuckerberg said that some misinformation is worse than other kinds: specifically, misinformation that encourages real-world harm, like what we’ve seen in Myanmar. “We are moving towards the policy of misinformation that is aimed at or going to induce violence, we are going to take down,” he said.

When asked how he felt personally about Facebook’s role in spreading dangerous content, Zuckerberg paused. “My emotion,” he said, “is feeling a deep sense of responsibility to try to fix the problem.”

This article originally appeared on Recode.net.
