
Hey Facebook, YouTube and Twitter: It’s time to clarify what gets you banned — then actually enforce the rules

The harassment of the teens who spoke up after the Parkland massacre is a black-and-white case study of the impotence of today’s social media giants.

Charlton Heston as Moses in “The Ten Commandments”: “You have sinned a great sin!” Paramount Pictures / “The Ten Commandments”

Much like gun violence in America, toxicity on social media is so common that we’ve become inured to it: “Well, that’s the way things are.”

To paraphrase some badass teenagers in Florida: I call B.S.

The harassment of and conspiracy theory-mongering about those same teens, who survived the Valentine’s Day shooting in Parkland, Fla., and are now speaking out in favor of gun control, should give us all pause. YouTube, Twitter and Facebook are, once again, proving that they are ill-equipped to control malicious actors on their platforms, because they’re afraid to call themselves “media companies” (even though they are) and take some responsibility for the content they amplify.

This is as black and white as it gets.

Unlike a political campaign, where reasonable people can disagree about winners and losers, there are not two “sides” in a mass shooting. Everyone who was at Marjory Stoneman Douglas High School in Parkland on Feb. 14 should have the unflinching, immediate support of the leading social media platforms.

But that’s not what happened this week. Instead, lies went viral on Facebook over many hours (great engagement!) and YouTube once again helped spread a false, toxic video because it was “trending.”

There are basic questions at play here that consumers should be asking: What do these sites believe in? Do they have values? How are Twitter’s rules, Facebook’s policies and YouTube’s community guidelines informed by these values? Are those rules actually enforced the same way for everyone?

And if the platforms can’t answer those questions: Why should we use them?

YouTube (eventually) removed the offending conspiracy-theory video about one of the Florida teens, saying it violated a “policy on harassment and bullying.” Pressed for more detail by our sister site, The Verge, it provided the following comment:

This video should never have appeared in Trending. Because the video contained footage from an authoritative news source, our system misclassified it. As soon as we became aware of the video, we removed it from Trending and from YouTube for violating our policies. We are working to improve our systems moving forward.

Vague much?

The poor communication around toxicity on social media is disheartening. These are extremely hard problems to solve, but right now, it too often feels like no one is trying to be proactive about the next blowup. So, here are a couple of starting points.

When he appeared on Kara Swisher’s Recode Decode podcast last year, Instagram CEO Kevin Systrom said something that others, including Instagram’s parent company Facebook, could learn from:

“Do I feel personally responsible for making the world a safer place online? Yeah. Am I gonna get it perfect? No. But the intention is there. And I realize that the products we build have real impact on people. They have impact on their mental health, they have impact on their social networks in the real world, their relationships with their friends, and that at its best that can be such an amazing thing.”

This is a perfect articulation of a healthy, smart, apolitical value: The people who use our product can be changed by it and we have a responsibility to protect them. Later in the podcast, Systrom recalled how he and co-founder Mike Krieger personally deactivated trolls’ accounts in the app’s early days, a reflection of another good value.

“If you’re here to cause trouble, you don’t belong,” Systrom said. “I think that set a tone in the community.”

That tone is still coming through loud and clear. Currently, Instagram is the most pleasant social network I have ever used — no surprise.

Meanwhile, I’ve been on Twitter for nearly 10 years, and I couldn’t tell you with much certainty what I could tweet that would definitely get me banned from the site. To its credit, the site recently rewrote its rules about abuse, and the worst of the post-Parkland abuse seemed to proliferate on YouTube and Facebook, not Twitter. But there’s no way for outsiders to know that for sure.

So, in addition to spelling out values, here’s another proposal: Platform moderators should publish what they’re doing, every day: “We banned this many users for copyright violation; we put this many accounts under review for alleged harassment; we concluded reviews of this many accounts today and restored them to normal because we found they did not violate the rules.”

Crucially, these reports should not include the names or usernames of anyone who is under review or has been banned — that would just become a reward for trolls. But an anonymized running record, like a community police blotter, would be a valuable window into how the rules work and why moderators behave the way they do.
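To make the shape of such a report concrete, here is a minimal sketch in Python. Everything in it is hypothetical — the event fields, action names, and reason labels are illustrative, not any platform’s real schema — but it shows the key property of the proposal: usernames go in, only anonymized aggregate counts come out.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical moderation log entry. Field names are illustrative,
# not any platform's actual data model.
@dataclass
class ModerationEvent:
    username: str   # recorded internally, never published
    action: str     # e.g. "banned", "under_review", "restored"
    reason: str     # e.g. "harassment", "copyright"

def daily_blotter(events):
    """Aggregate a day's moderation events into anonymized counts.

    Usernames are deliberately dropped before anything is tallied:
    only (action, reason) counts appear in the published report.
    """
    counts = Counter((e.action, e.reason) for e in events)
    return [
        f"{n} account(s) {action} for {reason}"
        for (action, reason), n in sorted(counts.items())
    ]

events = [
    ModerationEvent("troll42", "banned", "harassment"),
    ModerationEvent("spammer7", "banned", "harassment"),
    ModerationEvent("dmca_case", "banned", "copyright"),
    ModerationEvent("falsely_flagged", "restored", "harassment"),
]
for line in daily_blotter(events):
    print(line)
```

The design choice mirrors the “police blotter” analogy: the report is a running public tally keyed only by action and reason, so trolls get no named recognition while outsiders can still see how (and how often) the rules are enforced.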

It’s time for these platforms to grow up and start behaving like the big companies they are. They need to have rules that are transparent, well-communicated and aggressively enforced. If they continue to fail us here, then they deserve no loyalty from their users and we should take our thoughts elsewhere.


This article originally appeared on Recode.net.
