
Facebook removed hundreds more accounts linked to the Myanmar military for posting hate speech and attacks against ethnic minorities

The accounts were sharing “anti-Rohingya messages” — the same kind of messages that have played into a broader genocide in Myanmar.

Rohingya refugees protest on the first anniversary of the Rohingya crisis in August 2018 in Bangladesh.
Paula Bronstein / Getty Images

Facebook has discovered and removed another coordinated hate campaign operated by the Myanmar military, which has used the service to spread false news and insults about the Rohingya people, Myanmar’s mostly Muslim ethnic minority.

In a blog post published Tuesday night, Facebook said it took down 425 Pages and 150 additional Facebook and Instagram accounts “linked to the Myanmar military.” At least 2.5 million people “followed at least one of these Facebook Pages,” the company added.

The Pages — which looked on the surface like “news, entertainment, beauty and lifestyle” Pages — were actually “periodically being used to drive specific anti-Rohingya messages,” said Nathaniel Gleicher, Facebook’s head of cybersecurity policy, in an interview with Recode.

It’s the third time since August that Facebook has taken down Pages and accounts linked to the Myanmar military, and these new Pages were part of the same network of accounts Facebook removed in the past, a spokesperson confirmed. Facebook was able to link these accounts to the Myanmar military in a number of ways. In some cases, military personnel were listed as administrators on the Pages. In other cases, Facebook noticed “infrastructure overlap,” such as matching IP addresses, between military devices and the Facebook accounts.

Facebook’s actions here may be too late to fix Myanmar’s problems. The government and military of the mostly Buddhist country have been working to eliminate the country’s Rohingya minority, and more than 725,000 Rohingya — out of a population estimated at just over one million — have fled to Bangladesh to escape ethnic genocide in Myanmar since late 2017, according to a recent report from the Human Rights Council.

Facebook has been at the center of the government’s campaign of hate and misinformation. The military has used Facebook to help spread propaganda to support that mission, and Facebook has been criticized for moving too slowly to stop the spread of this propaganda. Military officials in Myanmar “were the prime operatives behind a systematic campaign on Facebook that stretched back half a decade” and targeted the Rohingya, as the New York Times reported in October.

That’s particularly damning, considering Facebook’s presence in Myanmar. The social network is used by an estimated 20 million people in Myanmar, or roughly 40 percent of the population. That’s the same number of people who have the Internet there, according to a human rights impact report Facebook commissioned and published in November. The Facebook app comes preinstalled on many smartphones sold in the country.

Those inside Facebook admit that the company was much too slow to find and remove these kinds of posts. It’s clear now that Facebook was not properly staffed to carry out the manual review needed to take down posts from Myanmar that violated the company’s community standards.

When Facebook launched in Myanmar in 2011, it had “a couple” of full-time Burmese speakers on its moderation team, said Monika Bickert, Facebook’s head of global content policy, in an interview. Now it has more than 100.

Facebook claims that it is getting better at removing bad content, and Tuesday’s account takedown is evidence that the company is making some progress. Facebook’s artificial intelligence technology can proactively flag hate speech in the country at a much higher rate than it did one year ago, for example. Facebook says its systems detected 63 percent of the hate speech it removed or suppressed last quarter, up from just 13 percent at the end of last year.

This article originally appeared on Recode.net.
