Facebook admits ‘malicious actors’ spread misinformation during the 2016 U.S. election
It also cites a government report that found Russia played a major role in the presidential race.

Almost six months after a U.S. presidential election that was rife with controversy over fake news, Facebook has acknowledged that “malicious actors” created “fake personas” to spread misinformation on the site last year.
Immediately following Trump’s surprise November victory, the company’s chief executive, Mark Zuckerberg, scoffed at the idea that Facebook — and fake news — had determined the election’s outcome, calling the notion “crazy.”
This week, however, the social giant’s security team admitted in a new report (pdf) that political debate on Facebook suffered as a result of “information operations,” and hinted at the impact of the suspected Russian-sponsored cyber attack that led to the release of private emails involving Democratic contender Hillary Clinton and her top aides.
The company never mentioned Russia by name, even as it cited a government investigation that found the country had “ordered an influence campaign” targeting the U.S. presidential race — nor does Facebook discuss Clinton specifically.
But the report does conclude, for example, that unidentified sources sought “to share information stolen from other sources, such as email accounts, with the intent of harming the reputation of specific political targets.”
That data, the report found, was sometimes hosted on “dedicated sites,” seemingly a reference to the likes of WikiLeaks. At the same time, “fake personas were created on Facebook and elsewhere to amplify news accounts and direct people to the stolen data.” Other inauthentic Facebook accounts sought “to push narratives and themes that reinforced or expanded on some of the topics exposed from stolen data,” the review concluded.
Facebook’s security team stressed that the reach of that information was “marginal compared to the overall volume” of content shared during the 2016 presidential election.
But the report still marks a dramatic change in tone from Zuckerberg and his company, which has spent the past six months working to combat fake news. It has enlisted fact-checkers to flag false news reports in News Feed, for example, and the company said it had “taken action” against 30,000 fake accounts in France, which is currently in the midst of its own heated election.
This article originally appeared on Recode.net.