Twitter says it’s fixed a ‘bug’ that allowed ad campaigns to target users with derogatory terms

In a statement, the company says it will “continue to strongly enforce our policies.”

Twitter CEO Jack Dorsey onstage (The Verge)

Twitter said today it had fixed a “bug” in its platform that could have allowed advertisers to target users with racial epithets and terms like “Nazi.”

The change follows a report by the Daily Beast — which found that potential ad campaigns using those derogatory terms could have reached millions on the site — and a broader controversy this week about inappropriate algorithmic ad targeting on big internet platforms.

“We determined these few campaigns were able to go through because of a bug that we have now fixed,” a spokeswoman said in a statement. “Twitter prohibits and prevents ad campaigns involving offensive or inappropriate content, and we will continue to strongly enforce our policies.”

Earlier this week, Facebook faced its own barrage of criticism after ProPublica discovered the social giant allowed advertisers to target users based on categories like “Ku-Klux-Klan” and “Jew hater.” It has since similarly implemented changes to its targeting platform.

And Google for a time also appeared to allow ad campaigns based on racist or otherwise hate-inspired terms, BuzzFeed found, prompting the search giant to do its own fine-tuning last week.

This has spurred a new round of debate over whether these giant internet companies — which many feel already have too much control — should add more proactive human oversight to their algorithms, especially to filter inappropriate and hateful speech.

On one hand, more human involvement, earlier, could prevent embarrassing discoveries like these, and potentially worse outcomes. On the other, trying to form — and police — a line of what’s appropriate is highly imperfect, and puts even more control in the hands of the platform companies, which tend to prefer to hide behind their algorithms.


This article originally appeared on Recode.net.
