
We Can All Learn Something From How Quickly Microsoft’s Chatbot Turned Into a Racist

Racism, sexism and xenophobia are all too easily learned.


There’s an important lesson for us humans in just how quickly Microsoft’s chatbot learned how to spew out racist, sexist and other hateful messages.

For those who missed it, yesterday Microsoft turned on Tay, a chatbot designed to converse with, and mimic the speech patterns of, millennials. But, in less than a day, Microsoft was forced to take Tay offline as the bot started sending offensive messages.

Though Tay was apparently influenced by intentional hate speech, the fact is that so are humans — and from an early age. Racism, sexism and xenophobia are all learned behaviors, and they are challenging to unlearn.

It is hard to blame Tay for quickly picking up on hate when we live in a world where Donald Trump can spew anti-Muslim rhetoric and still be a major party’s front-runner. Meanwhile, on Tuesday, North Carolina managed to propose, pass and enact legislation stripping civil rights from an entire group of people.

Microsoft isn’t the first to struggle in this area. IBM taught Watson the entire Urban Dictionary but quickly decided its computer would be better off not knowing everything.

So, yes, Microsoft was right to take Tay offline.

“Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” Tay says in a message on its website.

Instead of teaching Tay to mimic humanity, Microsoft is going to have to teach it to be better than humanity — to filter out our worst inclinations and focus on our better selves. Luckily for Microsoft, that is likely a matter of tweaking a few algorithms and adding more filters.
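At its crudest, the kind of filter in question could be a blocklist check on candidate replies before they are sent. The sketch below is purely illustrative — it is not Microsoft's actual approach, and the function names and placeholder terms are hypothetical:

```python
# Purely illustrative sketch of a blocklist-style output filter:
# candidate replies matching a blocked term are rejected before sending.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real list would be curated


def is_safe(reply: str, blocklist=BLOCKLIST) -> bool:
    """Return True if the candidate reply contains no blocked terms."""
    words = reply.lower().split()
    return not any(word.strip(".,!?") in blocklist for word in words)


def choose_reply(candidates, blocklist=BLOCKLIST) -> str:
    """Pick the first candidate reply that passes the filter, if any."""
    for reply in candidates:
        if is_safe(reply, blocklist):
            return reply
    return "Let's talk about something else."  # safe fallback
```

A real system would need far more than word matching — misspellings, context and coded language all slip past a simple blocklist, which is part of why Tay's failure was so swift.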

If only changing humans were that easy.

This article originally appeared on Recode.net.
