
‘Machine Learning Is Hard’: Google Photos Has Egregious Facial Recognition Error

A programmer flagged a nasty problem with Google’s facial recognition. Thankfully, its engineers responded promptly.


In May, Google unveiled its new Photos app as a crowning achievement of its machine learning capabilities — a service that stores and catalogues your images with computing smarts that can pick out buildings, landscapes and animals, even abstract events like birthdays, on its own.

As users get their hands on the app, though, it’s evident Photos is far from perfect. Two days ago, Jacky Alciné, an African-American programmer based in New York, flagged a flagrant error on Photos: It had tagged him and his friend as “gorillas.”

To Google’s credit, its human ambassadors responded swiftly. Yonatan Zunger, an engineer and “chief architect” of Google+, the social service from which Photos was spun off, replied to Alciné on Twitter within roughly 90 minutes, noting that he had alerted the Photos team.

Zunger checked in with Twitter missives the following day; Alciné thanked him, and noted that the erroneous label was removed.

Google has advanced further in artificial intelligence than most of its rivals, infusing the technology into speech recognition, photo recognition and natural language processing. (Its trippy neural network experiments have yet to be baked into consumer products.) The company is frank that its machine learning abilities still have a way to go. In his exchange on Twitter, Zunger noted that the company was still working on “long-term fixes” for linguistics and image recognition for “dark-skinned faces.”

Google put out this conciliatory statement: “We’re appalled and genuinely sorry that this happened. We are taking immediate action to prevent this type of result from appearing. There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”

Zunger, for his part, put it more bluntly.

This article originally appeared on Recode.net.
