
Google’s Eric Schmidt Calls for a Check Against Hate Online but Avoids Encryption Talk

In an op-ed, the Alphabet chairman dances delicately between censorship and free expression.

Win McNamee | Getty Images

Over the weekend, Democratic front-runner Hillary Clinton publicly berated tech companies for not doing enough to combat ISIS, giving Silicon Valley a taste of what’s to come in the heated election cycle.

Eric Schmidt, Google’s former CEO and executive chairman of its parent Alphabet, penned a cloaked response to Clinton and other politicians in a New York Times opinion piece on Monday. The column leads with praise for the wonders of the global Web, before admitting it can be used for egregious harm. “Ever since there’s been fire, there’s been arson,” he writes.

What he does not write are the words “encryption” or “backdoor” — a tacit signal that the Internet giant is holding firm in its position against mounting political pressure.

Tech giants like Google and Facebook have said they are willing to scrub content from social media accounts and videos if it promotes terrorism or violence. But they've stopped short of acquiescing to the other rising political demand: that the companies shut down encrypted messaging and open a backdoor for governments to track user information.

Schmidt, long a proponent of radical openness on the Web and an opponent of backdoor access, offers some conciliatory language. He mentions some unnamed “tools” that should be built to sift out “hate and harassment” on social media sites and videos (without mentioning Google’s YouTube). In short, if there’s bad content, we should ditch it:

We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice. Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people, and the wrong voices.

But his argument dances around the pivotal question. Schmidt doesn’t make it clear how Internet companies or governments determine what to “spell check” — or who should spell check. Earlier, in the same paragraph, he makes a claim that seems to contradict the one above:

Authoritarian governments tell their citizens that censorship is necessary for stability. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

As an Internet middleman, YouTube has had trouble in the past navigating the choppy waters of whether to remove content. On its site, it describes its policy on hate speech this way: “There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their race.” The video site relies on its users to flag flagrant content.

This isn’t a simple issue. With terrorism set to be the central focus of the presidential election, it’s not one that tech companies can shake off. Schmidt’s final line — that the onus is on the collective “us” to build an Internet “free from coercion and conformity” — reads like a plea from Silicon Valley to Washington, D.C.

This article originally appeared on Recode.net.
