
I talked to Google’s Duplex voice assistant. It felt like the beginning of something big.

But it’s not even close to ready for everyday use.

Google CEO Sundar Pichai
Justin Sullivan / Getty

The coolest thing that came out of Google’s I/O developers conference back in May was Google Duplex: new technology for Google Assistant that could make AI-driven phone calls on your behalf to set up things like restaurant reservations or salon appointments.

The demo was a hit, and the assistant sounded shockingly human. But there was also a problem: People walked away wondering whether the Duplex technology actually existed and, if it did, whether it was ethical. Google wasn’t answering many questions about the demo, which seemed a bit off.

Well, Google Duplex does exist. In fact, I talked to it yesterday.

Yesterday, Google invited the media to Oren’s Hummus — a Mediterranean restaurant just a few miles from its Mountain View, Calif., headquarters — for demo 2.0. The company showed off another recording of Duplex that included a few major elements missing from the original demo — most notably, an acknowledgment that the assistant is a robot, not a human, and that it was recording the phone call. Neither of those points was made during the initial demo at I/O, which raised some major ethical questions about Google’s plans for artificial intelligence.

Google also let reporters test the technology for themselves.

For just one minute, I served as an “employee” at Oren’s and answered the phone to take down a reservation that Google’s voice robot was trying to make. It first identified itself as the Google Assistant, told me it was recording the call, then asked me to secure a group reservation for the following Monday night, July 2. The voice sounded like it belonged to a grown man, not a robot, especially over the phone where you can’t see the source.

Having seen a few others go through the process already, I tried to stump the technology. I first asked it to hold, to which it replied, “mhmm,” then I got back on to say that there were no reservations available for Monday. I had to repeat myself once, as it requested a clarification after my initial reply, but then Google Assistant kicked me to an actual human, which Google says is the fallback plan for whenever the technology gets confused or can’t finish an assigned task. Certainly not flawless.

It seemed as though the demo was intended to accomplish two things:

First, Google wanted to convince people that the technology does indeed exist and won’t just be used inside Google headquarters. The company says it plans to test Duplex with real humans later this summer, but that test will be small and will only allow people to call preapproved businesses to ask about their hours of operation. Eventually, Google says, the test will expand to allow people to book restaurant reservations and make salon appointments.

Second, Google wanted to convince people that it isn’t overlooking the potential ethical challenges that come with building AI that acts and sounds like a real person.

That was a real problem after I/O. “What you’re going to hear is the Google Assistant actually calling a real salon,” Google CEO Sundar Pichai said at I/O during the company’s demo. The major concern with that demo was that Google Assistant never said it was a robot or told the salon that the call was being recorded. When pressed by members of the media in the days after the demo, Google declined to comment, leading some to believe the company had simply overlooked this privacy element altogether.

Google’s Nick Fox, head of Google Assistant, said yesterday that the I/O demo was edited, which is why there were missing elements, like the name of the salon, or any disclosure that half the conversation was handled by a robot. “We didn’t include a disclosure at I/O because we were framing and thought of it much more as a tech demo versus more of a product demo,” he said.

At Oren’s, the Duplex recording Google showed off both identified itself as “Google Assistant” and alerted the restaurant that it was recording the call. “Hi, I’m calling to make a reservation. I’m Google’s automated booking service, so I’ll record the call,” the AI said when the shop’s owner, Oren Dobronsky, answered a call in front of reporters as part of the demo.

Were these disclosures added because of the criticism following I/O?

“I think it confirmed that a disclosure made sense,” Fox said. “[But] I think we always would have done a disclosure.”

It’s tough to tell what to make of the Google Duplex technology.

On one hand, it feels like the beginning of something massive: a robot that sounds so lifelike you need an actual disclosure to know you’re speaking to a computer is a powerful thing. The fact that it could, theoretically, handle all of your phone calls someday and free your time for other things you care about is an exciting idea.

But this technology is still very far from that future. The fact that it’s rolling out to people who can only ask businesses what their hours of operation are is a sign that Duplex is in its earliest iteration. When asked when something like this could be available to the mainstream public, Scott Huffman, Google’s VP of engineering for Assistant, said, “We kind of don’t know.”

Still, Google claims it’s taking its role as an industry leader seriously. You can imagine how technology like this — AI that sounds like a real human — could be abused in the wrong hands.

“We’re not just sort of opening this up broadly and saying, ‘Have at it, do whatever you want with it,’” Fox said. “We’re taking, I would say, a very slow and measured approach here, really, so that we can make sure that we’re being thoughtful and that others are being thoughtful about how it can be used and we’re not unwittingly unleashing a set of things that will be kind of hard to put back in a box later.”

This article originally appeared on Recode.net.
