Google CEO Sundar Pichai had said to his phone, “OK Google, book me a haircut appointment on Tuesday between 10 a.m. and noon.”
And then, silently and invisibly (to him), Google Assistant had made a phone call to a human receptionist at the salon and had held a conversation, flawlessly impersonating an actual person, complete with “umms” and “ahhs.” The receptionist never knew she’d been talking to AI.
Here’s the video of the demo, a playback of an actual phone call:
Not surprisingly, the internet wasn’t delighted.
“That so many in Google did not erupt in utter panic and disgust at the first suggestion of this is incredible to me,” tweeted Zeynep Tufekci, a University of North Carolina professor. “This is horrible and so obviously wrong. SO OBVIOUSLY WRONG.”
And on “CBS This Morning,” Salesforce (CRM) CEO Marc Benioff spoke about it in the context of his call for a new, national privacy law. “That was the most amazing AI technology I’ve seen. It’s indistinguishable from a human being when you’re talking to it. Is that a human or a computer? So if we’re at that point, we need to start to have better regulations, greater controls.”
What seems to bother most people is that the human receiving the call doesn’t know she’s talking to an AI entity. Therefore, if this technology slips out into the world, it could be a convenient and powerful tool for scammers, robocallers, Russian influence peddlers, and all kinds of other nasty social hacks.
Shortly after the demo, I interviewed Rishi Chandra, vice president for all things Google Home. He made clear that the company is aware of the concerns.
“The way we’re designing this is fairly narrow,” he said. “It’s not like you can call anyone and do anything, right? Because the technology’s not there yet. But we found that if we can isolate a couple core use cases—setting an appointment, booking a restaurant reservation, finding opening hours—we think we can build an AI that can do that on your behalf.”
Chandra emphasized that Google will alert the human that he’s talking to an AI by the time Duplex’s first “small experiment” begins this summer.
“There’s a lot of considerations we have to make sure we get right here. The first is, making sure that the restaurant knows that the call is coming on behalf of the assistant.
“This is one of the things we want to make sure we’re thoughtful about,” he said. “We want to be very transparent that this is coming from Google. We’re gonna be spending a bunch of time on different ways we can let the restaurant know.
“The example that Sundar gave today was, ‘Hey, I’m calling on behalf of my client,’ but there are other mechanisms we’re testing to make sure everybody’s comfortable. And we’re also offering that businesses can opt out of this if they’re not comfortable.”
If the not-identifying-itself-as-AI problem goes away, Duplex doesn’t seem nearly as sinister—and its potential seems very bright. After all, as Chandra points out, 60% of businesses today don’t have websites, apps, or any other electronic appointment system.
“The reality is that many businesses today are not digital businesses,” he says. “What we wanted to figure out is, how do we bridge that? How do we bridge this notion that I want a haircut, or I want to order a pizza, but my local pizza joint’s not online?
“We can democratize a personal assistant. [Today,] a very narrow, small number of people can have personal assistants doing all these things for them. Now, can we make that accessible to everyone?”
When Duplex fails
Clearly, Duplex is astonishingly good—but even so, it often fails. Pichai played, for example, this call, in which the restaurant receptionist had difficulty with English:
I asked Chandra, therefore, if you’d get a transcript of the call later, so you could see where it went off the rails.
“Right now, we’re hoping to just let you know whether it worked or not,” he replied. “But that’s a good question. Maybe this is what we’ll learn in the experiment.”
The rest of the news on the Home front
Chandra also detailed Home’s progress since our last interview a year ago. For example:
- Google has made strides catching up to Amazon (AMZN) Echo’s dominance in the world of smart-home control (lights, door locks, thermostats, and so on). “The day we launched Google Home, 18 months ago, we worked with four devices.” Now, he says, Home can control over 5,000 devices. (For its part, Amazon says that its Echo works with 12,000 devices.)
- Assistant is getting better at understanding you, too. For example, “continued conversation” means you don’t have to say “OK Google” before every command—even if you speak to a human in between. “This is called semantic filtering,” Chandra says. “Our AI will parse when you’re talking to the Assistant and when you’re not.” (Here’s the demonstration from the keynote.)
- “Multiple actions” lets you string multiple commands together. “My wife uses it every day when she goes to bed,” Chandra said. “She can just say, ‘Hey Google, turn on the noise machine and set the alarm for 7:20.’”
- This July, new Google Home-powered screens are coming from LG, Lenovo, and JBL. With a screen, Chandra says, “we can bring content to you besides just talking to you. We can show you a map, we can show you YouTube videos, we can show you recipes.” (Of course, Amazon’s Echo Show and Echo Spot models have had screens for some time.)
- Finally, Chandra points out that Assistant may slowly be winning on the pure basis of ubiquity. “We’re on over 500 million devices—laptops, phone models, speakers, watches. This idea of having an always-available assistant is becoming more and more real,” he says.
Well, you know what I say to that: “OK, Google. Help us prepare ourselves for AI everywhere.”
David Pogue, tech columnist for Yahoo Finance, welcomes non-toxic comments in the Comments below. On the web, he’s davidpogue.com. On Twitter, he’s @pogue. On email, he’s firstname.lastname@example.org. You can sign up to get his stuff by email, here.