For better or worse, Google Assistant can do it all. From mundane tasks like turning on your lights and setting reminders to convincingly mimicking human speech patterns, the AI helper is so capable it’s scary. Its latest (unofficial) ability, though, is a bit more sinister. Artist Alexander Reben recently taught Assistant to fire a gun. Fortunately, the victim was an apple, not a living being. The 30-second video, simply titled “Google Shoots,” shows Reben saying “OK Google, activate gun.” Barely a second later, a buzzer goes off, the gun fires, and Assistant responds “Sure, turning on the gun.” On the surface, the footage is underwhelming — nothing visually arresting is really happening. But peel back the layers even a little, and it’s obvious this project is meant to provoke a conversation on the boundaries of what AI should be allowed to do.
As Reben told Engadget, “the discourse around such a(n) apparatus is more important than its physical presence.” For this project he chose to use Google Assistant, but said it could have been an Amazon Echo “or some other input device as well.” At the same time, the device triggered “could have been a back massaging chair or an ice cream maker.”
But Reben chose to arm Assistant with a gun. And given the concerns raised by Google’s Duplex AI since I/O earlier this month, as well as the seemingly never-ending mass shootings in America, his decision is astute.
In this example, Reben was the one who told Assistant to “activate gun.” He is still the person responsible for the action. But in a world where machine learning has produced AI smart enough to anticipate our needs and cater to our comfort every day, it’s not hard to imagine a day when digital assistants, given access to weapons, could kill people who upset us. Who is responsible for the deaths then?
It’s easy enough to say that we should block AI access to dangerous devices. But we’ve already got them in our cars, in the military and in other places we probably haven’t even thought about. We can demand companies make sure that their tech cannot cause harm, but it’s simply impossible to plan for every eventuality, every possible way that AI might go rogue.
“Part of the message for me includes the unintended consequences of technology and the futility of considering every use case,” Reben said. Google might never have imagined Assistant being used to shoot a weapon. But all it took for Reben to achieve that was parts lying around his studio. He used a control relay that usually turns on a lamp, linked it to a Google Home speaker, then connected a laundromat change-dispensing solenoid to pull a string that looped around the trigger. “The setup was very easy,” Reben said.
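Reben hasn’t published any code, and the build may involve no custom software at all — the relay simply responds to what Assistant thinks is a lamp command. Still, the logic of the chain can be sketched in a few lines. Everything below (the `Relay` class, the `handle_command` function, the phrase matching) is an illustrative assumption, not Reben’s actual setup or any Google API:

```python
# Hypothetical sketch of the trigger chain: a voice command routed to a
# relay that was designed to switch a lamp, but is instead wired to a
# solenoid that pulls the trigger string. Names are illustrative only.

class Relay:
    """Stands in for the lamp-control relay wired to the solenoid."""

    def __init__(self):
        self.on = False

    def activate(self):
        # Closing the relay energizes the solenoid, which pulls the string.
        self.on = True


def handle_command(transcript: str, relay: Relay) -> str:
    """Mimic the Assistant routine: 'activate gun' is just a renamed
    'turn on the lamp' — the system has no idea what it is switching."""
    if "activate gun" in transcript.lower():
        relay.activate()
        return "Sure, turning on the gun."
    return "Sorry, I can't help with that."


relay = Relay()
response = handle_command("OK Google, activate gun", relay)
```

The sketch underlines Reben’s point: nothing in the chain knows a gun is attached. The code path for firing a weapon is byte-for-byte the code path for turning on a lamp, which is why planning for every use case is futile.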
He’s no stranger to artwork featuring the abilities of artificial intelligence. Reben has created projects that show what AI hears and sees when exposed to “The Joy of Painting,” as well as hiding tiny artworks in URLs. This, however, is an even more provocative piece, one that forces the viewer to imagine what could happen if nefarious programmers made an AI that went rogue. Now, more than ever, we need to discuss how, and whether, we can prevent an intelligent machine from killing us all.
- This article originally appeared on Engadget.