Majority of users don’t trust AIs to complete simple calls
Google detailed its Duplex call-assist feature at the Google I/O conference last month, but most consumers are skeptical about its accuracy and security, according to research firm Clutch.
In a survey of 501 participants, 73 per cent said they would be unlikely to trust an AI-powered voice assistant to complete simple calls for them correctly. Clutch attributes the uncertainty to a lack of human oversight and control.
“On the computer, you have a user interface,” said Daniel Shapiro, chief technology officer and co-founder of Lemay.ai. “You can see what it’s doing. Over the phone, you really have no idea what it’s capable of doing. You have to just believe.”
Currently, Google’s Duplex is only used to make restaurant reservations. The user tells Duplex when and where to book a table, and Duplex calls the restaurant and schedules the reservation on the user’s behalf, sparing them from being tied up on the phone. Once the booking is complete, Duplex sends a confirmation and a calendar reminder for the date.
While AI features like Google Duplex verify the details with the user before placing a call, consumers worry the AI could mess up the delivery. Perhaps worse, the person on the other end could misinterpret the message and Duplex would fail to take corrective action. For most people, the fear of a botched reservation outweighs the convenience.
Users uncomfortable if AIs become too human-like
On the receiving end, 61 per cent of participants said getting calls from an AI assistant mimicking a human would make them feel uncomfortable. As AI-driven digital assistants become more widespread, industry experts hope users will eventually warm to them through repeated interactions.
Dj Das, founder and chief executive officer of ThirdEye Data, recalled the initial resistance to earlier technological leaps.
“When ATMs first came to this world, people were scared. At that time, I was in India. We never went to the ATM because we didn’t trust it.”
But to dispel that anxiety, AI assistants need to clearly identify themselves: 81 per cent of the study participants agreed that an AI assistant should declare it is a robot before proceeding with a call.
California will be the first state to require robot callers to identify themselves starting July 1, 2019.
AI could enable stronger social engineering attacks
Human-like AI would also enable new social engineering attacks aimed at stealing personal information, a threat that demands a new level of vigilance from the industry. Shapiro said the worry is how the technology will be abused for malicious purposes.
“I don’t worry about Google,” said Shapiro. “I worry about the bad actor who is going to use the same technology.”
The Clutch report listed a few examples of more sophisticated attacks: an AI-generated voice could mimic authorities, institutions, co-workers, or even loved ones to phish for personal information such as credit card details and Social Security numbers.
Chinese internet giant Baidu recently announced an AI that can mimic someone’s voice using only snippets of their speech. An attacker would need just minutes of recorded conversation with someone to gather enough training data to replicate their voice.
But Das remains hopeful that security and regulations will evolve with the technology.
“Whenever a new technology comes, people don’t know how to handle it…that’s very common,” said Das. “As things mature, we’ll understand what’s what and we’ll come up with data privacy and security laws.”