Deepfake Security Concerns Are Limiting Voice ID Adoption: Survey
Worries about deepfake voice fraud may be slowing the adoption of voice as identification, according to a new survey produced by biometric security and technology startup ID R&D. Two-thirds of adults in the United States said they are concerned that someone could mimic their voice well enough to illicitly access accounts linked to a vocal ID. ID R&D conducted the survey through YouGov in early December this year. The collected data was then weighted to represent all U.S. adults.
Vocal identification is becoming more common, with banks and other businesses starting to use voice AIs to recognize individuals and give them access to their accounts without needing to use a password or personal details. The convenience is attractive, but interest is often tinged with fear of voice identity theft. Deepfakes, artificially generated speech masquerading as a particular human voice, are getting better as voice technology overall becomes more sophisticated. People don’t want their bank account hijacked by someone with sophisticated audio software. That explains why only 27% of those surveyed said they would prefer a voice login to a standard password system, while an assurance that the biometric login would be highly secure saw that number leap to 40%.
The security concerns extend to consumer voice assistants like Amazon’s Alexa and Google Assistant as well. These platforms are getting better at distinguishing between voices, allowing a household to access individual accounts on different apps without having to sign in and out every time. That said, only a third of the respondents said they would use a home voice assistant to get account information, even if they were certain that the transaction was secure. That fits with other recent surveys, such as the Pew Research Center’s American Trends Panel survey, which reported that more than half of smart speaker owners don’t want their voice assistant’s personalization ability to improve because of personal data security and privacy concerns.
“This research shows that the biometric industry has a lot of work to do to educate consumers around legitimate security issues in voice technology,” ID R&D president Alexey Khitrov said in a statement. “Those of us in the biometric industry have a responsibility to educate consumers about the risks of deepfakes and synthetic voice, but also a real opportunity to educate consumers about the many benefits of biometrics, including improved security.”
Securing Biometric IDs
ID R&D cited recent research suggesting the human brain can’t tell human speech from the artificially generated version, but the respondents to the survey were quite divided over whether they could tell a deepfake from a real human voice. A little over a third of U.S. adults said they were confident they could distinguish real from fake, while slightly under a third said they were not confident they could tell them apart, according to the survey. What version of artificial speech the respondents were thinking of wasn’t identified, which matters because the technology has improved rapidly and the best deepfakes are a long way beyond robotic tones spoken in an unnatural cadence.
To combat potential fraud, ID R&D and others in the field use hardware sensors and software analytics to test biometrics. An update earlier this year to its platform sped up how quickly ID R&D can match biometrics by a factor of ten, while adding multiple channels for enrollment. The point is to make the security as tight as possible while streamlining and speeding it up so that the person using their voice as an ID doesn’t even notice it happening.
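The core of any such system is comparing a live voice sample against an enrolled template and accepting only close matches. The sketch below is a deliberately simplified illustration of that verify-against-enrollment idea, not ID R&D's actual method: the `embed`, `cosine`, and `verify` names and the energy-bin "voiceprint" are assumptions made for the example, where real products use learned neural speaker embeddings plus dedicated anti-spoofing checks.

```python
import math

def embed(signal, n_bins=8):
    """Crude fixed-length 'voiceprint': per-bin RMS energy, L2-normalized.
    Real systems use learned neural embeddings; this is only illustrative."""
    step = max(1, len(signal) // n_bins)
    bins = [signal[i:i + step] for i in range(0, step * n_bins, step)]
    energies = [math.sqrt(sum(x * x for x in b) / len(b)) for b in bins]
    norm = math.sqrt(sum(e * e for e in energies)) or 1.0
    return [e / norm for e in energies]

def cosine(a, b):
    """Cosine similarity of two unit-normalized embeddings."""
    return sum(x * y for x, y in zip(a, b))

def verify(enrolled, candidate, threshold=0.9):
    """Accept the candidate only if its embedding is close to the enrolled one."""
    return cosine(embed(enrolled), embed(candidate)) >= threshold

# A genuine repeat of the enrolled voice passes; a very different signal is rejected.
enrolled = [math.sin(0.1 * i) for i in range(800)]
print(verify(enrolled, enrolled))            # matching sample
print(verify(enrolled, [1.0] * 400 + [0.0] * 400))  # dissimilar sample
```

The threshold is the key design trade-off: raising it rejects more impostors but also more legitimate users, which is why vendors invest heavily in making embeddings discriminative enough that the two populations barely overlap.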
“Just as the consumers in the early 1990s were suspicious about online commerce but now can’t imagine life without it, we believe that once users learn how biometrics can better protect their data and accounts while delivering an all-around better experience across all applications, voice technology will see exponential growth,” Khitrov said.