Lasers Can Hack Voice Assistants in an Example Worthy of Mission Impossible, But the Risk is Minimal for Consumers
Lasers can hijack voice assistants in some smartphones and smart speakers, according to a new study by researchers at the University of Michigan and the University of Electro-Communications in Tokyo. The microphones interpret the light as voice commands, leaving them vulnerable to malicious attacks under certain circumstances.
The voice-over in the video sounds awfully ominous, but that may be more for effect than an expression of actual risk. Like many of the recent voice assistant and smart speaker hacks, the conditions of the lab, or even of controlled offsite experiments, demonstrate potential vulnerabilities but unlikely real-world scenarios.
Light Hacking Sound
Lasers can hack a voice assistant by vibrating sensors in the micro-electro-mechanical systems (MEMS) microphones used in the Apple HomePod, Google Home, Amazon Echo, Apple iPhone, and other devices. A low-power laser pointer, modulated by a laser driver and an audio amplifier, shines light on the microphone, and, even from hundreds of feet away, the microphone interprets it as the sound of a command. Because most commands don't require a PIN for authentication, there is no additional security to bypass. The entire apparatus cost less than $400. All it needs is a direct view of a smart speaker near a window, as can be seen in the video above. The researchers explored an array of potential commands that the laser attack could trick the voice assistant into carrying out.
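At its core, the technique is amplitude modulation: the recorded command's audio waveform drives the laser's intensity, and the MEMS microphone responds to the fluctuating light much as it would to sound pressure. The sketch below is purely illustrative and is not the researchers' actual setup; the bias and modulation values are hypothetical, and a simple test tone stands in for a recorded voice command.

```python
import numpy as np

# Illustrative only: amplitude-modulate a laser's drive current with an audio
# waveform so the light intensity tracks the spoken command. The bias and
# modulation values here are hypothetical, chosen purely for illustration.

SAMPLE_RATE = 16_000          # audio sample rate in Hz
BIAS_CURRENT_MA = 200.0       # hypothetical DC bias that keeps the laser diode lasing
MODULATION_DEPTH_MA = 150.0   # hypothetical peak current swing added by the audio

def audio_to_drive_current(audio: np.ndarray) -> np.ndarray:
    """Map a normalized audio signal (-1..1) to a laser drive-current waveform (mA)."""
    audio = np.clip(audio, -1.0, 1.0)
    return BIAS_CURRENT_MA + MODULATION_DEPTH_MA * audio

# Example: a 1 kHz test tone standing in for a recorded voice command.
t = np.arange(0, 0.01, 1 / SAMPLE_RATE)
tone = 0.8 * np.sin(2 * np.pi * 1000 * t)
drive_current = audio_to_drive_current(tone)
print(drive_current[:5])  # first few samples that would be fed to the laser driver
```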
“User authentication on these devices is often lacking or non-existent, allowing the attacker to use light-injected voice commands to unlock the target’s smartlock-protected front doors, open garage doors, shop on e-commerce websites at the target’s expense, or even locate, unlock and start various vehicles (e.g., Tesla and Ford) that are connected to the target’s Google account,” the researchers wrote in their report.
How Voice Assistant Providers are Responding
An Amazon spokesperson responded to a Voicebot inquiry on this research by saying, “Customer trust is our top priority and we take customer security and the security of our products seriously. We are reviewing this research and continue to engage with the authors to understand more about their work.”
Google offered a similar response through a spokesperson, who shared by email, “We are closely reviewing this research paper. Protecting our users is paramount, and we’re always looking at ways to improve the security of our device.”
It’s not clear they need to do much. There are settings available today that let consumers protect against this type of vulnerability. If they have a smart lock, they can typically set up a voice PIN, which adds a layer of authentication to the feature. While the researchers indicate authentication is “often lacking,” it may be more accurate to say it is not often enabled. In addition, if a consumer unplugs their device or even presses the mute button on an Amazon Echo or Google Home, the microphones are not active and wouldn’t be activated by the attack.
There is even a setting for Alexa that plays a tone anytime a session is opened or closed. That would give a homeowner or office dweller an audible indication that something was amiss.
However, one recommendation in the research would make sense. The researchers noted that the laser attack only activates a single microphone. That is atypical behavior for smart speakers, where sound typically reaches multiple microphones before one takes over responsibility for interpreting the speech. The wake word algorithms could be programmed to check for activation of multiple microphones and either ignore commands heard by only a single microphone or require some verbal confirmation that the user is present.
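As a rough illustration of that recommendation, the check could be as simple as counting how many microphones in the array registered meaningful energy before honoring a wake word. The sketch below is hypothetical, with made-up threshold values, and is not taken from the paper or from any vendor's firmware.

```python
import numpy as np

ENERGY_THRESHOLD = 1e-3   # hypothetical per-microphone energy floor
MIN_ACTIVE_MICS = 2       # require at least two microphones to register the sound

def active_mic_count(mic_frames: np.ndarray) -> int:
    """Count microphones whose frame energy exceeds the threshold.

    mic_frames: array of shape (num_mics, num_samples) for one audio frame.
    """
    energies = np.mean(mic_frames ** 2, axis=1)
    return int(np.sum(energies > ENERGY_THRESHOLD))

def should_accept_wake_word(mic_frames: np.ndarray) -> bool:
    """Reject activations that only a single microphone 'heard', as a focused laser spot would cause."""
    return active_mic_count(mic_frames) >= MIN_ACTIVE_MICS

# Example: a 7-microphone array where only mic 0 sees strong signal (laser-like case).
frames = np.zeros((7, 512))
frames[0] = 0.5 * np.random.randn(512)
print(should_accept_wake_word(frames))  # False -> ignore the command or ask for confirmation
```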
Vulnerability Does Not Equal Danger
The researchers attempted to make their tests of this potential vulnerability at least semi-realistic. They compared visible laser light with an infrared version, determined how precise the line of sight has to be for the microphone to interpret the command correctly, and looked at how responsive the various models were to this kind of attack.
But while the capacity for a malicious attack is very real, it is also very remote. A direct sightline to the device, and to a small part of the microphone, is tricky enough to arrange under controlled circumstances, let alone in the wild. Even if someone with the right equipment found the perfect vantage point to fire the laser commands, people would likely notice their voice assistant acknowledging orders they have not given.
A burglar would have to arrange a perfect combination of locale and timing to get a voice assistant to unlock a door when the owner isn’t home. Plus, the smart speaker would have to be visible through a window, without obstruction. And the smart speaker would need to be connected to something useful, like a door lock with no PIN code required, for a malicious act to actually occur.
The research is exciting, however. While there is empirical proof that laser light can mimic the vibrations of a voice well enough to fool the voice assistant, the actual physics behind it is still fuzzy. There are avenues for pure research, as well as for cybersecurity studies. Most importantly, now that this genre of vulnerability has been exposed, the researchers are collaborating with the companies behind the devices on countermeasures. Attacks using light may become more feasible as the technology becomes more sophisticated, so it’s logical to develop defenses now. Even with the very remote possibility of hackers using this technique to hijack smart speakers, companies don’t want to deal with the kind of blowback Google faced recently when a report about a Google Assistant vulnerability led to a widespread outage of Google Actions.