Google Assistant is Learning to Limit Accidental Activation on Android
Google is starting to run a new program to limit the accidental awakening of Google Assistant on Android, according to a 9to5Google report. The new federated learning model builds on the hotword sensitivity customization that Google debuted on its smart speakers and displays last year, after accidental activations became a frequent source of complaints and privacy concerns.
Listen and Learn
The new system appears in Google Assistant’s settings as an option to “help improve Assistant,” asking users for permission to “save audio so speech technologies can learn over time.” The option is only rolling out to a few Pixel owners so far, but it will presumably reach more users as Google fine-tunes the program. The goal is to improve how reliably Google Assistant detects someone saying “Hey Google” — both ensuring it does wake up when the user says the phrase and that it doesn’t wake up after mistakenly detecting it.
The federated learning model lets Google accomplish this goal without processing the audio in the cloud or storing recordings of people’s voices on its servers. Instead, near activations — moments when the device thinks it hears something that sounds like “Hey Google” but holds off — are recorded onto the device and encrypted. The processing happens locally when the phone is charging and connected to Wi-Fi, and only a log of model changes that the device uses to improve wake word detection is sent to Google’s servers. It’s the same approach Google uses for some of its other AI tools, like Gboard. The voice recordings are deleted no more than 63 days after they are made.
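The general pattern described above — train locally, share only a model update, average the updates on the server — is the core of federated averaging. The sketch below is purely illustrative and not Google's actual system: it uses a hypothetical tiny linear model standing in for the real wake word detector, and simulates two devices whose private data never leaves them.

```python
# Minimal federated-averaging sketch (illustrative; the model, learning
# rate, and data shapes here are assumptions, not Google's real detector).

def local_update(weights, local_data, lr=0.1):
    """On-device step: compute a weight delta from local examples.

    Each example is (features, label). Raw audio/features stay on the
    device; only the small `delta` list is ever shared.
    """
    delta = [0.0] * len(weights)
    for features, label in local_data:
        pred = sum(w * x for w, x in zip(weights, features))
        err = pred - label
        for i, x in enumerate(features):
            delta[i] -= lr * err * x  # gradient step on squared error
    return delta  # only this leaves the device

def federated_average(weights, deltas):
    """Server step: average the deltas from many devices into the
    shared model, without ever seeing any device's raw data."""
    n = len(deltas)
    return [w + sum(d[i] for d in deltas) / n
            for i, w in enumerate(weights)]

# Two simulated devices, each with private training data.
global_weights = [0.0, 0.0]
device_a = [([1.0, 0.0], 1.0)]   # a true "Hey Google" example
device_b = [([0.0, 1.0], 0.0)]   # a near-miss, labeled negative
deltas = [local_update(global_weights, device_a),
          local_update(global_weights, device_b)]
global_weights = federated_average(global_weights, deltas)
```

In a production system the "delta" would itself be compressed and protected (for example with secure aggregation), but the privacy property shown here is the same: the server receives model changes, not recordings.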
“Your audio recordings stay on your device while a privacy-preserving technology combines information from you and many other participants to help Assistant learn over time and develop better smart features,” the menu explains.
The existing wake word sensitivity adjustment was born out of two paradoxical concerns: that the voice assistant isn’t responsive enough, and that it wakes up by accident too often, recording audio when people don’t want it to. A survey in early 2020 reported that two-thirds of voice assistant users had accidentally awakened a voice assistant over the course of a month, and an academic study last summer compiled a list of more than 1,000 terms that can activate Google Assistant, Alexa, or Siri by mistake. It makes sense that Google would want to improve its voice assistant’s intelligence in this regard.
Better wake word responses are also a good way for Google to make people feel more comfortable about their privacy when using the voice assistant. It’s similar to how the company restarted human review of audio recordings after pausing the program: by making it opt-in and the process more transparent. If Google and its rivals can prove that their voice assistants only wake up when users truly want them to, they can hope to head off stricter regulations like those the European Union is currently reviewing.