Google Assistant Uses On-Device AI for Pixel 7 and 7 Pro ‘Personalized Speech Recognition’ Feature
Google Assistant will improve how well it understands Pixel 7 and 7 Pro users without sending data to the cloud. This personalized speech recognition feature teaches the voice assistant how the user speaks and their preferences for things like acronyms and name pronunciations, all without transmitting data off the device, an important consideration for those who want to use the voice assistant but have privacy concerns.
Personalized Speech Recognition
The feature stores Pixel 7 and 7 Pro users' voice interactions with Google Assistant only on the device. The idea is that the AI can learn the unique ways a user speaks from repeated interactions and contexts. The AI retains audio and text interactions only in the short term, deleting them once it has learned a rule about how to interpret phrases, accents, and contextual interactions. When using Google Assistant, the AI might offer suggestions for how to refer to a place based on previous interactions, and the user's corrections help the AI learn and improve. The feature is enabled during setup but can be turned off if the user wants.
“Personalized speech recognition helps Google Assistant learn your unique speaking style over repeated interactions to get better over time, which is why your interactions are stored on-device for a short time. This means that the more you interact with Assistant, the better it will become at recognizing your speaking style,” Google explained in a support article. “Your interactions with Google Assistant can include the audio, recognized text, and any corrections you make from voice and typed queries. Our personalized speech recognition technology learns from all these aspects on-device and makes it so Google Assistant is able to adjust to your speaking style.”
The limited set of interactions means that other Google Assistant features, like voice typing, Voice Match, and personal results, are unaffected. The feature is aimed at tasks that don't need the cloud to run, such as entering a destination in Google Maps, starting a call with a contact, or looking up information already stored on the phone. It works much like manually teaching the AI a contact name's pronunciation so the voice assistant recognizes it when asked to call that contact, but it's limited to the new Pixel phones. That leaves out other Google devices, even though the Google Nest Hub and Google Mini can locally process some common requests for a faster response.