Apple Changing ‘Hey Siri’ Wake Word to Just ‘Siri’ and Other WWDC News
Apple’s voice assistant is shortening its wake word from “Hey Siri” to just “Siri.” The tech giant announced the change at WWDC this year as part of a package of new and updated features for iOS 17.
Siri
The condensed wake word is aimed at speeding up Siri’s responses to requests. A single dropped syllable might not seem like much, but it adds up, especially when issuing multiple commands. The change has reportedly been in the works since last year but ran into engineering headaches: Siri now has to recognize when to activate from just two syllables, across an enormous variety of accents and voices, while limiting accidental awakenings. A third syllable, as in “Alexa” or the “Hey, Google” and “OK, Google” phrases for Google Assistant, gives the detection system a lot more to work with.

That difficulty is also why Google has experimented with alternatives like Quick Phrases, commands that start a conversation with Google Assistant on phones and smart displays without any wake word. Rather than a wake word, the assistant relies on the Voice Match vocal identification tool to confirm that an approved user is speaking. The same approach underpins Google’s Continued Conversation feature, which lets users follow up on a request without repeating the wake word each time. The appeal of Quick Phrases has even prompted work on letting users create and set up their own custom phrases. Amazon, whose wake word is already a single word, is pursuing similar wake-word-free interaction through the Alexa Conversation Mode feature. Siri’s new wake word will arrive in September with iOS 17.
WWDC Voices
The Siri wake word news was by far the biggest voice and AI announcement at WWDC this year, but a few other features stood out. iOS 17 will transcribe voicemails automatically in real time, and the keyboard will use a new language model for better autocorrect and predictive typing, including for swear words. Beyond the new iOS, AirPods will get a software update that uses AI to adapt noise cancellation to a user’s environment, while the new Personalized Volume feature applies machine learning to learn a user’s preferred listening levels and adjust media volume accordingly. For face-to-face conversations, the new Conversation Awareness feature will automatically lower volume, increase transparency, and cut background noise when the wearer begins speaking. And voice will be an input option for the forthcoming $3,500 Vision Pro headset, which otherwise remains mysterious in many key respects.