
Apple Still in Holding Pattern on Voice, Siri Used 25 Billion Times Per Month But New Features Limited

Yael Garten, director of Siri data science and engineering at Apple.

Apple’s annual Worldwide Developer Conference (WWDC) 2020 came and went today with little notable progress on Siri or Apple’s overall voice strategy. That strategy can be summed up so far as minimalist incrementalism.

HomePod was referenced exactly once in the event that lasted nearly two hours. And, that reference was in relation to facial recognition. What? Yes, the smart speaker with no visual user interface made a cameo appearance in relation to facial recognition technology employed by Apple for smart home applications. Apparently HomePod will announce by name who is at the door if the smart camera identifies the person based on your contacts. Apple TV and the iPhone Home app will simply include the name in text below the live image.

Siri Gets Some Useful Features

At the past couple of years of WWDC and Apple’s fall hardware events, Siri didn’t make an appearance until over an hour into the presentation. This year, Siri appeared at the 13-minute mark. The focus was first on Siri’s updated user interface. Craig Federighi, SVP of Software Engineering at Apple, led off the presentation by commenting:

“As much as Siri has advanced over the years, the visual interface for interacting with it has remained largely unchanged…This year we’ve completely redesigned the Siri experience with a new compact design. It makes tasks like launching apps incredibly seamless. For example, if you say open Safari, Siri pops up at the bottom of the screen and instantly launches the app. Or, if you ask for information like the weather, results appear at the top of the screen just like a notification.”

Siri Usage is Significant Despite Limitations

It is worth noting that the UI updates will make Siri more appealing for many tasks, although complaints about Siri tend to center on its limited capabilities. With that said, Siri is a powerhouse in a way that matters.

Yael Garten, director of Siri data science and engineering, said that Siri handles 25 billion requests each month. If you assume that there are about one billion iPhone users and they represent the majority of Siri usage, then you are talking about an average of 25 uses per month, or nearly daily use. However, at least half of iPhone users don’t use Siri at all, so it may be more accurate to say that active Siri users average close to two uses per day. Siri may not win many plaudits for its capabilities, but that is apparently not undermining widespread use.
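The back-of-the-envelope arithmetic behind those estimates can be made explicit. A minimal sketch, assuming the round figures used above (one billion iPhone users and a 50% Siri adoption rate, neither of which is an official Apple number):

```python
# Rough estimate of Siri usage frequency from the figures cited in the article.
# The user base and adoption rate are assumptions, not Apple-reported data.
monthly_requests = 25_000_000_000   # 25 billion Siri requests per month (Apple)
iphone_users = 1_000_000_000        # assume ~1 billion iPhone users
adoption_rate = 0.5                 # assume only half of users ever invoke Siri

# Averaged across all iPhone users: ~25 requests per user per month
per_user_monthly = monthly_requests / iphone_users

# Averaged across active Siri users only: ~50 requests per month,
# or roughly 1.7 requests per day over a 30-day month
active_user_monthly = per_user_monthly / adoption_rate
active_user_daily = active_user_monthly / 30

print(per_user_monthly)             # 25.0
print(active_user_monthly)          # 50.0
print(round(active_user_daily, 1))  # 1.7
```

Even under these rough assumptions, the per-user frequency lands between one and two interactions per day, which supports the article’s framing of near-daily use.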

Translation On-Device Processing

Garten noted that Siri can also be used to send voice texts and to transcribe text messages with enhanced dictation. Both of these features employ Siri for automated speech recognition (ASR) and operate on device. That means more Siri functionality is moving to the iPhone that can operate without accessing the cloud for ASR or speech synthesis. This is a fairly significant development if it is fully operational without the cloud, though it is currently narrowly applied. On-device processing is thought to be a key feature for both performance improvements and enhanced user privacy.

Taking this a step further, Apple has launched the new Translate app. This looks like a new competitor to the Google Translate app. The key difference is it can also operate solely on device without accessing the cloud. “It can work completely offline keeping your conversation private,” said Garten in her video presentation.

It is not clear how this will impact fidelity and when the cloud is called in for assistance. Most likely there are some limitations and exceptions. Supported languages include English, Mandarin Chinese, French, German, Spanish, Italian, Japanese, Korean, Arabic, Portuguese, and Russian. In landscape mode, the Translate app can be used by two people speaking different languages to assist with real-time conversation.

Siri Knows More

Garten also mentioned that Siri now knows 20x more facts than it did three years ago, with more sources for information-based questions. This makes sense given that Siri has been closing the gap with Google Assistant in independent studies of voice assistant knowledge bases over the past couple of years.

There was nothing revolutionary in the Siri announcements, but there is some notable progress. The big gap remains the inability of third parties to go beyond the deep-linking capability offered by Siri Shortcuts and actually employ voice interfaces throughout app experiences. For those capabilities, a launch looks like 2021 or later.
