What Apple Didn’t Announce at WWDC 2019 Spotlights the Cracks in its Voice Strategy While the Announcements Offer Some Hope
When it comes to voice assistants and AI, Apple is either a sleeping giant or in a coma with a do-not-resuscitate order. It was easy to make the case for the coma in 2018, when the best idea presented was Siri Shortcuts, which at the time was a glorified routine builder. The HomePod went a full year without a meaningful feature upgrade, yet remains priced well above its peers. Siri may recognize local points of interest better today than it did in 2017, but that does not move the needle on new capabilities. It is not only that Siri hasn't been moving forward; Alexa and Google Assistant have continued to leap ahead and extend the voice assistant gap. Enter WWDC 2019, and Apple may be showing some signs of life, even if the pulse is erratic and sometimes faint.
What Apple Didn’t Announce That Should Disappoint Apple Users
- SiriOS: Think of a Siri operating system (OS) as the equivalent of Alexa with the Alexa Skills Kit or Google Assistant with the Actions on Google development environment. It would be a standalone set of capabilities, integrations, and frameworks that enables the development of voice-first and voice-only user experiences and interoperability with other computing environments. Today, Siri is a feature within Apple’s iOS for mobile devices and, to a lesser extent, macOS for computers. Both of those operating systems were designed with a set of assumptions that are inconsistent with voice-first and voice-only user experiences. A SiriOS is desirable for enabling innovation and is viewed by many as required to match the progress made by Amazon and Google with their voice assistants. Apple organized its WWDC keynote presentations in 2019 around four operating systems: tvOS, watchOS, iOS, and macOS. Many hoped that SiriOS would join them, but there is always 2020.
No SiriOS in this list. #wwDc19 pic.twitter.com/EJR3qR5wJi
— Bret Kinsella (@bretkinsella) June 3, 2019
- New Siri domains: Siri today is constrained by what iOS allows it to do. Those allowances, called domains, define how third-party developers can take advantage of Siri in their iOS apps. There are currently only 10 domains provided to third-party developers, ranging from fitness and payments to phone calls, photo management, and ride sharing. Surprisingly, developers cannot use Siri to provide better user experiences with features such as search, audio playback, or even basic information sharing such as airline flight status. Rumors circulated that these and as many as ten other new domains would be announced at WWDC, making Siri more open to developers. That did not come to pass. What this means is that Apple is not allowing developers to create new standalone Siri apps as a SiriOS would enable, and it is greatly circumscribing what Siri can do within iOS apps. David Gerbino yesterday started a #FreeSiri hashtag on Twitter because the metaphor that Siri is in chains is apt.
- Any developer tools related to the PullString acquisition: PullString is a software company that most recently provided a set of software tools that made it easier for developers and designers to create voice apps for Amazon Alexa and Google Assistant. If there is ever a SiriOS, PullString represents the type of tool that could accelerate adoption and new Siri app development. No developer tool or development environment of this nature was announced. That is not unusual given that the acquisition closed less than six months ago. However, one WWDC announcement did say that developers will now be able to create apps for Apple Watch without a dependency on an iOS app. This is another avenue for Apple to apply PullString’s know-how: assisting independent developers in creating apps for a product platform.
- Any features related to the Laserlike acquisition: Laserlike is another recent acquisition that many people speculate could help make Siri smarter and less dependent on rival services such as Google’s Knowledge Graph. Siri did not get any fancy new features that appear “laserlike,” so this is likely a longer-term play around building a new knowledge engine for the assistant.
What Apple Did Announce That Should Encourage Apple Users
- Watch apps no longer dependent on iOS apps: As mentioned above, Watch apps will soon lose their dependence on iOS. This is the same technology required to support a SiriOS—the decoupling of Siri from the mobile operating system. The decoupling of Watch apps from iOS suggests Apple has made an architectural change that enables formerly dependent features of iOS to operate independently. We saw further fragmentation of Apple’s OS model with the introduction of iPadOS, a platform that historically has also operated under iOS constraints. These moves suggest the likelihood of a future SiriOS has improved. With that said, Siri will be accessible through watchOS but will still be subordinate, as it is on iOS today. The open question is whether new domains will expand Siri access to third-party developers for the new Watch ecosystem. Siri remains a domain-limited assistant today.
Watch apps now independent from iPhone apps. That is what is needed for voice to take off for Siri as well. #WWDC19
— Bret Kinsella (@bretkinsella) June 3, 2019
- Voice-based user personalization: This feature was addressed so quickly it would have been easy to miss. HomePod will now recognize users by their voices and personalize responses based on the speaker. The new feature suggests that Apple’s automated speech recognition and voice identification capabilities have improved, which will allow for tailored user experiences leveraging Siri in the future.
- Announce Messages AirPods feature: When AirPods are in use and an iMessage arrives, Siri will read the notification aloud. Users can then reply by voice without saying the “Hey Siri” invocation phrase. AirPods are likely to be a key part of any “always available Siri” strategy, and this is a convenient feature for on-the-go users.
- Updates to Siri Shortcuts: Siri Shortcuts haven’t taken the world by storm, but they are an attempt to introduce more voice interactive conveniences to users. Improvements announced this week will make it easier for consumers to create and access shortcuts, and Siri will suggest shortcuts that may be helpful based on user behavior. There is also a conversational element that will enable Siri to ask clarifying questions to ensure it is accessing the correct shortcut, or doing so the way the user intended. These are meaningful changes that may just make Shortcuts easier to adopt and more popular with users.
- Improved Siri voice quality with Neural TTS: Siri sounds pretty good today and the demonstration revealed an even smoother, more natural sound that will be important as the voice assistant is called upon to read longer passages of content. This incremental change will only have a subconscious effect for most users but does demonstrate Apple’s continued investment in the Siri speech engine.
- Voice Control for screen navigation: It was hard not to see how this accessibility feature for voice navigation of screens could lead to better multimodal voice navigation for everyone over time. There was some discussion on Twitter that it was no better than Apple’s PlainTalk technology from the 1990s and that the real focus should be voice-only experiences. That seems like a very narrow view of what was demonstrated, but that is beside the point. Voice navigation of visual interfaces will be a required feature of voice assistants over time and Apple is demonstrating that it has a workable approach to this challenge and may actually be ahead of its peers.
.#Apple is introducing Voice Control for Mac and iOS devices. Navigating completely by voice. All speech is processed locally for privacy. Positioned as an accessibility feature. #WWDC19 pic.twitter.com/hjUyZrFiMY
— Bret Kinsella (@bretkinsella) June 3, 2019
Why Apple’s Siri Strategy Undermines User Experience
You have a mixed bag related to Apple’s voice strategy coming out of WWDC. As Voicebot’s Eric Schwartz said in yesterday’s coverage of WWDC, some new features were introduced but nothing radical happened. The real loser here isn’t Apple. It is Apple users. Voice assistants are introducing new conveniences into consumers’ lives, and those who have been loyal to the Apple ecosystem are either missing out on those benefits or straddling ecosystems by filling in the gaps with Alexa or Google Assistant. That approach is bifurcating users’ digital experiences, which is what Apple has been fighting against for more than a decade.
With that said, Apple is finally making some progress, and it looks like some important pieces of a robust voice strategy are being developed. However, it is hard not to see these moves as two years late. Apple has succeeded in the past by creating great products combined with superior ecosystems. Siri’s ecosystem remains non-existent, so every new feature only delivers a small incremental gain. A robust ecosystem can be a multiplier for consumer benefits, as we see today with Alexa and are beginning to see with Google Assistant. Apple’s grade for its voice strategy following WWDC 2019 is: Incomplete. See you in 2020.
If you could just use Siri properly, you wouldn’t need #darkmode. Let’s call it eyes-free mode. #wwdc19
— Bret Kinsella (@bretkinsella) June 3, 2019