Here is Something that Did Impress Me from Apple’s WWDC – Voice Control

Voice Control is a new accessibility feature in macOS and iOS that was demonstrated in the last quarter of the Worldwide Developers Conference keynote earlier today. It enables users of Macs and iPhones to navigate screens using only their voice. What is striking is the user interface that enables this navigation. The AI identifies every clickable or touchable object on the screen and assigns it a number, so the user can say a number to select an object that would otherwise be hard to describe. For objects with formal names, such as a send button, the user can say either the number or the name of the object.

The user in the video demonstration is either a quadriplegic or has limited use of his arms. Voice Control thus offers a clearly superior benefit over other screen navigation techniques for users who cannot manually manage a point-and-click or touch interface. If this feature goes no further, it will have been a worthwhile investment by Apple. Watching the video, you may also see ways these capabilities could offer benefits well beyond accessibility in the future.

This is More Voice Assistive than Assistant

Think of this as voice assistive technology as opposed to voice assistant technology. Voice Control is a good name for it. The user employs voice to navigate a user experience originally designed for visual interaction. Voice control is a tool you use directly, while a voice assistant is a virtual agent that uses tools to do things for you. I make this distinction to emphasize that Voice Control doesn't suggest any improvement in Siri. As a voice assistant, Siri struggles because it lacks breadth of functionality, not depth of navigation control. Siri needs more voice-only capabilities and is too tied to screens today.

However, Apple’s conception of voice control and its deep integration with the OS is an interesting extension of what we have seen on smart displays that encourage voice-first navigation. There will be more multimodal interaction that joins voice and visual navigation over the next few years. Apple is already demonstrating a model that can work from both an architecture and user experience standpoint. Siri’s AI may need to be shored up, but this is one area where the OS may be ready ahead of Apple’s peers.
