
Nuance Automotive Demonstrates Driver Emotion Detection at CES and Shows How Virtual Assistants Can Become Proactive

Image Credit: Nuance Automotive

Nuance Communications announced in November the spin-off of Nuance Automotive as a separate company. The division generated $279 million in revenue in the 2018 fiscal year ending September 30. Nuance Automotive was present in the North Hall of the Las Vegas Convention Center for CES with two demonstration vehicles. On display were features first debuted in 2018, such as Just Talk, a voice assistant feature that doesn’t require a wake word to activate. It recognizes relevant questions and commands automatically without the need to say, “Hey [VOICE ASSISTANT NAME].” However, the more interesting new features were baked into what the company calls Emotion AI.

Adjusting to Driver Emotions

Emotion AI relies on in-car eye tracking, facial recognition, and analysis of the user’s utterances. The demonstrator first interacted with the voice assistant so we could get a sense of the standard dialogue. He then smiled while looking out the windshield as if he were driving. When he next interacted with the in-dash voice assistant, a small smiley face appeared in the upper right-hand corner of the display and the dialogue was noticeably more verbose and less formal.

When he then frowned, the in-car camera picked up on the change in expression and the assistant transitioned to more direct and efficient language. It did not include any unnecessary phrases or words and simply answered the question or executed the command. Robert Policano, a Nuance product manager in the automotive group, said:

“We’re showing some pretty cool stuff that we’re really excited to be bringing to the show for the first time including using emotion recognition through facial expression and voice to register the mood or the demeanor of the driver and adapting how our virtual assistant speaks and reacts to that person when they’re engaged with it. So for example, if the car detects that I’m in a really good mood it’s going to be a lot more verbose and it’s going to use a sort of more natural sounding expressions. We often in speech call these gilded expressions or phrases like “ahhs” and “umms” and “hmmm” and it sounds a little bit more expressive. And, it’s kind of moving a little bit closer towards that concept of becoming more humanized [when] talking to a computer.”

The feature changes how the voice assistant interacts by adjusting to emotional cues. Humans do this quite naturally by sensing the state of the person they are interacting with. The skill is honed over a lifetime, but even infants have this innate capacity.

Computers lack this innate capacity. However, some sensing cues can be taught to an assistant so that it reacts to the human in a way that is best aligned with the mood, task, and situation. This is a higher-order reactive capability for a voice assistant, a step beyond fulfilling requests as if all interactions were the same. It remains to be seen whether consumers will view emotion detection positively. With that said, it is an ambitious effort to make voice assistants more humanlike in their interactions.
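To make the idea concrete, here is a minimal sketch of how a detected mood might steer response style. The mood labels, thresholds, and response templates below are illustrative assumptions; Nuance has not published the details of its Emotion AI system.

```python
# Hypothetical sketch: adapting assistant responses to a detected driver mood.
from dataclasses import dataclass

@dataclass
class EmotionSignal:
    facial_expression: str   # e.g. "smile", "frown", "neutral" from the in-car camera
    voice_arousal: float     # 0.0 (flat) to 1.0 (animated), from prosody analysis

def infer_mood(signal: EmotionSignal) -> str:
    """Fuse camera and voice cues into a coarse mood label (assumed labels)."""
    if signal.facial_expression == "smile" and signal.voice_arousal > 0.5:
        return "positive"
    if signal.facial_expression == "frown":
        return "negative"
    return "neutral"

def render_response(answer: str, mood: str) -> str:
    """Pick a response style that matches the driver's mood."""
    if mood == "positive":
        # Verbose, informal, with filler expressions for a more human feel.
        return f"Hmm, sure thing! {answer} Anything else I can do for you?"
    if mood == "negative":
        # Direct and efficient: just the answer, no extra phrases.
        return answer
    return f"Okay. {answer}"

if __name__ == "__main__":
    signal = EmotionSignal(facial_expression="frown", voice_arousal=0.2)
    print(render_response("The nearest charging station is 3 km ahead.",
                          infer_mood(signal)))
```

In this toy version the mood only changes the wording of an answer that has already been computed, which mirrors the demo: the same command produces the same action, but the assistant's tone shifts with the driver's expression.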

Providing a Proactive Voice Assistant

Image Credit: Voicebot.ai

The ability to detect if the driver is drowsy is something quite different. It enables proactive engagement, where the voice assistant initiates an interaction with the human, and that makes the capability even more noteworthy. If the in-car camera notes the driver’s eyes closing for longer than a blink, or detects other expressions associated with drowsiness, the assistant will initiate an interaction, saying something like, “You seem drowsy. Would you like me to start a game we can play while you drive?” Eventually, it might say, “Would you like to pull over for a few minutes to rest?”
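A minimal sketch of that proactive loop might look like the following. The blink threshold, event count, sensor interface, and prompt wording are assumptions for illustration, not Nuance’s implementation.

```python
# Hypothetical sketch: a drowsiness monitor that lets the assistant speak first.
import time

BLINK_THRESHOLD_S = 0.4   # eye closures longer than a normal blink
DROWSY_EVENTS_LIMIT = 3   # prolonged closures before the assistant speaks up

def monitor_driver(eye_closed_fn, speak_fn, poll_s: float = 0.1) -> None:
    """Poll an eye-state sensor and proactively prompt a drowsy driver.

    eye_closed_fn: callable returning True while the driver's eyes are closed
                   (assumed to be backed by the in-car camera's eyelid tracking).
    speak_fn: callable that makes the assistant initiate a dialogue.
    """
    closure_start = None
    prolonged_closures = 0

    while True:
        if eye_closed_fn():
            if closure_start is None:
                closure_start = time.monotonic()
        else:
            if closure_start is not None:
                # Count only closures that lasted longer than a blink.
                if time.monotonic() - closure_start > BLINK_THRESHOLD_S:
                    prolonged_closures += 1
                closure_start = None

        if prolonged_closures >= DROWSY_EVENTS_LIMIT:
            # The assistant initiates the interaction instead of waiting for a command.
            speak_fn("You seem drowsy. Would you like me to start a game "
                     "we can play while you drive?")
            prolonged_closures = 0

        time.sleep(poll_s)
```

The key design point is the direction of the interaction: nothing here waits for a wake word or a user request; the monitoring loop itself decides when the assistant should speak.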

On the surface, this seems like a driver safety feature, and it is. However, it is also more than that. The virtual assistant is recognizing a condition of the human and proactively suggesting something beneficial. Today’s popular consumer-grade virtual assistants such as Alexa and Google Assistant are entirely reactive to user inputs. Nuance Automotive’s latest demonstration offers a window into where assistants are headed: they will both react to our requests and proactively anticipate our needs. The car may be the first step in this evolution because the capability is ensconced in a driver safety feature, but proactive assistants will quickly move into more general settings where they can provide even more value to consumers.
