Alexa Developers Can Now Personalize Voice Skills With Names and Phone Numbers

Amazon has expanded the ways Alexa developers can use voice profiles in their skills. Voice apps that use Alexa voice profiles can now incorporate contact information to further personalize how the voice assistant engages with users.

Name and Number

Amazon introduced Alexa voice profiles in 2017, allowing the assistant to distinguish between multiple speakers sharing an Amazon account on an Echo device. This past autumn, third-party developers gained the option to include voice profiles in their skills as well. The idea is that more than one person can make a purchase or check a workout result using Alexa without having to log out and back in to their own Amazon accounts.

“Now, when a customer’s voice is recognized, you can request customer permission to access certain contact information and further personalize their experience,” Amazon explained in its announcement. “For example, a game skill can request to incorporate the customer name for a global leaderboard. Further, a food delivery skill can request to send updates on the food delivery status to customers on their mobile number. By leveraging the Person Profile API, you’ll be able to customize the experience for recognized customers in a household and improve your customer experience.”
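In practice, a skill backend calls the Person Profile API with the API endpoint and access token supplied in each incoming request envelope (`context.System.apiEndpoint` and `context.System.apiAccessToken`). The sketch below is not official Amazon sample code: the profile paths and the permission-denied handling reflect Amazon's documented pattern at the time of writing and should be treated as assumptions to verify against the current Alexa developer docs.

```python
# Minimal sketch of fetching a recognized speaker's profile attribute
# from an Alexa skill backend. Paths under /v2/persons/~current/ are
# assumptions based on Amazon's Person Profile API docs; verify before use.
import json
from urllib.request import Request, urlopen

# Profile attributes the API exposes for the recognized speaker (assumed paths).
PERSON_PROFILE_PATHS = {
    "fullName": "/v2/persons/~current/profile/name",
    "givenName": "/v2/persons/~current/profile/givenName",
    "mobileNumber": "/v2/persons/~current/profile/mobileNumber",
}

def build_person_profile_request(api_endpoint: str, api_access_token: str,
                                 attribute: str) -> Request:
    """Build the authenticated GET request for one profile attribute.

    api_endpoint and api_access_token come from the skill request envelope
    (context.System.apiEndpoint and context.System.apiAccessToken).
    """
    url = api_endpoint.rstrip("/") + PERSON_PROFILE_PATHS[attribute]
    return Request(url, headers={
        "Authorization": f"Bearer {api_access_token}",
        "Accept": "application/json",
    })

def fetch_person_attribute(api_endpoint: str, api_access_token: str,
                           attribute: str):
    """Fetch the attribute for the recognized speaker.

    An HTTP 403 response means the customer has not granted the skill
    permission, in which case the skill should respond with a permissions
    consent card rather than failing silently.
    """
    req = build_person_profile_request(api_endpoint, api_access_token, attribute)
    with urlopen(req) as resp:
        return json.load(resp)
```

A food delivery skill, for example, might call `fetch_person_attribute(..., "mobileNumber")` only after the user accepts the permission prompt, and fall back to an account-level experience when no voice profile is recognized.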

Personal Alexa

The resulting lower-friction experience is exactly what Amazon wants. People will use Alexa more, and feel more comfortable doing so, if it doesn't require extra mental effort. It's the same principle behind the app-to-app account linking for Alexa skills that Amazon debuted last summer. Voice app developers want the same thing, of course. If a voice app is purely an Alexa skill, more personalization can improve what it brings to its customers. If it's a multi-platform app, so much the better: the personalization available in text form on a mobile app won't have to be sacrificed in the voice experience.

Of course, the user needs to permit the apps to use their name and contact information. The platform owners are keen to encourage that consent, which is why mentions of privacy and safety are peppered throughout the developer documentation for personalized skills. Amazon, Apple, and Google all devote resources to improving how well voice assistants recognize people and how to leverage that for better voice apps. Google recently updated its Voice Match setup system to make it more accurate and more secure. And Apple researchers recently published a paper on improving how well an AI can simultaneously detect a wake word and identify who is saying it, trained on 16,000 hours of audio from more than 100 people.
