Amazon Alexa Now Allows Developers to Optimize Skills for the Car
Amazon announced today in a blog post that it now allows developers to customize Alexa skills for use in the car. The customized experiences can be made available to users driving cars from Alexa Auto partners such as Ford, Lexus, SEAT, and Toyota, as well as through aftermarket devices such as Echo Auto, which is still in private trials but which Amazon says has received over 1 million pre-orders. Other aftermarket devices with Alexa Auto built in, such as the ROAV Viva, Garmin Speak, and Muse Auto, will also be able to signal to the skill that the user is in the car. Amazon’s message to developers:
Now you can adapt your skill experience to be succinct, location-aware, and adaptive to your customer’s needs while they’re outside the home.
Identifying as an In-Car User Session
When an Alexa session is activated in a car or through an aftermarket device with Alexa Auto integration, a JSON message indicating the “state of the Alexa service and device” type will be passed to the skill. The Alexa skill will need to recognize this information about device capabilities and offer capabilities that are optimized for the driving experience. Alexa skills have previously received information that let them know whether a device can play audio or video, or has image display capabilities, and now they can also learn whether the user is in a car. Amazon’s Leo Ohannesian says,
“Some design principles for in-car experiences that will be important to address are: keeping responses succinct, allowing customers to resume their interaction with your skill once they have left their cars (or vice-versa), being location-aware in your design, introducing pauses and breaks when customers need to maintain focus, and limiting your design to be voice-only.”
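To make the mechanism concrete, here is a minimal sketch in Python of how a skill backend might inspect the incoming request JSON to detect device capabilities and an in-car session. The field names used here, including the `Automotive` context object, are assumptions for illustration rather than confirmed details of Amazon's request schema:

```python
# Hypothetical sketch: inspect an Alexa request envelope to detect
# device capabilities and an in-car session. The "Automotive" key and
# other field names are assumptions, not confirmed API details.

def describe_session(request: dict) -> dict:
    """Summarize what the requesting device can do and where it is."""
    context = request.get("context", {})
    system = context.get("System", {})
    interfaces = system.get("device", {}).get("supportedInterfaces", {})
    return {
        "has_audio": "AudioPlayer" in interfaces,
        "has_display": "Display" in interfaces,
        # Assumed signal: a context object present only for in-car sessions
        "in_car": "Automotive" in context,
    }

# Illustrative envelope for an in-car, audio-only session
sample_request = {
    "context": {
        "System": {"device": {"supportedInterfaces": {"AudioPlayer": {}}}},
        "Automotive": {},
    }
}

session = describe_session(sample_request)
```

A skill following the design principles above could then branch on `session["in_car"]` to keep responses succinct and voice-only while the user is driving.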
This change is welcomed by developers. Many have been clamoring for the ability to deliver more context-based user experiences through Alexa and other voice assistants. Niko Vuori is CEO of Drivetime.fm which makes voice interactive games specifically for the car. He commented today on Amazon’s move to enable customization based on in-car user sessions:
“Using your voice to interact with your surroundings is perfectly suited for the automotive environment, so we are excited to see Amazon driving awareness in this space. Context matters tremendously in design, and principles that work in the home on smart speakers don’t make sense in the car. As with any new technology platform, it is the developer community that ends up defining how that technology is used. Voice will be different [than what we’ve seen before], and we’re really excited to see what kinds of experiences the developer community dreams up.”
Why the Car is So Important for Voice Assistants
The car is an important environment for voice assistants for the simple reason that so many users employ voice interaction in automobiles today. Voicebot’s In-Car Voice Assistant Consumer Adoption Report published in January 2019 revealed that there were 60% more monthly active voice assistant users in the car than through smart speakers in the fall of 2018. And automakers are moving aggressively to integrate popular voice assistants into cars. The missing link is content optimized for the car, because so much of it was designed for in-home smart speaker users. Now that developers can identify which sessions are coming from users in the car, they can start to optimize for a very different context.
Smart speaker adoption has grown quickly and is now a consumer engagement channel with scale in many countries. In the U.S., there are now more than 66 million users. However, voice assistants are not limited to smart speakers, and in 2019 we are already starting to see much more attention paid to customizing voice apps for additional contexts ranging from smart displays and smartphones to the car. Welcome to Phase 2 of voice assistant adoption.