Amazon Unveils Alexa Live 2022 Plans for July 20
Amazon has revealed that its annual Alexa Live event will be held entirely online on July 20, starting at 12 p.m. ET. The tech giant promised a day of discussions and announcements focused on recent breakthroughs in ambient computing and on new ways for voice app developers and smart device makers to take advantage of those improvements.
The idea of ambient AI is to let users access a virtual assistant without taking a device out of a pocket or looking at a screen. This hands-free, and sometimes eyes-free, paradigm relies on AI that understands the user and their current context well enough to respond appropriately to requests and even anticipate needs before they are verbalized. When not needed, the ambient intelligence becomes invisible again. The hundreds of millions of Alexa devices Amazon claims are currently in use make at least some degree of ambient AI feasible, and Alexa Live 2022 will showcase new tools and methods that help clients and third-party developers take advantage of it. The event will include a keynote and multiple breakout sessions featuring Amazon executives, as well as case studies from companies such as Disney, Philips, and Spotify.
“Our vision for ambient computing can only be realized by extensive collaboration between teams at Alexa and all of our partners,” Alexa director of business to business and developer marketing Kelly Wenzel said. “Alexa Live 2022 is our opportunity to showcase the many products, programs, and services we’ve designed specifically to help brands and builders leverage Alexa’s growing footprint to innovate on behalf of customers.”
Amazon called last year’s Alexa Live its single largest release of new developer tools, with more than 50 announcements, a major step up from the 31 new features unveiled at Alexa Live 2020. The last two years also saw the introduction and advancement of the deep learning-based dialogue tool Alexa Conversations. The most recent Alexa Live likewise marked the rollout of Alexa Skill Components, which streamline skill building by wiring standard voice app elements into a developer’s models and databases so the basics don’t have to be coded repeatedly. Last year also brought a significant number of accessibility features to public attention. For instance, the Customized Pronunciations feature lets developers teach Alexa the names, titles, or scientific jargon used in their voice app, so the assistant both says those terms correctly and recognizes them when a user says them. Amazon also introduced the Skill A/B Testing Service, which lets developers trial a new or updated Alexa skill before release, with the resulting data informing the release schedule and plans for future updates.
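Customized Pronunciations is configured through developer-supplied catalogs, but the underlying idea can be illustrated with Alexa’s long-standing SSML `<phoneme>` tag, which tells the assistant how to say a specific word. The sketch below is a minimal, hypothetical example: the skill name and IPA transcription are invented for illustration, not taken from any real skill.

```python
def ssml_with_pronunciation(word: str, ipa: str) -> str:
    """Wrap a word in an SSML <phoneme> tag so Alexa speaks it
    using the supplied IPA transcription instead of guessing."""
    return (
        "<speak>"
        f'Welcome to <phoneme alphabet="ipa" ph="{ipa}">{word}</phoneme>.'
        "</speak>"
    )

# Hypothetical skill name with an illustrative IPA spelling.
print(ssml_with_pronunciation("Pecan Pal", "pɪˈkɑːn pæl"))
```

A skill would return SSML like this in its speech response, so brand names or jargon are pronounced consistently rather than left to Alexa’s default text-to-speech rules.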
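Amazon’s Skill A/B Testing Service handles cohort assignment and measurement itself, but the core idea behind such services can be sketched generically: hash each user ID together with an experiment name so every user lands deterministically in a control or treatment group. The function and experiment name below are illustrative assumptions, not part of the Alexa API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to control ("A") or treatment ("B")
    by hashing the user ID with the experiment name, so the same user
    always sees the same variant of the skill."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
    return "B" if bucket < treatment_share else "A"

# The same user always lands in the same cohort for a given experiment.
variant = assign_variant("user-123", "new-welcome-prompt")
assert variant == assign_variant("user-123", "new-welcome-prompt")
```

Deterministic hashing avoids storing per-user assignments while still splitting traffic at the configured share, which is what lets an A/B service compare how two versions of a skill perform before a full rollout.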
“Our vision for Alexa is to be an ambient assistant that is proactive, personal, and predictable, everywhere customers want her to be. This enables them to do more and think about technology less. It’s our long-term vision, which means there’s a lot of work to be done to make this a reality,” Alexa chief technology evangelist Jeff Blankenburg explained last year. “These tools and features will make it easier to drive discovery, growth, and engagement, unlock more ways to delight customers, and address a few key focus areas.”