Why You Want Amazon to Provide Alexa Skill Utterance Data to Developers
The Information reported (paywall) last week that Amazon may start providing Alexa skill utterance data to developers. Citing anonymous sources, the article reports:
Amazon is mulling changes to a policy that could help developers build higher quality voice apps for Alexa, but would mark a subtle yet significant shift in its stance on guarding users’ privacy. Such changes may signal that Amazon is getting more serious about making Alexa a more attractive platform for commercial developers.
Since Amazon launched the Alexa digital assistant—which runs on the Echo voice-activated speaker—it hasn’t given developers access to any information about what people say while speaking to Alexa. The policy is designed to protect users’ privacy, Amazon has said. Developers get general information like the total number of times a person speaks to Alexa.
But in recent discussions with developers, Amazon representatives have talked about the possibility of opening up access, said the people.
How Utterance Data Impacts App Performance
While the blogosphere is abuzz about privacy concerns, this debate is really about application performance, not nefarious intent. Opearlo co-founder and CTO Oscar Merry recently reflected on the situation:
“I would say that for a lot of use cases, developers are looking to access user’s free form text, either to capture some kind of user message in free text, or to carry out their own natural language processing. The way Alexa’s voice model currently works makes this challenging, and although there are workarounds, they don’t work that well. If Amazon gave developers the option to access free form text this would make things a lot easier, and also open up the kinds of use cases possible on Alexa.”
Ahmed Bouzid is founder and CEO of Witlingo and a former Amazon Alexa team member. He agrees with Merry’s sentiment and contrasted the ability to deliver a basic skill versus “excellence.”
“Jeff Bezos often advises against waiting for certainty to launch a product and instead proposes that one make do with 70% of the knowledge one wishes they could have. Right now, anyone who launches a skill is at best launching with 70% of what they need to know. This is fine for delivering a minimally viable product, but not fine for delivering customer-obsessed excellence.
“With the full text of what Alexa users say, skill designers and product managers can finally do that magical thing that takes a product from minimal viability to excellence — and that’s iteration based on actual customer data. It’s a crucial moment of truth for Amazon. If they want their partners to deliver excellence and obsess over customer experience, they need to enable us skill builders to do our job.”
Amazon Must Now Consider Its Competition
Michael Myers, Chief Product Officer at XAPPmedia, contrasted Amazon’s approach to Alexa data with Google Assistant’s more comprehensive sharing today.
“The most important use for the Google Action transcription is to analyze what your users are asking for that you aren’t catching. It is crucial to improving the skills and making them more conversational.
“For cross-platform skills that we operate on both Google Assistant and Amazon Alexa, we just look at the missed utterances in the Google data, then apply them to the Amazon skill as well. But the platforms are different. It would be better to have data from each platform so we could optimize voice applications for both.”
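The workflow Myers describes can be illustrated with a short sketch. This is a hypothetical example, not XAPPmedia's actual tooling: it assumes a transcript exported as (utterance, matched intent) pairs and a fallback intent label, and surfaces the utterances that repeatedly went unhandled so they can be added as sample utterances on both platforms.

```python
from collections import Counter

def find_missed_utterances(transcript, fallback="FallbackIntent", threshold=2):
    """Return utterances that repeatedly fell through to the fallback intent.

    transcript: iterable of (utterance, matched_intent) pairs, as a platform
    that shares transcription data might export them (hypothetical format).
    Utterances missed at least `threshold` times are returned most-frequent
    first -- candidates for new sample utterances in the interaction model.
    """
    misses = Counter(
        utterance.lower().strip()
        for utterance, intent in transcript
        if intent == fallback
    )
    return [u for u, n in misses.most_common() if n >= threshold]

# Invented transcript data, purely for illustration
log = [
    ("play my flash briefing", "NewsIntent"),
    ("skip this story", "FallbackIntent"),
    ("skip this story", "FallbackIntent"),
    ("what's the weather", "WeatherIntent"),
    ("read it slower", "FallbackIntent"),
]
print(find_missed_utterances(log))  # → ['skip this story']
```

Without transcripts from Amazon, a developer can only run this kind of analysis on the Google side and hope the gaps it reveals carry over to the Alexa version of the skill.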
This should be of even greater urgency to Amazon as Google becomes a more effective global competitor. Although Google Home has trailed Amazon Echo sales in the U.S. by a large margin, Google Assistant is now available on hundreds of millions of Android phones, and Google Home beat Alexa to market in both Canada and Australia. This means Amazon won’t be able to rely on a dominant early market position as it enters new countries behind Google. Instead, it will need to compete on features and on the quality of the voice applications consumers can access. It will also need to compete for developer attention.
This is About User Experience More Than Privacy
Is this really a showdown between the desires of consumers and the nefarious wishes of developers? No. Google and Microsoft both provide user utterance transcript data today, and Facebook does the same for chatbots. Users are interacting with a piece of software; they are not in a confessional. If there were HIPAA concerns, that might be another matter. However, Amazon doesn’t provide HIPAA compliance for Alexa today, so that isn’t an issue.
Bouzid said that without this data, his teams must comb Alexa skill reviews and social media to identify issues. That is both inefficient and damaging to consumer perception of individual skills, the brands that build them, and the Alexa platform overall. If Amazon intends to rely on third-party developers to deliver a great Alexa user experience, it must offer the tools required to deliver on that objective. Transcripts of user interactions are exactly such a tool, and providing them would not violate existing terms of service.
We should embrace this imminent move by Amazon to bring the company in line with existing practices of the other voice platforms. Alexa users will be the ultimate beneficiaries.