8 Experts Offer Tips for Building Better Actions for Google Home
For the final installment of our four-part series on building better voice applications, Voicebot asked 8 experts for their advice on developing for Google Home. This really means developing for Google Assistant, because that makes your voice application available on Google Home and on hundreds of millions of Android devices. And Google won’t certify your Action if it doesn’t support multi-modal use (i.e. both voice-only Google Home devices and Google Assistant on screens). Here is the question:
What is one thing people should do when building for Google Assistant/Home to deliver better Actions?
Enable App Discovery
Pat Higbie, XAPPmedia
Action builders need to understand the value of deep link discovery (i.e. Voice SEO) and create appropriate deep links into the Action so users can find the voice app using phrases related to its market, without having to know the invocation name. For example, today when users say “I want car insurance,” Google Assistant refers them to the Progressive Action.
Dan Whaley, Sabio
One of the biggest issues for adoption of new Actions on voice-first interfaces is discoverability. It’s a simple thing to do, but make sure you register the kinds of things your Action is designed to do so that Google Assistant can suggest it in response to relevant user questions. That way, you’ll clear the first hurdle of the user experience: actually having people find your app.
Stephane Nguyen, Assist
With Google Assistant, we have to keep in mind that voice is not the only path. As you probably know, Google Assistant is not “just” Google Home; it also has to work visually on devices, because Assistant is included on phones. So, as you build an application, you have to translate between voice and visual. For example, a list displayed as a carousel with CTAs (calls to action) needs to be handled differently on voice: should you enumerate all the list items and then present the CTA choices once an item is selected? Or should you read everything at once? These are the kinds of challenges we are thinking about here at Assist.
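One way to handle that voice/visual split is sketched below, assuming a Dialogflow fulfillment built with the actions-on-google Node.js client library (v2). The intent names, the two-item plan list, and the prompts are hypothetical; the pattern is to check whether the surface has a screen and either render a carousel or enumerate the items aloud.

```typescript
// Hypothetical Dialogflow fulfillment: branch between a visual carousel
// and a spoken enumeration of the same list.
import { dialogflow, Carousel } from 'actions-on-google';

const app = dialogflow();

app.intent('browse.plans', (conv) => {
  if (conv.screen) {
    // Screen surface (phone/tablet): show a carousel and let the user tap
    // an item, then present the CTA choices once an option is selected.
    conv.ask('Here are the plans I found. Which one would you like to look at?');
    conv.ask(new Carousel({
      items: {
        BASIC: { title: 'Basic plan', description: 'Liability coverage only', synonyms: ['basic'] },
        FULL: { title: 'Full plan', description: 'Liability plus collision', synonyms: ['full', 'complete'] },
      },
    }));
  } else {
    // Voice-only surface (Google Home): enumerate the items instead of "showing" them.
    conv.ask('I found two plans. Basic plan, liability coverage only. ' +
      'Full plan, liability plus collision. Which one would you like?');
  }
});

// The tapped or spoken carousel selection arrives as the third handler argument
// on a Dialogflow intent bound to the actions_intent_OPTION event.
app.intent('plan.selected', (conv, params, option) => {
  conv.ask(`You picked ${option}. Would you like a quote or more details?`);
});

// In production this app object would be exposed as the Dialogflow webhook,
// e.g. wrapped in a Cloud Functions HTTPS handler.
```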
Leverage Unique Google Action Features
Nick Schwab, Independent Developer, Alexasounds.com
Take advantage of the “Suggestion Chips” for Google Actions as a way to provide a better screen-assisted experience for users on a phone or tablet.
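A minimal sketch of that, again assuming the actions-on-google Node.js client library; the intent name and chip labels are made up for illustration. Chips are added alongside the spoken prompt and simply don’t render on voice-only devices.

```typescript
import { dialogflow, Suggestions } from 'actions-on-google';

const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome! What would you like to do?');
  // Suggestion chips render as tappable shortcuts on phones and tablets;
  // voice-only devices ignore them.
  conv.ask(new Suggestions('Get a quote', 'Find an agent', 'Help'));
});
```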
Jo Jaquinta, Independent Developer, Tza Tza Tzu
Use the permission system. Account linking is difficult and cumbersome, and it gets in the way of the user experience. If you need the user’s name or location, you can get that through Google’s permission system. Even better, you can choose when to ask the user for it. So you can get them into and engaged with the Action, and then ask for the information. It is a much better user experience.
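Here is a rough sketch of that flow with the actions-on-google Node.js client library; the intent names and prompts are hypothetical. The key pieces are the Permission helper and a Dialogflow intent bound to the actions_intent_PERMISSION event that receives the user’s answer.

```typescript
import { dialogflow, Permission } from 'actions-on-google';

const app = dialogflow();

// Ask for location only at the point in the conversation where it is needed,
// rather than forcing account linking up front.
app.intent('find.nearby.agent', (conv) => {
  conv.ask(new Permission({
    context: 'To find an agent near you',
    permissions: 'DEVICE_PRECISE_LOCATION',
    // 'NAME' works the same way and surfaces the result on conv.user.name.
  }));
});

// Dialogflow intent bound to the actions_intent_PERMISSION event; the third
// handler argument says whether the user granted the permission.
app.intent('permission.result', (conv, params, granted) => {
  const location = conv.device.location;
  if (granted && location && location.coordinates) {
    const { latitude, longitude } = location.coordinates;
    conv.ask(`Thanks! Looking for agents near ${latitude}, ${longitude}.`);
  } else {
    conv.ask('No problem. You can tell me your zip code instead.');
  }
});
```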
Jess Williams, Opearlo
One of the great features of using API.AI to build Google Actions is that you can go back and assign historical utterances to intents. Using this feature on the results of your beta test is a really effective way to make sure your Action caters for the different ways people speak.
Go Multi-Modal
Adam Marchick, VoiceLabs
Support multi-modal immediately! There are now millions (on the way to billions) of Assistant app downloads on both Android and iOS, and this will be a major way to drive discovery. Even if you have no visuals to start, at least supporting the Action in the Google Assistant app is a must-have.
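A small sketch of that idea, again assuming the actions-on-google Node.js client library; the intent name, card content, and image URL are placeholders. The spoken response works everywhere, and a visual card is attached only when the surface reports a screen.

```typescript
import { dialogflow, BasicCard, Image } from 'actions-on-google';

const app = dialogflow();

app.intent('daily.summary', (conv) => {
  // Every response should work with audio alone...
  conv.ask('Here is your summary for today: two meetings and one reminder.');

  // ...and add a visual layer when a screen is available (phone, tablet).
  const hasScreen = conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');
  if (hasScreen) {
    conv.ask(new BasicCard({
      title: 'Today at a glance',
      text: '2 meetings, 1 reminder',
      image: new Image({
        url: 'https://example.com/summary.png', // placeholder image URL
        alt: 'Daily summary',
      }),
    }));
  }
});
```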
Ahmed Bouzid, Witlingo
[Developers] should leverage the multi-modal capabilities and think about use cases where the various surfaces out there can be brought into the Action value chain.