
Interview with Travis L. Teague – A Leader in the Alexa Developer Community

Voicebot recently caught up with Travis L. Teague, a pioneer in the Amazon Alexa development community, to get his take on the current state of voice application development on the Alexa, Google Home and Mycroft platforms. Travis was developing PDA applications in the 1990s and has worked with voice interfaces for several years. He is also the moderator of both the Alexa and Google Home Slack groups.

You are well known among Alexa developers as the moderator of the Alexa Slack group. How did you originally get involved with the Alexa platform and community?

Travis L. Teague – I’ve been involved in home automation for years. My father picked up a number of Echo devices for members of the family in 2015 and I started playing around with Alexa a bit. I moved over from the homegrown voice recognition and text-to-speech system I had been using, built from Cepstral, Insteon, Asterisk and a lot of glue, and got hooked on Alexa.

I also do a fair amount of hardware projects. The most recent release of AlexaPi allows AVS, the Alexa Voice Service, to be accessed from the Raspberry Pi, Orange Pi, C.H.I.P., Edison and PC. You can customize your wake word, and you can have AirPlay streaming on it. There is even Hyperion and MagicMirror integration. A lot of individuals are dedicating their time and skills to making this an amazing platform.

How does AlexaPi work?

Teague – AlexaPi is written in Python and based on the initial code that Sam Machin wrote. Echosim.io is also based on his code. Mason Stone (http://github.com/maso27) integrated PocketSphinx and really got the ball rolling on custom wake words. René Kliment (http://github.com/renekliment) has since taken over the lead on the project, and he is an absolute maniac (in a *very* good way)! He has been the driving force behind getting everyone moving on this expansion. There are a lot more folks who are heavily involved, and you can find them at http://github.com/alexa-pi/AlexaPi/wiki/Contributors.
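
For readers curious what custom wake word detection looks like, here is a minimal sketch using PocketSphinx's keyword spotting mode. It is illustrative only, not AlexaPi's actual code; the wake word "jarvis" and the threshold value are arbitrary choices for the example.

```python
# Illustrative sketch: keyword spotting with PocketSphinx, the same general
# technique used for custom wake words. NOT taken from the AlexaPi codebase.
from pocketsphinx import LiveSpeech

# Listen on the default microphone and trigger whenever the keyphrase is heard.
# kws_threshold trades false positives against missed detections.
speech = LiveSpeech(lm=False, keyphrase='jarvis', kws_threshold=1e-20)

for phrase in speech:
    # At this point a client like AlexaPi would start streaming audio to AVS.
    print('Wake word detected:', phrase.segments(detailed=True))
```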

Voicebot had the opportunity to interview Mycroft founder Josh Montgomery recently. What are your thoughts on Mycroft?

Teague – I have heard some people say that Mycroft is a “niche product.” I wouldn’t say that by any means. There are more members in the Mycroft Slack group than in the Alexa group. Mycroft is a very open project that has some brilliant minds behind it. They also don’t have the “veil of secrecy” that you see with Amazon. Very recently there have been huge changes dumped on the Alexa dev community with no warning. Why do that to the people who are pouring their time into making your platform a success? Steve Penrod, the Mycroft CTO, is truly remarkable. He is always working and engaging with the developers; you very rarely see that nowadays. I don’t think that man sleeps. Someone should give him a spa day.

Why did you create the Slack group?

Teague – I didn’t. I took over moderation last year because Maciej Zywno, the founder, was increasingly involved with his development firm. I try to let the thing run itself. It’s not really about moderation; it’s more about education. The community is growing fast, with nearly 700 people. Everyone is a developer or at least working in that direction.

Have you considered starting a similar group for Google Home?

Teague – Yes. I started up the Google Home Slack group because I was working in the early programs with some other developers and we didn’t want to muddy the waters [with the Alexa group]. There are 50-60 developers in the Google Home Slack and most are pretty serious Alexa developers as well.

How are the conversations different in the Alexa and Google Home groups?

Teague – There aren’t a lot of people who have had Actions certified on Google Home who weren’t involved in the Alpha and Beta programs. Jo Jaquinta is porting his Alexa skills to Google Home. There are a lot of conversations going on about how you can develop for multiple platforms without maintaining completely different code bases.

How is having different platforms a challenge for developers?

Teague – There are so many bot frameworks out there right now, many of which are very interesting. You have to be careful who you get into bed with at this point. It doesn’t take much to register an “.ai” domain. That doesn’t mean you are doing anything interesting in the space. There have been a ton of acquisitions recently. We’ll see how things shake out.

What are some of the key differences between the leading platforms?

Teague – Something that concerns me is that there is going to be a land grab on Google Home. For Alexa you can have multiple skills with the same invocation name and they follow a FIFO (first in, first out) invocation pattern, but with Google Home that is not the case. We are already seeing people trying to claim name properties on Google.

What I would really like to see happen on Google, which has already happened to a small degree on Alexa, is letting users default to specific skills. I would like to be able to set a default app in different categories: Uber or Lyft for rides, which pizza delivery service I prefer, and so on. Unfortunately, we can’t do that on Google Home at this time. We can do it on Alexa only to a certain degree: a user can specify Spotify as their music service of choice, but not where they get their weather from.

What have you learned from being part of the Alpha and Beta programs for Google Actions?

Teague – When you build in API.ai, you get a lot of things I wish Amazon had put in a long time ago, such as domain knowledge and synonyms. Another example: if I want different delayed responses, such as re-prompts, I don’t have to listen to the same one every time. It can feel much more natural if the designer puts the time into it.
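
As an illustration of the varied re-prompts he describes, here is a minimal sketch of how a Lambda-backed Alexa skill could rotate them on its own. The re-prompt texts and the helper function are invented for the example; this is not code from any particular skill.

```python
# Minimal sketch: rotate re-prompts in an Alexa Skills Kit response so the
# user does not hear the same line every time. Texts are placeholders.
import random

REPROMPTS = [
    "Are you still there? You can ask for another fact.",
    "Take your time. Say 'help' if you're stuck.",
    "Still with me? Try asking for the next step.",
]

def build_response(speech_text):
    """Return an Alexa Skills Kit response with a randomly chosen re-prompt."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "reprompt": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": random.choice(REPROMPTS),
                }
            },
            "shouldEndSession": False,
        },
    }
```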

What are the biggest issues in the Alexa and Google Home development communities today?

Teague – The lack of monetization opportunities is a huge issue. There is “the snow cone dilemma.” For there to be quality skills for Alexa, quality developers have to spend quality time writing them. To do this, they are going to want a return on their investment. The question is how that time can be recouped. Amazon has made a number of recent changes in a positive direction here, including landing pages for skills, integration of subscription models and in-app purchasing. But skill visibility in the Alexa store remains a concern for most developers.

As certain skills get featured by Amazon, their developers have been faced with server fees for those skills, but there hasn’t been a viable path to monetization. As a hobbyist, do you want to pay for people to use your skill? I believe Amazon has been actively working with developers who are faced with this situation.

A lot of developers are also coming up against the end of the one-year free tier that Amazon has offered. Amazon has put some policies in place around free Lambda usage, but many developers don’t see the benefit of putting their own money into something when prices are going to increase and no system is in place for covering development costs.

What else is interesting in the Alexa world right now?

Teague – Tom Harrigan released a skill for WordPress at AWS re:Invent that allows a blog publisher to provide an Alexa interface to their content. He is putting this capability in the hands of any journalist, blogger, or enterprising individual. Very powerful use of the technology.
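
As a rough illustration of how a skill can surface blog content, a handler could read posts from the standard WordPress REST API. This is only a sketch under that assumption, not Tom Harrigan's actual implementation, and the site URL is a placeholder.

```python
# Illustrative sketch: pull the latest post title from a WordPress site's
# REST API so a skill handler can speak it back. Not the actual plugin code.
import json
import urllib.request

def latest_post_title(site_url):
    """Fetch the most recent post title from a WordPress site's REST API."""
    with urllib.request.urlopen(f"{site_url}/wp-json/wp/v2/posts?per_page=1") as resp:
        posts = json.load(resp)
    return posts[0]["title"]["rendered"] if posts else "No posts found."

# A skill handler could then return this text as outputSpeech, e.g.:
# speech = latest_post_title("https://example.com")
```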

COPPA is also very big in the community right now. There was a competition on Hackster.io sponsored by Amazon, and the second-place winner was Tickle Monster. After it was showcased around the internet and in the press, Amazon banned it because it targeted children as users.

The developer, Colin McGraw, was understandably put off, and pretty much told Amazon to take a hike. According to Amazon, “You can target a skill to a family that includes children but not to just children.” By law you cannot record children’s voices. That is tricky from a tech standpoint, because server-side speech recognition depends on exactly that.

Amazon started going through and kicking out skills recently. One of particular note was a math skill. When the developer asked what I would recommend changing, I told him to make the math harder. It is strange, because there was a big push recently by Amazon for games. Games are not inherently for kids, but I don’t know where Amazon is going if you can’t have games for kids.

How many Alexa skills have you developed?

Teague – I have no certified skills, but I have written well over fifty. Certification doesn’t make sense for many of them. I use them personally. People will put skills up on GitHub because they want to share them, but don’t want to go through the Amazon submission process, which is notoriously tricky.

There are a ton of GitHub repositories with Alexa skills that you can install and use on your own Echo. Many likely wouldn’t get approved. Yes, there are thousands of skills approved for Alexa, but how many are truly unique? A much smaller number. Many are “template” skills that are not really extending the reach of the platform. There are some incredible open source Alexa skills out there that probably don’t get enough attention.

One of my favorites is one that Jo Jaquinta wrote called SubWar. He entered it into a Hackster.io competition with an open source requirement and has an hour-long video on how he broke the development out into tiers. He brings a lot of professionalism to the Alexa community and is very involved in education. There are also a large number of smart home skills customized to that level that are not certified; people maintain their own repositories.

What about your personal development for Google Actions?

Teague – I have put in two submissions, including an Action that allows you to control an Arduino robot using a Photon. The Google Action review is a clean process, and the feedback Google gave on my certification rejection was two pages long with links to where I could contact them. A lot of times, Amazon responses seem canned, without much guidance.

I noticed that you were a developer of PDA applications in the 1990s. How was that experience similar or different from what you see happening with Alexa and voice interfaces today?

Teague – We are in a completely different environment now. Not everyone was online at that point. Projects that would have been very niche back then can get a completely different level of buy-in today. Palm Pilots were very popular, but not everyone had one in their pocket. The level of saturation you see in the marketplace now is not comparable to that time. We are also seeing neural networks really starting to make sense. I don’t have to run all of my own servers, and I can scale up very complex software very quickly. If a concept or idea gets hugely popular, I can spin up a load-balanced Amazon Linux server in two minutes. The coolest thing you can get any kid right now is an Amazon developer account.

What is your favorite Alexa skill and why?

Teague – Big Sky. It is a weather skill that is very localized. The developer doesn’t charge to use the skill even though he pays for the API it relies on. Another skill I saw recently is Weather Sky, which uses the Dark Sky API and can give you historical weather. The gentleman who developed it did a very good job with the setup and onboarding of new users.
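
For context on the historical-weather piece, a skill along these lines could call the Dark Sky "Time Machine" endpoint. The sketch below is illustrative only, with a placeholder API key, and is not the Weather Sky skill's actual code.

```python
# Illustrative sketch: a Dark Sky "Time Machine" request for past weather.
# The API key is a placeholder; this is not code from the Weather Sky skill.
import json
import urllib.request

API_KEY = "YOUR_DARK_SKY_KEY"  # placeholder

def historical_weather(lat, lng, unix_time):
    """Return observed conditions for a past date from the Dark Sky API."""
    url = f"https://api.darksky.net/forecast/{API_KEY}/{lat},{lng},{unix_time}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # The daily block of a Time Machine response holds that day's summary.
    return data["daily"]["data"][0]
```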

I have one skill I like that I could never get certified because it uses the Aqua Teen Hunger Force theme song in the intro. It is for personal use and gives people the wireless access credentials for my home network. When you say, “Alexa, ask about the network,” it spells out the network credentials. It’s convenient, and that is really where you see the best skills: people writing software that they themselves want to use.
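
As a hedged sketch of how a skill might spell out credentials, Alexa's SSML spell-out tag can read a string letter by letter. The handler name, network name, and password below are invented for illustration and are not from his actual skill.

```python
# Illustrative sketch: return an SSML response that spells out a password.
# All names and credentials here are made up for the example.
def network_intent_handler():
    ssml = (
        "<speak>"
        "The network is guest underscore net. "
        "The password is "
        '<say-as interpret-as="spell-out">hunter2</say-as>.'
        "</speak>"
    )
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }
```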
