Voicebot Interviews Arte Merritt, Co-Founder and CEO of Dashbot.io
Arte Merritt is co-founder and CEO of Dashbot.io, which bills itself as an actionable bot analytics platform. Mr. Merritt has a relevant history here. He founded and built the mobile analytics company Motally, which was acquired by Nokia only 2.5 years after its launch. He has now turned his analytics expertise to chatbots and voice assistants and has been building momentum, adding customers and closing a $2 million funding round in December. Voicebot connected with Arte to learn more about Dashbot.io and his perspectives on analytics for voice applications.
I understand your background in analytics for mobile might position you well for providing analytics on a new platform like voice assistants. However, I also see dozens of companies positioning to be the analytics provider for bots. What makes Dashbot different?
Arte Merritt: I’ve been doing analytics for almost 20 years. I started out at Random House publishing, and that was before there were any tools. You had to build your own. The reason you need a bot or conversational analytics tool is that bots are entirely different. The data is all very different. People are using different words to speak to the bot and telling you what they think about it. You need to be able to address this with different tools. For example:
- The tracking mechanisms are different. Traditional analytics track clickstream or event-based data. If you do that with bots, you lose the context of the metadata. You need the full message to really see interesting insights.
- The data is different. It is unstructured. Users are sending text, audio, video, and location data. Whether text or voice, it is in their own words: “This is what I want and what I think.” That is quite a bit different from clicking on buttons and interpreting structured data.
- The processing is different, partially because of the first two things, but also because the sessions can be asynchronous and multi-user. Some bots can send things out with time delays. There are multi-user scenarios where you can pull a bot into a channel on Slack or Kik and, instead of going out to Yelp yourself, have the bot suggest a restaurant.
- The reports are different. This is quite a bit different from what web and mobile offer. You can define sessions as active and passive. There is sentiment analysis. You also want to know whether the conversational flow works – how do users respond? You can also sometimes get full transcripts to analyze, and you can report on different content types such as video and audio. We had a customer that was ignoring incoming images. Once they started supporting them, it improved the personality of the bot and user engagement. These are fundamentally different problems from what someone like Mixpanel is solving. You could argue that mobile was much more similar to web than bots are, because bots have asynchronous sessions. It requires a lot of different considerations.
On the platform side, the platforms themselves could offer analytics, but on mobile they were late to the game and what they offered was rudimentary – user counts or message counts. We provide much richer features. A lot of bot building platforms use us for analytics. Google reached out to us to build analytics for Google Home even though they have Google Analytics.
We launched eight months ago and we have over 1,800 bots on the platform already. No one else is even close. We surface the data so you can take action on the conversation and brands can increase user acquisition, engagement, and monetization. Competitors can’t answer some of the questions I just mentioned.
You don’t get full transcripts in voice do you?
Merritt: A copy of every message sent to and from the bot gets sent to Dashbot. Google gives you the raw text – what the person said – and the intent. The person said, “what is the Yankees score?” We get that text and the intent, “check score.” That is useful because you can see if you are handling the messages properly. If the message didn’t match to the correct intent, you can fix it.
Being able to see the transcripts makes a big difference. You get that from our platform. We also provide the message funnel. Pick any message into or out of the bot and you can see how the bot responds and how users respond after that. You follow that flow and can see which flows led to an error message. You can then fix the bot to eliminate those issues.
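The message-funnel idea Merritt describes – pick a message and trace what users say next – can be sketched over raw transcripts. The transcript structure below (ordered sender/text pairs) is an assumption for illustration; the interview does not specify Dashbot's actual schema.

```python
from collections import Counter

# Each transcript is an ordered list of (sender, text) tuples -- an
# assumed structure for this sketch, not Dashbot's real data model.
transcripts = [
    [("user", "hi"), ("bot", "Hello! Ask me a score."),
     ("user", "yankees score"), ("bot", "Yankees 5, Red Sox 3")],
    [("user", "hi"), ("bot", "Hello! Ask me a score."),
     ("user", "weather?"), ("bot", "Sorry, I didn't get that.")],
]

def funnel_after(message, convos):
    """Count what users say immediately after the bot sends `message`."""
    responses = Counter()
    for convo in convos:
        for i, (sender, text) in enumerate(convo[:-1]):
            if sender == "bot" and text == message:
                next_sender, next_text = convo[i + 1]
                if next_sender == "user":
                    responses[next_text] += 1
    return responses

print(funnel_after("Hello! Ask me a score.", transcripts))
```

Flows whose next bot turn is an error message (here, “Sorry, I didn't get that.”) are the ones worth fixing first.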
Google doesn’t give us the transcript itself. When a message goes into or out of the bot, a copy is sent to us. If you build an action for Google Home, when you get the inbound message from Google, you send a message to us. We natively support Facebook, Slack, Kik, and Google Assistant, and we have a generic API that works for any platform. That is what we are using for Alexa to date. In the case of Alexa, you just get the intent, but with Google Assistant, you also get the text of the message that Google captured.
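The integration pattern Merritt describes – forwarding a copy of each inbound and outbound message from your webhook to the analytics service – might look roughly like this. The endpoint URL, API key, and payload field names here are illustrative assumptions, not Dashbot's documented API.

```python
import json
import urllib.request

# Hypothetical tracker endpoint and key -- assumptions for this sketch.
TRACK_URL = "https://tracker.example.com/track"
API_KEY = "YOUR_API_KEY"

def build_track_payload(direction, user_id, text, intent=None):
    """Assemble a copy of one message for the analytics service.

    direction: "incoming" (user -> bot) or "outgoing" (bot -> user).
    Field names are assumed for illustration.
    """
    payload = {"type": direction, "userId": user_id, "text": text}
    if intent is not None:
        payload["intent"] = {"name": intent}
    return payload

def forward_message(direction, user_id, text, intent=None):
    """POST a copy of the message to the analytics tracker."""
    data = json.dumps(build_track_payload(direction, user_id, text, intent)).encode()
    req = urllib.request.Request(
        f"{TRACK_URL}?apiKey={API_KEY}",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Inside a webhook handler you would forward both directions, e.g.:
#   forward_message("incoming", "user-123", "what is the Yankees score?", "check score")
#   forward_message("outgoing", "user-123", "The Yankees won 5-3.")
```

For platforms like Alexa that only expose the intent, the `text` field would simply be absent or empty.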
It looks like you also support Microsoft’s Bot Framework. Does that mean Dashbot also supports Cortana voice applications?
Merritt: We get the underlying messaging – text in and out. It is a generic connection, not a native connection like the others.
Early in your career you worked on mobile at Yahoo! and then founded Motally which was later sold to Nokia. How is the current growth in AI and voice technologies similar or different from what you experienced during the rise of mobile?
Merritt: It’s almost identical. If you swap “mobile” for “bots” in the discussions, it transfers almost automatically. The same issues that mobile faced early on are what you are seeing right now. One of the biggest challenges with bots, especially on Facebook, is discoverability. Facebook doesn’t have a store. We have a Facebook demo bot, and after nine months we have fewer than 100 users. On Kik, we had 15,000 users after two weeks, and that has grown to over 20,000 users. It is the same exact bot on Facebook. The difference is that Kik has a directory. The store makes a difference.
Are messaging chat and voice different from an analytics perspective?
Merritt: Yes. There are a couple of big differences. You are getting unstructured data such as audio and video on Facebook and Kik. You need to pay attention to those other types of data. Companies that respond to those types of images get higher engagement. It is important to look at the different content types.
In the case of Alexa and Google Home, it is not asynchronous in terms of engagement initiation. For chat, when the response goes to a webhook before it goes back to the user, the bot could send back four or five answers. It’s not a one-for-one response. The bot can also initiate the conversation on Facebook, Kik, and Slack without the user saying anything first, as long as the user has communicated with the bot previously. You can’t do this with voice.
What is something else that might be interesting for people to learn?
Merritt: We have some write-ups on the things people ask bots. “Hi” and “Hello” are the top messages. These are conversational interfaces. Roughly 70% of bots receive these messages, and many bots don’t know how to handle them. This is an example of why you need to pay attention to the analytics.
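Handling the greeting case Merritt highlights is straightforward once analytics reveal it. Here is a minimal sketch of catching common greetings before falling through to normal intent matching; the greeting list, function name, and responses are all illustrative, not from Dashbot.

```python
# Common greeting messages -- an illustrative set, not an exhaustive one.
GREETINGS = {"hi", "hello", "hey", "good morning"}

def respond(message):
    """Route a user message; handle the common greeting case explicitly."""
    normalized = message.strip().lower().rstrip("!.")
    if normalized in GREETINGS:
        # Greet back and hint at what the bot can do.
        return "Hi there! Ask me for a score, e.g. 'what is the Yankees score?'"
    # ... fall through to the bot's normal intent matching (omitted) ...
    return "Sorry, I didn't get that."
```

A bot that answers “Hi” with an error message loses users at the very first turn, which is exactly the kind of pattern transcript analytics surface.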
What is your favorite Alexa skill or Google Home action and why?
Merritt: My favorite voice-enabled skill/action is playing music from Pandora. It’s so easy to do via Alexa and Google Home that I found myself not only listening to more music, but also changing my behavior. I used to sit in the back room to work, but now I sit in the kitchen area where the Alexa and Home are, as it’s so easy to listen to music through the devices.
I do like the Google Home “tell me about my day” action too – it’s cool that it reads from my calendar. It would be great if it could distinguish voices and read my calendar versus my wife’s.
Arte will be speaking about Dashbot.io at the RE-WORK’s second annual Virtual Assistant Summit on January 26, 2017.