Interview with Smartly.ai CEO Hicham Tahiri

What did you see in the market that motivated you to start building Alexa Designer and then transition to Smartly.ai?

When Apple unveiled Siri in October 2011, we immediately saw a huge opportunity to help developers create voice apps. So we started building a tool that would let Siri developers easily add cognitive apps to the Siri brain. Five years later, Apple finally opened Siri to developers. Before that, it was really limited and didn't allow enough room for creativity.

The funny thing is that an unexpected challenger surprised everybody! Amazon's Alexa launched in 2015 and delivered on all of our dreams of openness. We were super excited and immediately started working on it. Our first iteration, called Alexa Designer, quickly got hundreds of developers to sign up. Then, with the Google Home announcement, we decided to opt for a more universal brand reflecting our new focus on conversational AI. And Smartly.ai was born.

What problem do you solve?

While a lot of progress has been made in speech recognition, speech synthesis, and natural language processing, not much has been achieved in conversational AI itself. This is key because, in a way, it's the brain that decides the proper answer to give and the proper action to take in a given context.

How many users do you have today and what platforms are they developing for?

Today, our solution is used by almost 1,000 developers. From our own data, the most-used channels are 1) Alexa, 2) Messenger, and 3) SMS. Our API also powers many IoT devices.

I see that you support designing for both chatbots and voicebots. How big is the difference between them?

It's quite different. In a voice interface, you can literally play with the audio environment: speech, sounds, songs, marketing jingles, voice emojis. You can show some things on a secondary screen, but it's voice first. With a chat interface, you will prefer to display elements such as buttons, menus, carousels, cards, GIFs, and maps.

If someone builds a skill for Amazon Alexa, can Smartly port that directly to Siri or Messenger or is there unique configuration for each platform?

We are totally cross-platform; the port can be done in a few clicks. But you have to keep in mind that voice and chat are quite different. So our portability paradigm is most relevant between voice platforms such as Alexa, Google Home, and Siri, and between chat interfaces such as Messenger, Skype, and Slack, but not necessarily from voice to chat. That's what we always recommend: you may fail if you try a one-size-fits-all approach across voice and chat.

In a June blog post you provided some insight into the capabilities of the Siri SDK. Has that market had any developer traction in the interim and if so what are people doing with it?

Not directly from independent developers but we had some car manufacturers interested in this feature.

How do you expect the Google Home availability next week to impact the consumer market?

From what I see within our community and from our perspective, Google is warmly welcomed into this battle to be the ultimate voice assistant. It will create the same healthy rivalry and competition that we have seen between iOS and Android. I think it will be a great race to watch: one player is far ahead with proven and successful products, tools, and community; the other is a bit late but is an AI powerhouse with expertise in growing a developer community. Plus, Google just added a booster to its engine with the API.ai acquisition.

How do you distinguish Alexa from what Siri and Google Now had done previously, if at all?

I think Apple and Google were too busy fighting for the mobile space to see the Smart Home opportunity. In a sense, the failure of Amazon in the mobile space has led to its success in the Smart Home one.

Do you expect the various voice platforms to maintain a closed or open architecture approach?

It's hard to speak about the roadmap of others. But if we look at history and the recent announcements, chances are that Apple will stay semi-closed for a while, arguing that third-party skills can lower the user experience, which in some cases is not totally false, and that Amazon and Google will go for openness.

What Alexa skill built on Smartly by another developer are you most impressed with?

A skill that plays chess on a virtual chessboard.

Who should be building for the voice web / voice ecosystem?

Every brand, business, student, and developer because it’s the future.

What is your favorite voice-enabled skill, application or utility?

I have one that is not published. I use it every morning to see how my day is shaping up. It gives me a mashup of my agenda, the weather, traffic, and my to-do list, and finishes with a relaxing song to let me wake up gently.

What question should I have asked but didn’t?

What do you see are the accelerators and brakes to Alexa adoption?

Evangelization is a big deal. Tech startups and agencies have to convince brands to jump into the Alexa ecosystem, and in fact at Smartly.ai we do that a LOT. I like to see my job as Chief Evangelist Officer. But seriously, the more we can get from Amazon to inform and convince users and companies to jump in, the better the ecosystem will grow.

I also see that having more compelling apps will drive adoption. This is in flux at the moment, and those apps will only happen if there are multiple monetization tools in place to provide developers with a good ROI for their creative work. So now is the time for voice to get in-app purchases, paid apps, subscriptions, and ads to make it a real business. I am confident that this will come. The Alexa platform is getting better and better, and I am often delighted by the announcement of features we were dreaming of just a few months ago!