
Roger Kibbe Talks about Voice App Development, His Move to Viv Labs, and What He Will Cover in the Upcoming Samsung Bixby Webinar


Image Source: Samsung

Roger Kibbe picked up an Amazon Echo in early 2015 and later decided to try his hand at building Alexa skills. After 20 years in enterprise IT, nearly two-thirds of it in retail and ecommerce, voice drew his interest as a new computing interface. He even left a cushy job at Gap to found his own software company offering code-free voice app development, only to see it fail to gain traction.

However, Roger learned a lot about voice app development, eventually winning first prize in Samsung’s first Bixby capsule developer challenge. Not long afterward, Samsung’s Viv Labs invited Roger to join the team full time, working with developers on Bixby. Roger will be hosting a webinar later this week to help developers learn how they can engage with the voice assistant and take advantage of promotion through the new Bixby Marketplace. Voicebot caught up with Roger to learn more about his background as a developer and what he plans to cover in this week’s webinar.

Register for Webinar

You have a webinar coming up with Voicebot about Bixby. What do you plan to cover in the presentation?

Roger Kibbe: First, some broad-stroke thinking about the massive market opportunity with Bixby (500 million devices a year). I’ll discuss how Bixby is different, contrasting its model-driven, declarative approach to development with the imperative, code-heavy approach. I’ll show how Bixby combines these models with AI to drive Dynamic Program Generation. And importantly, I’ll show you how to get started with Bixby development. I will definitely contrast Bixby with Alexa and Google development and talk about equivalencies, to give developers on those platforms a head start on Bixby development.

What one thing, if it could only be one thing, do you hope each attendee takes away from the experience?

Kibbe: An understanding of how to get started with Bixby development and a desire to do so. It’s an amazing ground floor opportunity on a next-generation platform.

Less than a year ago you were brand new to Bixby as a developer but had experience on Alexa and Google Assistant. What struck you at the time as most interesting and different about Bixby?

Kibbe: Without a doubt, it’s the declarative, model-driven approach. I have many years of experience developing web apps and have worked quite a bit on voice development. Developing for Bixby was definitely an adjustment and, to be frank, challenging at times. The closest thing some may have used is the React JavaScript library, which is definitely declarative, although still driven by JavaScript.

Once you get used to it, the declarative approach becomes more natural. You also start to see, particularly with more complex applications, how much easier it is to develop and how much less code you need to write versus an imperative approach.
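As a rough illustration of the contrast Roger describes (using React, which he mentions above, rather than Bixby’s own modeling language; the function and component names below are hypothetical):

```typescript
// Illustration of imperative vs. declarative UI code in React/TypeScript.
// This is NOT Bixby code; names are hypothetical examples.

import React from "react";

// Imperative style: you spell out each step needed to update the page yourself.
function showBinAnswer(item: string, bin: string): void {
  const el = document.getElementById("answer"); // find the node
  if (el) {
    el.textContent = `${item} goes in the ${bin} bin`; // mutate it directly
  }
}

// Declarative style (React): describe what the output should look like for a
// given state and let the framework figure out how to update the page.
type BinAnswerProps = { item: string; bin: string };

const BinAnswer: React.FC<BinAnswerProps> = ({ item, bin }) => (
  <p>
    {item} goes in the {bin} bin
  </p>
);

export default BinAnswer;
```

Bixby pushes the same idea beyond the UI: developers describe concepts and actions in models, and, as Roger notes above, the platform combines those models with AI to generate the program that fulfills a request.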

You’ve been speaking with a lot of developers. What hesitation do they have, if any, about supporting Bixby and what is the most common challenge they run into?

Kibbe: Anyone developing for Bixby needs to be comfortable being a pioneer, building on a very capable but rapidly evolving platform that changes frequently and still has some kinks to work out. Beyond a willingness to be a pioneer, the declarative, model-based approach is far and away the biggest challenge. It’s simply a different and less familiar way to develop.

I recall last year before you began working for Samsung’s Viv Labs, you won the Bixby Developer Challenge for your capsule (i.e. voice app) called What Bin. What was that capsule all about and why did the judges award you the win?

Kibbe: What Bin was inspired by my daughters. They were always asking “Dad, can I recycle . . .” or “Can I compost . . .” This is an ideal use case for voice: you are standing there ready to put something into a bin, and needing to launch an app or a website at that moment is very cumbersome. You need an immediate answer, and voice is the right solution.

So, simply put, What Bin allows you to ask those questions. For example, “Can I recycle receipts” (bet you get the answer to that wrong) or “Can I compost nutshells?” and get an answer. Sometimes the answer is more complex. It may depend upon your local curbside program or require special handling (e.g. toxic materials). What Bin gives you this additional info.

I believe I won (though I wasn’t privy to the scorecards or the judges’ thinking) because What Bin was very practical and an excellent voice use case. During my presentation at Moscone Center (where Samsung’s Software Developers Conference was held last year), I literally pointed to the three bins (compost, recycle, and trash) across the aisle. Making a presentation concrete like that, by pointing to a challenge and its solution that are physically present in the room, is powerful.

You should note that I renamed What Bin to Green Planet. It turns out automated speech recognition has some issues with the original name. Green Planet is live on the Bixby Marketplace. Go enable it today!

How did you first become interested in voice?

Kibbe: For several years I worked for a huge retailer (Gap Inc) on customer experience technology and strategy. I’ve always been a huge believer that tech should be an enabler of a simpler and better experience for all of us.

It took a few years after getting an Alexa in early 2015 for me to put two and two together. Voice is a huge unlock for enabling better experiences – it really unlocks natural interaction between humans and technology. Typing and swiping are fine, but you need to be taught the device’s input method. Voice is THE human way to communicate. Voice-enabling our tech is humanizing our tech!

After realizing this, I played around for a bit, then got serious and left my well-paying Fortune 500 job to start a voice startup. I tried, failed, got up again, did voice consulting, and now I’m at Viv Labs working on the next generation of voice experiences.

What is something that is happening in voice now that you think is exciting and forward thinking?

Kibbe: Our voice interactions today are largely siloed to a single device and simple interactions. The future is multiple devices and longer conversations.

Imagine I’m driving and thinking about my next vacation (a fun thing to think about!). I start a conversation with my voice assistant about possible places to vacation and narrow it down to the Caribbean. Then later, while having lunch at work, I explore deeper on my phone. Here I use voice and a UI together for a great multimodal experience. I narrow my options down to a few different islands and some amazing eco-resorts on them. That night, after I go home, I share my thoughts with my whole family via the TV. Again, this is voice-driven but is a multimodal experience. We decide on a place and have our voice assistant handle all the travel arrangements for us!

So, what I’ve described is a multi-device, long-running, multimodal conversation that unfolds throughout the day as I plan a vacation, with voice as my primary way of interfacing with technology. This whole idea of a conversation and discovery happening over a longer period of time and across multiple devices is really exciting. Today we are at the start of building these next great voice, or shall I say intelligent assistant, experiences that support long-running conversations across multiple devices. This is where we are going and where we can start putting a capital “A” in Assistant! I look forward to discussing this further with all of you on July 11th!

Register for Webinar

