What’s Working and What’s Not in AI Projects with Emerson Sklar from Applause
The market for AI applications and adding AI features such as voice assistants to existing applications has evolved over the past three years. Whereas everything was an experiment in earlier projects, we are now getting to a point where there are best practices and known pitfalls to avoid. That is the theme of our upcoming Voicebot webinar with Applause and we are taking it a bit further than you might expect.
Revealing Hidden Knowledge – Project Insights and a 5k Consumer Survey
After a recent briefing from Applause’s Emerson Sklar and Ben Anderson, it was clear to me that they had accumulated valuable insights about what is working and what is not in AI projects generally. These are insights from inside large projects that are not commonly shared publicly. Applause runs many projects for big companies, and the stories they told me deserve to be heard by the voice AI community and by enterprises planning their next initiative.
In the webinar, Sklar will review five AI production solutions, each of which offers useful insights for similar projects. There are also some more general lessons that apply to any AI project. In addition, Applause just completed a global survey of over 5,000 consumers about their experiences with and perceptions of chatbots and voice assistants. Sklar will cover key findings from that research, and I will compare them to Voicebot data on similar themes.
The webinar is scheduled for Tuesday, March 22nd, at 1:00 pm EDT. Register to attend live and participate in the Q&A, or to receive the recording afterward.
A Front Row Seat for AI Innovation
Emerson also shared his perspective on a few different topics in advance of the webinar. He has been in the industry for nearly a decade and has had a front row seat to many innovative projects and technical advances. Here are some excerpts:
1) What is the most elegant AI application you’ve come across to date?
One of the coolest uses of AI that I’ve seen is Babbly, a fellow member of last year’s Google Voice/AI Startup Accelerator. Its app analyzes babies’ speech and provides custom-tailored recommendations to help them learn and grow.
Most use cases in Conversational AI today focus on supporting adult speakers and specific languages. Even though babies may not yet have learned an actual language, Babbly has shown that speech development is universal, enabling parents to get early, actionable insight into their child’s development.
2) What is a common mistake people make in AI projects and what type of problems result?
A major pitfall I see companies make, especially those new to implementing AI, is biting off more than they can chew: trying to tackle an overly ambitious use case from the outset rather than demonstrating success with something achievable first. Users’ tolerance for poor digital experiences is ever decreasing. The best approach for both improving customer satisfaction and delivering significant business impact is to target a meaningful, achievable subset of the possible AI use cases, do that really well, and then learn and iterate from that experience.
3) What is something that would surprise people to learn about some of the successful AI projects you’ve been involved with?
One thing that has become apparent from working with hundreds of organizations across the space — from pre-seed startups to the largest global enterprises — is that everyone is still learning and trying to figure out AI development best practices. It’s broadly understood that AI is a very different paradigm from traditional app development, and every company struggles to determine the best architecture to adopt, the best way to collect data to train its models, and the volume of data necessary to do so.
However, what I find fascinating is that what often separates successful projects from unsuccessful ones is not the technical expertise of the team or the programmatic perfection of the solution, but simply whether they took the time to truly listen to, understand, and support their users’ real expectations and behaviors. We’re fortunate to be able to leverage our incredibly diverse, global community of participants to support our customers’ AI training and testing needs. The unusual and often unexpected insights that this real-world community enables us to unearth regularly remind me that AI, for all its incredible possibilities, needs the human touch to succeed.
Join us on Tuesday, March 22nd to hear the details and real-world projects that led to these insights and others.