10 Minutes On Cutting Voice AI Cost by 10x to Get 100x More Adoption with Deepgram
Voice AI applications, from speech-to-text transcription to voice assistants, have proliferated over the past five years. However, Deepgram CEO Scott Stephenson says the application market is cost-constrained. There are many more applications that could be deployed, but the cost of traditional cloud processing is simply too high to justify them.
In the latest 10 Minutes On interview, Stephenson breaks down the economics of voice AI processing and how Deepgram is changing the cost structure for many new applications. He believes this will lead to an explosion of new enterprise usage of voice AI technologies.
ASR from 2 Miles Underground
We also have some fun recalling how Deepgram’s core technology originated two miles underground. As a physicist measuring dark matter, Stephenson developed the end-to-end natural language processing (NLP) model behind Deepgram to capture discussions about scientific findings without having to take copious notes. Dark matter experiments are conducted miles below the earth’s surface, which offers an entirely new perspective on deep learning.
CPU versus GPU
Stephenson also discusses the difference between traditional NLP running on CPUs and Deepgram’s new approach employing GPUs. GPUs enable far higher system throughput, which in turn translates into scalability and cost benefits.
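To make the throughput argument concrete, here is a back-of-the-envelope sketch of why batched GPU inference can change the cost structure. All of the numbers below (instance prices, stream counts) are hypothetical placeholders for illustration, not Deepgram’s actual figures; the point is only that cost per hour of audio falls as one machine handles more concurrent streams.

```python
# Illustrative cost math (hypothetical numbers, not Deepgram's actual figures).
# A CPU pipeline transcribing audio at roughly real-time speed handles a few
# streams per instance, while a GPU running a batched end-to-end model can
# serve many streams in parallel on one instance.

def cost_per_audio_hour(instance_cost_per_hour: float, concurrent_streams: int) -> float:
    """Cost per hour of processed audio when one instance runs
    `concurrent_streams` real-time streams simultaneously."""
    return instance_cost_per_hour / concurrent_streams

# Hypothetical example: a CPU instance handling 8 real-time streams
# versus a GPU instance handling 400 batched streams.
cpu_cost = cost_per_audio_hour(instance_cost_per_hour=0.40, concurrent_streams=8)
gpu_cost = cost_per_audio_hour(instance_cost_per_hour=2.50, concurrent_streams=400)

print(f"CPU: ${cpu_cost:.4f} per audio-hour")  # 0.40 / 8   = $0.0500
print(f"GPU: ${gpu_cost:.4f} per audio-hour")  # 2.50 / 400 = $0.0063
print(f"GPU advantage: {cpu_cost / gpu_cost:.0f}x cheaper per audio-hour")  # 8x
```

Under these made-up numbers the GPU instance costs more per machine-hour but is roughly 8x cheaper per hour of audio processed, which is the kind of shift Stephenson argues opens up previously uneconomical applications.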
More About 10 Minutes On Interview Series
This interview is part of a new series called 10 Minutes On by Voicebot.ai. The interviews focus on a single topic, are short enough to watch between Zoom meetings, and long enough to share some interesting insights and solution details. You can find more video interviews like this on Voicebot’s YouTube channel or by clicking Videos in the website’s top navigation bar.