Startup Audioburst is Using AI to Change Audio Discovery
Audioburst is a curation and search index for radio. It aims to do for audio what Google has done for web pages – make it easier to find the audio content you're looking for. It does this by using natural language processing to understand the meaning behind audio content and index it accordingly, making it more accessible to search engines. Audioburst was founded in 2015 and just announced a $6.7 million funding round led by speech recognition tech company Advanced Media.
Audioburst for Voice-First Devices
The platform also features screen-free, speech-based technology that enables search of and interaction with audio, which pairs nicely with voice-first devices. Audioburst just launched a new API that allows third-party developers to use its library of audio content to deliver a personalized listening experience.
If you're thinking Alexa, Audioburst is already there. It has an Alexa skill called "News Feed" that gives you the latest audio "bursts" on any topic. When I tried the skill, however, it only gave me one "burst" per topic. I asked about the NBA Finals, and while it did give me a clip that had aired only 52 minutes prior, the clip lasted only about 30 seconds and then faded out. Even when I used one of the suggested queries, "Alexa, ask News Feed what's the latest on Donald Trump?", I received only one short clip. The idea of hearing multiple audio clips on one topic from media outlets across the country sounded promising, but the Audioburst Alexa skill did not deliver. I could have received the same information more concisely through a Flash Briefing.
To be fair, Audioburst's focus isn't on the consumer market at the moment. It's mainly working with developers for now, which could be why its Alexa skill and search site are lacking. Audioburst may be building the "world's largest library of audio content," but its success will come from how that content is applied, not from the content itself.