Conversational AI Content Moderation Startup Spectrum Labs Raises $32M
AI content moderation startup Spectrum Labs has raised $32 million in a Series B funding round led by Intel Capital. Spectrum’s platform detects and responds to dangerous and offensive text and voice comments in real time, helping online communities build trust and keep their members safe.
Spectrum Detection
Spectrum Labs uses real-time natural language understanding to monitor what people say and type within online communities, covering both text-based and audio interactions. The company claims its AI can recognize and distinguish among more than 40 kinds of toxic comments, including threats, hate speech, and sexual abuse material, in more than 30 languages. Spectrum’s AI listens to audio without transcribing it, which speeds up its response significantly and would be crucial for operating on social audio platforms like Clubhouse or Twitter Spaces. The AI can spot and respond to such language according to the rules of the forum in less than half a second. Responses include issuing a warning, deleting a message from public view, or kicking someone off a server.
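The flow described above, where a classifier labels an utterance and community-specific rules then decide the response, can be sketched as a simple rule engine. This is an illustrative mock-up, not Spectrum Labs’ actual API; all names, labels, and thresholds here are hypothetical.

```python
# Hypothetical sketch of a rule-driven moderation step: a detection
# (label + confidence from some upstream classifier) is mapped to an
# action under a forum's own rules. Illustrative only.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    WARN = auto()          # issue a warning to the user
    HIDE_MESSAGE = auto()  # delete the message from public view
    REMOVE_USER = auto()   # kick the user off the server
    NONE = auto()          # no action; may still be logged for human review

@dataclass
class Detection:
    label: str         # e.g. "threat", "hate_speech" (hypothetical labels)
    confidence: float   # classifier score in [0, 1]

# Per-community policy: each label maps to an action and a minimum
# confidence threshold before that action fires.
FORUM_RULES = {
    "threat":      (Action.REMOVE_USER, 0.90),
    "hate_speech": (Action.HIDE_MESSAGE, 0.80),
    "harassment":  (Action.WARN, 0.70),
}

def moderate(detection: Detection) -> Action:
    """Return the forum-configured action for a detection, or NONE."""
    rule = FORUM_RULES.get(detection.label)
    if rule is None:
        return Action.NONE
    action, threshold = rule
    return action if detection.confidence >= threshold else Action.NONE
```

In a real deployment the interesting work is in the upstream classifier and in meeting the sub-half-second latency budget; the rule lookup itself is trivially fast.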
Each incident is flagged for human moderators to assess and audit. Spectrum’s clients include about 20 major platforms, including game companies like Riot Games, dating websites like Grindr, and social media services like Pinterest. The new money will go toward scaling the company’s reach and further improving the technology powering the platform. Spectrum has raised about $46 million in total, including a $10 million round in the fall of 2020.
“Our AI can be deployed across languages, behaviors, and industries very effectively to fuel growth and build trust through safer and more inclusive communities,” Spectrum Labs CEO Justin Davis explained. “As more of our living, working, and playing happens on the Internet, leading companies are increasingly thinking about how to better understand, protect, and grow their communities. This investment led by Intel Capital, the company empowering the future of computing, will help Spectrum Labs to achieve our ambition of enabling every company with the infrastructure and real-time, easy to deploy AI that’s critical to build trust with every person, everywhere in the world, at every moment.”
Moderating AI
Online conversations have spiked in the wake of the COVID-19 pandemic, but that hasn’t lowered the possible dangers highlighted by Spectrum’s list of toxic interactions. AI may be the best tool for finding and responding to toxic interactions as they arise, as human moderators could never keep up with the volume of online conversation. For instance, Microsoft filed a patent last year for an AI that measures how people are feeling based on their voice and other signals, with an eye toward moderating voice chat on its game servers.
There are also other startups in the field, such as Modulate, whose ToxMod tool can detect and flag problematic and toxic speech in real-time video game voice chat. ToxMod alerts moderators when it hears racist slurs, threats, or anything else the developers want to know about. The AI uses context and emotional indicators to distinguish when something said actually rises to the level of a problem. And while Intel Capital is leading Spectrum’s round, Intel itself has also been working on a voice AI tool to filter out offensive and abusive language during in-game voice chats. The Bleep program runs the AI on Intel PC chips in real time to detect and remove slurs uttered by one player before other players hear them.