McAfee’s New Project Mockingbird Aims to Detect Deepfake Voices
McAfee has introduced a new deepfake audio detection tool called Project Mockingbird. The cybersecurity giant built the system with proprietary AI methods and says it identifies audio produced with generative AI at 90% accuracy, pitching it as a way to help people avoid scams involving voice clones, as well as so-called ‘cheapfakes,’ which pair real video with deepfake audio.
The malicious use of deepfakes to deceive people or falsely associate them with scams is becoming ubiquitous. A deepfake video of trusted British consumer finance expert Martin Lewis, used to lure people into a scam investment, provoked outrage from Lewis over how easily AI could mislead people who trust him. Even Tom Hanks isn’t immune to AI impersonation, recently warning fans about a deepfake video of himself selling dental plans on social media. McAfee says its models analyze context, behavior, and content to assess whether audio is synthesized.
“With McAfee’s latest AI detection capabilities, we will provide customers a tool that operates at more than 90% accuracy to help people understand their digital world and assess the likelihood of content being different than it seems,” McAfee CTO Steve Grobman explained. “So, much like a weather forecast indicating a 70% chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”
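McAfee hasn’t published Project Mockingbird’s internals, but Grobman’s weather-forecast analogy implies the tool surfaces a probability rather than a yes/no verdict. As a purely illustrative sketch of that shape of pipeline, the toy below extracts a few hand-picked spectral features from a waveform and passes them through a stand-in logistic scorer; the feature choices and weights are hypothetical and untrained, and bear no relation to McAfee’s actual method.

```python
# Illustrative toy only: Project Mockingbird's internals are unpublished.
# This sketch shows the general shape of a probabilistic "is this synthetic?"
# audio scorer: features in, calibrated-looking probability out.
import numpy as np


def spectral_features(waveform: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Return a tiny feature vector: spectral centroid, flatness, and rolloff."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    power = spectrum ** 2 + 1e-12  # epsilon avoids log(0) below

    # Centroid: power-weighted mean frequency of the clip.
    centroid = float(np.sum(freqs * power) / np.sum(power))
    # Flatness: geometric / arithmetic mean of power (1.0 = white-noise-like).
    flatness = float(np.exp(np.mean(np.log(power))) / np.mean(power))
    # Rolloff: frequency below which 85% of spectral energy lies.
    cumulative = np.cumsum(power)
    rolloff = float(freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])])
    return np.array([centroid, flatness, rolloff])


def synthetic_probability(features: np.ndarray) -> float:
    """Stand-in logistic scorer; a real system would learn weights from data."""
    weights = np.array([1e-4, 2.0, 5e-5])  # hypothetical, untrained values
    bias = -1.5
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))


# One second of noise as a stand-in clip; a real caller would pass decoded audio.
clip = np.random.default_rng(0).standard_normal(16_000)
p = synthetic_probability(spectral_features(clip))
print(f"Estimated probability the clip is AI-generated: {p:.0%}")
```

The point of the logistic output is the user experience the quote describes: like a 70% chance of rain, a score such as “82% likely synthetic” invites judgment rather than declaring certainty.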
The advance of synthetic media produced by generative AI has stoked corresponding public concern, according to McAfee. The company ran a survey that found 84% of Americans worry about how deepfakes will be used in 2024, with concern up 68% from a year ago. More than a third know someone who has experienced a deepfake scam. The top worries are impacts on elections, trust in media, impersonation of public figures, and AI-powered fraud. That’s why McAfee sees the time as ripe for Project Mockingbird, named for the bird famed for mimicking other species’ songs.
There’s been a recent rush among synthetic media developers to build deepfake detectors. Synthetic speech startup Resemble AI shipped an audio watermarking feature, followed more recently by ElevenLabs and Meta. While those watermarks only flag deepfakes produced by each company’s own tools, Resemble AI went a step further with Resemble Detect, which can spot deepfake voices from almost any source. There’s also Reality Defender, which raised $15 million in October for its multi-modal synthetic media detection services. Text has proven trickier. OpenAI released a tool for detecting AI-written text to significant fanfare, only to quietly pull it six months later. And while Turnitin boasts nearly 100% accuracy in detecting AI writing and GPTZero claims to correctly identify 99% of human-written articles and 85% of AI-generated ones, there’s some skepticism about those claims.
“The use cases for this AI detection technology are far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation. With McAfee’s deepfake audio detection capabilities, we’ll be putting the power of knowing what is real or fake directly into the hands of consumers,” Grobman said. “We’ll help consumers avoid ‘cheapfake’ scams where a cloned celebrity is claiming a new limited-time giveaway, and also make sure consumers know instantaneously when watching a video about a presidential candidate, whether it’s real or AI-generated for malicious purposes. This takes protection in the age of AI to a whole new level. We aim to give users the clarity and confidence to navigate the nuances in our new AI-driven world, to protect their online privacy and identity, and well-being.”