Music Streaming Service Deezer Building AI to Hunt and Remove Deepfake Songs
Popular Europe-based music streaming service Deezer has unveiled plans to use AI to detect and remove deepfake vocals and synthetically generated songs from its platform. Deezer wants to automate the checking and tagging of tracks that involve generative AI in any capacity, pitching the system as a way to protect artists who might lose income and listeners who might not appreciate being duped.
Deezer is starting the project with a focus on spotting voice clones of real people, using its Radar tool to rapidly scan music catalogs for telltale distortion, tempo shifts, and other signs of AI assistance. Each song that fits the bill will be tagged accordingly, though human reviewers will serve as a backstop to make sure the AI isn't simply flagging heavy use of Auto-Tune. The streaming platform's long-term vision includes a whole new payment structure for artists, depending on the technology used to produce the music.
“With over 100,000 new tracks uploaded per day to our platform, it’s becoming increasingly important to prioritize quality over quantity and defend real artists that create truly valuable content,” Deezer CEO Jeronimo Folgueira explained. “As a leading streaming platform, Deezer has a responsibility to create a fair and transparent environment for music consumption. Our goal is to weed out illegal and fraudulent content, increase transparency, and develop a new remuneration system where professional artists are rewarded for creating valuable content. This is why we have embraced the discussion around a new artist-centric model, and we are now also developing tools to detect AI-generated content.”
Deezer’s plan follows a rush by musicians and music labels to experiment with, condemn, or otherwise take a stance on the technology. Universal Music cited copyright infringement in April when it convinced streaming services to remove “Heart On My Sleeve,” a deepfake Drake and The Weeknd song that had racked up more than 10 million views on TikTok. Music producer and artist Timbaland shared a sample of a song featuring a deepfake voice of the Notorious B.I.G., then paused work on it after backlash from fans of the deceased rapper.
Others are testing the deepfake waters more publicly, as when French DJ and music producer David Guetta created an AI-written and AI-performed Eminem track, played it at some of his recent shows, and dubbed it “Emin-AI-em.” Meanwhile, musical artist Grimes has gone so far as to offer 50% of the royalties on any AI-generated song that uses her voice. Holly Herndon goes further still, offering a free synthetic version of her voice, called Holly+, for anyone to make music with, and the musical group YACHT trained an AI model to write an entire album called “Chain Tripping.”
Identification may be challenging if people attempt to hide that they used generative AI. Voice cloning can make audio deepfakes good enough to fool some biometric tests, as well as the untrained ears of a concert crowd. Artists who want to stamp authenticity on their tracks can work with companies like Resemble AI, whose audio watermark marks AI-generated speech without compromising sound quality. Resemble has come up with a way of layering a sound entirely inaudible to humans underneath someone speaking or singing, encoded with information a computer can decipher to confirm the track's origin and authenticity.
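Resemble's actual technique is proprietary, but the general idea of hiding machine-readable data in audio below the threshold of hearing can be illustrated with a classic spread-spectrum watermark. The sketch below is a minimal, simplified example in plain Python, not Resemble's method: the `ALPHA` and `CHIP_LEN` parameters and the shared-seed setup are assumptions chosen for illustration. A faint keyed noise sequence is added to the samples, and anyone who knows the secret seed can recover the hidden bits by correlation.

```python
import random

ALPHA = 0.004      # watermark amplitude; kept tiny so it stays inaudible (illustrative value)
CHIP_LEN = 2048    # number of samples that carry each payload bit (illustrative value)

def _prn(seed, n):
    """Deterministic pseudorandom +/-1 chip sequence derived from a shared secret seed."""
    rng = random.Random(seed)
    return [1.0 if rng.random() < 0.5 else -1.0 for _ in range(n)]

def embed(samples, bits, seed=42):
    """Add a faint keyed noise sequence whose sign over each CHIP_LEN-sample
    window encodes one payload bit (True -> +, False -> -)."""
    chips = _prn(seed, CHIP_LEN * len(bits))
    out = list(samples)
    for i, bit in enumerate(bits):
        sign = 1.0 if bit else -1.0
        for j in range(CHIP_LEN):
            k = i * CHIP_LEN + j
            out[k] += sign * ALPHA * chips[k]
    return out

def extract(samples, n_bits, seed=42):
    """Correlate each window against the same keyed chip sequence;
    the sign of the correlation recovers the hidden bit."""
    chips = _prn(seed, CHIP_LEN * n_bits)
    bits = []
    for i in range(n_bits):
        corr = sum(samples[i * CHIP_LEN + j] * chips[i * CHIP_LEN + j]
                   for j in range(CHIP_LEN))
        bits.append(corr > 0)
    return bits
```

In a real system the payload would carry provenance metadata (who generated the audio and with what model), the embedding would be shaped psychoacoustically to stay inaudible under loud passages, and the detector would need to be robust to compression and re-recording; none of that complexity is attempted here.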
“AI can be used to create new incredible content and I believe there are massive benefits of using generative AI, but we need to ensure it’s done in a responsible way,” Folgueira said. “There’s an opportunity now to get things right from the start of the AI revolution, and not make the same mistakes as the social media giants did when fake news started to flood their platforms. We owe it to the artists and the fans.”