
Google Mandates Disclaimers for Generative AI and Deepfakes in Political Ads Ahead of 2024 Election


Google has updated its advertising policy to require political ads that use generative AI or deepfakes to disclose clearly that synthetic media is present. The rule aims to combat deception as deepfakes and other AI-generated media proliferate in campaigns. It takes effect in mid-November 2023, a year ahead of the next U.S. presidential election.


Google’s new rule mandates that images, audio, or video generated by an AI system must be prominently labeled as such; the only exceptions are inconsequential edits like red-eye removal. Advertisers must place disclaimers where voters are likely to notice them. The move comes as candidates and political action committees increasingly use generative AI in ads to smear opponents and stoke outrage. Deepfake videos that portray politicians saying or doing things they never did are easy to produce, and they have been appearing with growing frequency. Even when an ad contains no deliberate lie, heavy use of AI-generated content risks fostering general distrust.

“In mid-November 2023, we are updating our Political content policy to require that all verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events,” Google’s new policy states. “This disclosure must be clear and conspicuous, and must be placed in a location where it is likely to be noticed by users. This policy will apply to image, video, and audio content.”

Google’s examples of ads that need the disclaimer include one that “makes it appear as if a person is saying or doing something they didn’t say or do” and one that “alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place.” Google’s action aims to counter these emerging threats by pressing political advertisers to reveal when AI is used to manipulate perceptions. However, enforcement may prove challenging, and the policy isn’t foolproof. The flexibility in Google’s requirements, while understandable, may leave room for willful obfuscation of deepfake media. The policy also does nothing to address generative AI that misleads viewers outside of paid political ads, such as an ordinary YouTube upload. And the line between acceptable and unacceptable content is ultimately subjective, leaving Google open to accusations of bias, whether or not they are merited.

Still, the policy at least acknowledges that the technology and its potential for harm are real factors that should be considered when formulating advertising and other regulatory rules. Political ads are a prominent example of where generative AI regulation may be urgent, but they are not the only one. Although major U.S. AI companies agreed to a set of safety and responsibility principles announced by President Biden in July, few concrete decisions have followed. The ease with which deepfake scams can already trick people out of their money is worrisome. Bills on AI governance are in the works, something OpenAI CEO Sam Altman encouraged at a U.S. Senate hearing, but none has been finalized. That’s before even considering the international scale, though the U.S. and the rest of the G7 are working to devise an international standard for generative AI, with the “Hiroshima AI Process” expected to produce a report by the end of the year.


Leading AI Developers Agree to White House’s AI Safety Principles

G7 Leaders Create “Hiroshima AI Process” to Discuss Generative AI Regulation

OpenAI CEO Sam Altman Urges Congress to Create AI Regulation at Senate Hearing