
YouTube Starts Requiring AI and Deepfake Disclosure in Videos

YouTube is rolling out a new policy that will require creators to disclose when they have used generative AI tools to produce synthetic or manipulated media that could be mistaken for reality. Under the updated rules, creators must indicate when their videos contain altered or synthetic audio or visual content that realistically depicts people, places, or events and was made with generative AI tools or deepfake tech.

Deepfake Disclosure

YouTube laid out the kinds of generative AI use that would require such a disclosure. That includes using generative AI to recreate someone’s appearance or voice, or to modify footage of real-world events and places. A new tool in YouTube’s Creator Studio lets uploaders disclose relevant AI usage, and more prominent labels may be applied to videos on sensitive topics like news and health information. However, YouTube is keen to make clear that the label is not necessary in cases where the result is obviously not real or when generative AI helped make the video in other ways.

“Generative AI is transforming the ways creators express themselves – from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic,” YouTube explained in its announcement. “Of course, we recognize that creators use generative AI in a variety of ways throughout the creation process. We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions. We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.”
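Read as a decision rule, the policy reduces to a few conditions: realistic synthetic or manipulated media requires a label, while productivity uses and unrealistic or inconsequential changes do not. The following Python sketch encodes that logic as stated in the announcement; it is purely illustrative, and every name in it (VideoAIUsage, needs_ai_disclosure, and all the fields) is invented for this example rather than part of any YouTube tool or API.

```python
# A minimal, hypothetical sketch of the disclosure rule as described in
# YouTube's announcement. This is NOT YouTube's actual code or API; the
# class, function, and field names below are invented for illustration.

from dataclasses import dataclass

@dataclass
class VideoAIUsage:
    recreates_real_person: bool     # synthetic likeness or voice of a real person
    alters_real_footage: bool       # modifies footage of real events or places
    looks_realistic: bool           # could plausibly be mistaken for reality
    productivity_only: bool         # scripts, content ideas, automatic captions
    changes_inconsequential: bool   # trivial tweaks a viewer would not notice

def needs_ai_disclosure(usage: VideoAIUsage) -> bool:
    """Return True if, per the stated policy, a disclosure label is required."""
    # Exemptions: productivity uses, unrealistic results, and
    # inconsequential changes do not require disclosure.
    if usage.productivity_only or usage.changes_inconsequential:
        return False
    if not usage.looks_realistic:
        return False
    # Realistic synthetic or manipulated media does require disclosure.
    return usage.recreates_real_person or usage.alters_real_footage

# Example: an AI-cloned voice of a real person narrating real footage.
cloned_voice = VideoAIUsage(
    recreates_real_person=True,
    alters_real_footage=False,
    looks_realistic=True,
    productivity_only=False,
    changes_inconsequential=False,
)
print(needs_ai_disclosure(cloned_voice))  # True
```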

The disclosure requirements aim to curb potential misinformation as realistic AI-generated media becomes increasingly accessible. Creators who fail to properly disclose applicable synthetic content could face enforcement action. YouTube also said it may itself attach a disclosure label to a video when the creator left one off and the undisclosed AI-generated content has the potential to mislead viewers.

As generative AI capabilities rapidly advance, major platforms are implementing guardrails and transparency requirements. For instance, YouTube’s parent company, Google, mandated such disclosures on political ads this year, while OpenAI published a strategy outline describing how it is working to prevent misuse of its tech during the election. And it’s not just about politics. Celebrities like Tom Hanks now have to regularly issue warnings about deepfake scams using their faces and voices.

That doesn’t mean YouTube is against generative AI for more benign purposes. The platform is running many such experiments, including tools for making music, AI-produced summaries of comment sections, and a conversational AI assistant that answers questions about a video and its content. There’s also Dream Screen, a synthetic image and video generator for YouTube Shorts, and even a text-to-image tool for creating custom playlist album covers.
