How China’s New Deepfake Rules Restrict Synthetic Media
Deepfakes must indicate their artificial origin and have their subject’s consent, according to new Chinese government regulations coming into effect this month. The Cyberspace Administration of China (CAC) cites consumer protection and national security as motivation for the new synthetic media rules, which could set a template for other countries. That said, there are open questions about whether officials could use these regulations to censor speech, not to mention how well they can keep pace with rapidly evolving technology as it proliferates across products like the popular Chinese deepfake app Zao.
Chinese Deep Synthesis Rules
China’s generative AI rules are designed to control how and where people employ “deep synthesis services,” as the government refers to synthetic media. That includes AI-generated text, images, and video. The two central elements of the rules are verification and content management. Synthetic media platforms must incorporate some kind of sign that their output is AI-generated, like an indelible version of the color bar at the bottom of images created by DALL-E, or content otherwise “marked prominently to avoid public confusion or misidentification.” And no deepfake that replicates a real person’s image, voice, or other characteristics can be made at all without that person’s consent. Platforms must also have a way of confirming users’ online identities and set up a system for responding to reports of suspicious deepfake content.
“In recent years, deep synthesis technology has developed rapidly. While serving user needs and improving user experience, it has also been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputation and honor, and to counterfeit others’ identities,” the CAC stated in its announcement, as translated by Google. “Committing fraud, etc., affects the order of communication and social order, damages the legitimate rights and interests of the people, and endangers national security and social stability. The introduction of the ‘Regulations’ is a need to prevent and resolve security risks, and it is also a need to promote the healthy development of in-depth synthetic services and improve the level of supervision capabilities.”
Even with the best of intentions, malicious actors could likely find ways around these kinds of rules. The flexibility of generative AI is part of its appeal, but it is also why controlling it may prove difficult. While tools for spotting deepfakes are emerging in tandem with new generative AI creators, they remain imperfect. And although a way to fight deepfake identity theft makes sense, it’s all too easy to imagine law enforcement agencies misusing this kind of regulation: specious accusations could force the editing or removal of legitimate illustrations or commentary that the authorities would rather not see shared. Instead of preventing nightmares like the one the protagonist of the TV show The Capture experiences when a deepfake video of him is published, the rules might shut down production of satirical shows like Deep Fake Neighbour Wars for mocking celebrities.