Google and YouTube will require that political ads “prominently disclose” the use of artificial intelligence starting in November.
The companies, both owned by Alphabet, updated their political content policy late Wednesday to include the requirement.
The new policy says that election advertisers “must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.”
“This disclosure must be clear and conspicuous, and must be placed in a location where it is likely to be noticed by users,” the policy states, noting that it applies to image, video and audio content.
Synthetic content that is “inconsequential to the claims made in the ad” will be exempt from the disclosure requirements, the policy states. “This includes editing techniques such as image resizing, cropping, color or brightening corrections, defect correction (for example, “red eye” removal), or background edits that do not create realistic depictions of actual events.”
But any deep fakes — or ads “with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” or ads that use AI to alter footage of a real event or to generate a “realistic portrayal of an event to depict scenes that did not actually take place” — must use a “clear and conspicuous disclosure,” the policy states.
The policy is kicking in during an off-year election, with just three states — Kentucky, Mississippi and Louisiana — holding gubernatorial races and about a dozen states electing legislators.
That will give the online platforms some time to judge how the new policy stands up before the presidential and congressional elections in 2024.
Fake images are certainly nothing new in political advertising, but the generative AI software released in recent months makes it much easier to create false images and audio and to make them appear far more realistic.
The potential use of AI in politics is already on the radar. Last week, an interview with former President Donald Trump conducted over a glitchy phone line that distorted his voice had many online questioning whether the far-right Real America’s Voice had been duped by a prankster using AI to sound like Trump.
The network insisted that it was Trump on the line, and the former president posted a clip from the interview on Truth Social, but the incident nevertheless raised flags for those concerned about the spread of misinformation during campaigns.
Some 2024 presidential campaigns have already dipped their toes in AI. Florida GOP Gov. Ron DeSantis released an ad in July that used an AI-generated voice to impersonate Trump. Another, pushed out in June, showed fabricated images of Trump hugging Dr. Anthony Fauci.
And Trump used an AI voice-cloning tool to manipulate video of CNN host Anderson Cooper, with the distorted result posted to Truth Social, The Associated Press reported.
In addition, the Republican National Committee put up an AI-generated ad that aimed to show the future of the United States if President Joe Biden is reelected, The AP said. It used realistic, but fake, photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic. The ad acknowledged that it used AI.
Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House. Several states have also passed or are considering legislation related to deepfake technology.