Google will soon require political advertisers to “prominently disclose” when they use AI to create their advertisements. Starting in November, the company says, marketers will need to disclose any election-related “synthetic content” that shows “realistic-looking people or events.”
That covers ads that alter footage of a real event (or fabricate a realistic-looking one) to depict a scene that never occurred, as well as political ads that use AI to make someone appear to say or do something they never did.
Google says disclaimers on these kinds of ads must be “clear and conspicuous” and must appear in image, audio, and video content alike. The labels must carry statements such as “This audio was computer generated” or “This image does not depict real events.” “Inconsequential” adjustments, such as brightening an image, changing the background, or using AI to remove red eye, won’t require a label.
Google spokesperson Allie Bodack said in a statement to The Verge that the company is “expanding our policies a step further to require advertisers to disclose when their election ads include material that has been digitally altered or generated.”
Some political campaigns are already using AI to generate ads. In April, the Republican National Committee published an attack ad that uses AI-generated imagery to criticize President Joe Biden’s reelection campaign, and Florida Governor Ron DeSantis’s campaign issued an attack ad featuring AI-generated images of Donald Trump and Anthony Fauci, the former chief medical advisor to the White House.
Lawmakers have raised concerns about these deceptive ads, among them Representative Yvette Clarke (D-NY), who introduced a bill that would require disclaimers on political ads containing AI-generated content. The Federal Election Commission is also weighing restrictions on the use of AI in election advertising.