Google Announces New Policy Requiring Clear Disclosure of AI Use in Political Ads
In a move aimed at increasing transparency around emerging technologies, Google has unveiled updates to its political advertising policies that will require advertisers to clearly disclose any use of artificial intelligence. The changes come as experts warn AI could increasingly be used to generate or alter visual and audio content in misleading ways during election campaigns.
Under the updated rules, which take effect in mid-November 2023, verified political advertisers on Google platforms will need to disclose whether an ad contains AI-generated imagery, video or audio. Specifically, advertisers must state if AI was used to make it appear that a person said or did something they did not, or to generate realistic depictions of events that did not actually occur.
The disclosure must be prominently displayed in a clear and conspicuous manner. Google said the goal is to ensure voters understand when AI has been used to create or manipulate political messaging.
The decision was welcomed by lawmakers and campaign finance watchdogs who have been voicing concerns over the potential for ‘deepfake’ technology and other advanced AI to deceive voters. “Requiring disclosure of AI use in political ads is a positive step that will help address the threat of manipulated media,” said Senator Amy Klobuchar (D-MN) in a statement.
Other experts note the requirements still leave some gray areas. “Advertisers may try to skirt the rules by making minimal or vague disclosures,” said Claire Wardle, co-founder of the nonprofit First Draft. “Regulators need to provide clarity and actively monitor for compliance.” Google says it will work with election authorities to refine the guidelines based on real-world experiences.
The move comes as AI is already being experimented with in the political sphere. Earlier this year, the Republican National Committee released an ad depicting a simulated future if President Biden were reelected, without disclosing that the contents were AI-generated. And Governor Ron DeSantis' campaign tweeted AI-generated images depicting former President Donald Trump embracing Dr. Anthony Fauci, also without labeling them as such.
Lawmakers on both sides of the aisle have since introduced legislation seeking rules around sensitive uses of AI, such as generating deceptive political content. Representative Yvette Clarke (D-NY) noted the 2024 election “will be the first time in U.S. history where AI-generated content will be used in political ads.” Her bill would mandate disclosures for all partisan ads employing AI to alter audio, video or images.
With major campaigns and Super PACs expected to invest heavily in cutting-edge digital tools and algorithms, calls are growing for strengthened oversight. “Transparency around AI is critical as these technologies become more widespread and sophisticated,” said Representative Lori Trahan (D-MA). “Google’s new rules are an important step, but Congress must also take action to safeguard our democratic process.”
As the 2024 election cycle approaches, Google and other online platforms will face greater pressure to ensure AI is not abused to intentionally mislead voters. The company's disclosure policy could set an example for competitors and help start a broader public discussion on drawing appropriate lines around computational propaganda. With continued technological progress, transparency and oversight will remain key challenges – but also opportunities – in maintaining democracy in an era of rapidly advancing artificial intelligence.