Did Artificial Intelligence Shape The 2024 US Election?

Days after New Hampshire voters received a robocall with an artificially generated voice that resembled President Joe Biden’s, the Federal Communications Commission banned the use of AI-generated voices in robocalls.

It was a flashpoint. The 2024 United States election would be the first to unfold amid wide public access to AI generators, which let people create images, audio and video – some for nefarious purposes.

Institutions Rushed to Limit AI-Enabled Misdeeds

Sixteen states enacted legislation around AI’s use in elections and campaigns; many of these states required disclaimers in synthetic media published close to an election.

The Election Assistance Commission, a federal agency supporting election administrators, published an “AI toolkit” with tips election officials could use to communicate about elections in an age of fabricated information. States published their own pages to help voters identify AI-generated content.

Experts warned about AI’s potential to create deepfakes that made candidates appear to say or do things that they didn’t. The experts said AI’s influence could hurt the US both domestically – misleading voters, affecting their decision-making or deterring them from voting – and abroad, benefitting foreign adversaries.

But the anticipated avalanche of AI-driven misinformation never materialised. As Election Day came and went, viral misinformation still played a starring role, misleading voters about vote counting, mail-in ballots and voting machines. However, this chicanery leaned largely on old, familiar techniques, including text-based social media claims and misleadingly edited video or out-of-context images.

“The use of generative AI turned out not to be necessary to mislead voters,” said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. “This was not ‘the AI election.’”

Daniel Schiff, assistant professor of technology policy at Purdue University, said there was no “massive eleventh-hour campaign” that misled voters about polling places and affected turnout. “This kind of misinformation was smaller in scope and unlikely to have been the determinative factor in at least the presidential election,” he said.

The AI-generated claims that got the most traction supported existing narratives rather than fabricating new claims to fool people, experts said.

For example, after former President Donald Trump and his vice presidential running mate, JD Vance, falsely claimed that Haitians were eating pets in Springfield, Ohio, AI images and memes depicting animal abuse flooded the internet.
