Americans worry about AI deepfakes ahead of elections as Adobe pushes tools to combat misinformation

Americans are worried about how AI deepfakes could spark a misinformation wildfire ahead of the U.S. presidential election this fall — and some are even curtailing their social media use because of that concern.

Several studies have shown that the majority of Americans are uneasy about how generative artificial intelligence can be used to spread misinformation, and a number have confirmed that many (if not most) U.S. residents are particularly worried about AI-made misinformation as it pertains to the 2024 presidential election. For example, an Axios survey found that 53% of Americans think misinformation spread by AI will impact which candidate wins.

Adobe added to that pile of research, commissioning an opt-in survey of over 2,000 U.S. residents ages 18 and older and finding that an even more alarming share of Americans is worried about deepfakes influencing election results: 80%. The study, shared with Quartz, also found that 40% of respondents reduced or stopped their use of a specific social media platform because of the amount of misinformation on it.

Their concerns are well-founded: AI-generated content is already being used by politicians and hackers alike. Florida Gov. Ron DeSantis’ now-defunct presidential campaign posted deepfake images of Donald Trump kissing former National Institute of Allergy and Infectious Diseases director Anthony Fauci on X last summer. Also in the summer of 2023, the political party of Poland’s prime minister, Donald Tusk, posted a partially AI-generated video of his opponent during an election campaign. Meanwhile, a China-backed online group, “Spamouflage,” spread videos of AI-generated newscasters reporting fake news ahead of Taiwan’s presidential election. And a group of researchers at Microsoft said China will likely use AI to meddle in the U.S. presidential election.

“At Adobe, we see the transformative potential of generative AI in boosting creativity and productivity. However, in an era of deepfakes, AI-powered misinformation poses a significant risk to election integrity,” said Andy Parsons, Adobe’s Senior Director of the Content Authenticity Initiative. “Our opinion on the team is that there will be some impact from AI in this election. It won’t be the end of civilization and democracy, but over time, without proper check marks in place, this will get worse.”

“Once fooled by a deepfake, you may no longer believe what you see online. And when people begin to doubt everything, when they can’t tell fiction from fact, democracy itself is threatened,” he said.

Adobe’s answer: Content credentials

Adobe has spearheaded industry initiatives to fight AI-generated misinformation over the last several years.

Adobe launched the Content Authenticity Initiative (CAI) in partnership with The New York Times and Twitter in 2019. The CAI developed Content Credentials, a free, open-source technology that companies can integrate into their products. The technology attaches metadata to digital content and is designed to be interoperable across systems. The end result is that photos and videos carry an icon users can click to see how the content was made, how much it was edited, and whether AI was used; that icon travels with the picture or video when it’s exported from Adobe Premiere, for example, and posted on platforms like YouTube or Instagram.
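The mechanics are easy to miss in the abstract, so here is a deliberately simplified sketch of the underlying idea: bind a cryptographic hash of the media to signed metadata about how it was made, so that any undisclosed edit invalidates the record. This is not the real C2PA manifest format or Adobe’s SDK; the field names, manifest layout, and HMAC-based “signature” below are all hypothetical stand-ins for illustration.

```python
# Conceptual sketch of provenance metadata in the spirit of Content
# Credentials. NOT the real C2PA format or SDK: the manifest fields and
# the shared-key HMAC "signature" are simplified stand-ins.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems sign with certificates

def attach_credentials(media_bytes: bytes, tool: str, ai_used: bool) -> dict:
    """Build a manifest binding edit metadata to a hash of the media."""
    manifest = {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "produced_with": tool,
        "generative_ai_used": ai_used,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes: bytes, manifest: dict) -> bool:
    """Re-derive the hash and signature; any mismatch means tampering."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claims["content_hash"] == hashlib.sha256(media_bytes).hexdigest()
    )

photo = b"...raw image bytes..."
cred = attach_credentials(photo, tool="Adobe Premiere", ai_used=True)
assert verify_credentials(photo, cred)             # untouched media passes
assert not verify_credentials(photo + b"x", cred)  # any edit breaks the hash
```

In the real standard, manifests are signed with certificates rather than a shared key and record a full chain of edits, but the tamper-evidence principle is the same.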

“The mission of the CAI has remained steadfast and constant, and that is to give information consumers, fact checkers, [and] creatives the ability to know what they’re doing,” Parsons said.

Adobe doesn’t make money from its participation in the CAI or the group’s collaborative project, the Coalition for Content Provenance and Authenticity (C2PA), which it launched with big players such as Microsoft, Arm, and Intel in 2021 to create a digital content certification standard. Meta announced in February that it would use Content Credentials, and that same month, Google joined the C2PA to help further develop the standard in collaboration with Adobe and its now 2,500 partners. Microsoft and Adobe also worked on a Content Credentials tool for political campaigns ahead of the upcoming election.

“That little CR [Content Credentials] pin, which we imagine can be as recognizable in the next few years as the copyright symbol, indicates not that [digital content] is good, true, or false, but that there’s more information [about where the content came from], which is why we like that nutrition label metaphor,” Parsons said. “You know, you can walk into a supermarket, and nobody’s gonna stop you from buying unhealthy food — but you have a fundamental right to know what’s in it.”

Parsons said Adobe started this whole effort when a presenter at its annual conference in Las Vegas in 2016 showed just how easy it would be to create deepfake audio of Jordan Peele’s voice.

[Embedded video: #VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud]

Parsons said Adobe’s efforts to create more standards around AI-generated content benefit its Creative Cloud customers, who want such tools. He also said Content Credentials are stronger than digital watermarks, which embed an identifying signal in AI-made content and have been proposed as a potential solution to deepfakes.

“It’s fairly easy to train an AI to defeat watermarks. So we support the idea of watermarks, but we think the combination of a watermark and a Content Credential together is a much stronger countermeasure,” he said.
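Parsons’ point about layering lends itself to a short illustration. The sketch below reuses verify_credentials() from the earlier example; detect_watermark() is a made-up stand-in for a watermark decoder, not a real API. The logic shows why the combination is stronger: stripping the watermark still leaves content without a valid credential, which a checker flags as unverified rather than authentic.

```python
# Hypothetical layered provenance check, building on verify_credentials()
# from the sketch above. detect_watermark() is a placeholder for any
# invisible-watermark decoder; no real detector API is implied.

def detect_watermark(media_bytes: bytes) -> bool:
    """Placeholder for a model-specific watermark decoder."""
    return False  # stub: a real detector would inspect the media itself

def assess_provenance(media_bytes: bytes, manifest: dict | None) -> str:
    """Combine both signals; neither alone is treated as conclusive."""
    if manifest is not None and verify_credentials(media_bytes, manifest):
        # Signed metadata survived intact: the edit history can be trusted.
        return "credential intact: see manifest for edit history and AI use"
    if detect_watermark(media_bytes):
        # Watermark found but no valid credential: content was AI-made,
        # though its history may have been stripped or altered.
        return "AI watermark present, credential missing: handle with caution"
    # A watermark can be trained away and metadata can be removed, so the
    # honest answer for unlabeled content is "unknown", not "authentic".
    return "no provenance signals: origin unknown"
```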

Still, Parsons acknowledged that more will need to be done to prevent AI-related misinformation and disinformation. He said widespread adoption of a digital content certification standard such as Content Credentials, along with media literacy education, is needed.

“None of these countermeasures is a silver bullet. It’s going to take government, civil society, technology companies, and a variety of technological approaches to really address misinformation,” he said.
