AI ‘deepfakes’ poised to wreak havoc on 2024 election: experts


An onslaught of high-quality, AI-generated political “deepfakes” has already begun ahead of the 2024 presidential election – and Big Tech firms aren’t prepared for the chaos, experts told The Post.

The rise of generative AI platforms such as ChatGPT and the photo-focused Midjourney has made it easy to create false or misleading posts, pictures or even videos – from doctored footage of politicians making controversial speeches to bogus images and videos of events that never actually occurred.

Striking examples of AI-generated misinformation have already circulated on the web – including a deepfake video of President Biden verbally attacking transgender people, false pictures of former President Donald Trump resisting arrest and viral photos of Pope Francis wearing a Balenciaga puffer jacket.

The result, according to experts, is uncharted territory for tech firms such as Facebook, Twitter, Google-owned YouTube and TikTok, which are set to face an unprecedented swell of high-quality deepfake content from US social media users and nefarious foreign actors alike.

So far, the companies have provided few details about their plans to protect users.

The Silicon Valley giants “are not prepared” to contend with election-related deepfakes because they have “no incentive” to deal with the issue, according to Bradley Tusk, a political consultant and CEO of Tusk Venture Partners.


Generative AI advances have prompted a wave of deepfake images.
Twitter / Eliot Higgins

“In fact, the incentives are virtually reversed — if someone creates a deepfake of Trump or Biden that ends up going viral, that’s more engagement and eyeballs on that social media platform,” Tusk told The Post.

“The platforms have been unable, and unwilling, to prevent human-generated harmful content from spreading. This problem gets exponentially worse with the proliferation of generative AI,” he added.

Candidates have also begun making use of generative AI. Last month, Trump shared a deepfake video that depicted CNN anchor Anderson Cooper claiming the former president had just finished “ripping” the network “a new a—hole.”

GOP presidential contender and Florida Gov. Ron DeSantis’ campaign team shared an ad with manipulated pictures depicting Trump hugging Dr. Anthony Fauci during the COVID-19 pandemic.


AI pictures of Pope Francis decked out in a Balenciaga jacket fooled millions of users.
TikTok/@vince19visuals

Misleading AI-generated posts from political campaigns are only one part of the problem.

The bigger issue, according to many experts, is the likelihood that foreign adversaries and rogue elements will use generative AI to manipulate voters or otherwise impact the integrity of US elections.

In May, a likely AI-generated photo of a fake explosion at the Pentagon went viral on Twitter – where it was shared by Kremlin-backed news outlet RT – and prompted a brief stock market selloff.

The rapid advancements in generative AI mean the “rate of misinformation could increase dramatically” compared to recent elections, according to Center for AI Safety director Dan Hendrycks, whose nonprofit recently organized a letter comparing the threat of AI to that of nuclear weapons and pandemics.


One fake video showed President Biden ranting against transgender people.

Foreign propagandists “were creating content without today’s AI systems,” Hendrycks said. “Imagine how much more efficient they will be when they have AI to help them generate stories, rewrite them to be more persuasive, and tailor them for specific audiences.”

Some of the tech world’s most prominent figures, including Elon Musk and OpenAI CEO Sam Altman, have flagged AI-generated misinformation as one of the most serious risks posed by the burgeoning technology.

In May, Altman told a Senate panel that he was “nervous” about the possibility of AI disrupting elections and called it a “significant area of concern” that required federal regulation.

Other experts, including the “Godfather of AI” Geoffrey Hinton and Microsoft chief economist Michael Schwarz, have also publicly warned of bad actors using AI to manipulate voters during elections.

When reached for comment, a Google representative pointed to recent remarks from CEO Sundar Pichai, who touted the company’s investments in tools to detect and label synthetic content.


An AI-generated photo of a fake Pentagon explosion triggered a brief stock selloff in May.
Twitter/@KobeissiLetter

Last month, the company said it would begin labeling AI-generated images with identifying metadata and watermarks.

YouTube’s content policies ban content that has been doctored to mislead users, and the platform removes offending posts through a combination of machine learning and human reviewers.

A TikTok spokesperson noted the ByteDance-owned app rolled out a synthetic media policy earlier this year, which requires any AI-generated or otherwise manipulated content that depicts a realistic scene to be clearly labeled.

“We are firmly committed to developing guardrails for the safe and transparent use of AI, which is why we announced a new synthetic media policy in March 2023,” the TikTok spokesperson said in a statement. “Like most of our industry, we continue to work with experts, monitor the progression of this technology, and evolve our approach.”

A representative for Snapchat said the company “regularly evaluate[s] our policies to make sure our protections keep pace as technologies evolve, including AI.”


Some of the fake photos showed Trump “resisting arrest.”
Twitter / Eliot Higgins

Representatives for other major tech platforms, including Twitter, Meta and Microsoft, did not return requests for comment.

Aside from the unprecedented technical difficulty of combating AI-generated content, tech companies must walk a fine line between blocking misinformation and veering into censorship, according to Sheldon Jacobson, a public policy consultant and professor of computer science at the University of Illinois at Urbana-Champaign.

Efforts to stop AI deepfakes could be construed as political bias against a particular party or candidate, Jacobson said.

Additionally, the tech firms have “very little control” over foreign adversaries who decide to misuse the technology, Jacobson added.

“We aren’t China where we’re trying to control things,” Jacobson said. “This is a free communication system – but with that are risks, and there is going to be misinformation communicated. And now that you bring in generative AI, this is a whole new level.”


A whole set of AI-generated photos featuring Donald Trump circulated earlier this year.
Twitter / Eliot Higgins

With the election still more than a year out, Jacobson said tech leaders at major companies are likely scrambling to develop a strategy to combat AI-generated deepfakes.

“I don’t think they’re saying anything because they don’t know what they can do. That’s the problem,” he added.

In Tusk’s view, Big Tech firms won’t take decisive action to stem the flow of misinformation through AI-generated content unless lawmakers repeal Section 230 – the controversial provision that shields companies from liability for damaging content published on their platforms.

In May, the Supreme Court left Section 230 intact in a pair of cases considered the most significant challenges to the liability shield to date. However, lawmakers from both parties are still calling for Section 230 to be altered or repealed.

“If the financial repercussions of doing nothing are big enough, the platforms will actually act and help prevent harmful content that has a negative impact on our democracy,” Tusk said.

Source: New York Post
