Political deepfakes are spreading like wildfire thanks to GenAI

3 mins

Published by: Alina Chernomorets

18 March 2024, 10:30AM

In Brief

Global elections face heightened challenges due to a 130% monthly increase in AI-generated deepfake images, as revealed by a study from the Center for Countering Digital Hate (CCDH).

Deepfakes are a rising concern, having grown by 900% between 2019 and 2020; a YouGov poll finds 85% of Americans are worried about the spread of misleading video and audio deepfakes.

The study focused on X (formerly Twitter) and found that 41% of attempts to generate election-related deepfakes were successful, using popular AI image generators like Midjourney, DALL-E 3, DreamStudio, and Image Creator.

Despite some generators having policies against election disinformation, Midjourney emerged as the most prolific, producing election deepfakes in 65% of tests, raising concerns about the ease of creation.

Recommendations to address the deepfake crisis include implementing responsible safeguards for AI tools, collaboration with researchers, increased investment in social media platform trust and safety staff, and urging policymakers to enact legislation for AI product safety and transparency. Positive steps include a voluntary accord by image generator vendors and measures by Meta and Google to label AI-generated content in political ads.


This year, many people worldwide are participating in elections, with high-stakes races in over 50 countries. However, the usual challenges faced by democracies are exacerbated by a surge in AI-generated disinformation and misinformation. A study by the Center for Countering Digital Hate (CCDH) reveals a 130% monthly increase in AI-generated deepfake images related to elections on X (formerly Twitter) over the past year.

Callum Hood, head of research at CCDH, points out the risks, stating, “There’s a very real risk that the U.S. presidential election and other large democratic exercises this year could be undermined by zero-cost, AI-generated misinformation.” The study highlights the impact of free, easily accessible AI tools and insufficient social media moderation on the deepfake crisis.



Without the proper guardrails in place . . . AI tools could be an incredibly powerful weapon for bad actors to produce political misinformation at zero cost, and then spread it at an enormous scale on social media. Through our research into social media platforms, we know that images produced by these platforms have been widely shared online.

Callum Hood

Deepfakes have been on the rise, with a 900% growth between 2019 and 2020, according to the World Economic Forum. A recent poll by YouGov indicates that 85% of Americans are concerned about the spread of misleading video and audio deepfakes. The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe AI tools will contribute to the spread of false information in the 2024 U.S. election cycle.

To assess the increase in election-related deepfakes on X, the study looked at community notes mentioning deepfakes on the platform. Most deepfakes on X were created using AI image generators like Midjourney, OpenAI’s DALL-E 3, Stability AI’s DreamStudio, or Microsoft’s Image Creator. The study conducted tests using 40 text prompts related to the 2024 U.S. presidential election, revealing that 41% of the attempts resulted in deepfake generation, despite some generators having policies against election disinformation.

While Midjourney was the most prolific generator, producing deepfakes in 65% of tests, the study emphasized variations in generator behavior and the ease with which election-related deepfakes could be created. Social media played a crucial role in the spread of deepfakes, with instances of fact-checking inconsistencies and deepfakes garnering significant views.



AI tools and platforms must provide responsible safeguards, [and] invest and collaborate with researchers to test and prevent jailbreaking prior to product launch … And social media platforms must provide responsible safeguards [and] invest in trust and safety staff dedicated to safeguarding against the use of generative AI to produce disinformation and attacks on election integrity.

Callum Hood

Addressing the issue, Hood suggests implementing responsible safeguards for AI tools, collaboration with researchers to prevent misuse, and increased investment in trust and safety staff by social media platforms. Policymakers are urged to use existing laws to prevent voter intimidation and disenfranchisement caused by deepfakes and to enact legislation ensuring AI product safety and transparency.

Positive steps have been taken, with image generator vendors signing a voluntary accord to address AI-generated deepfakes, and Meta and Google implementing measures to label and disclose AI-generated content in political ads. Despite these efforts, the study underscores the need for urgent action to safeguard democracy against the rising threat of political deepfakes.



It’s incumbent on AI platforms, social media companies and lawmakers to act now or put democracy at risk.

Callum Hood
