A survey indicates that the majority of American adults believe that AI will contribute to the spread of false information during the 2024 election.

As the 2024 presidential election approaches, experts are sounding the alarm about the potential impact of artificial intelligence. The growing availability of AI tools could amplify the spread of misinformation on a scale not seen before.

A recent survey conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that the majority of adults in the United States share a similar perspective.

According to the survey, a majority of adults (58%) believe that AI technology, which has the ability to target specific political audiences, create numerous convincing messages, and produce realistic fake images and videos within seconds, will contribute to the proliferation of inaccurate and deceptive information during the upcoming elections.

By contrast, only 6% believe AI will reduce the spread of misinformation, while about a third think it will not make much difference either way.

Rosa Rangel, 66, of Fort Worth, Texas, pointed to what happened in 2020, particularly on social media.

Rangel, a Democrat, said she saw a lot of false information spread on social media in 2020 and believes AI will make things worse in 2024, likening it to a simmering pot about to boil over.

Only 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard of AI tools. Even so, there is broad agreement that candidates should not be using AI.

When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch-up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).

Majorities of both Republicans and Democrats said it would be bad for presidential candidates to create false images or videos (85% of Republicans and 90% of Democrats) or to answer voters' questions via chatbot (56% of Republicans and 63% of Democrats).

The bipartisan pessimism toward candidates using AI comes after the technology has already been deployed in the Republican presidential primary.

The Republican National Committee unveiled an advertisement in April that was completely created by artificial intelligence. The purpose was to depict a potential future for the country if President Joe Biden wins a second term. The ad featured fabricated, yet convincing, images of abandoned shops, military presence in the streets, and a surge of immigrants causing chaos. The disclaimer stating that it was AI-generated was barely noticeable.

Florida Gov. Ron DeSantis, a Republican presidential candidate, also used AI in his campaign. He shared an ad featuring AI-generated images of former President Donald Trump embracing Dr. Anthony Fauci, the infectious disease expert who oversaw the country's response to the COVID-19 pandemic.

Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.

Andie Near, a 42-year-old from Holland, Michigan, who typically votes Democratic, said politicians should campaign on their merits rather than use fear tactics to sway voters.

She has used AI to retouch images in her job at a museum, but she believes that when politicians use the technology to mislead, it can deepen the effect of conventional attack ads.

Thomas Besgen, a Republican college student, also opposes the use of deepfake technology in political campaigns to create false audio or visual content that misrepresents a candidate’s statements.

“That’s morally incorrect,” the 21-year-old from Connecticut stated.

Besgen, a mechanical engineering major at the University of Dayton in Ohio, said he is in favor of banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.

The Federal Election Commission is currently reviewing a request to oversee the use of AI-created deepfakes in political advertisements leading up to the 2024 election.

Despite his doubts about AI in politics, Besgen is optimistic about its potential for the economy and society. He regularly uses AI tools such as ChatGPT to help explain historical topics and to brainstorm ideas, and he enjoys image generators for fun, such as imagining what sports stadiums might look like in the future.

He said he generally trusts the information ChatGPT provides and plans to use it to learn more about the presidential candidates, something only 5% of adults say they are likely to do.

The survey revealed that Americans are more inclined to seek information about the presidential election from traditional news sources (46%), personal connections (29%), and social media (25%) rather than AI chatbots.

Still, Besgen said he would treat any response with caution.

Most Americans are skeptical of the information AI chatbots provide. Only 5% are highly confident that it is factual, 33% are somewhat confident, and a majority (61%) have little or no confidence in its reliability.

That aligns with the caution many AI experts urge against using chatbots to retrieve information. The chatbots are built on large language models, which work by repeatedly selecting the most plausible next word in a sentence. That makes them good at mimicking writing styles, but also prone to fabricating information.
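The mechanism can be illustrated with a toy sketch (this is not how production chatbots are built; real models use neural networks over vast corpora): a model that always emits the statistically most likely next word produces fluent text, but has no notion of whether that text is true.

```python
# Toy next-word predictor: a bigram model that greedily picks the word
# most often seen after the current one. Fluency comes from word
# statistics alone; nothing checks the output against facts.
from collections import Counter, defaultdict

corpus = (
    "the candidate won the debate . "
    "the candidate won the election . "
    "the candidate lost the election ."
).split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def generate(word, steps=5):
    out = [word]
    for _ in range(steps):
        if word not in next_counts:
            break
        word = next_counts[word].most_common(1)[0][0]  # greedy: most likely next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the candidate won the candidate won"
```

Starting from "the", the model confidently asserts the candidate "won" simply because that continuation is most frequent in its training text, a miniature version of why plausible-sounding chatbot answers are not necessarily factual.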

Members of both major political parties are generally open to regulating AI. Respondents were more likely to support than oppose measures, such as banning or labeling AI-generated content, whether enforced by technology companies, the government, social media platforms or news outlets.

Approximately 66% support the government prohibiting political ads that feature fake or deceptive images created by AI. Similarly, a majority also wants technology companies to clearly mark all AI-generated content posted on their platforms.

On Monday, Biden signed an executive order to guide the development of the rapidly advancing technology. The order requires the industry to develop safety and security standards and directs the Commerce Department to issue guidance for identifying and labeling AI-generated content.

Most Americans see preventing the spread of AI-generated false or misleading information during the 2024 presidential election as a shared responsibility. About 63% say much of the responsibility falls on the technology companies that create AI tools, while roughly half assign a great deal of responsibility to the news media (53%), social media companies (52%) and the federal government (49%).

Democrats are slightly more inclined than Republicans to believe that social media companies carry a significant amount of responsibility, but they generally share the same views on the responsibility held by technology companies, the news media, and the federal government.


The survey of 1,017 adults was conducted Oct. 19-23, 2023, using a sample drawn from NORC's probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.


O’Brien reported from Providence, Rhode Island. Associated Press writer Linley Sanders in Washington contributed to this report.


The AP receives funding from various private foundations to improve its coverage of elections and democracy. Learn more about the AP’s democracy project here. The AP is solely responsible for all of its content.

Source: wral.com