Experts caution against the influence of AI deepfakes as social media regulations become less strict and their use becomes more widespread during elections.

Almost three years after rioters assaulted the U.S. Capitol, the election misinformation that fueled the violence continues to circulate on social media and cable news, including claims of ballot-filled suitcases, late-night ballot drops, and deceased individuals casting votes.

Specialists caution that the upcoming presidential election could be even more problematic. The safeguards put in place to combat false allegations in the previous election are weakening, while the methods and technologies used to generate and disseminate them continue to grow in potency.

A significant number of Americans, encouraged by former President Donald Trump, persist in promoting the unfounded notion that elections across the United States lack credibility. A majority of Republicans (57%) do not accept that Democrat Joe Biden legitimately won the presidency.

Recently, the availability and simplicity of generative artificial intelligence tools have sharply reduced the cost and effort required to spread false information that could deceive voters and potentially sway election outcomes. Meanwhile, social media companies, which once prioritized correcting inaccuracies, have shifted their focus elsewhere.

Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington, said he expects a flood of false information. He acknowledged he cannot prove it and hopes to be proven wrong, but he believes all the ingredients are present and said he is deeply fearful.

Manipulated images and videos surrounding elections are nothing new, but 2024 will be the first U.S. presidential election in which sophisticated AI tools that can produce convincing fakes in seconds are just a few clicks away.

Etzioni warned that deepfakes, which are digitally altered images, videos, and audio recordings, have begun to surface in experimental ads for presidential campaigns. He also expressed concern that more malicious versions could circulate on social media without any indication of their falsity, potentially deceiving individuals just days before an election.

Etzioni suggested voters could see a fabricated video of a political candidate, such as President Biden, being rushed to a hospital. He also described the potential for a candidate to be falsely attributed with statements they never made, along with manufactured financial chaos and fabricated acts of violence.

According to Larry Norden, senior director of the elections and government program at the Brennan Center for Justice, advanced deepfake technology has already affected elections around the world. In the days leading up to Slovakia's recent elections, AI-generated audio recordings falsely portrayed a liberal candidate discussing tactics to manipulate the election, including raising beer prices. Even as fact-checkers worked to flag the recordings as false, they continued to be shared as genuine across social media platforms.

Experts say these tools could also be used to target particular groups and fine-tune deceptive messages about voting. That could take the form of convincing text messages, false notices about voting procedures circulated in different languages on WhatsApp, or fraudulent websites designed to mimic legitimate local government sites.

According to misinformation expert Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, our natural instincts may lead us to believe fabricated content that appears realistic and convincing rather than the actual truth.

Congress members from both the Republican and Democratic parties, along with the Federal Election Commission, are examining methods to control the use of technology. However, no definitive regulations or laws have been established yet. Therefore, individual states have taken the lead in implementing restrictions on political deepfakes created with artificial intelligence.

Several states have passed laws requiring deepfakes to be labeled or banning those that misrepresent candidates. Some social media platforms, including YouTube and Meta, which owns Facebook and Instagram, have introduced AI labeling policies. It remains to be seen whether they will be able to consistently catch violators.

It was just over a year ago that Elon Musk bought Twitter and began firing its executives, dismantling some of its core features and reshaping the social media platform into what’s now known as X.

Since then, he has upended the platform's verification system, leaving public figures vulnerable to impersonation. He has gutted the teams that once fought false information on the platform, shifting the burden of moderation onto users themselves. And he has restored the accounts of previously banned conspiracy theorists and extremists.

The changes have been praised by many on the right, who claim Twitter's previous moderation efforts amounted to censorship of their views. But democracy advocates argue the takeover has turned what was once a flawed but useful source of news and election information into an unregulated space that amplifies hate speech and misinformation.

According to Jesse Lehrich, co-founder of Accountable Tech, a nonprofit watchdog group, Twitter was once considered a highly responsible platform that was willing to experiment with features aimed at decreasing misinformation, even if it meant sacrificing engagement.

“Clearly, they have completely shifted to the opposite stance,” he remarked, noting that he thinks the adjustments made by the company have allowed other platforms to ease their own policies. X did not respond to emailed inquiries from The Associated Press, instead sending an automated reply.

According to a report from Free Press, a nonprofit organization promoting civil rights in technology and media, X, Meta, and YouTube have collectively eliminated 17 policies aimed at preventing hate and misinformation leading up to 2024.

YouTube said in June that while it would continue to regulate content that misleads voters about current or future elections, it would stop removing content that falsely claims the 2020 election or other past U.S. elections were marred by widespread fraud, errors, or glitches. The platform said the move was intended to safeguard the ability to openly debate political ideas, even those that are controversial or based on disproven assumptions.

Lehrich said that even if tech companies decline to remove misleading content, platforms have other tools to reduce the spread of misinformation, such as labeling older articles and adding friction that prompts users to review content before sharing it.

Since 2020, X, Meta, and YouTube have all terminated numerous employees and contractors, including content moderators.

According to Kate Starbird, an expert on misinformation at the University of Washington, the downsizing of certain teams, often attributed to political influence, will likely lead to a more problematic situation in 2024 compared to 2020.

According to their website, Meta has approximately 40,000 employees dedicated to ensuring safety and security. Additionally, they claim to have the largest fact-checking network among all platforms. They also frequently remove groups of fraudulent social media accounts that aim to create conflict and mistrust.

In a company blog post, Meta said no tech company does more to protect elections online, with consistent efforts throughout the year rather than just during election periods.

According to Ivy Choi, a representative from YouTube, the platform is strongly committed to promoting valuable content on YouTube, especially during election periods. She highlighted the platform’s recommendation system and information panels, which offer users trustworthy election updates. Additionally, she mentioned that the platform takes action to remove any content that deceives voters or promotes disruption in the democratic process.

The emergence of TikTok and other, less controlled platforms like Telegram, Truth Social, and Gab, has resulted in the growth of online information silos where unsubstantiated allegations can easily circulate. Certain apps that are highly favored among minority communities and immigrants, such as WhatsApp and WeChat, utilize private messaging, making it difficult for external organizations to detect any false information that may be shared.

Roberta Braga, founder and executive director of the Digital Democracy Institute of the Americas, expressed concern about the potential for recycled and deeply ingrained false narratives with more advanced tactics in 2024. However, she also expressed hope that there is increased social resilience to combat these issues.

The fact that Trump is currently leading in the Republican presidential primary is a concern for researchers studying misinformation. They fear that this could contribute to false information during the election and possibly result in vigilantism or violence.

The ex-president continues to make unfounded assertions of victory in the 2020 election.

According to Starbird, Donald Trump has actively promoted and amplified unfounded allegations of election fraud in the past. It is likely that he will continue to use this tactic to rally his supporters.

Trump has prepared his followers to anticipate fraud in the upcoming 2024 election without any proof. He has encouraged them to take action and “protect the vote” in order to prevent any potential vote manipulation in Democratic cities with diverse populations. This is not the first time Trump has insinuated that elections are rigged if he is not declared the winner, as he did so prior to the 2016 and 2020 elections.

According to Bret Schafer, a senior fellow at the Alliance for Securing Democracy, a nonpartisan organization that tracks misinformation, the ongoing erosion of voter confidence in democracy could potentially result in acts of violence.

He said that if people lose faith in election information, democracy fails. If a disinformation campaign succeeds in convincing a significant portion of the American public that the election results are not accurate, he warned, Jan. 6 will seem like a minor event by comparison.

In the years since 2020, election officials have been preparing for a resurgence of election denialism. They have dispatched teams to explain voting procedures, enlisted outside groups to monitor and counter misinformation, and beefed up security at vote-counting facilities.

In Colorado, Jena Griswold, the Secretary of State, stated that paid social media and TV campaigns that portray election workers in a relatable manner have successfully protected voters from false information.

She said it will be an uphill battle but that officials must be proactive, calling misinformation one of the great threats facing American democracy today.

The National Association of Secretaries of State is launching #TrustedInfo2024, an online campaign led by Minnesota Secretary of State Steve Simon’s office, to raise awareness and establish election officials as a reliable source of election information for the 2024 election.

The office is scheduling meetings with county and city election officials and will keep the "Fact and Fiction" page on its website updated with rebuttals to false claims as they emerge. A recent Minnesota law aims to protect election workers from threats and harassment, bars the spread of misinformation ahead of elections, and makes it a crime to share deepfake images without consent in order to harm a political candidate or influence an election.

Simon stated that while we remain optimistic, we also prepare for potential negative outcomes by implementing multiple layers of safeguards.

Kim Pytleski, the Oconto County Clerk, has been actively promoting voting and elections in small gatherings throughout the rural Wisconsin county north of Green Bay. This effort is aimed at increasing confidence in the voting process among voters. Additionally, the county holds public equipment tests to allow residents to witness the process.

She said that being able to talk directly with election officials makes all the difference. Seeing that real people are behind these processes, people who are committed to their jobs and want to do good work, helps voters understand that officials are there to serve them, she said.

___

Fernando provided coverage from Chicago. This report also includes contributions from Associated Press writer Christina A. Cassidy in Atlanta.

___

The Associated Press receives support from private foundations to enhance its coverage of elections and democracy. More information about AP's democracy initiative is available on its website. The AP is solely responsible for all content.

Source: wral.com