Disinformation in elections is taking a significant step forward as artificial intelligence is being utilized to deceive people globally.

The advance of artificial intelligence is amplifying the threat of election misinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake content aimed at fooling voters.

This represents a significant advancement compared to a few years ago, when producing fraudulent images, videos, or audio recordings involved assembling a team, significant time, technical expertise, and financial resources. Today, with accessible and affordable generative artificial intelligence tools from companies like Google and OpenAI, individuals can easily generate realistic “deepfakes” just by entering a basic text prompt.

In recent months, AI-generated deepfakes tied to elections in Europe and Asia have coursed through social media, serving as a cautionary tale for the more than 50 countries holding elections this year.

Henry Ajder, a leading expert in generative AI based in Cambridge, England, said one doesn't have to look far to find people visibly confused about whether something is real.

The question is no longer whether AI deepfakes can affect elections, but how influential they will be, said Ajder, who runs the consulting firm Latent Space Advisory.

With the U.S. presidential race intensifying, FBI Director Christopher Wray recently warned about the growing threat, saying generative AI makes it easy for foreign adversaries to engage in malign influence.

With AI deepfakes, a candidate's image can be smeared or burnished. Voters can be steered toward or away from candidates, or away from the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public's trust in what it sees and hears.

Some recent examples of AI deepfakes include:

– A video of Moldova's pro-Western president throwing her support behind a political party friendly to Russia.

– Audio clips of Slovakia's liberal party leader discussing vote rigging and raising the price of beer.

– A video of an opposition lawmaker in Bangladesh, a conservative, Muslim-majority nation, wearing a bikini.

The novelty and sophistication of the technology make it hard to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.

As the technology improves, Ajder said, definitive answers about much of the fake content will become harder and harder to come by.

Some AI deepfakes aim to sow doubt about candidates’ allegiances.

In Moldova, an Eastern European country bordering Ukraine, pro-Western President Maia Sandu has been a frequent target. One AI deepfake that circulated shortly before local elections depicted her endorsing a Russian-friendly party and announcing plans to resign.

Moldovan authorities believe the Russian government is behind the activity. Ahead of this year's presidential elections, the deepfakes aim to erode trust in the electoral process, candidates and institutions, as well as people's trust in one another, said Olga Rosca, an adviser to Sandu. Russia's government declined to comment.

China has faced allegations of using generative AI as a weapon for political motives.

Earlier this year, an AI deepfake in Taiwan, a self-governing island that China considers its own, caused concerns about U.S. involvement in domestic politics.

The fabricated video being shared on TikTok depicted U.S. Congressman Rob Wittman, who serves as the vice chairman of the House Armed Services Committee, making assurances of increased military aid to Taiwan if the current party’s nominees were chosen in the January elections.

Wittman accused the Chinese Communist Party of trying to meddle in Taiwanese politics, saying it uses TikTok, owned by the Chinese company ByteDance, to spread "propaganda."

Chinese foreign ministry spokesperson Wang Wenbin said his government does not comment on fake videos and opposes interference in other countries' internal affairs, adding that the Taiwan election is China's domestic affair.

Audio-only deepfakes are especially hard to verify because, unlike photos and videos, they lack the telltale signs of manipulation.

In Slovakia, a country long in Russia's sphere of influence, audio clips resembling the voice of the liberal party leader spread widely on social media just days before parliamentary elections. The clips purportedly captured him talking about hiking beer prices and rigging the vote.

It is understandable that voters fall for the deception, Ajder said, because people are used to judging authenticity with their eyes rather than their ears.

In the U.S., robocalls impersonating U.S. President Joe Biden urged voters in New Hampshire to abstain from voting in January’s primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes.

In poorer countries, where media literacy lags, even low-quality AI fakes can be effective.

Such was the case last year in Bangladesh, where opposition lawmaker Rumeen Farhana, a vocal critic of the ruling party, was falsely depicted wearing a bikini. The video went viral and stirred outrage in the conservative, Muslim-majority nation.

People tend to believe whatever they see on Facebook, Farhana said.

Experts are particularly concerned about upcoming elections in India, the world's largest democracy, where social media platforms have proven fertile ground for disinformation.

Some political campaigns are using generative AI to bolster their candidate's image.

In Indonesia, the campaign of presidential candidate Prabowo Subianto deployed a simple mobile app to build a deeper connection with supporters across the vast archipelago. The app let voters upload photos and generate AI images of themselves with Subianto.

As the number of AI deepfakes continues to increase, governments worldwide are rushing to establish regulations and safeguards.

The EU already requires social media platforms to cut the risk of spreading disinformation or election manipulation. It will also mandate special labeling of AI deepfakes starting next year, too late for the EU's parliamentary elections in June. The rest of the world, however, lags far behind.

The world's biggest tech companies recently signed a voluntary pact to prevent AI tools from disrupting elections. For example, Meta, which owns Facebook and Instagram, said it will begin labeling deepfakes that appear on its platforms.

But deepfakes are harder to rein in on services such as the Telegram messaging app, which did not sign the voluntary pact and uses encrypted chats that are difficult to monitor.

There are concerns among professionals that attempts to control AI deepfakes may result in unforeseen outcomes.

Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,” said Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington.

Major generative AI services have rules to limit political disinformation. But experts say it remains too easy to outwit the restrictions or to use alternative services that lack the same safeguards.

Even without bad intentions, the increasing use of AI is problematic. Many popular AI-powered chatbots still churn out false and misleading information that threatens to misinform voters.

Software isn't the only threat. Candidates could also try to deceive voters by falsely claiming that real events portraying them unfavorably were fabricated by AI.

A world in which everything is suspect and people get to choose what to believe poses a real challenge for a flourishing democracy, said Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.

__

Swenson reported from New York. Associated Press writers Julhas Alam in Dhaka, Bangladesh; Krutika Pathi in New Delhi; Huizhong Wu in Bangkok; Edna Tarigan in Jakarta, Indonesia; Dake Kang in Beijing; and Stephen McGrath in Bucharest, Romania, contributed to this report.

__

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. The AP is solely responsible for all content.

Source: wral.com