Meta will begin labeling political advertisements that use AI-generated images, a move aimed at helping voters ahead of the 2024 elections.

On Wednesday, the parent company of Facebook and Instagram announced that it will require political ads on its platforms to disclose whether they were created with artificial intelligence.

Beginning January 1, the policy will place labels on users’ screens indicating the use of AI when they encounter such ads. The rule will apply globally.

On Tuesday, Microsoft announced its own election plans, including a feature that lets political campaigns embed a digital watermark in their advertisements. The watermark is meant to show who created an ad and to ensure that any alterations leave traceable evidence.

Advances in AI have made it quick and easy to produce realistic audio, images, and video. In malicious hands, the same tools can fabricate footage of a candidate or alarming imagery depicting election tampering or violence at polling places. Amplified by social media algorithms, such fakes could deceive and disorient voters on an unprecedented scale.

Meta Platforms Inc. and other technology companies have been criticized for not doing enough to address this threat. Meta’s announcement, which coincided with a congressional hearing on deepfakes, is unlikely to put those concerns to rest.

European officials are developing sweeping rules for the use of AI, while U.S. lawmakers face a tight deadline to enact regulations before the 2024 election.

Earlier this year, the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads before the 2024 election. Last week, President Joe Biden’s administration issued an executive order intended to encourage responsible development of AI; among other things, it will require AI developers to submit safety data and other information about their programs to the government.

Democratic U.S. Rep. Yvette Clarke of New York is the sponsor of legislation that would require candidates to label any ad created with AI that runs on any platform, as well as a bill that would require watermarks on synthetic images, and make it a crime to create unlabeled deepfakes inciting violence or depicting sexual activity. Clarke said the actions by Meta and Microsoft are a good start, but not sufficient.

“We are on the brink of a new age of disinformation warfare, with the help of emerging A.I. technology,” she stated in an email. “It is crucial for Congress to implement measures to safeguard our democracy and combat the flood of deceptive content generated by A.I. that could potentially mislead the American public.”

The U.S. is one of several countries holding national elections next year, along with Mexico, South Africa, Ukraine, Taiwan, India, and Pakistan.

AI-generated political ads have already appeared in the United States. One, released by the Republican National Committee in April, was created entirely with AI and purported to show the country’s future if Biden, a Democrat, is reelected. It featured fabricated but convincing images of boarded-up storefronts, troops on the streets, and waves of immigrants sowing chaos. The ad was clearly labeled as AI-generated.

In June, the presidential campaign of Florida Gov. Ron DeSantis released an attack ad targeting his Republican primary rival Donald Trump that used AI-generated images of the former president embracing Dr. Anthony Fauci, the prominent infectious disease expert.

Vince Lynch, an AI developer and CEO of the AI company IV.AI, said it can be hard for a casual observer to know what to believe. He argued that a combination of government regulation and voluntary guidelines from tech companies is needed to protect the public, and that companies must take responsibility for what their tools produce.

Meta’s new policy will cover any ad about a social issue, election, or political candidate that includes images of people or events manipulated with AI. Minor adjustments, such as resizing or sharpening an image, will not require disclosure.

Besides the on-screen label indicating that an ad contains AI-generated imagery, information about the ad’s use of AI will appear in Facebook’s online ad library. Meta, based in Menlo Park, California, says content that violates the rule will be removed.

In September, Google announced a similar AI-labeling policy for political advertisements. Under that rule, political ads running on YouTube or other Google platforms must disclose any use of AI-altered voices or imagery.

Alongside its updated guidelines, Microsoft released a report warning that Russia, Iran, and China are expected to use AI to interfere in elections in the United States and elsewhere, and that the U.S. and other nations should prepare for the threat.

According to the report from the Redmond, Washington-based company, groups operating on behalf of Russia are already at work. Since July 2023, the report’s authors wrote, Russia-affiliated actors have used novel techniques to reach audiences in Russia and the West with fake but sophisticated multimedia content. They predicted these actors will keep sharpening their methods as the underlying technology advances through the election season.