Technology companies have joined forces to sign an agreement aimed at fighting deceptive tactics during elections that are enabled by artificial intelligence.

On Friday, major technology companies signed a pact agreeing to voluntarily adopt measures to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for responding to AI-generated deepfakes that deliberately deceive voters. Twelve other companies, including Elon Musk's X, also signed on to the accord.

Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, said in an interview ahead of the summit that everybody recognizes no single technology company, government or civil society organization is able to address the arrival of this technology and its potential malicious use on its own.

The agreement is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video that deceptively fake or alter the appearance, voice or actions of political candidates, election officials and other key figures in a democratic election. It also covers content that gives voters false information about when, where and how they can lawfully vote.

The companies are not committing to ban or remove deepfakes. Instead, the accord outlines the methods they will use to detect and label deceptive AI content when it is created or distributed on their platforms. It says the companies will share best practices with one another and respond swiftly and proportionately when such content begins to spread.

The vagueness of the commitments and the absence of any binding requirements likely helped win over a diverse group of companies, but it left advocates hoping for stronger guarantees.

Rachel Orey, senior associate director of the Bipartisan Policy Center's Elections Project, said the language is not as strong as one might have expected. The companies deserve credit for their interest in keeping their tools from being used to undermine free and fair elections, she said, but their participation is voluntary, and her organization will be watching whether they follow through.

Clegg said each company, quite rightly, has its own set of content policies.

The accord, he said, is not an attempt to put a straitjacket on anybody. And no one in the industry believes a new technological paradigm can be handled by sweeping problems under the rug and playing a constant game of catch-up.

Several political leaders from Europe and the United States also joined Friday's announcement. European Commission Vice President Vera Jourova said that while the agreement may not be comprehensive, it contains very impactful and positive elements. She also urged fellow politicians to take responsibility for not using AI tools deceptively and warned that AI-fueled disinformation could bring about the end of democracy, and not only in EU member states.

The agreement, struck at the annual security meeting in Germany, comes as more than 50 countries are set to hold national elections in 2024. Some, including Bangladesh, Taiwan, Pakistan and Indonesia, have already done so.

Attempts to use AI to interfere in elections have already begun, such as the recent AI robocalls that mimicked the voice of President Joe Biden in an effort to discourage people from voting in New Hampshire's primary election.

Just days before Slovakia's elections in September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the vote. Fact-checkers scrambled to debunk the recordings as they spread across social media.

Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

The accord calls on platforms to pay attention to context and in particular to safeguard educational, documentary, artistic, satirical and political expression.

It says the companies will focus on transparency with users about their policies and will work to educate the public about how to avoid falling for AI fakes.

Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out and the companies have faced pressure to do more.

That pressure is heightened in the United States, where Congress has yet to pass laws regulating AI in politics, leaving companies largely to govern themselves.

The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that prohibition does not cover audio deepfakes circulating on social media or appearing in campaign advertisements.

Many social media companies already have policies in place to deter deceptive posts about electoral processes, whether AI-generated or not. Meta says it removes misinformation about voting dates, locations, times and methods, as well as other false posts intended to interfere with someone's civic participation.

Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a positive step, but he would still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that don't prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the accord does not go far enough. She said AI companies should hold back technology such as hyper-realistic text-to-video generators until substantial and adequate safeguards are in place to help avert the many potential problems.

In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generating tool Stable Diffusion.

Notably absent is Midjourney, another popular AI image generator. The San Francisco-based startup did not immediately respond to a request for comment Friday.

The inclusion of X — not mentioned in an earlier announcement about the pending accord — was one of the surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”

On Friday, X CEO Linda Yaccarino said every citizen and company has a responsibility to safeguard free and fair elections.

She said X is dedicated to doing its part and collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency.

___

The Associated Press receives support from several private foundations to enhance its coverage of elections and democracy. The AP is solely responsible for all content.

Source: wral.com