OpenAI, the creator of ChatGPT, has announced a strategy to prevent the misuse of its tools for disseminating false information during upcoming national elections in over 50 countries.

The San Francisco-based company that specializes in artificial intelligence released a blog post this week outlining measures to prevent the misuse of its highly sought-after generative AI tools. These measures include a combination of existing policies and new initiatives designed to combat the creation of deceptive content such as fabricated text and convincing fake images.

The steps apply only to OpenAI, just one player in an expanding universe of companies developing advanced generative AI tools. The company, which announced the moves Monday, said it plans to “continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.”

The statement indicated that individuals will be prohibited from using the company’s technology to build chatbots that impersonate real candidates or governments, misrepresent the voting process, or discourage people from voting. Until further research can be done on the persuasive power of its technology, the company will not permit users to build applications for political campaigning or lobbying.

OpenAI also announced that, beginning early this year, it will apply a digital watermark to AI-generated images from its DALL-E image creator. The watermark will leave a permanent mark on the content, recording information about its origin and making it easier to determine whether an image found elsewhere on the internet was produced with the AI technology.

The company has also announced a collaboration with the National Association of Secretaries of State to direct ChatGPT users seeking logistical information about voting to reliable resources on the group’s nonpartisan website, CanIVote.org.

According to Mekela Panditharatne, a legal expert in the democracy program at the Brennan Center for Justice, OpenAI’s strategies are a beneficial move in the fight against false information during elections. However, their effectiveness will rely on their execution.

She questioned how extensive and thorough the filters for flagging questions about the election process would be, and whether some items would slip through undetected.

The ChatGPT and DALL-E models developed by OpenAI are among the most advanced generative AI tools currently available. However, there are numerous other companies with equally advanced technology that lack adequate measures to prevent the spread of election misinformation.

Although some social media platforms, such as YouTube and Meta’s services, have introduced AI labeling policies, it remains unclear whether they will be able to consistently catch violators.

According to Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, it would be beneficial for other generative AI companies to implement similar guidelines in order to ensure that practical rules are enforced across the industry.

If the industry does not adopt such policies voluntarily, laws regulating AI-generated misinformation in politics would have to be passed. In the United States, Congress has yet to pass legislation regulating the industry’s role in politics, despite some bipartisan support. Meanwhile, with federal legislation stalled, more than a third of U.S. states have passed or proposed bills addressing deepfakes in political campaigns.

Sam Altman, the CEO of OpenAI, said that even with his company’s precautions in place, he still feels uneasy.

Speaking Tuesday at a Bloomberg event during the World Economic Forum in Davos, Switzerland, he said he believes the company’s heightened sense of anxiety will drive it to strive for perfection and make every possible effort to get things right, adding that OpenAI will need to monitor the situation closely this year and maintain a rigorous feedback loop.

The Associated Press receives support from several private foundations to enhance its coverage of elections and democracy. The AP is solely responsible for all content.

Source: wral.com