Potentially preventing voters from casting their ballots, chatbots are providing false and deceitful information regarding the U.S. elections.
According to a recent report by artificial intelligence experts and a bipartisan group of election officials, popular chatbots are spreading inaccurate information that could potentially prevent eligible voters from participating in the ongoing presidential primaries taking place across the United States. The report was published on Tuesday and highlights concerns about the impact of this false information on the democratic process.
Fifteen states and one territory are set to hold primary elections for both the Democratic and Republican presidential nominees next week on Super Tuesday. As the date approaches, many are utilizing chatbots powered by artificial intelligence to obtain essential details about the voting process.
According to the report, chatbots like GPT-4 and Google’s Gemini have been taught using large amounts of text from the internet. As a result, they are equipped to provide artificial intelligence-generated responses. However, the report also revealed that these chatbots are at risk of directing voters to non-existent polling locations or giving implausible answers based on outdated information.
“The chatbots are not yet equipped to provide crucial and nuanced information regarding elections,” stated Seth Bluestein, a Republican city commissioner in Philadelphia. Bluestein, along with other election officials and AI researchers, recently put the chatbots to the test as part of a larger research project.
A reporter from the Associated Press witnessed a gathering at Columbia University where researchers tested the abilities of five language models by giving them prompts related to the election, such as locating the closest polling location. The responses were then evaluated and ranked by the group.
The report incorporates the findings from the workshop on the responsiveness of five different models to basic questions related to the democratic process. These models include OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral. The report suggests that all of these models had some level of failure in responding to the questions.
According to the report, workshop attendees found over half of the chatbots’ replies to be incorrect and deemed 40% of their responses as problematic. Some examples of harmful responses included spreading outdated and incorrect information that could potentially restrict voting rights.
As an illustration, when individuals inquired the chatbots about the location of their voting precinct in the ZIP code 19121, a predominantly Black community situated in the northwest region of Philadelphia, Google’s Gemini answered that it was not a viable option.
Gemini clarified that there is not a single voting precinct in the United States identified by the code 19121.
“Testers utilized a tailored software system to request information from the back-end interfaces of the top five chatbots. They also posed identical inquiries to all of the chatbots simultaneously in order to compare their responses.”
Although this may not be a completely accurate portrayal of how individuals interact with chatbots on their personal devices, utilizing chatbots’ APIs is a means of assessing the quality of responses they produce in real-world scenarios.
The use of similar methods has been employed by researchers to evaluate the effectiveness of chatbots in providing accurate information in various fields, such as healthcare. A recent study from Stanford University revealed that while large language models can generate responses to medical queries, they may not always be able to provide credible sources to support their answers.
OpenAI, which last month outlined a plan to prevent its tools from being used to spread election misinformation, said in response that the company would “keep evolving our approach as we learn more about how our tools are used,” but offered no specifics.
Anthropic, led by Trust and Safety Lead Alex Sanderford, will soon introduce a new measure to supply precise voting details. The reason being, their current system is not regularly updated, resulting in potential misinformation due to incorrect inputs from expansive language models.
Daniel Roberts, a representative of Meta, stated that the results are insignificant as they do not accurately reflect the typical experience with a chatbot. For developers utilizing Meta’s extensive language model in their technology through the API, there is a guide available that explains how to responsibly utilize the data to improve their own models. However, this guide does not include instructions on handling election-related content.
“Tulsee Doshi, Google’s head of product for responsible AI, stated that our API service is constantly being enhanced for better accuracy. It has been acknowledged by us and others in the industry that at times, the models used may not be completely accurate. We are consistently implementing technical upgrades and providing developer controls to resolve these concerns.”
Mistral did not promptly reply to inquiries for remarks on Tuesday.
In certain replies, it appeared that the bots were referencing old or incorrect sources, bringing attention to issues with the voting system that officials have been working to address for years. This also raises new worries about the potential of AI to exacerbate existing challenges to democratic processes.
In Nevada, four out of five chatbots did not accurately state that same-day voter registration has been available since 2019, leading voters to believe they would not be able to register before Election Day.
“I was most frightened because the information given was incorrect,” stated Francisco Aguilar, the Nevada Secretary of State who took part in the testing workshop last month.
The AI Democracy Projects produced the study and findings through a joint effort between Proof News, a recently established non-profit news organization headed by investigative journalist Julia Angwin, and the Science, Technology and Social Values Lab at the Institute for Advanced Study in Princeton, New Jersey. Alondra Nelson, the former acting director of the White House Office of Science and Technology Policy, leads the lab.
A recent study conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that the majority of adults in the United States are concerned about the potential impact of AI tools on this year’s elections. These tools have the ability to precisely target political audiences, produce large volumes of convincing messages, and create convincing fake images and videos, leading to an increase in the dissemination of false and deceptive information.
Efforts to use artificial intelligence to interfere in elections have already commenced, as seen with the use of AI robocalls imitating U.S. President Joe Biden’s voice in an attempt to dissuade individuals from voting in New Hampshire’s primary election last month.
Politicians have also utilized the technology by employing AI chatbots for voter communication and incorporating AI-generated images into advertisements.
However, in the United States, Congress has not yet enacted legislation to regulate the use of AI in politics, leaving the responsibility of self-governance to the tech companies that create chatbots.
Earlier this month, prominent tech corporations agreed to commit to taking steps to stop the creation of highly realistic AI-generated content such as images, audio, and video. This includes content that could mislead voters by providing false information on voting procedures. However, the pact has been deemed largely symbolic.
The report’s conclusions prompt inquiries into whether the creators of the chatbots are adhering to their promises to uphold the accuracy of information during this presidential election season.
Based on the report, Gemini, Llama 2, and Mixtral were identified as having the highest number of incorrect responses, with the Google chatbot having a nearly 66% error rate.
For instance, when questioned about the possibility of voting through text messaging in California, the Mixtral and Llama 2 models malfunctioned.
Meta’s Llama 2 explained that in California, individuals have the option to cast their vote through SMS (text messaging) with the use of a service known as Vote by Text. This service ensures a secure and simple process for submitting one’s vote, as it can be accessed from any mobile device.
Just to clarify, texting for voting is not permitted and there is no such thing as a Vote to Text service.
—-
You can reach out to AP’s worldwide investigative team at [email protected] or through the following link: https://www.ap.org/tips/
Source: wral.com