The use of artificially generated voices in automated phone calls can mislead voters, and the FCC has recently outlawed this practice.

The Federal Communications Commission has banned robocalls that use AI-generated voices, declaring that deploying the technology to deceive people and mislead voters will not be tolerated.

The unanimous decision targets robocalls made with voice-cloning technology under the Telephone Consumer Protection Act, a 1991 law restricting junk calls that use artificial or prerecorded voice messages.

The move comes as New Hampshire officials advance their investigation into AI-generated robocalls that mimicked President Joe Biden's voice to discourage people from voting in the state's primary election last month.

The new rule empowers the FCC to fine companies that use AI voices in their calls and to block the service providers that carry them. It also lets call recipients file lawsuits and gives state attorneys general a new mechanism to crack down on violators.

Agency Chairwoman Jessica Rosenworcel said bad actors have been using AI-generated voices in robocalls to misinform voters, impersonate celebrities and extort family members.

“It seems like something from the far-off future, but this threat is already here,” Rosenworcel told The Associated Press on Wednesday as the commission was considering the regulations. “All of us could be on the receiving end of these faked calls, so that’s why we felt the time to act was now.”

Under the consumer protection law, telemarketers generally cannot use automated dialers or artificial or prerecorded voice messages to call cellphones, and they cannot make such calls to landlines without prior written consent from the recipient.

The FCC stated that under the new ruling, AI-generated voices used in robocalls will be considered “artificial” and subject to the same regulations.

Violators can face steep penalties, up to a maximum of $23,000 per call, the FCC said. The agency has previously used consumer protection law to clamp down on robocallers interfering with elections, including imposing a $5 million fine on two conservative operatives who falsely claimed that voting by mail in predominantly Black areas could lead to arrest, debt collection and forced vaccination.

The law also gives call recipients the right to take legal action and potentially recover up to $1,500 in damages for each unwanted call.

Josh Lawson, who leads work on AI and democracy at the Aspen Institute, said that even with the FCC's action, people should brace for a wave of personalized spam arriving by phone, text and social media.

“The real malicious individuals often ignore the consequences and are aware that their actions are illegal,” he stated. “We must acknowledge that there will always be bad actors who will stir up trouble and test the boundaries.”

Kathleen Carley, a Carnegie Mellon professor who specializes in computational disinformation, said that enforcement against AI voice abuse hinges on being able to determine definitively whether a piece of audio was AI-generated.

That is feasible now, she said, because the technology behind such calls has been around for a while, is well understood and makes common, detectable mistakes. But the technology will keep improving.

Sophisticated generative AI tools, from voice-cloning software to image generators, are already in use in elections in the United States and around the world.

Last year, as the U.S. presidential race got underway, several campaign ads used AI-generated audio or imagery, and some candidates experimented with AI chatbots to communicate with voters.

Lawmakers from both parties in Congress have tried to regulate AI's use in political campaigns, but no federal legislation has passed, even with the general election just nine months away.

Rep. Yvette Clarke, who has introduced a bill to govern the use of AI in politics, welcomed the FCC's decision but stressed that Congress still needs to act.

Clarke, a New York Democrat, said Democrats and Republicans can agree that AI-generated content used to deceive people is harmful, and that lawmakers must work together to give people the tools to distinguish what is real from what is fake.

The AI-generated robocalls sought to influence New Hampshire's Jan. 23 primary by mimicking Biden's voice, using his often-repeated phrase, "What a bunch of malarkey," and falsely suggesting that voting in the primary would prevent voters from casting ballots in November.

"The recent experience in New Hampshire has demonstrated the potential misuse of AI in election procedures," said New Hampshire Secretary of State David Scanlan, who said the technology's use needs regulation and oversight to keep deceptive information from undermining the integrity of elections.

The state's attorney general, John Formella, said Tuesday that investigators had traced the calls, which reached thousands of residents, mostly registered Democrats, to Life Corp., a Texas-based company owned by Walter Monk. He said another Texas company, Lingo Telecom, transmitted the calls.

The FCC has previously investigated both Lingo Telecom and Life Corp. over illegal robocalls.

In a statement Tuesday, Lingo Telecom said it cooperated immediately with the investigation into the fraudulent robocalls impersonating Biden and played no part in creating the calls' content.

A man who answered Life Corp.'s business phone line on Thursday declined to comment.


Associated Press writers Christina A. Cassidy in Washington and Frank Bajak in Boston contributed to this report.


The Associated Press receives support from several private foundations to enhance its coverage of elections and democracy. The AP is solely responsible for all content.