Biden wants to move fast on AI safeguards and has signed an executive order to address his concerns.


On Monday, President Joe Biden signed an executive order on artificial intelligence that seeks to balance the interests of cutting-edge technology companies against national security and consumer rights, creating early guardrails that could be strengthened later by legislation and international agreements.

Before signing the order, Biden said AI is driving transformation at remarkable speed and carries both enormous promise and serious risks.

AI is all around us, Biden said, and to seize its promise while avoiding its risks, the technology needs to be governed.

The order is an initial step meant to ensure that AI is trustworthy and helpful rather than deceptive and destructive. It will likely need to be reinforced by congressional action to further shape AI's development so that companies can profit without putting public safety at risk.

Using the Defense Production Act, the order requires leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure that AI tools are safe and secure before their public release.

The Department of Commerce is to issue guidance on labeling and watermarking AI-generated content to help distinguish it from authentic human interactions. The sweeping order also touches on privacy, civil rights, consumer protections, scientific research and worker rights.

White House chief of staff Jeff Zients recalled that Biden gave his staff one directive as the order was being drafted: move quickly.

The Democratic president said the government could not operate on a typical timeline, according to Zients; it had to move as fast as, if not faster than, the technology itself.

In Biden's view, the government was too slow to address the risks of social media, and U.S. youth are now grappling with related mental health problems. AI could help accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, but it could also warp basic notions of truth with fake images, deepen racial and social inequities and provide a tool for scammers and criminals.

With the European Union close to passing a comprehensive law to rein in AI harms and Congress still in the early stages of debating safeguards, the Biden administration is using the levers it has, said Alexandra Reeve Givens, president of the Center for Democracy & Technology: issuing guidance and standards to shape private-sector practices and leading by example through the federal government's own use of AI.

The order builds on voluntary commitments already secured from technology companies. It is part of a broader strategy that administration officials say will also include congressional legislation and international diplomacy, a sign of the disruption unleashed by the emergence of generative AI tools such as ChatGPT that can produce text, images and audio.

The order's directives are to be carried out and fulfilled over timelines ranging from 90 to 365 days.

Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order. A meeting scheduled for 30 minutes stretched to 70, even as other pressing matters competed for his attention: the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

During the months of meetings that preceded the order, Biden showed a keen interest in the technology. His science advisory council devoted two meetings to AI, his Cabinet discussed it at two of its own, and the president pressed tech executives and civil society advocates about its capabilities at several gatherings.

Deputy White House chief of staff Bruce Reed said in an interview that Biden was as impressed as he was alarmed by what AI can do. The president saw fake AI-generated images of himself and of his dog, along with badly written AI poetry, and witnessed the unsettling technology of voice cloning, which can take three seconds of a person's voice and turn it into an entire fabricated conversation.

Biden couldn’t escape AI even during a weekend getaway at Camp David, where he relaxed by watching the Tom Cruise film “Mission: Impossible – Dead Reckoning Part One.” The movie’s villain is a sentient, rogue AI known as “the Entity” that sinks a submarine and kills its crew in the opening minutes.

Reed, who watched the movie with the president, said that whatever worries Biden already had about AI’s dangers were only heightened by what he saw on screen.

Governments around the world have been racing to put safeguards in place, some of them tougher than Biden’s. After more than two years of work, the European Union is finalizing comprehensive rules that impose strict limits on the highest-risk applications of AI. China, a major U.S. rival in AI, has also issued regulations.

U.K. Prime Minister Rishi Sunak hopes to position Britain as a hub for AI safety at a summit beginning Wednesday that Vice President Kamala Harris is scheduled to attend. Officials from the Group of Seven (G7), meanwhile, have agreed on a set of AI safety principles and a voluntary code of conduct for developers.

The United States, and the West Coast in particular, is home to many of the leading developers of advanced AI, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, the maker of ChatGPT. The White House drew on that industry presence earlier this year when it secured commitments from those companies to build safety measures into their new AI models.

The White House also faced pressure from Democratic allies, including labor and civil rights groups, to make sure its policies addressed AI’s potential harms.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement’s use of AI tools, including at U.S. borders.

Automated tools such as facial recognition and drones have already caused significant problems in those settings, Venkatasubramanian said. Studies have found that facial recognition technology performs unevenly across racial groups and has been tied to wrongful arrests.

The EU’s forthcoming AI law is set to ban real-time facial recognition in public, while Biden’s order asks only that federal agencies review their use of AI in the criminal justice system, language that may fall short of what some activists wanted.

ReNika Moore, director of the ACLU’s racial justice program, attended Monday’s signing and said her group had worked with the White House to press for accountability from the tech industry and tech billionaires and to ensure that algorithmic tools work for everyone, not just a select few.

Moore praised the order’s attention to harms such as workplace and housing discrimination but expressed disappointment that the administration did not go further to protect people from law enforcement’s growing use of AI.

Source: wral.com