State-of-the-art artificial intelligence is stoking fears about dangers to humanity. Are tech leaders and politicians doing enough about it?

The emergence of advanced artificial intelligence systems such as ChatGPT has impressed the world with their skill at speechwriting, vacation planning and conversation. But concerns are growing about the potential dangers this frontier AI poses to humanity.

Numerous entities, including the British government, prominent researchers, and leading AI corporations, are sounding the alarm about the potential dangers of frontier AI and advocating for measures to safeguard individuals from its potential existential hazards.

On Wednesday, British Prime Minister Rishi Sunak will host a two-day conference centered on frontier AI, which is expected to gather around 100 officials from 28 nations. Among the attendees are U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen, and representatives from major U.S. AI companies such as OpenAI, Google’s DeepMind, and Anthropic.

The venue is Bletchley Park, a formerly top-secret base for World War II codebreakers whose ranks included Alan Turing. The historic estate is regarded as a birthplace of modern computing: the codebreakers there cracked Nazi Germany’s ciphers with the help of Colossus, widely considered the first programmable digital electronic computer.

In a recent speech, Sunak said that governments, not AI companies, bear the responsibility for protecting people from the technology’s risks. While acknowledging those risks, he stressed that the U.K. would not rush into regulation. Among the threats he highlighted was the use of AI to help create hazardous materials such as chemical or biological weapons.

Jeff Clune, a computer science professor at the University of British Columbia who specializes in AI and machine learning, stressed the need to address the issue proactively and urged immediate attention and action toward finding solutions.

Clune, along with a group of prominent researchers, recently published a paper urging governments to act to mitigate the risks posed by AI. It adds to a string of warnings from influential figures in the tech industry, including Elon Musk and OpenAI CEO Sam Altman, who have raised alarms about AI’s rapid advancement even as views differ on how best to address the risks and regulate the technology.

Clune said it is not certain that AI will wipe out humanity, but the possibility is significant enough that the issue must be confronted now, rather than after a catastrophe.

Sunak’s main objective is to reach a consensus on a statement addressing the risks associated with AI. He is also introducing a plan for an AI Safety Institute that will assess and trial new forms of the technology. Additionally, he is suggesting the establishment of a worldwide panel of experts, modeled after the U.N. climate change panel, to gain a better understanding of AI and produce a report on the current state of AI science.

The summit reflects the British government’s eagerness to host international gatherings as a way of showing that it has not become isolated and can still lead on the global stage, three years after leaving the European Union.

The United Kingdom is also staking out a position on a contentious policy issue that both the United States and the 27-nation European Union are tackling.

Brussels is putting the final touches on what’s poised to be the world’s first comprehensive AI regulations, while U.S. President Joe Biden signed a sweeping executive order Monday to guide the development of AI, building on voluntary commitments made by tech companies.

China, which along with the U.S. is one of the two world AI powers, has been invited to the summit, though Sunak couldn’t say with “100% certainty” that representatives from Beijing will attend.

The paper, authored by Clune and more than 20 other specialists, including AI luminaries Geoffrey Hinton and Yoshua Bengio, urges governments and AI companies to take concrete measures, such as devoting a third of their research and development resources to ensuring the safe and ethical use of advanced autonomous AI.

Frontier AI refers to the most advanced and powerful systems, those that push the boundaries of what AI can do. They are built on foundation models: algorithms trained on broad swaths of data gathered from the internet, which give them a wide, though imperfect, grasp of many topics.

According to Clune, frontier AI systems are risky precisely because their knowledge is incomplete: people tend to overestimate how much the systems actually know, and that can lead to problems.

The gathering has drawn criticism, however, for focusing on potential threats in the distant future.

Francine Bennett, interim director of the Ada Lovelace Institute, a London-based policy research organization focused on AI, said the summit’s focus is too narrow.

At a recent panel hosted by Chatham House, she warned that such a narrow focus risks overlooking the broader range of risks and safety concerns posed by the algorithms already woven into daily life.

Deb Raji, a researcher at the University of California, Berkeley, who has studied bias in algorithms, pointed to problems with systems already deployed in the U.K., including police facial recognition systems that misidentified Black people at significantly higher rates and an algorithm that made errors in grading high school exams.

In an open letter to Sunak, more than 100 civil society groups and experts called the summit a missed opportunity that sidelines the communities and workers most affected by AI.

Critics argue that the U.K. government’s summit objectives are insufficient, as they do not include regulating AI and instead only aim to establish “guardrails.”

Sunak’s call to not rush into regulation is reminiscent of “the messaging we hear from a lot of the corporate representatives in the U.S.,” Raji said. “And so I’m not surprised that it’s also making its way into what they might be saying to U.K. officials.”

Raji said tech companies should not be the ones writing the regulations, since they tend to underestimate or downplay the severity and scope of harms and are unlikely to back proposed laws that threaten their profits.

DeepMind and OpenAI did not respond to requests for comment. Anthropic confirmed that co-founders Dario Amodei and Jack Clark will attend.

In a recent blog post, Microsoft said it looked forward to the U.K.’s next steps in convening the summit, advancing AI safety testing and building greater international collaboration on AI regulation.

The government maintains that it will have a balanced representation of participants from government, academia, civil society, and business.

According to the Institute for Public Policy Research, a center-left think tank in the United Kingdom, it would be a grave error to allow the tech industry to self-regulate without government oversight.

According to Carsten Jung, a senior economist at the think tank, regulators and the public know little about how widely AI is already deployed across industries. Self-regulation failed for social media and finance companies, he said, and is unlikely to work for AI either.


This report was written by Associated Press journalist Jill Lawless.