Can European leaders come to an agreement on AI regulation and maintain their position as global leaders?

The breakneck rise of generative artificial intelligence has sent governments around the world scrambling to regulate the emerging technology. That same rush now threatens to upend the European Union’s push to approve the world’s first comprehensive set of artificial intelligence rules.

The EU’s Artificial Intelligence Act, which would apply across the bloc’s 27 member states, has been hailed as a groundbreaking rulebook. But with time running out, it is uncertain whether negotiators from the EU’s three institutions, the European Parliament, the Council, and the European Commission, will be able to reach a deal on Wednesday in what officials hope will be the final round of negotiations.

Europe has spent years working to draw up rules for AI, but the process has been thrown into disarray by the recent emergence of generative AI systems such as OpenAI’s ChatGPT, which have dazzled the world with their ability to produce humanlike work while raising fears about the risks they pose.

Those concerns have driven the United States, the United Kingdom, China, and international coalitions such as the Group of Seven major democracies into the race to regulate the fast-developing technology, though they are still catching up to Europe.

Beyond regulating generative artificial intelligence, EU negotiators must resolve a long list of other contentious issues, including limits on AI-powered facial recognition and other surveillance technologies that have stirred privacy concerns.

According to Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, a political agreement among EU lawmakers, member state representatives, and the European Commission is likely, given the strong shared interest in delivering a win on a flagship legislative initiative.

However, he cautioned that the issues under discussion are significant and critical, so the possibility of failing to reach a deal cannot be ruled out.

At a press conference in Brussels on Tuesday, Carme Artigas, Spain’s secretary of state for digitalization and artificial intelligence (Spain currently holds the EU’s rotating presidency), said roughly 85% of the bill’s technical wording had already been agreed.

If no deal is reached in the current round of talks, set to begin Wednesday afternoon and expected to run late into the night, negotiators will be forced to pick up again next year. That raises the odds that the legislation could be delayed until after EU-wide elections in June, or take a different direction as new leaders take office.

A major sticking point is foundation models, the advanced systems that underpin general-purpose AI services such as OpenAI’s ChatGPT and Google’s Bard chatbot.

These systems, often called large language models, are trained on vast troves of text and images scraped from the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

The AI Act was originally designed as product-safety legislation, in the vein of existing EU rules for cosmetics, cars, and toys. It sorts AI applications into four risk tiers, from minimal or no risk (such as video games and spam filters) up to unacceptable risk (such as social scoring systems that judge people based on their behavior).

The wave of general-purpose AI systems released since the legislation’s first draft in 2021 spurred European lawmakers to beef up the proposal to cover foundation models as well.

Avaaz, a nonprofit advocacy group, has warned about the risks of relying on powerful foundation models built by a handful of big tech companies: they could supercharge online disinformation and manipulation, enable cyberattacks, and even aid the creation of bioweapons. And because they serve as basic building blocks for AI-powered services, flaws in the models can propagate into flawed end products that downstream developers cannot easily fix.

France, Germany, and Italy have pushed back against the update to the legislation, calling instead for self-regulation. The shift is widely seen as an effort to help homegrown generative AI players, such as France’s Mistral AI and Germany’s Aleph Alpha, compete with big American tech companies like OpenAI.

Italian MEP Brando Benifei, who is co-leading negotiations for the European Parliament, expressed confidence in reaching a resolution with member states.

He said negotiators had made progress on foundation models, but that agreement on facial recognition systems remained more elusive.


Matt O’Brien, a technology writer for AP, contributed from Providence, Rhode Island.