Europe’s world-leading rules for artificial intelligence are facing a make-or-break moment.

European Union negotiators are in a crucial final stretch of talks this week on the bloc’s artificial intelligence rules, hailed as a groundbreaking achievement. Hammering out the details has been complicated by the emergence of generative AI systems that can produce human-like work.

First proposed in 2019, the AI Act was expected to be the world’s first comprehensive set of AI regulations, cementing the bloc’s reputation as a global pacesetter in regulating the tech industry.

Progress has been slowed by a late-stage dispute over how to regulate the foundation models that underpin general purpose AI services such as OpenAI’s ChatGPT and Google’s Bard chatbot. Major tech companies are pushing back against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the most advanced AI systems those companies are developing.

The United States, United Kingdom, China and global coalitions such as the Group of 7 major democracies are all racing to draw up guidelines for the rapidly developing technology, an effort given urgency by warnings from researchers and rights groups about the dangers generative AI poses to humanity and to everyday life.

According to Nick Reiners, a tech policy analyst at Eurasia Group, the AI Act could set the leading global standard for AI regulation, yet it risks failing to win approval before next year’s European Parliament elections.

Reiners said a great deal remains to be settled in what negotiators hope will be the final round of talks on Wednesday. Even if they work late into the night as planned, they may still have to scramble to finish in the new year.

When the European Commission, the EU’s executive arm, unveiled its draft in 2021, it barely mentioned general purpose AI systems such as chatbots. The proposal’s core was to classify AI systems into four levels of risk, from minimal to unacceptable, in the manner of product safety legislation.

Brussels initially wanted to vet and approve the data used by the algorithms powering AI systems, much as it reviews the safety of cosmetics, cars and toys.

The rise of generative AI transformed the picture: the technology dazzled the public by composing music, generating images and writing essays that closely resemble human work, but it also stoked fears that it could be exploited for large-scale cyberattacks or the development of dangerous bioweapons.

Those dangers prompted EU lawmakers to strengthen the AI Act by extending it to foundation models. Such models, which include large language models, are trained on vast troves of written works and images scraped from the internet.

Foundation models are what give generative AI systems like ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks according to predetermined rules.

Recent turmoil at Microsoft-backed OpenAI, which built GPT-4, one of the best-known foundation models, has underscored for some European leaders the risks of allowing a few dominant AI companies too much control.

After CEO Sam Altman was abruptly fired and then rehired, board members who had raised concerns about AI’s safety risks departed, suggesting that AI corporate governance can fall victim to boardroom dynamics.

“After the recent chaos, it is evident that companies such as OpenAI prioritize their own profits rather than the well-being of the public,” stated European Commissioner Thierry Breton at an AI conference in France.

France, Germany and Italy, the EU’s three biggest economies, unexpectedly threw up a roadblock by opposing binding rules for those AI systems, circulating a position paper that called instead for self-regulation.

The move was widely seen as an effort to help homegrown generative AI players such as France’s Mistral AI and Germany’s Aleph Alpha.

According to Reiners, the motivation is to prevent U.S. companies from dominating the AI industry as they have with earlier waves of technology such as cloud computing, e-commerce and social media.

A group of prominent computer scientists published an open letter warning that weakening the AI Act in this way would be a “significant disappointment.” Meanwhile, Mistral executives traded barbs online with a researcher from a nonprofit backed by Elon Musk that works to mitigate the “existential threats” posed by AI.

In a recent speech in Brussels, Google’s chief legal officer, Kent Walker, said getting AI regulation right matters more than doing it first: the goal should be the best rules, not merely the first ones.

Regulating foundation models, which have a wide range of uses, is the thorniest issue for EU negotiators because it cuts against the law’s underlying logic of managing the risks of specific applications, said Iverna McGowan, director of the Europe office at the Center for Democracy and Technology.

By design, general purpose AI systems give little indication of how they will ultimately be used, McGowan said, yet rules are needed to ensure accountability when other companies build services on top of them.

Altman has proposed a U.S. or international agency that would license the most powerful AI systems. Earlier this year he suggested OpenAI might have to pull out of Europe if it could not comply with EU rules, before quickly walking the comment back.

Aleph Alpha said a “well-rounded approach” is needed and backed the EU’s risk-based framework, but argued that it is “not suitable” for foundation models, which call for more flexible, dynamic rules.

EU negotiators still have several contentious issues to resolve, including a proposal to ban real-time facial recognition in public entirely. Some countries want an exemption so law enforcement can use the technology to find missing children or track down terrorists, but rights groups worry such carve-outs would open the door to mass surveillance.

The EU’s three branches of government have what is seen as a final chance to strike a deal on Wednesday.

Even if they reach a deal, the bloc’s 705 lawmakers must still approve the final version in a vote that needs to happen by April, before they begin campaigning for EU-wide elections in June. The law would not take effect until after a transition period, typically two years.

If they miss the deadline, the legislation would be shelved until later next year, when new EU leaders, who may take different views on AI, assume office.

Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament’s AI Act negotiations, said in a panel discussion last week that Wednesday’s session is most likely the final one, though he did not rule out that more time might be needed for further talks.

His office said he was not available for an interview.

Speaking at the Brussels event, he said the negotiations remain very fluid, adding that observers would be kept in suspense until the very last moment.