Europe agrees on the world’s first comprehensive rules for artificial intelligence.

European Union negotiators reached a deal on Friday on the world’s first comprehensive set of artificial intelligence regulations, paving the way for legal oversight of a technology poised to reshape daily life — and one that has stirred fears of existential threats to humanity.

Negotiators from the European Parliament and the bloc’s 27 member countries hammered out a compromise on divisive issues, including generative AI and law enforcement’s use of facial recognition surveillance, to reach a tentative political agreement on the Artificial Intelligence Act.

“Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”

The deal came after marathon closed-door talks this week, with an initial session running 22 hours before a second round began Friday morning.

Officials had been under pressure to secure a political win for the flagship legislation. Civil society groups, however, gave the deal a cool reception, saying it did not go far enough to protect people from harm caused by AI systems, and noting that technical details still need to be ironed out in the coming weeks.

Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, said Friday’s political deal marks the beginning of important technical work on crucial details of the AI Act that are still missing.

The EU took an early lead in the global race to regulate AI when it unveiled the first draft of its rulebook in 2021. The recent boom in generative AI, however, sent European officials scrambling to update the proposal, which could serve as a blueprint for other countries.

The European Parliament still must vote on the act early next year, but with the deal done, that vote is a formality, Brando Benifei, an Italian lawmaker who co-led the negotiations, told The Associated Press on Friday night.

Asked whether the deal included everything he had wanted, he replied by text message that it was “excellent,” adding: “We did have to make some compromises, but overall it’s great.”

The law won’t take full effect until 2025 at the earliest, and it threatens stiff penalties for violations: up to 35 million euros ($38 million) or 7% of a company’s global turnover.

Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

The United States, the United Kingdom, China and international coalitions such as the Group of Seven major democracies have jumped in with their own proposals for governing AI, though they are still catching up to Europe.

Anu Bradford, a Columbia Law School professor who specializes in EU law and digital regulation, said the EU’s strong and comprehensive rules can set a powerful example for other governments weighing regulation. Other countries may not copy every provision, she said, but they are likely to emulate many aspects of it.

AI companies subject to the EU’s rules will probably end up extending some of those obligations to markets outside the continent, she said, because building separate models for each market would be inefficient.

The AI Act was originally designed to mitigate the risks posed by specific AI functions, ranked on a scale from low risk to unacceptable. But lawmakers pushed to expand it to cover foundation models: the advanced systems that underpin general-purpose AI services such as ChatGPT and Google’s Bard chatbot.

Foundation models loomed as one of the biggest sticking points for Europe. Negotiators nonetheless reached a tentative compromise early in the talks, despite opposition led by France, which had called instead for self-regulation to help homegrown European generative AI companies compete with big American rivals, including Microsoft, the backer of OpenAI.

These systems, also known as large language models, are trained on vast troves of written works and images scraped from the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training. Foundation models deemed to pose systemic risks will face extra scrutiny, including requirements to assess and mitigate those risks, report serious incidents, put cybersecurity measures in place and disclose their energy efficiency.

Researchers have warned that powerful foundation models, built by a handful of big technology companies, could be used to supercharge online disinformation and manipulation, cyberattacks, or the creation of bioweapons.

Advocacy groups also warn that the lack of transparency about the data used to train these models poses risks to daily life, because the models serve as basic structures for software developers building AI-powered services.

The toughest issue turned out to be the use of AI in facial recognition surveillance systems; negotiators reached a compromise only after extensive bargaining.

The European Parliament had initially pushed for a full ban on face scanning and other remote biometric identification systems in public, citing privacy concerns. But member country governments negotiated exemptions allowing law enforcement to use the systems in cases of serious crimes such as child sexual exploitation and terrorist attacks.

Human rights organizations expressed concern about the exemptions and other significant gaps in the AI Act, including the lack of safeguards for AI systems used in migration and border control, and the option for developers to opt out of having their systems classified as high risk.

Whatever victories were won in the final round of negotiations, huge flaws will remain in the final text, said Daniel Leufer, a senior policy analyst at the digital rights group Access Now.

___

Matt O’Brien, a technology journalist from Providence, Rhode Island, contributed to this report.

Source: wral.com