The European Union has reached a deal on pioneering rules for artificial intelligence. How do they work, and how will they affect people around the world?

EU officials worked long hours last week to strike an agreement on groundbreaking rules governing the use of artificial intelligence across the 27-nation European Union.

The Artificial Intelligence Act is the latest set of rules created to regulate technology in Europe, and its influence is likely to be felt well beyond the bloc.

Here is a closer look at the AI rules:

The AI Act takes a “risk-based approach” to products and services that use artificial intelligence, regulating how AI is used rather than the technology itself. The law is meant to safeguard democracy, the rule of law and fundamental rights such as freedom of speech, while still encouraging investment and innovation.

The riskier an AI application is, the stricter the rules it faces. Low-risk applications, such as content recommendation systems or spam filters, would face only light obligations, like disclosing that they are powered by AI.

Systems with a high level of risk, such as medical devices, are subject to more stringent requirements that include using reliable data and providing easily understandable information for users.

Some uses of artificial intelligence are banned outright because they are deemed to pose an unacceptable risk. These include social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces.

Police are barred from using AI-powered remote “biometric identification” systems to scan people’s faces in public, except in cases involving serious crimes such as kidnapping or terrorism.

The European Parliament is expected to give the AI Act its final approval in early 2024, and the law will take effect two years later. Violators could face fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue.

The AI Act will apply directly to the EU’s roughly 450 million residents, but analysts say its impact could reach much further because of Brussels’ leading role in writing rules that serve as a global benchmark.

The EU has played that role before with earlier tech regulations, most notably its mandate for a common charging plug, which prompted Apple to abandon its proprietary Lightning cable.

As other countries weigh how to regulate AI, the EU’s comprehensive rules are poised to serve as a blueprint.

Anu Bradford, a Columbia Law School professor who is an expert on EU law and digital regulation, said the AI Act is the world’s first comprehensive and enforceable AI regulation. The law, she said, is expected to have a major impact in Europe and to add momentum to the growing global movement toward regulating AI.

The EU, she said, is uniquely positioned to lead the way and show the rest of the world that AI can be governed and its development overseen through democratic means.

Rights groups said even the law’s omissions could have global consequences.

Amnesty International said Brussels’ decision not to impose a full ban on live facial recognition essentially gives a green light to dystopian digital surveillance in all 27 EU countries, setting a damaging precedent for global standards.

The partial ban, the group said, is a hugely missed opportunity to halt and prevent immense damage to human rights, civil liberties and the rule of law, which are already under threat within the EU.

Amnesty also criticized lawmakers for failing to ban the export of AI technologies that can harm human rights, including social scoring systems of the kind China uses to reward obedience to the state through surveillance.

The world’s two leading AI powers, the United States and China, have also begun drafting their own rules.

In October, President Joe Biden of the United States signed a comprehensive executive order regarding AI. This is likely to be reinforced by laws and international accords.

The order requires leading AI developers to share safety test results and other information with the government before their tools are released to the public. Government agencies will set standards to ensure AI tools are safe and will issue guidance on labeling AI-generated content.

Biden’s order builds on earlier voluntary commitments from technology giants including Amazon, Google, Meta and Microsoft to ensure their products are safe before release.

China, meanwhile, has introduced “interim measures” for managing generative AI, which apply to text, images, audio, video and other content generated for people inside China.

President Xi Jinping has also proposed a Global AI Governance Initiative, calling for a transparent and equitable environment for AI development.

The explosive rise of OpenAI’s ChatGPT showed how quickly the technology was advancing and prompted European policymakers to update their proposal.

The AI Act includes provisions for chatbots and other so-called general purpose AI systems, which can perform a wide variety of tasks, from composing poetry to creating video to writing computer code.

Policymakers took a two-tiered approach, imposing basic transparency requirements on most general purpose systems. These include disclosing details about their data governance and, in a nod to the EU’s focus on environmental sustainability, reporting how much energy they used to train their models on vast troves of written works and images gathered from the internet.

They also must comply with EU copyright law and summarize the content they used for training.

Tougher rules are in store for the most advanced AI systems, built with the greatest computing power, because they pose “systemic risks” that officials want to prevent from spreading to services other software developers build on top of them.

___

AP writer Frank Bajak in Boston contributed to this report.
