Lawmakers have given final approval to Europe’s groundbreaking rules for artificial intelligence. Here is what happens next.
European Union lawmakers gave final approval on Wednesday to the 27-nation bloc’s artificial intelligence law, putting the world’s leading set of AI rules on track to take effect later this year.
Members of the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed. The AI Act is expected to serve as a model for other governments grappling with how to regulate the fast-developing technology.
Speaking before the vote, Romanian lawmaker Dragos Tudorache, a lead negotiator on the AI Act, said the legislation refocuses AI on people and puts humans in control of the technology, allowing it to be harnessed for new discoveries, economic growth, societal progress and human potential.
Big technology companies have generally supported the need to regulate AI, while lobbying to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a minor stir last year when he suggested the company could pull out of Europe if it could not comply with the AI Act, before backtracking to say there were no plans to leave.
Here is a look at the world’s first comprehensive set of AI rules.
Like many EU regulations, the AI Act was initially conceived as consumer safety legislation. It takes a “risk-based approach” to products and services that use artificial intelligence.
The riskier an AI application, the more scrutiny it faces. Most AI systems are expected to be low risk, such as content recommendation tools or spam filters. Companies offering them can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI, such as in medical devices or critical infrastructure like water and electrical networks, face tougher requirements, including using high-quality data and providing clear information to users.
Some AI uses are banned outright because they are deemed to pose an unacceptable risk, including social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces.
Early drafts of the law focused on AI systems carrying out narrow tasks, such as scanning resumes and job applications. The rapid rise of general-purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to adjust their approach.
They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.
Developers of general-purpose AI models, from European startups to OpenAI and Google, must now provide a detailed summary of the text, images, video and other data from the internet used to train the systems. They must also comply with EU copyright law.
AI-generated pictures, video or audio depicting real people, places or events must be labeled as artificially manipulated.
Extra scrutiny applies to the biggest and most powerful AI models that pose “systemic risks,” which include OpenAI’s GPT-4 and Google’s Gemini.
The European Union worries that these powerful AI systems could cause serious accidents or be misused for far-reaching cyberattacks. There is also concern that generative AI could spread harmful biases across many applications, affecting a wide range of people.
Companies that provide these systems must assess and mitigate the risks, report serious incidents such as malfunctions resulting in injury or damage, put cybersecurity measures in place, and disclose how much energy their models use.
Brussels first proposed AI regulations in 2019, taking a pioneering global role in ramping up scrutiny of emerging industries while other governments scramble to catch up.
In the United States, President Joe Biden signed a sweeping executive order on AI in October that is expected to be backed up by legislation and global agreements. Meanwhile, lawmakers in at least seven U.S. states are working on their own AI legislation.
Chinese President Xi Jinping has proposed a Global AI Governance Initiative to promote the fair and safe use of AI, and authorities have issued “interim measures” for managing generative AI, which apply to text, images, audio, video and other content generated for people inside China.
Other countries, from Brazil to Japan, as well as global bodies like the United Nations and the Group of Seven, are also moving to draw up guardrails for AI.
The AI Act is expected to officially become law in May or June, after a few final formalities, including approval from EU member countries. The rules will take effect in stages, with countries required to ban prohibited AI systems six months after the law enters into force.
Rules for general-purpose AI systems such as chatbots will start applying one year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.
On enforcement, each EU member state will set up its own AI watchdog, where citizens can file complaints about potential violations of the rules. In addition, a central AI Office in Brussels will enforce and supervise the law for general-purpose AI systems.
Violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue.
Italian lawmaker Brando Benifei, co-leader of Parliament’s work on the law, said this will not be Brussels’ last word on AI regulation. More AI-related legislation may follow after the summer elections, including in areas such as AI in the workplace, which the new law only partly covers.
Source: wral.com