EU policymakers agreed on Friday to a new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to regulate the use of a fast-growing technology with wide-ranging social and economic implications.
The law, known as the AI Act, sets a new global benchmark for countries looking to harness the technology’s potential benefits while trying to protect against potential risks such as automating jobs, spreading misinformation online and endangering national security. The legislation still needs to pass a few final steps to gain approval, but the political agreement means its main parameters have been set.
European policymakers focused on risky uses of AI by companies and governments, including law enforcement and the operation of critical services such as water and energy. Developers of very large general-purpose AI systems, such as those running the ChatGPT chatbot, will face new transparency requirements. According to EU officials and earlier drafts of the law, chatbots and software that create manipulated images such as “deepfakes” must make it clear that what people see is created by AI.
The use of facial recognition software by police and governments is restricted outside of certain safety and national security exemptions. Companies that violate the regulations face fines of up to 7 percent of global sales.
“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter,” Thierry Breton, the European Commissioner who helped negotiate the deal, said in a statement.
While the law has been hailed as a regulatory breakthrough, questions remain about how effective it will be. Many aspects of the policy are not expected to take effect for another 12 to 24 months, a considerable time frame for AI development. Until the last minute of the negotiations, policymakers and countries struggled with its language and how to balance fostering innovation with the need to protect against potential harm.
The agreement reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and stretched into Thursday. A final deal was not immediately made public, as negotiations were expected to continue behind the scenes to finalize technical details, which could delay final passage. Votes must still be held in the European Parliament and the Council of the European Union, which includes representatives of the bloc’s 27 countries.
The push to regulate AI gained urgency after the release of ChatGPT last year, which created a global stir by showcasing AI’s advancing capabilities. In the United States, the Biden administration recently issued an executive order focused in part on the national security implications of AI. Britain, Japan and other countries have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.
At stake are trillions of dollars in estimated value, as AI is predicted to reshape the global economy. “Technological supremacy precedes economic supremacy and political supremacy,” Jean-Noël Barrot, France’s digital minister, said this week.
Europe has been at the forefront of regulating AI, beginning work on what would become the AI Act in 2018. In recent years, EU leaders have tried to bring a new level of oversight to technology, akin to regulation of the health care or banking industries. The bloc has already enacted far-reaching laws on data privacy, competition and content moderation.
The first draft of the AI Act was published in 2021. But policymakers found themselves rewriting the law as technological advances emerged. The initial version made no mention of general-purpose AI models like those that power ChatGPT.
Policymakers agreed on what they called a “risk-based approach” to regulating AI, in which the applications with the greatest potential for harm face the most oversight and restrictions. Companies that make AI tools posing the most potential harm to individuals and society, such as in hiring and education, must provide regulators with proof of risk assessments, details of what data was used to train the systems, and assurances that the software does not cause harm such as perpetuating racial biases. Human oversight will also be required in building and deploying the systems.
Some practices, such as indiscriminately scraping images from the Internet to create a facial recognition database, will be outright prohibited.
The debate in the European Union was contentious, a sign of how AI has confounded lawmakers. EU officials were divided over how deeply to regulate the newer AI systems, fearing that too much oversight would handicap European start-ups trying to catch up to American companies such as Google and OpenAI.
The law added requirements for makers of the largest AI models to disclose information about how their systems work and to evaluate for “systemic risk,” Mr. Breton said.
The new regulations will be watched closely worldwide. They will affect not only major AI developers such as Google, Meta, Microsoft and OpenAI, but also other businesses expected to use the technology in sectors such as education, healthcare and banking. Governments are paying more attention to AI in criminal justice and public welfare provision.
How well enforcement will work remains unclear. The AI Act will involve regulators across 27 countries and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the new rules in court. Previous EU legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for uneven enforcement.
“The EU’s regulatory prowess is under question,” said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties who has advised European lawmakers on the AI Act. “Without strong enforcement, this deal will have no meaning.”