On December 8, 2023, after extensive negotiations, the European Parliament and Council reached a political agreement on the European Union's Artificial Intelligence (AI) Act.
European Commission President Ursula von der Leyen called it a "global first," making the EU the first continent to set clear rules for the use of AI. This historic act positions the EU as a leader in AI regulation, creating a comprehensive framework to ensure AI systems are safe and respect fundamental rights and EU values.
Here are some of the salient provisions of this newly agreed regulation on artificial intelligence:
Key Issues
The EU AI Act aims to ensure that AI systems in the EU are safe and to provide clear rules for investment and innovation in AI. It focuses on minimizing risks to consumers and reducing compliance costs for providers. During the recent negotiations between the EU Commission, Parliament, and Council, there were heated discussions about the list of prohibited and high-risk AI systems.
Here are some of the key issues this act aims to address:
The AI Act categorizes AI systems into four risk classes with different rules.
Some AI systems are banned, while high-risk AI systems have specific obligations for providers and users.
Obligations include testing, documentation, transparency, and notification requirements.
Debates focused on the classification of, and exceptions for, biometric identification systems.
Contentious issues include the enforcement structure and mechanisms of the EU AI Act.
The regulation of general-purpose AI (GPAI) models, a topic introduced into the negotiations in June 2023, caused intense debate.
Concerns were raised about excessive regulation negatively impacting innovation and European companies.
EU Risk-Based Framework
The EU AI Act is based on a risk-based approach, categorizing AI systems into four risk levels:
Unacceptable Risk
High Risk
Limited Risk
Minimal or No Risk
Prohibitions under the Regulation on Artificial Intelligence
The Act's main focus is expected to be on unacceptable-risk and high-risk AI systems, i.e., systems that pose a clear threat to fundamental rights or run counter to EU values. Under the political agreement, the Act bans AI systems falling into the Unacceptable Risk category, including:
Biometric categorization systems that classify individuals based on sensitive characteristics, such as political or religious beliefs, sexual orientation, and race.
Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Emotion recognition in workplaces and educational institutions.
Social scoring: rating individuals based on their behavior or personal characteristics.
AI systems that manipulate human behavior to circumvent free will.
AI that exploits the vulnerabilities of individuals due to their age, disability, or social or economic situation.
Specific uses of predictive policing.
Biometric identification systems are generally prohibited, with exceptions for law enforcement use in publicly accessible spaces, subject to prior judicial authorization and notification of the data protection authorities. "Post" (retrospective) remote biometric identification is allowed for targeted searches of individuals convicted of or suspected of serious crimes. Real-time systems are subject to strict conditions and may be used only for targeted searches of victims, the prevention of a specific terrorist threat, or the identification of suspects in specific serious crimes.
Certain AI systems with significant potential for harm are classified as High Risk, including those used in critical infrastructure, medical devices, systems determining access to education, law enforcement tools, and biometric identification systems. High-risk AI is subject to mandatory obligations covering risk mitigation, data governance, documentation, human oversight, transparency, accuracy, and cybersecurity.
Conformity assessments and fundamental rights impact assessments are required. Citizens have the right to lodge complaints about decisions made by high-risk AI systems. Regulatory sandboxes and real-world testing are allowed to support AI system development.
AI systems classified as Limited Risk, including chatbots, certain emotion recognition and biometric categorization systems, and those generating deepfakes, have minimal transparency obligations, such as informing users and marking synthetic content.
Other AI systems categorized as Minimal or No Risk, such as recommender systems and spam filters, are freely allowed under the EU AI Act, with encouragement for voluntary codes of conduct.
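For readers who prefer a compact view, the tiered structure described above can be sketched as a small lookup table. This is an illustrative summary of the provisions, not a normative mapping; the type and names below are our own.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, as summarized above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal_or_none"

# Illustrative condensation of the regulatory treatment per tier;
# a reading aid for the provisions above, not legal text.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "banned (e.g., social scoring, manipulative AI)",
    RiskTier.HIGH: ("allowed with obligations: risk mitigation, data "
                    "governance, documentation, human oversight, "
                    "transparency, accuracy, cybersecurity, plus conformity "
                    "and fundamental rights impact assessments"),
    RiskTier.LIMITED: ("light transparency duties: inform users, mark "
                       "synthetic content (e.g., chatbots, deepfakes)"),
    RiskTier.MINIMAL: ("freely allowed; voluntary codes of conduct "
                       "encouraged (e.g., spam filters, recommenders)"),
}

for tier in RiskTier:
    print(f"{tier.value}: {TREATMENT[tier]}")
```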
Enforcement and Penalties
The EU AI Act is expected to be enforced mainly by national authorities in each EU country. A new body, the European AI Office within the EU Commission, will handle administrative, standard-setting, and enforcement tasks, particularly regarding the new rules on GPAI models, to ensure coordination across Europe. The European AI Board, made up of representatives from the member states, will continue to serve as a platform for coordination and advice to the Commission.
Fines for breaking the EU AI Act will vary based on the type of AI system, company size, and the seriousness of the violation.
Under the EU AI Act, companies can be fined for supplying incorrect information: 7.5 million euros or 1.5% of the company's total worldwide annual turnover, whichever is higher.
Violations of the EU AI Act's obligations can result in fines of 15 million euros or 3% of the company's total worldwide annual turnover, whichever is higher.
For breaches involving banned AI applications, companies may be fined 35 million euros or 7% of their total worldwide annual turnover, whichever is higher.
As negotiated in the trilogue discussions, the EU AI Act provides more proportionate caps on fines for smaller companies and startups.
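To illustrate how the "whichever is higher" rule works in practice, here is a minimal sketch of the arithmetic, assuming the applicable fine is simply the maximum of the flat cap and the turnover share for the relevant tier. The function and tier names are ours, and the sketch ignores the proportionate caps for smaller companies mentioned above.

```python
def applicable_fine(tier: str, annual_turnover_eur: float) -> float:
    """Maximum fine under the 'whichever is higher' rule described above.

    Tiers (flat cap in EUR, share of total worldwide annual turnover):
      false_information   -> EUR 7.5M  or 1.5%
      obligation_breach   -> EUR 15M   or 3%
      prohibited_practice -> EUR 35M   or 7%
    """
    tiers = {
        "false_information": (7_500_000, 0.015),
        "obligation_breach": (15_000_000, 0.03),
        "prohibited_practice": (35_000_000, 0.07),
    }
    flat_cap, turnover_share = tiers[tier]
    # The applicable fine is whichever amount is higher.
    return max(flat_cap, turnover_share * annual_turnover_eur)

# Example: a provider with EUR 2 billion in worldwide annual turnover that
# deploys a banned AI application faces 7% of turnover (EUR 140M), since
# that exceeds the EUR 35M flat cap.
print(applicable_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```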
Additionally, individuals or entities can report non-compliance to the relevant market surveillance authority.
The new rules will be closely watched worldwide and will affect not only major AI developers like Google, Meta, Microsoft, and OpenAI but also other businesses planning to use the technology in education, health care, and financial planning. Governments are also increasingly using AI in criminal justice and the allocation of public benefits.
How these rules will be enforced remains uncertain. The AI Act involves regulators across 27 nations and requires hiring new experts at a time when government budgets are tight. Legal challenges are expected as companies test the new rules in court. Previous EU legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for inconsistent enforcement.
Conclusion
The EU AI Act, having secured political agreement, is set to be officially adopted by the EU Parliament and Council.
The majority of its rules will take effect after a two-year grace period, with the prohibitions applying after six months and the obligations for GPAI models after one year.
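To make the phased timeline concrete, the milestones can be derived from the date the Act enters into force. The entry-into-force date below is hypothetical: the actual date depends on formal adoption and publication in the Official Journal.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 6, 1)

milestones = {
    "prohibitions apply": add_months(entry_into_force, 6),
    "GPAI obligations apply": add_months(entry_into_force, 12),
    "most remaining rules apply": add_months(entry_into_force, 24),
}
for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```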
AI developers can voluntarily commit to key provisions by joining the AI Pact initiated by the European Commission before the deadlines.
As the EU aims to lead responsible AI development, the effectiveness of the AI Act will likely be assessed against approaches in other AI-leading nations and international initiatives at the G7, G20, OECD, Council of Europe, and the UN.
The debate over AI in the European Union was heated, reflecting how divided lawmakers were. EU officials disagreed on how much regulation to impose on newer AI systems, concerned that heavy-handed rules would hinder European start-ups trying to compete with American companies such as Google and OpenAI.