Velocity Media Blog

AI Regulation: The EU's AI Risk-Based Framework

Written by Shawn Greyling | Apr 3, 2024 11:23:19 AM

In an age when artificial intelligence (AI) is transforming businesses and daily life, the EU's proactive AI regulation is significant. The European Parliament and Council reached a political agreement on the EU AI Act on December 8, 2023, marking a milestone in AI regulation worldwide. President Ursula von der Leyen called the Act a "global first," establishing the EU as a leader in AI regulation. The EU's comprehensive legal framework for AI systems promotes investment and innovation in AI while ensuring safety and respect for fundamental rights and values.

Covered in this article

A Risk-Based Regulatory Approach
Scope and Application
Embracing a Risk-Based Approach
Safeguards for General-Purpose AI Models
Enforcement and Penalties
What's Next?
FAQs on the EU's AI Risk-Based Framework

A Risk-Based Regulatory Approach

Central to the EU AI Act is a risk-based regulatory framework, categorising AI systems into four risk levels, each with specific obligations for providers and deployers. This approach balances the need for safety and innovation, ensuring legal certainty for AI investments and minimising consumer risks and compliance costs for AI providers. The Act's focus on high-risk AI systems necessitates stringent compliance measures, including testing, documentation, and transparency, addressing concerns raised during the intensive trilogue negotiations. Issues such as biometric identification, enforcement mechanisms, and the regulation of general-purpose AI models were among the most debated, reflecting the Act's ambitious scope to cover a wide range of AI applications.

Scope and Application

The EU AI Act's final text, yet to be published, is anticipated to clarify the precise definitions and scope of AI systems, aligning with the OECD's approach. It is expected primarily to regulate providers and deployers of AI systems, excluding military or defence applications and AI used solely for research and innovation. This delineation underscores the EU's intent to foster a safe and innovative AI ecosystem within a civilian context.

Embracing a Risk-Based Approach

At its core, the EU AI Act embodies a risk-based approach to AI regulation, categorising AI systems based on their potential risks and use cases. This method emphasises the regulation of unacceptable-risk and high-risk AI systems, focusing on applications that pose clear threats to fundamental rights or public safety. For example, the Act proposes to ban AI systems that could manipulate behaviour, exploit vulnerabilities, or facilitate indiscriminate surveillance. Conversely, AI systems posing limited risks would face fewer regulatory hurdles, encouraging innovation while ensuring public trust in AI technologies.

Safeguards for General-Purpose AI Models

The Act also addresses the challenges posed by general-purpose AI models, proposing a tiered approach to their regulation. This includes transparency requirements, adherence to copyright laws, and, for models deemed to pose systemic risks, more rigorous obligations such as conducting risk assessments and ensuring cybersecurity. This nuanced approach reflects the EU's commitment to balancing innovation with safety and accountability.

Enforcement and Penalties

Enforcement of the EU AI Act will rely on national market surveillance authorities, complemented by a new European AI Office within the European Commission, to ensure consistent application across member states. The Act proposes a range of fines, scaled according to the severity of violations, with provisions for more proportionate penalties for smaller companies and startups, highlighting the EU's intent to foster a fair and supportive environment for AI development.

The infographic below from Holistic AI presents the penalties under the EU AI Act, highlighting fines for non-compliance with AI regulations. Operators can face fines up to €35 million or 7% of global turnover. Lesser penalties include up to €15 million or 3% of turnover for failing to meet specific obligations, and up to €7.5 million or 1% of turnover for providing incorrect or misleading information. Significant violations, such as infringement of the GPAI provisions or not cooperating with the Commission's evaluations, also attract fines up to €15 million or 3% of turnover. For EU institutions, the fines range up to €1.5 million for prohibited AI practices and up to €750,000 for any non-compliance. This penalty framework underscores the strict enforcement measures the EU will apply to ensure ethical AI use, as detailed by Holistic AI.
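To make the "fixed amount or share of turnover, whichever is higher" arithmetic concrete, here is a minimal sketch of the company-facing fine caps described above. The tier names and function are illustrative, not part of the Act's text, and the figures simply restate those in this article:

```python
# Illustrative sketch of the EU AI Act's tiered fine caps for companies.
# For each tier, the cap is the HIGHER of a fixed amount and a percentage
# of total worldwide annual turnover (figures as summarised in this article).
# Note: the Act provides more proportionate caps for SMEs and startups,
# which this simple sketch does not model.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 7),   # up to €35M or 7% of turnover
    "other_obligations": (15_000_000, 3),      # up to €15M or 3% of turnover
    "incorrect_information": (7_500_000, 1),   # up to €7.5M or 1% of turnover
}

def max_fine(tier: str, global_turnover_eur: int) -> int:
    """Return the maximum fine cap in euros for a violation tier:
    whichever is higher of the fixed cap and the turnover percentage."""
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, global_turnover_eur * pct // 100)

# A company with €2 billion turnover: 7% (€140M) exceeds the €35M floor.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000
# A company with €100 million turnover: 1% (€1M) is below the €7.5M floor.
print(max_fine("incorrect_information", 100_000_000))   # 7500000
```

The percentage-based cap is what makes the regime bite for large providers: for any company whose turnover exceeds €500 million, the 7% tier already surpasses the €35 million fixed amount.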

What's Next?

With the political agreement in place, the EU AI Act is set to be officially adopted and enter into force, with most provisions becoming applicable after a two-year compliance period. This transition period will be crucial for establishing oversight structures and issuing implementation guidance. The Act's ambitious goal to ensure that AI in the EU is safe, transparent, and respectful of fundamental rights will likely influence global AI regulatory efforts, setting a benchmark for responsible AI development.

In conclusion, the EU AI Act represents a significant step forward in the governance of AI, offering a model for balancing innovation with ethical and safety considerations. As this landmark legislation moves towards implementation, its impact on the global AI landscape and its effectiveness in safeguarding public interests while fostering technological advancement will be closely watched.

FAQs on the EU's AI Risk-Based Framework

1. What is the EU AI Act?

The EU AI Act is a landmark piece of legislation by the European Union, aimed at regulating artificial intelligence systems. It establishes a comprehensive legal framework to ensure AI systems within the EU are safe, respect fundamental rights, and align with EU values, while also promoting investment and innovation in AI.

2. Why is the EU AI Act considered a "global first"?

The EU AI Act is termed a "global first" by European Commission President Ursula von der Leyen because it is the first legislation of its kind to provide a clear set of rules for AI usage on a continental scale. It positions the EU as a frontrunner in establishing regulatory norms for AI, setting a precedent for other regions to follow.

3. What is a risk-based approach in AI regulation?

A risk-based approach categorises AI systems into different risk levels, each with corresponding regulatory obligations. This method tailors regulatory measures to the specific risks posed by various AI applications, ensuring that higher-risk AI systems undergo stricter scrutiny and compliance requirements.

4. How does the EU AI Act classify AI systems?

The EU AI Act classifies AI systems into four risk categories: unacceptable risk, high-risk, limited risk, and minimal/no risk. Each category is subject to different regulatory requirements, with a particular focus on unacceptable-risk and high-risk AI systems due to their potential impact on safety and fundamental rights.

5. What are the prohibitions under the EU AI Act?

The Act bans certain AI practices considered a clear threat to fundamental rights, including indiscriminate surveillance, manipulation of human behaviour, exploitation of vulnerabilities, and certain applications of predictive policing, among others.

6. How will the EU AI Act be enforced?

Enforcement will primarily be the responsibility of national market surveillance authorities in each EU Member State. Additionally, a new European AI Office within the European Commission will oversee administrative, standard-setting, and enforcement tasks to ensure coordination at the European level.

7. What are the penalties for violating the EU AI Act?

The Act proposes fines scaled to the severity of the violation and the size of the company, ranging from up to €7.5 million or 1% of a company's total worldwide annual turnover for supplying incorrect or misleading information, to up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for engaging in prohibited AI practices.

8. When will the EU AI Act come into effect?

Following its official adoption by the EU Parliament and Council, the EU AI Act will enter into force, with most provisions becoming applicable after a two-year transition period designed to allow stakeholders to comply with the new regulations. However, the prohibitions will apply six months after entry into force, and obligations for general-purpose AI models will become effective after 12 months.

9. How will the EU AI Act impact AI innovation?

The EU AI Act aims to foster a secure and trustworthy environment for AI innovation by providing clear regulatory guidelines. While it imposes stringent requirements on high-risk AI systems, it also encourages the development and deployment of AI applications that are safe and aligned with EU standards, potentially leading to more sustainable and ethical AI innovation.

10. Can the classification of AI systems under the EU AI Act change over time?

Yes, the classification of AI systems as high-risk can be updated by the European Commission based on evolving technologies, emerging risks, and the impact on safety and fundamental rights. This adaptive approach ensures that the regulatory framework remains relevant and effective in addressing new challenges in AI development.