Enforcement of the European Union AI Act

    The demand for ethical safeguards and transparency in the use of AI technologies has surged. Recognising this need, the European Union (EU) has taken a significant step forward by announcing a provisional agreement on the Artificial Intelligence Act (AI Act) in December 2023, with the legislation expected to come into effect between May and July 2024.

    Covered in this article

    Understanding the AI Act
    Scope and Application
    High-Risk AI Systems
    Transparency and General-Purpose AI Models
    Implications for Business
    Velocity's Role in Navigating the AI Act
    Navigating the Future
    FAQs About Enforcement of the EU AI Act

    Understanding the AI Act

    The AI Act is designed to be the world's first comprehensive legal framework on AI, aiming to foster trustworthy AI in Europe and beyond. It seeks to ensure that AI systems respect fundamental rights, safety, and ethical principles while addressing the risks associated with powerful AI models. Enforcement and implementation will be overseen by the newly established EU AI Office, with severe penalties for non-compliance: depending on the infringement and the size of the company, fines range from €7.5 million or 1.5% of global annual revenue up to €35 million or 7% of global annual revenue, whichever is higher in each case.
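
    To make the "whichever is higher" rule concrete, here is a minimal, purely illustrative sketch in Python. The function name and the example revenue figure are our own assumptions for illustration, not anything defined by the Act itself.

    ```python
    # Illustrative sketch only: the "whichever is higher" penalty logic described
    # above. The tier figures mirror the provisional agreement as summarised in
    # this article; the function and example revenue are hypothetical.

    def fine_cap(global_annual_revenue_eur: float, fixed_cap_eur: float, revenue_share: float) -> float:
        """Return the higher of the fixed amount and the share of global annual revenue."""
        return max(fixed_cap_eur, global_annual_revenue_eur * revenue_share)

    # Hypothetical company with €2 billion in global annual revenue.
    revenue = 2_000_000_000

    # Highest penalty tier described above: €35 million or 7%.
    print(fine_cap(revenue, 35_000_000, 0.07))    # 140000000.0 -> the 7% figure applies
    # Lowest penalty tier described above: €7.5 million or 1.5%.
    print(fine_cap(revenue, 7_500_000, 0.015))    # 30000000.0 -> the 1.5% figure applies
    ```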

    Scope and Application

    The AI Act is far-reaching, applying to providers and developers of AI systems marketed or used within the EU, irrespective of the provider's location. This means that, much like under the GDPR, companies outside the EU could face penalties for non-compliance if their AI technologies are used within the Union. The Act categorises AI systems into four risk-based tiers, with specific prohibitions against practices deemed to present an "unacceptable risk," such as manipulative techniques or exploitation of vulnerabilities.
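
    For teams taking stock of their own systems, the four tiers can be treated as a simple classification exercise. The sketch below is our own illustration of that idea, not an official mapping; the example systems and tier descriptions are hypothetical.

    ```python
    # Our own illustrative sketch of the AI Act's four risk tiers, as a compliance
    # team might use them when triaging an internal inventory of AI systems.
    # The example systems and descriptions are hypothetical, not legal guidance.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited (e.g. manipulative techniques, exploiting vulnerabilities)"
        HIGH = "permitted, subject to strict obligations (e.g. biometric identification)"
        LIMITED = "permitted, subject to transparency obligations (e.g. chatbots disclosing they are AI)"
        MINIMAL = "permitted with no additional obligations"

    # Hypothetical inventory entries, purely for illustration.
    inventory = {
        "customer-support chatbot": RiskTier.LIMITED,
        "biometric access-control system": RiskTier.HIGH,
    }

    for system, tier in inventory.items():
        print(f"{system}: {tier.name} - {tier.value}")
    ```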

    High-Risk AI Systems

    The category of "high-risk" AI systems encompasses a wide range of applications, from biometric identification to financial evaluation systems. The AI Act mandates that high-risk AI developers and providers adhere to stringent requirements, including registration in the EU database, maintaining comprehensive documentation, and undergoing conformity assessments.

    Transparency and General-Purpose AI Models

    The AI Act also imposes transparency obligations on the use of AI, requiring systems intended to interact with humans to be marked as such. Additionally, general-purpose AI models with high-impact capabilities are subject to further restrictions, including maintaining technical documentation and compliance with EU copyright laws.

    Implications for Business

    The AI Act represents a significant regulatory shift that could alter how companies operate within the EU and globally. AI technology providers, developers, and implementers must understand the Act's implications and prepare for compliance. This includes familiarising themselves with the requirements for high-risk AI systems and ensuring transparency in AI interactions.

    Velocity's Role in Navigating the AI Act

    At Velocity, we are dedicated to guiding businesses through the evolving AI landscape with strategic insights and recommendations. We aim to enable our clients to leverage AI technologies effectively, offering tailored advice on integrating AI into their digital marketing strategies ethically, while emphasising the importance of adherence to regulatory frameworks.

    For specific legal advice regarding the EU AI Act, we strongly recommend consulting with legal professionals who specialise in this area.

    Navigating the Future

    The EU's forthcoming AI Act is a landmark in the regulation of artificial intelligence, setting a precedent for the global governance of AI technologies. As the Act moves towards implementation, businesses must proactively engage with its provisions to ensure compliance and leverage the opportunities it presents.

    By staying abreast of regulatory developments and leveraging our expertise in digital marketing, we help businesses navigate the potential challenges and opportunities presented by the AI Act. When it comes to the intricacies of regulatory compliance, we recommend that you seek specialist legal advice.  

    Contact us today to learn how we can assist you in leveraging AI technologies in your marketing strategy while adhering to the highest ethical and regulatory standards.

    FAQs About Enforcement of the EU AI Act

    1. What is the purpose of the EU AI Act?

    The purpose of the EU AI Act is to establish a comprehensive legal framework for AI to ensure that AI systems used within the European Union are trustworthy and respect fundamental rights, safety, and ethical standards. It aims to address the risks associated with powerful AI models and foster a safe AI environment.

    2. When is the EU AI Act expected to take effect?

    The EU AI Act is expected to come into effect between May and July 2024, following its provisional agreement announcement in December 2023.

    3. Who does the AI Act apply to?

    The AI Act applies to providers and developers of AI systems that are marketed or used within the EU, regardless of whether these entities are based in the EU or elsewhere. This means that companies outside the EU could also be subject to its regulations and penalties for non-compliance.

    4. What are the penalties for non-compliance with the AI Act?

    The penalties for non-compliance can be substantial. Depending on the nature of the infringement and the size of the company, fines range from €7.5 million or 1.5% of global annual revenue up to €35 million or 7% of global annual revenue, whichever is higher in each case.

    5. How does the AI Act categorise AI systems?

    The AI Act adopts a risk-based approach, categorising AI systems into four tiers, ranging from minimal risk to unacceptable risk, based on the system's intended use and the level of risk it poses to safety and fundamental rights.

    6. What practices are prohibited under the AI Act?

    The AI Act explicitly prohibits certain AI practices that pose an "unacceptable risk," such as manipulative or deceptive techniques, exploitation of vulnerabilities due to age or disability, and the use of biometric data to categorise individuals based on sensitive characteristics.

    7. What obligations do providers of high-risk AI systems have under the AI Act?

    Providers of high-risk AI systems are required to register with the centralised EU database, implement a compliant quality management system, maintain comprehensive documentation and logs, undergo conformity assessments, and comply with restrictions on the use of high-risk AI.

    8. How does the AI Act address transparency in AI?

    The AI Act imposes transparency obligations, requiring that AI systems intended to interact directly with humans be identified as such unless the nature of the interaction makes this obvious.

    9. Are there any specific restrictions on general-purpose AI models?

    Yes, general-purpose AI models with high-impact capabilities are subject to additional restrictions under the AI Act, including maintaining detailed technical documentation, complying with EU copyright laws, and providing a summary of the training content used.

    10. How can businesses prepare for compliance with the AI Act?

    Businesses can prepare for compliance by staying informed about the AI Act's provisions, assessing whether their AI systems fall under the high-risk category, ensuring transparency in AI interactions, and engaging with knowledgeable partners like Velocity for guidance and support.
