Introduction

The European Union Artificial Intelligence Act (EU AI Act) [1] [6] [7], formally Regulation (EU) 2024/1689, represents a pioneering effort to establish a comprehensive regulatory framework for artificial intelligence systems. Adopted on June 13, 2024, and in force since August 1, 2024 [4] [5], the legislation aims to ensure the safe development, use [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11], and marketing of AI technologies through a risk-based approach. It seeks to balance innovation with the principles of transparency, safety [1] [3] [5] [6] [12], and non-discrimination [1], setting a global standard for AI regulation [1].

Description

The EU AI Act was adopted on June 13, 2024, and entered into force on August 1, 2024 [2] [4] [6] [7] [8]. This landmark legislation establishes the first comprehensive regulatory framework for AI systems globally [6], employing a risk-based approach to ensure the safe development, use [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11], and marketing of AI technologies. Developed in response to a request from EU leaders in October 2020 [1], the Act aims to balance innovation with the principles of transparency, safety [1] [3] [5] [6] [12], and non-discrimination in AI systems [1].

The EU AI Act applies broadly [1], affecting not only providers [1], deployers [1] [3] [4] [6] [7] [12], and distributors of AI systems within the EU but also those outside the EU whose systems impact users in the Union [1]. Notably, it excludes AI systems used solely for military or defense purposes [1] [6]. The Act is expected to set a global standard for AI regulation [1], aligning its definition of AI systems with OECD Guidelines [1].

AI systems are categorized into four risk levels:

  1. Unacceptable Risk: Prohibited systems that pose clear threats to safety and human rights [1]. These include systems that manipulate behavior [7], perform social scoring by public or private entities [2] [8], or use real-time remote biometric identification for law enforcement [7] [8], with narrow exceptions for serious crimes or national security. The category also covers cognitive behavioral manipulation, emotion recognition in workplaces and educational settings [12], and the creation or expansion of facial recognition databases through untargeted scraping of facial images [2] [8]. Further prohibited practices include subliminal or manipulative techniques and the exploitation of vulnerabilities related to age [8], disability [4] [8] [9], or socio-economic status [8]. These prohibitions will take effect on February 2, 2025, with significant penalties for non-compliance [5] [9]: up to €35 million or 7% of a company’s global annual turnover from the previous financial year [9].

  2. High Risk: Systems that pose significant threats to health, safety [1] [3] [5] [6] [12], and fundamental rights [1] [3] [6] [7] [11] [12], and that must meet stringent obligations before they can be placed on the market. Classification as high-risk follows the criteria in Annex I and Annex III [12], which cover AI products or components subject to third-party conformity assessment under other EU legislation [12], such as machinery and medical devices [12]. Providers of high-risk AI systems bear most of the obligations [12]: ensuring human oversight [12], accuracy [12], and cybersecurity [12], implementing a risk management system, and maintaining relevant training datasets. These systems must also undergo post-market monitoring and incident reporting [3].

  3. Limited Risk: Systems that pose minimal threats but trigger transparency obligations: users must be made aware when they are interacting with them [6]. Examples include chatbots and AI-generated content, including deepfakes [2] [5] [7]. Providers of systems generating deepfakes must disclose the artificial nature of the content [12].

  4. Minimal Risk: Systems that present no significant risk, like AI-enabled video games and spam filters, which can be deployed without restrictions [1].
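The four tiers above can be sketched as a lookup from tier to headline obligations. This is an illustrative paraphrase only; the enum values and obligation strings are hypothetical shorthand, not the Act's legal text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligations per tier, paraphrasing the Act's risk-based approach.
# Not a substitute for a legal classification exercise.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before marketing",
        "risk management system",
        "human oversight, accuracy, cybersecurity",
        "post-market monitoring and incident reporting",
    ],
    RiskTier.LIMITED: ["transparency: inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["no additional obligations"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the asymmetry: almost all of the compliance burden concentrates in the high-risk tier, while the minimal tier is deployable without restriction.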

General Purpose AI (GPAI) systems used exclusively for scientific research and development are exempt from certain regulations [1], while those used for other purposes must comply with documentation requirements [1], particularly if they exceed a specified computational threshold [6]: the Act presumes systemic risk for models trained with more than 10^25 floating-point operations. By May 2025 [7], the AI Office [2] [3] [7] [8], a new body within the European Commission [7], is expected to publish a code of practice for GPAI models [7], clarifying compliance requirements for providers [7]. Rules for general-purpose AI models will take effect on August 2, 2025 [11], requiring providers to comply with copyright law and maintain detailed technical records of model development and testing [11]. Providers of open-source models that do not pose systemic risks will face reduced obligations [11], primarily copyright compliance and summarizing training data [11].
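The computational threshold works as a simple presumption: training compute above 10^25 floating-point operations triggers the systemic-risk classification for a GPAI model. A minimal check might look like this (the function name is illustrative, and actual designation involves further assessment by the Commission):

```python
# Presumption threshold for "systemic risk" GPAI models under the Act:
# cumulative training compute greater than 10^25 floating-point operations.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Illustrative threshold check only; the Commission can also
    designate models as systemic-risk on other grounds."""
    return training_flops > SYSTEMIC_RISK_FLOPS
```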

A significant aspect of the EU AI Act is the emphasis on “AI literacy,” as outlined in Article 4. This mandates that providers and deployers of AI systems take measures to ensure a sufficient level of AI literacy among their staff and others involved in the operation and use of AI systems [4]. AI literacy encompasses the necessary skills [4], knowledge [4], and understanding for informed deployment and awareness of AI’s opportunities and risks [4]. The required level of AI literacy will depend on the technical knowledge [4], experience [4], education [2] [4] [8] [9] [12], and training of those involved [4], as well as the context of AI system use [4]. Companies must ensure responsible use of AI systems [4], including those from third-party providers [4], which extends the obligation to software products incorporating AI tools [4].

To achieve compliance [4], companies should implement measures such as maintaining an up-to-date inventory of their AI systems, establishing dedicated points of contact for guidance on AI tools [4], and providing training tailored to the risk level and context of use [9]. Documenting and managing critical incidents will contribute to a comprehensive compliance system [4]. Key instruments for implementing Article 4 include regular employee training and internal AI guidelines [4]. Appointing an AI officer may also be beneficial [4], centralizing expertise and facilitating communication [4].
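The inventory, point-of-contact, and training measures described above could be tracked in a simple structured record. This is a sketch with hypothetical field names and an invented helper, not a format prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a company's AI-system inventory (hypothetical schema)."""
    name: str
    vendor: str                  # third-party providers are also in scope
    risk_tier: str               # "unacceptable" / "high" / "limited" / "minimal"
    contact: str                 # dedicated point of contact for guidance
    staff_trained: bool = False  # training tailored to risk level and context
    incidents: list[str] = field(default_factory=list)  # documented incidents

inventory: list[AISystemRecord] = [
    AISystemRecord("support-chatbot", "Acme AI", "limited", "ai-office@example.com"),
]

def untrained_high_risk(records: list[AISystemRecord]) -> list[str]:
    """Flag high-risk systems whose operators still lack training."""
    return [r.name for r in records if r.risk_tier == "high" and not r.staff_trained]
```

A report like `untrained_high_risk(inventory)` is the kind of recurring check an appointed AI officer could centralize.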

The Act imposes severe penalties for non-compliance [1] [6], with administrative fines ranging from €7.5 million to €35 million or 1.5% to 7% of a company’s global annual turnover, whichever is higher, depending on the severity of the breach. For small and medium-sized enterprises, the lower of the two amounts applies, reducing their exposure [10]. Key implementation milestones include:
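The fine cap is thus the higher of a fixed amount and a turnover percentage (the lower of the two for SMEs). As arithmetic, with illustrative figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float,
             sme: bool = False) -> float:
    """Upper bound of an administrative fine: the higher of the fixed cap
    and the percentage of global annual turnover (the lower for SMEs)."""
    pick = min if sme else max
    return pick(fixed_cap_eur, pct_cap * turnover_eur)

# Most serious breaches (prohibited practices): up to €35M or 7% of turnover.
# For a company with €1bn turnover, the percentage dominates: €70M.
cap = max_fine(1_000_000_000, 35_000_000, 0.07)
```

For a smaller firm whose 7% is below €35 million, the fixed amount becomes the binding cap instead.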

  • February 2, 2025: Chapters I and II of the EU AI Act will come into effect, introducing binding measures and prohibitions [10]. This includes a ban on unacceptable risk AI systems, specifically those that evaluate or score individuals based on social behavior [10], personal characteristics [10], or predicted outcomes [10], leading to unfair treatment [10]. The enforcement of Article 5 will prohibit practices that assess or predict the likelihood of an individual committing a crime based solely on profiling, as well as biometric categorization systems that infer sensitive personal information [2] [8]. The use of AI for real-time remote biometric identification in public spaces is prohibited, except in narrowly defined cases related to serious crimes or national security [10].

  • August 2, 2025: Governance provisions take effect: member states must designate National Competent Authorities (NCAs) to oversee the regulation, including a Notifying Authority [1], a Market Surveillance Authority [1], and national public authorities enforcing fundamental-rights obligations related to high-risk AI systems [1]. Sanctions [11], including fines [1] [2] [3] [5] [7] [11], become applicable to providers of general-purpose AI models under the Act [11].

  • August 2, 2026: Application of most remaining provisions, excluding certain high-risk system obligations [6].

  • August 2, 2027: Full enforcement of high-risk system regulations.
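The phased timeline above reduces to a date lookup. The milestone labels here are shorthand summaries of the bullets, not official chapter names:

```python
from datetime import date

# Key application dates from the Act's phased timeline (labels are shorthand).
MILESTONES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk practices (Chapters I and II)"),
    (date(2025, 8, 2), "governance framework, NCAs, GPAI model rules"),
    (date(2026, 8, 2), "most remaining provisions"),
    (date(2027, 8, 2), "full high-risk system obligations"),
]

def provisions_in_effect(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for when, label in MILESTONES if on >= when]
```

A check like `provisions_in_effect(date.today())` gives a quick view of which tranche of obligations currently applies.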

Organizations operating in the EU should prepare for compliance by classifying their AI systems according to risk levels and developing governance models [6], as providers will be held accountable for safety [6]. Deployers of high-risk AI systems have fewer obligations but must comply with usage instructions and inform employees about the system’s deployment [12]. Importers and distributors must ensure compliance throughout the value chain and verify that providers meet their obligations [12]. The phased implementation of the EU AI Act necessitates proactive measures to navigate the new regulatory landscape [6], making accurate role classification crucial for legal exposure management [12]. Early preparation is advisable [12], particularly for developers of high-risk systems [12], as retrospective compliance may be challenging [12]. The EU seeks to position itself as a global leader in AI governance [3], influencing future regulatory frameworks worldwide while fostering innovation, particularly for small and medium-sized enterprises [3], through regulatory sandboxes [3].

Conclusion

The EU AI Act is poised to significantly impact the global landscape of AI regulation by establishing a comprehensive framework that prioritizes safety, transparency [1] [3] [6] [12], and non-discrimination [1]. Its risk-based approach categorizes AI systems into distinct levels, each with specific compliance requirements, thereby setting a precedent for future regulatory efforts worldwide. Organizations must proactively prepare for compliance to navigate this evolving regulatory environment, ensuring responsible AI deployment and fostering innovation. The Act’s emphasis on AI literacy and phased implementation underscores the EU’s commitment to leading in AI governance while supporting innovation, particularly for small and medium-sized enterprises [3].

References

[1] https://www.jdsupra.com/legalnews/european-union-artificial-intelligence-4276103/
[2] https://www.lexology.com/library/detail.aspx?g=08073c9a-c6a4-4d80-b3a8-a1b030a25a3e
[3] https://businessabc.net/wiki/eu-ai-act
[4] https://www.ypog.law/en/insight/art-4-ai-act
[5] https://www.indiehackers.com/post/tech/all-about-the-eu-ai-act-your-guide-to-building-and-selling-ai-in-europe-LjH7u90zkkfe7vFpvi66
[6] https://www.beneschlaw.com/resources/european-union-artificial-intelligence-act-an-overview-part-2.html
[7] https://www.crowell.com/en/insights/publications/eu-artificial-intelligence-act
[8] https://www.lewissilkin.com/insights/2025/01/20/eu-ai-act-are-you-ready-for-2-february-2025-the-ban-on-prohibited-ai-systems-102jut1
[9] https://advisense.com/2025/01/29/ai-act-the-first-requirements/
[10] https://www.shibolet.com/en/the-european-ai-act-restrictions-on-prohibited-ai-practices-take-effect/
[11] https://natlawreview.com/article/key-developments-german-labor-and-employment-law-2025
[12] https://setterwalls.se/en/article/navigating-the-eu-ai-act-parti-the-ai-act-roadmap/