Introduction

The European Union’s Artificial Intelligence Act (AI Act) establishes a comprehensive regulatory framework for AI systems within the EU, aimed at mitigating regulatory fragmentation, safeguarding fundamental rights, and fostering an internal market [2]. It categorizes AI systems by risk level and sets forth guidelines for compliance, enforcement, and penalties [1] [4] [5] [6] [7] [8].

Description

The first provisions of the European Union’s Artificial Intelligence Act (AI Act) became applicable on February 2, 2025, activating the opening phase of a comprehensive regulatory framework for the development and use of AI systems across the EU. The regulation aims to mitigate regulatory fragmentation, safeguard fundamental rights, and foster an internal market [2] [7] [8], with particular focus on AI systems classified as “high-risk” and “unacceptable risk.” The Act’s primary objectives are to ensure that AI systems operate safely, ethically, and transparently [1] [2] [4] [5] [6] [7], employing a risk-based approach that sorts AI systems into four categories: prohibited (unacceptable risk), high-risk, limited risk, and minimal risk [1] [2] [5] [6] [8].
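To make the risk-based structure concrete, the sketch below models the four tiers in Python. It is purely illustrative: the tier names follow the categories above, but the obligation lists are simplified paraphrases of this article, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    PROHIBITED = "prohibited"        # banned outright (Article 5)
    HIGH_RISK = "high_risk"          # permitted, subject to strict obligations
    LIMITED_RISK = "limited_risk"    # transparency duties only (e.g., chatbots)
    MINIMAL_RISK = "minimal_risk"    # no new obligations (e.g., spam filters)

# Simplified, non-exhaustive mapping of tiers to the headline obligations
# discussed in this article.
OBLIGATIONS = {
    RiskTier.PROHIBITED:   ["may not be placed on the market or used"],
    RiskTier.HIGH_RISK:    ["risk assessment", "data governance", "transparency"],
    RiskTier.LIMITED_RISK: ["disclose to users that they are interacting with AI (Article 50)"],
    RiskTier.MINIMAL_RISK: [],
}
```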

The Act’s first binding rules took effect on February 2, 2025 [5]. As of this date, AI systems classified as prohibited may no longer be marketed or used, including manipulative AI systems, social scoring mechanisms, and certain biometric identification methods [2] [4] [6]. Article 5 of the Act specifically forbids the marketing, deployment, or use of AI systems that engage in manipulative, exploitative, social control, or surveillance practices that violate fundamental rights and Union values [5] [7] [8]. The European Commission published the Definition Guidelines and Prohibited AI Guidelines on February 4, 2025, which define artificial intelligence and outline prohibited practices [6], including the use of subliminal techniques or manipulative methods that distort behavior and are likely to cause significant harm [7]. Examples of prohibited systems include chatbots that impersonate individuals or exploit vulnerabilities based on age, disability, or socio-economic status [7].

High-risk AI systems, which pertain to critical sectors such as healthcare and justice, are permitted but must adhere to stringent measures, including thorough risk assessments, data governance, and transparency obligations [1] [2] [5] [6] [8]. High-risk systems placed on the market before August 2, 2026 may be grandfathered in, with obligations triggered only by significant design changes [6]; a sketch of this logic follows below. Limited risk AI systems, such as chatbots, must inform users that they are interacting with an AI system, as stipulated in Article 50 of the Act [2] [5]. Minimal risk AI systems, such as anti-spam filters, face no new restrictions under the Act [2].
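Continuing the illustrative model, the hypothetical helper below captures the grandfathering rule just described: legacy high-risk systems fall into scope only after a significant design change. Only the August 2, 2026 cutoff comes from the text; the function name and inputs are assumptions for illustration.

```python
from datetime import date

HIGH_RISK_CUTOFF = date(2026, 8, 2)  # placement-on-market cutoff cited above

def high_risk_obligations_apply(placed_on_market: date,
                                significant_design_change: bool) -> bool:
    """Rough screen for whether a high-risk system's AI Act obligations
    are triggered, per the grandfathering rule described above."""
    if placed_on_market >= HIGH_RISK_CUTOFF:
        return True  # systems introduced after the cutoff are in scope outright
    return significant_design_change  # legacy systems: only on significant change

# e.g., a system marketed in 2024 stays out of scope until redesigned:
print(high_risk_obligations_apply(date(2024, 5, 1), False))  # False
```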

The AI Act also prohibits certain social scoring practices that assess individuals based on social behavior or personal characteristics and lead to unfavorable treatment, such as using unrelated variables to determine creditworthiness or eligibility for state support [7]. Additionally, AI systems cannot make risk assessments predicting criminal behavior based solely on profiling or personality traits [7]. The creation or expansion of facial recognition databases through untargeted scraping of images is prohibited, while untargeted scraping of other biometric data falls outside this particular prohibition [7]. AI systems that infer emotions in workplaces or educational settings are banned except for medical or safety reasons [7], and biometric categorization systems that deduce sensitive personal information, such as race or sexual orientation, are largely prohibited [7]. Real-time remote biometric identification systems used by law enforcement in public spaces are restricted unless necessary for specific objectives, such as locating missing persons or preventing terrorist threats [7].

The AI Act mandates that all AI system providers and deployers ensure their personnel possess adequate AI literacy (Article 4), necessitating governance policies and tailored training programs that build understanding of the opportunities and risks associated with AI. This includes training staff to identify which systems qualify as AI under the Definition Guidelines and to conduct risk evaluations, since high-risk AI systems are subject to the main regulatory requirements of the AI Act [6]. Under Article 56, the European AI Office is tasked with creating a General-Purpose AI Code of Practice, expected to be finalized by May 2, 2025, which will provide guidance on compliance obligations for developers of general-purpose AI (GPAI) models [1] [3] [5]. The AI Office has also established a repository of AI literacy practices to assist businesses [4], although adherence to these practices does not by itself guarantee compliance with the Article 4 literacy obligation.

Violations of the AI Act can lead to significant penalties: fines of up to €35 million or 7% of a company’s global annual revenue (whichever is higher) for the most serious infringements, scaling down to €7.5 million or 1% for lesser violations [5] [8]. The AI Office in Brussels will oversee enforcement and support national authorities in this regard [8]. The prohibitions are enforceable in national courts, allowing affected parties to seek injunctions, although the Act creates no right to private damages [1]. Regulators are granted extensive investigative powers, including access to AI risk assessments and compliance documentation, although legal privilege may protect certain internal communications [4]. Businesses are advised to consult legal counsel to ensure compliance strategies are appropriately protected [4].
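Because the top-tier fine is the higher of a fixed cap and a revenue share, exposure grows with company size. A minimal illustrative calculation, assuming the €35 million / 7% ceiling for the most serious infringements:

```python
def max_fine_eur(global_annual_revenue_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 revenue_share: float = 0.07) -> float:
    """Upper bound of an AI Act fine: the higher of a fixed cap and a
    percentage of global annual revenue (top tier shown as defaults)."""
    return max(fixed_cap_eur, revenue_share * global_annual_revenue_eur)

# A company with EUR 2 billion in revenue faces up to EUR 140 million:
print(max_fine_eur(2_000_000_000))  # 140000000.0
```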

The next major compliance deadline is August 2, 2025, when EU Member States must designate national enforcement authorities and the rules on penalties, governance, and confidentiality take effect [5] [8]. By August 2, 2026, most remaining obligations, including those for high-risk AI systems and specific transparency requirements, become applicable [2] [6] [8]. By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025 must bring those models into compliance [8]. Beyond these deadlines, companies will need to evaluate the applicability of the AI Act to their AI systems and GPAI models and conduct regular audits to keep pace with evolving regulations [8]. Businesses are encouraged to begin preparations now to mitigate potential risks, bearing in mind that compliance with other EU laws, such as the GDPR and the Digital Services Act, remains essential [1].
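For planning purposes, the staggered deadlines above reduce to a small timeline. The sketch below takes its dates from the text; the data structure and helper are assumptions showing one way to track which milestones still lie ahead:

```python
from datetime import date

# Key AI Act milestones discussed in this article.
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions and AI literacy obligations apply"),
    (date(2025, 8, 2), "National authorities designated; penalty, governance, "
                       "and confidentiality rules apply"),
    (date(2026, 8, 2), "Most obligations apply, including high-risk systems "
                       "and transparency requirements"),
    (date(2027, 8, 2), "Pre-August-2025 GPAI models must be in compliance"),
]

def upcoming(today: date):
    """Milestones on or after a given date, for compliance planning."""
    return [(d, what) for d, what in MILESTONES if d >= today]

for d, what in upcoming(date(2025, 3, 1)):
    print(d.isoformat(), "-", what)
```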

Conclusion

The AI Act represents a significant step in regulating AI technologies within the EU, aiming to balance innovation with the protection of fundamental rights. Its implementation will require businesses to adapt to new compliance requirements, potentially impacting their operations and strategies. As the regulatory landscape evolves, companies must remain vigilant and proactive in ensuring adherence to the Act’s provisions to avoid substantial penalties and maintain market access.

References

[1] https://www.paulweiss.com/practices/litigation/artificial-intelligence/publications/european-commission-publishes-guidance-on-prohibited-ai-practices-under-the-eu-ai-act?id=56629
[2] https://clovers.law/en/blog/2025/2/27/ai-act-entry-into-force-of-the-first-provisions
[3] https://www.mofo.com/resources/insights/250218-european-digital-compliance-key-digital-regulation-compliance
[4] https://cdp.cooley.com/ai-talks-understanding-the-eu-ai-act-ai-literacy-obligations-and-prohibited-practices/
[5] https://natlawreview.com/article/introducing-eu-ai-act
[6] https://www.nautadutilh.com/en/insights/european-commission-publishes-first-set-of-guidelines-on-ai/
[7] https://wiggin.eu/insight/eu-ai-act-commission-publishes-guidelines-on-prohibited-ai-practices/
[8] https://www.jdsupra.com/legalnews/eu-ai-act-first-rules-take-effect-on-7364175/