Introduction
The EU AI Act [1] [2] [5] [6], whose first provisions took effect on February 2, 2025, establishes the first comprehensive regulatory framework for artificial intelligence [3] [4] and seeks to balance innovation with consumer rights protection [2]. It introduces significant restrictions [1], particularly on AI systems that may infringe on privacy and fundamental rights, using a risk-based approach to categorize AI technologies.
Description
As of February 2, 2025 [2], Chapters I and II of the EU AI Act are in effect [1], establishing the world’s first comprehensive regulatory framework for artificial intelligence [3]. The legislation restricts certain AI systems that may infringe on individual privacy and fundamental rights [1], employing a risk-based approach that categorizes AI technologies by their potential risks [3], ranging from minimal to unacceptable. Article 5 prohibits AI applications for emotion recognition in the workplace and educational settings [1], with exemptions only for medical or safety purposes [2]. Notably, the Act also bans AI-powered facial recognition in public spaces [3], although exemptions have been negotiated for law enforcement in serious crime investigations [3], raising concerns about potential loopholes. The framework reflects a commitment to balancing AI innovation with consumer rights protection [2], although critics argue that it may prioritize industry and law-enforcement interests over individual rights [3], particularly in areas such as migration and border control.
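As a rough mental model of the risk-based approach, the tiers and their headline consequences can be sketched as follows. This is an illustrative summary of how the Act's categories are commonly described, not a legal classification tool:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative summary of the EU AI Act's risk tiers and their headline consequences."""
    MINIMAL = "no specific obligations"
    LIMITED = "transparency obligations (e.g., disclosing interaction with an AI system)"
    HIGH = "conformity assessment, data governance, human oversight (Article 6, Annex III)"
    UNACCEPTABLE = "prohibited outright (Article 5)"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```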
Organizations are required to develop clear policies on prohibited AI uses and implement procedures to ensure compliance with the Act [6]. Breaches of Article 5 will incur penalties starting August 2, 2025 [5]: fines of up to €35 million or 7% of total global annual turnover, whichever is higher [5], alongside reputational damage [5]. In conjunction with these provisions, Hunton Andrews Kurth LLP has released a new AI Act Guide aimed at in-house lawyers [4], providing a practical roadmap for compliance with the EU AI Act [4]. The guide helps legal and compliance teams understand the Act’s scope [4], key concepts [4], and regulatory obligations [4], including the classification of AI systems and their associated requirements [4]. It outlines practical steps for businesses acting as providers and deployers of AI systems [4], focusing on compliance areas such as risk management [4], data governance [2] [4] [5], and human oversight [4] [5].
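To make the penalty ceiling concrete, here is a minimal sketch of the Article 99(3) calculation, assuming the "whichever is higher" rule the Act applies to undertakings:

```python
def article5_penalty_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound on an Article 5 fine: the greater of EUR 35 million
    or 7% of total global annual turnover (Article 99(3))."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A firm with EUR 2 billion in turnover faces a cap of 7% = EUR 140 million,
# since that exceeds the EUR 35 million floor.
print(f"EUR {article5_penalty_cap(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For smaller firms, the €35 million floor dominates; the turnover-based figure only takes over once global annual turnover exceeds €500 million.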
Firms with employees exposed to heightened risks [5], such as pilots or drivers [5], benefit from Recital 18 [5], which excludes the detection of physical states such as pain or fatigue from the definition of emotion recognition AI systems [5]. Even if not explicitly prohibited, such systems may still be classified as high-risk AI systems under Article 6(2) and Annex III [5], point (1)(c) [5], imposing enhanced compliance obligations regarding data governance [5], transparency [5], and human oversight [4] [5]. The Act introduces varying levels of oversight depending on the risk associated with each AI use case [6], presenting challenges for corporate ethics and compliance teams in assessing risks and implementing controls [6].
Despite the activation of these provisions [1], no supervisory authority has yet been established to enforce them [1], and no administrative fines or sanctions are currently in place [1]. Once enforcement begins, however, compliance will be monitored by EU and national authorities through market surveillance [4], regulatory sandboxes [4], and mandatory post-market monitoring [4], with significant penalties for non-compliance [4]. Companies are increasingly negotiating contracts that include compliance provisions tailored to the EU AI Act [1], raising potential breach-of-contract claims if a provider or vendor uses a prohibited AI system [1]. Robust contract management and third-party monitoring are therefore essential [6], as contractors or business partners may use AI in ways that are not permitted [6].
Whether a given technique falls under the prohibition is fact-dependent [5], and ambiguity surrounds the definition of AI systems using ‘purposefully manipulative techniques.’ A distinction is drawn between non-AI-enabled systems that manipulate behavior and adaptive AI systems that respond to individual vulnerabilities, increasing the potential for harm [2] [5]. Article 5(1)(a) is particularly relevant for advertisers [5]; consider, for example, a chatbot that uses subliminal messaging techniques [5]. If the conditions of Article 5(1)(a) are met [5], particularly the requirement of significant harm [5], such a system would likely be prohibited [5].
The AI Act introduces a phased compliance timeline [4], with key obligations rolling out between 2025 and 2027 [4], underscoring the need for businesses to assess their AI strategies carefully [4]. Organizations must first determine whether they use any AI applications prohibited under Article 5 of the EU AI Act [2]. If so [2], they should engage stakeholders [2] and phase out and discontinue the affected systems [2]. To mitigate third-party AI risks, it is advisable to establish procedures for identifying future AI initiatives that may intersect with the ban [2], implement employee compliance training [2], and coordinate with service providers on regulatory adherence, as sketched below.
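One way such an inventory screen might be structured is sketched below. All names and fields are hypothetical, and the legal determination is fact-dependent, so a flag here would only mark a use case for legal review, not decide its status:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical inventory record for screening against Article 5."""
    name: str
    emotion_recognition: bool = False
    workplace_or_education: bool = False
    medical_or_safety_purpose: bool = False
    physical_state_only: bool = False  # pain/fatigue detection, per Recital 18

def intersects_article5_ban(uc: AIUseCase) -> bool:
    """Flag emotion-recognition uses in work/education settings that lack the
    medical/safety exemption; Recital 18 carves out purely physical states."""
    return (
        uc.emotion_recognition
        and uc.workplace_or_education
        and not uc.medical_or_safety_purpose
        and not uc.physical_state_only
    )

# Example: a call-center mood-scoring tool would be flagged for legal review.
tool = AIUseCase("agent mood scoring", emotion_recognition=True,
                 workplace_or_education=True)
print(intersects_article5_ban(tool))  # True -> escalate to legal review
```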
This update marks a notable [1], albeit limited [1] [5], advancement in the regulatory framework for AI [1]. It signals a challenging year ahead for compliance in the AI sector, while equipping businesses to navigate regulatory complexities and build ethical, compliant [1] [2] [3] [4] [5] [6], and resilient AI systems [4]. The emphasis on AI literacy under Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their personnel [2], encompassing the skills and knowledge needed to make informed decisions about AI deployment and to understand the associated risks and potential harms [2]. In the run-up to the Act’s adoption, civil society organizations urged lawmakers to ensure that fundamental rights were upheld in the final legislation [3]; the Act marks a significant step in global AI regulation and may serve as a model for future regulations worldwide.
Conclusion
The EU AI Act represents a significant step in global AI regulation and sets a precedent for future frameworks worldwide. While it aims to balance innovation with consumer rights [2], it presents compliance challenges, particularly in areas such as migration and border control [3]. Organizations must navigate these complexities to ensure ethical and compliant AI deployment.
References
[1] https://www.jdsupra.com/legalnews/and-so-it-begins-an-eu-ai-act-update-8792889/
[2] https://www.lexology.com/library/detail.aspx?g=4363109f-d647-411d-b700-9a21990d1d92
[3] https://www.business-humanrights.org/en/latest-news/eu-agrees-landmark-artificial-intelligence-rules-key-safeguards-but-serious-loopholes-remain/
[4] https://www.hunton.com/privacy-and-information-security-law/hunton-publishes-ai-act-guide-to-help-businesses-navigate-eus-landmark-regulation
[5] https://www.lexology.com/library/detail.aspx?g=b3fed5bd-a3b8-4ab4-a4d7-b1f03f60bbfb
[6] https://www.navex.com/en-us/blog/article/compliance-with-the-eu-ai-act/