Introduction
The EU AI Act [1] [2] [4] [5] [6] [7] [9], adopted on June 13, 2024, represents a pioneering regulatory framework for artificial intelligence, establishing comprehensive guidelines for the development, deployment [2] [5] [6] [9], and use of AI within the European Union [2]. This legislation not only impacts entities within the EU but also extends its reach to providers outside the region who market AI systems in the EU. The Act introduces a phased implementation schedule, with significant implications for high-risk AI systems and stringent compliance requirements for organizations.
Description
The EU AI Act [1] [2] [4] [5] [6] [7] [9], adopted on June 13, 2024, is a groundbreaking regulation that establishes the first comprehensive global framework for artificial intelligence. The legislation governs the development [2], deployment [2] [5] [6] [9], and use of AI within the EU and has extraterritorial implications [2], affecting providers outside the EU who market AI systems in the region [2]. Having entered into force on August 1, 2024 [2], the Act follows a phased implementation schedule, with key provisions applicable from February 2, 2025, particularly those targeting AI systems that pose significant risks to fundamental rights. Notably, AI systems used for military or defense purposes are exempt from the Act [10].
As the February 2 deadline approaches [7], over 130 companies have joined the AI Pact initiative [7], which aims to facilitate compliance with this intricate legislation [7]. Membership is open to all [7], and the Pact provides guidelines and best practices to assist CIOs and managers with implementation [7]. The Act categorizes AI systems into four risk levels: unacceptable risk [6], high risk [2] [3] [6] [8], limited risk [6], and minimal risk [6]. Unacceptable-risk systems, which are prohibited [2], include those used for behavioral manipulation, social scoring by public authorities, and real-time biometric identification for law enforcement [2]. High-risk systems encompass those that significantly threaten health, safety [6] [8] [10], or fundamental rights [2] [10], such as AI used in critical infrastructure, education, or employment, and these must comply with rigorous obligations [10]. Limited-risk systems, like chatbots [10], are subject to transparency requirements [10], while minimal-risk systems, such as AI-enabled video games [10], pose negligible threats.
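The four-tier categorization above can be sketched as a simple lookup. The tier names follow the Act, but the example use cases and all identifiers here are illustrative assumptions, not legal classifications:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"       # e.g. social scoring by public authorities
    HIGH = "strict obligations"       # e.g. AI in critical infrastructure or hiring
    LIMITED = "transparency duties"   # e.g. chatbots
    MINIMAL = "no new obligations"    # e.g. AI-enabled video games

# Hypothetical mapping of example use cases to tiers (not legal advice)
EXAMPLES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "real_time_biometric_id_law_enforcement": RiskTier.UNACCEPTABLE,
    "chatbot": RiskTier.LIMITED,
    "video_game_ai": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case."""
    return EXAMPLES[use_case]
```

In practice the tier determines which obligations apply: prohibition, conformity assessment, transparency duties, or nothing new.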
As of February 2, 2025 [9], the initial requirements of the EU AI Act become legally binding [9], with non-compliance exposing organizations to fines of up to 7% of global annual turnover [9]. Organizations operating within the EU must evaluate the applicability of the AI Act to their operations [1]. Companies are also required to ensure AI literacy among employees by this date, whether through training or by hiring qualified personnel. Although there are no direct penalties for failing to meet the AI literacy requirement under Article 4 [3], civil liability may arise from August 2, 2026, if inadequately trained staff using AI systems cause harm to third parties [3]. Additionally, non-compliance with Article 5 [2] [3] [6], which prohibits certain AI practices outright, can lead to severe sanctions [3], including fines of up to €35 million or 7% of global turnover per violation [3].
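As a worked example of the Article 5 sanction ceiling quoted above, and assuming the cap is the higher of the two figures (the fixed €35 million floor or 7% of global annual turnover):

```python
def max_article5_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on an Article 5 fine: the higher of EUR 35M or 7% of turnover.

    Assumes the 'whichever is higher' reading of the penalty provision.
    """
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For EUR 1 billion turnover, 7% = EUR 70M exceeds the EUR 35M floor.
print(max_article5_fine(1_000_000_000))  # 70000000.0
```

For smaller companies the fixed €35 million figure dominates; the percentage-based cap only bites once turnover exceeds €500 million.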
Article 5 specifically prohibits the marketing, deployment [2] [5] [6] [9], or use of various AI systems [5], including those used for social scoring [5], emotion inference in workplaces and educational settings [5], and the creation or expansion of facial recognition databases through untargeted scraping [5]. It also covers systems that assess or predict the likelihood of criminal behavior based solely on profiling, as well as biometric categorization systems that infer sensitive personal information [5]. Additional prohibited practices include the use of subliminal or manipulative techniques [5], the exploitation of vulnerabilities related to age [5], disability [3] [5], or socio-economic status [5], and unauthorized real-time biometric identification in public spaces for law enforcement purposes [5]. While these practices are deemed to pose significant risks to fundamental values [5], limited exceptions exist for critical public interests [5].
To ensure compliance [2] [6], companies must provide AI training and learning resources, both to build competence and to rebut potential allegations of non-compliance [3]. A layered approach to training is recommended [3]: basic AI literacy for all employees, with more specialized training where roles require it [3]. Legal advisors can assist in developing tailored AI literacy workshops or online courses [3]. Companies should also assess their current AI training programs [3], document existing initiatives [3], and address any identified gaps before the compliance deadline.
The Act’s Chapter II defines the unacceptable-risk category [6], which covers AI systems that could harm individuals [6], such as those that manipulate vulnerable populations or use prohibited forms of biometric identification [6]. Engaging in prohibited AI practices may also expose companies to civil [3], administrative [2] [3], or criminal liability under other EU laws [3], including for GDPR violations [3]. To mitigate these risks [3], companies should identify and document the AI systems they use [3], particularly those that may fall under the prohibited categories [3]. Where the status of a specific AI practice is ambiguous [3], companies should document their rationale for concluding it is compliant [3]; this exercise can also help identify high-risk AI systems and those subject to transparency obligations under Article 50 of the AI Act [3].
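The identify-and-document step described above can be sketched as a minimal inventory record. All field names and the example entry are hypothetical, intended only to show what such documentation might capture:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    purpose: str
    risk_category: str          # e.g. "prohibited", "high", "limited", "minimal"
    compliance_rationale: str   # the documented reasoning recommended above
    transparency_obligation: bool = False  # flag for Article 50 duties

inventory = [
    AISystemRecord(
        name="support-chatbot",
        purpose="customer service",
        risk_category="limited",
        compliance_rationale="Conversational assistant; no profiling or biometric use.",
        transparency_obligation=True,
    ),
]

# Flag entries whose rationale is still missing before the deadline
needs_review = [r.name for r in inventory if not r.compliance_rationale]
```

Even a lightweight record like this gives a company something to point to if a regulator, or a plaintiff, later asks why a system was judged to fall outside the prohibited categories.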
In response to the complexities of the legislation, the European Commission has established an AI Office to serve as a center of expertise on artificial intelligence [7]. This office will collaborate with companies, particularly small and medium-sized enterprises [7], to clarify regulatory aspects [7], suggest simplifications [7], and develop best practices [7]. A significant upcoming milestone is the publication of the final Code of Practice for General-Purpose AI Models by the European Commission at the end of April 2025 [9], which will take effect in August alongside the enforcement powers of member state supervisory authorities [9].

Beyond the EU, emerging international AI regulatory concepts include clinical trials regulations [1], corporate governance [1], compliance policies [1], and new FDA guidance documents [1], all of which companies must prepare for [1]. As it takes effect, the EU AI Act represents the first comprehensive legislation governing AI in both the public and private sectors [6], addressing potential risks associated with AI while ensuring safer operations for businesses within the EU [6]. Business leaders are encouraged to take proactive measures now to prepare for compliance and to harness the advantages of responsible AI adoption [6], making this an opportune moment to reassess AI strategies in light of current and forthcoming regulatory requirements [6]. Companies that have already implemented responsible AI programs may find compliance manageable [9].
Conclusion
The EU AI Act signifies a transformative step in AI regulation, setting a precedent for global standards. Its comprehensive framework addresses the potential risks associated with AI, ensuring safer operations for businesses within the EU [6]. As organizations prepare for compliance, they are encouraged to adopt responsible AI practices, to build AI literacy across their workforce, and to align with emerging international regulatory concepts. The Act not only safeguards fundamental rights but also offers a strategic opportunity for businesses to reassess and strengthen their AI strategies in anticipation of future regulatory landscapes.
References
[1] https://www.jdsupra.com/legalnews/international-ai-regulatory-contrast-7508297/
[2] https://www.crowell.com/en/insights/publications/eu-artificial-intelligence-act
[3] https://www.lw.com/en/insights/2025/01/upcoming-eu-ai-act-obligations-mandatory-training-and-prohibited-practices
[4] https://www.eversheds-sutherland.com/en/netherlands/insights/defining-ai-systems
[5] https://www.lewissilkin.com/insights/2025/01/20/eu-ai-act-are-you-ready-for-2-february-2025-the-ban-on-prohibited-ai-systems-102jut1
[6] https://www.softwareimprovementgroup.com/eu-ai-act-summary/
[7] https://www.cio.com/article/3812400/ai-pact-how-to-simplify-ai-act-compliance-for-all-enterprises.html
[8] https://news.bloomberglaw.com/in-house-counsel/companies-prep-eu-ai-act-compliance-while-waiting-for-guidance
[9] https://www.techrepublic.com/article/eu-ai-act-legally-binding-requirements/
[10] https://www.beneschlaw.com/resources/european-union-artificial-intelligence-act-an-overview-part-2.html