Introduction
The European Union’s Artificial Intelligence Act (AI Act) [3] [12], which entered into force on August 1, 2024 [1] [5], represents the first comprehensive regulatory framework for AI systems worldwide. It significantly affects businesses that develop and deploy AI [5], particularly in the human resources sector. The Act introduces a risk-based approach to categorizing AI systems and imposes compliance obligations that fall especially heavily on organizations using AI in recruitment. The legislation has broad implications, including extra-territorial applicability [12], stringent penalties for non-compliance [2] [12], and specific obligations for providers of General Purpose AI (GPAI) models [12].
Description
The EU Artificial Intelligence Act (AI Act) [3] [12], which entered into force on August 1, 2024 [1] [5], establishes the first comprehensive regulatory framework for AI systems globally [3]. Its obligations phase in over roughly the next 36 months [3] and significantly affect businesses involved in the development and implementation of AI solutions [5], particularly in the HR sector [1]. Organizations utilizing AI systems, especially in recruitment [1], must ensure compliance with the Act’s requirements and assess existing systems accordingly. The Act defines an AI system as a machine-based system capable of “inference” [9], i.e., learning, reasoning, or modeling [2] [12], which may continue to adapt after deployment [2] and which generates outputs such as predictions or decisions from the input it receives [9]. This definition aligns with the OECD’s framework and extends to products and tools not typically recognized as AI, while distinguishing such systems from traditional data processing platforms [2], such as static CRM systems.
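To make the definitional boundary more concrete, the following toy sketch contrasts a hard-coded rule (traditional data processing) with a component that infers its decision threshold from data. It is a hypothetical illustration for intuition only, not a legal test; the names `screen_candidate_static` and `LearnedScreener` are invented, and nothing here implies that such a screening tool would itself be compliant.

```python
# Toy contrast: a fixed, human-authored rule vs. a system that "infers"
# its behaviour from data (hypothetical illustration, not a legal test).

def screen_candidate_static(years_experience: float) -> bool:
    """Traditional data processing: a fixed, human-authored rule."""
    return years_experience >= 3  # threshold hard-coded by a person

class LearnedScreener:
    """Minimal 'inference' example: the decision threshold is derived
    from labelled examples rather than written by hand."""

    def __init__(self) -> None:
        self.threshold = 0.0

    def fit(self, years: list[float], hired: list[bool]) -> None:
        # Learn a threshold: midpoint between the mean experience of
        # positive and negative examples.
        pos = [y for y, h in zip(years, hired) if h]
        neg = [y for y, h in zip(years, hired) if not h]
        pos_mean = sum(pos) / max(len(pos), 1)
        neg_mean = sum(neg) / max(len(neg), 1)
        self.threshold = (pos_mean + neg_mean) / 2

    def predict(self, years_experience: float) -> bool:
        return years_experience >= self.threshold

model = LearnedScreener()
model.fit(years=[1, 2, 5, 6], hired=[False, False, True, True])
print(model.predict(4.0))  # output generated by inference from data, not a fixed rule
```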
The AI Act adopts a risk-based approach [3], categorizing AI systems into four risk levels: unacceptable [5] [12], high [12], limited [12], and minimal risk [12], each with distinct regulatory implications [5]. Systems classified as posing “unacceptable risk” will be banned from February 2, 2025 [12]. High-risk AI systems [1] [2] [3] [10] [11] [12], particularly those affecting safety or fundamental rights, face stringent scrutiny and must meet rigorous requirements, including thorough documentation [12], data accuracy [12], and transparency for traceability and accountability [12]. Businesses must classify high-risk AI into subcategories aligned with EU product safety legislation [5]. The Act prohibits workplace AI-based emotion recognition systems unless they are used for medical or safety purposes [1], with certain high-risk systems potentially qualifying for exemptions [1]. Other systems that assess employee feelings [1], such as those identifying overload or boredom [1], are expected to be banned from February 2, 2025 [1], due to unacceptable risks [1]. Further prohibitions also take effect on that date, including the use of AI for manipulation [9], untargeted facial image scraping [9], exploitation of vulnerable individuals [9], and detrimental categorization of people [9].
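The four-tier structure can be summarized as a simple taxonomy. The sketch below is a simplification for orientation only, not a classification tool; the mapping of example use cases to tiers reflects the examples discussed in this section plus one assumed minimal-risk example (a spam filter), and any real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited from 2 February 2025"
    HIGH = "strict requirements: documentation, data quality, transparency"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no specific obligations under the Act"

# Illustrative mapping of use cases named in this section to tiers.
# This is a simplification for intuition, not a classification tool.
EXAMPLE_CLASSIFICATION = {
    "workplace emotion recognition (no medical/safety purpose)": RiskTier.UNACCEPTABLE,
    "untargeted facial image scraping": RiskTier.UNACCEPTABLE,
    "CV screening / recruitment ranking": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter (assumed example)": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```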
Obligations under the Act apply to a range of stakeholders [2], including those developing [2] [3], commissioning [2] [3] [9] [10] [11] [12], or deploying AI systems in the EU [2], and the Act has broad extra-territorial applicability, meaning organizations in the UK and US must comply if their systems are marketed or used in the EU. Non-compliance can result in significant penalties [7]: fines for violating the ban on prohibited AI systems can reach up to €35 million or 7% of global annual revenue, whichever is higher [12]. Providers of General Purpose AI (GPAI) models [3] must maintain up-to-date technical documentation and disclose summaries of training data to authorities [3]; models trained with more than 10²⁵ FLOPs of computing power are presumed to pose systemic risk [7]. GPAI providers are required to prepare this documentation for submission to the AI Office and national authorities [6], establish a copyright policy [6], and publish a summary of training content [6], in particular covering the web crawlers used for training, in order to address copyright concerns [7]. If systemic risks are identified [3], providers must notify the EU Commission and undertake further risk assessments and cybersecurity measures [3].
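The two quantitative rules in this paragraph reduce to simple arithmetic, sketched below. The function names are invented for illustration; the figures (€35 million or 7% of worldwide annual revenue, whichever is higher, and the 10²⁵ FLOP presumption of systemic risk) are those described above.

```python
# Hedged sketch of the two quantitative rules mentioned above.

def max_fine_prohibited_practices(global_annual_revenue_eur: float) -> float:
    """Fine cap for prohibited-practice violations: the higher of
    EUR 35 million or 7% of global annual revenue."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """GPAI models trained with more than 1e25 FLOPs of cumulative
    compute are presumed to pose systemic risk."""
    return training_compute_flops > 1e25

print(f"{max_fine_prohibited_practices(2_000_000_000):,.0f}")  # 140,000,000 (7% exceeds the €35m floor)
print(presumed_systemic_risk(3e25))                            # True
```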
A draft Code of Practice for GPAI providers has been published [4], with feedback invited until November 28, 2024 [4]; it aims to guide compliance with the Act while allowing alternative compliance measures where its best practices are not followed. The AI Office is mandated to finalize these Codes of Practice by May 2, 2025 [9], and stakeholders are encouraged to contribute specific examples to aid in the creation of guidelines for consistent enforcement and compliance [9]. The Act does not apply to personal use of AI or to open-source systems unless they engage in prohibited practices or are classified as high-risk [2]. Exemptions exist for military [2], defense [2], national security [2] [5], and certain research uses [2]. High-risk AI systems placed on the market before August 2, 2026 [2], are largely exempt unless significant design changes occur [2]. Providers must inform users when they interact with an AI system [3] [12], unless this is clear to a reasonably informed person [10] [11]; an exception applies to AI systems authorized for law enforcement purposes, provided safeguards for third-party rights are in place [10] [11]. Organizations must also ensure sufficient AI literacy among staff [12], with requirements varying based on individuals’ technical knowledge and the context of AI use [3]; this literacy requirement applies from February 2, 2025 [3].
Certain AI practices are banned [2], including manipulative techniques and profiling based on personal characteristics [2]. Deployers of emotion recognition and biometric categorization systems must inform individuals about their operation and comply with GDPR when processing personal data [10] [11], with exceptions for law enforcement applications [10]. Providers of chatbots and digital assistants must ensure users are aware they are interacting with an AI system [11], unless it is evident to a reasonably well-informed person [10] [11]. For deep fakes [3] [11], deployers must disclose when content has been artificially generated or manipulated [11], with exceptions for lawful uses in criminal justice and artistic works [11], where disclosure must not interfere with the work’s enjoyment [10] [11]. Similarly [11], entities deploying AI to generate or manipulate text for public dissemination must disclose its artificial nature [11], except when authorized for criminal justice or when the content has undergone human editorial control [11].
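In practice, the transparency duties described above amount to attaching a disclosure to conversational outputs and labelling generated or manipulated content. The sketch below shows one minimal way an implementer might do this, under stated assumptions: the function names, message wording, and the `LabelledOutput` record are invented for illustration, and the Act prescribes the obligation, not this interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabelledOutput:
    text: str
    ai_generated: bool
    disclosure: Optional[str]

def chatbot_reply(user_message: str, obviously_ai: bool = False) -> LabelledOutput:
    """Attach an AI-interaction disclosure unless it would already be
    evident to a reasonably well-informed person."""
    reply = f"Echo: {user_message}"  # placeholder for a real model call
    disclosure = None if obviously_ai else "You are interacting with an AI system."
    return LabelledOutput(text=reply, ai_generated=True, disclosure=disclosure)

def label_generated_media(content_id: str, artistic_work: bool = False) -> str:
    """Disclose artificial generation or manipulation; for artistic works
    the disclosure should not interfere with enjoyment of the work, so a
    less intrusive notice (e.g. in credits or metadata) is assumed here."""
    if artistic_work:
        return f"{content_id}: AI-assisted (noted in credits/metadata)"
    return f"{content_id}: This content was generated or manipulated by AI."

print(chatbot_reply("What is the AI Act?").disclosure)
print(label_generated_media("video-042"))
```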
Providers of general-purpose AI models also have specific obligations [2], including documenting technical information for the AI Office and national authorities [6], ensuring compliance with copyright laws [2], and publicly sharing a detailed summary of training content [6]. Under the draft Code of Practice, providers are expected to implement a Safety and Security Framework (SSF) to manage risks proportionately to their systemic impact [7], including measures for data protection [7], access controls [7], and ongoing effectiveness evaluations [7]. Providers whose models pose systemic risks must notify the Commission [6], assess and mitigate these risks [6], conduct model evaluations [6], report serious incidents [6], and ensure cybersecurity [6].
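One way a compliance team might keep track of these GPAI obligations is as a simple structured record, as sketched below. The field names and the split between baseline and systemic-risk items are assumptions made for illustration; the Act specifies the content of the obligations, not a schema.

```python
from dataclasses import dataclass

@dataclass
class GpaiComplianceRecord:
    """Illustrative tracking record for a GPAI provider's obligations."""
    model_name: str
    technical_documentation_current: bool = False   # for the AI Office / national authorities
    copyright_policy_published: bool = False
    training_content_summary_published: bool = False
    systemic_risk: bool = False
    # Only relevant when the model is classified as posing systemic risk:
    risk_assessments_done: bool = False
    incident_reporting_in_place: bool = False
    cybersecurity_measures_in_place: bool = False

    def outstanding_items(self) -> list[str]:
        base = {
            "technical documentation": self.technical_documentation_current,
            "copyright policy": self.copyright_policy_published,
            "training content summary": self.training_content_summary_published,
        }
        if self.systemic_risk:
            base.update({
                "risk assessment and mitigation": self.risk_assessments_done,
                "serious incident reporting": self.incident_reporting_in_place,
                "cybersecurity measures": self.cybersecurity_measures_in_place,
            })
        return [name for name, done in base.items() if not done]

record = GpaiComplianceRecord("example-model", systemic_risk=True,
                              copyright_policy_published=True)
print(record.outstanding_items())
```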
The implementation of the Act will be enforced at the national level, with designated authorities conducting compliance investigations [2]. The European Artificial Intelligence Board will oversee the Act’s implementation across member states [12], with a panel of experts advising on enforcement [12]. The AI Office [3] [6] [8] [9] [12], created in January 2024 [3], is responsible for enforcing the obligations on general-purpose AI providers and supporting governance bodies in Member States [6], with the authority to request information [6], evaluate models [6], mandate risk mitigations [6], recall models [6], and impose significant fines for non-compliance [6]. Most provisions of the Act apply from August 2, 2026 [8] [12], but obligations concerning general-purpose AI models, including those on training data and copyright, take effect earlier, on August 2, 2025 [2] [3] [5] [6] [8] [12], with specific transitional provisions for models placed on the market before that date [6]. Remaining obligations for high-risk systems covered by specific EU product safety legislation apply by August 2, 2027 [12].
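Because the application dates are staggered, it can help to collect them in one place. The sketch below is a convenience lookup built only from the dates cited in this section; it is illustrative and not an exhaustive implementation calendar.

```python
from datetime import date

# Milestones as cited in this section (illustrative, not exhaustive).
AI_ACT_MILESTONES = {
    date(2024, 8, 1):  "Act enters into force",
    date(2025, 2, 2):  "Prohibited practices banned; AI literacy duty applies",
    date(2025, 8, 2):  "GPAI obligations apply (incl. training data and copyright)",
    date(2026, 8, 2):  "Majority of provisions apply",
    date(2027, 8, 2):  "Remaining high-risk obligations under EU product safety legislation apply",
}

today = date(2025, 1, 1)  # example reference date
for milestone, description in sorted(AI_ACT_MILESTONES.items()):
    status = "done" if milestone <= today else "upcoming"
    print(f"{milestone.isoformat()} [{status}] {description}")
```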
Organizations should begin compliance efforts now [2], as some may already meet certain requirements [2]. A comprehensive gap analysis is essential to identify compliance deficiencies [2]. Ongoing compliance will require monitoring guidance and adapting to evolving requirements [2]. Training personnel on the Act’s requirements and adopting “trustworthy AI” principles will be crucial for demonstrating a proactive approach to compliance [2]. The General-Purpose AI Code of Practice will outline compliance methods for providers [6], and adherence to this code can demonstrate compliance with the AI Act [6]. The finalized rules and codes of practice are due to be published by May 2, 2025 [7] [12], further supporting organizations in navigating the regulatory landscape. A template for training data policies will also be published to make drafting more efficient [8] and to promote transparency about copyright compliance and training data usage, helping to protect intellectual property rights [8].
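A gap analysis of the kind recommended above can start as a simple comparison between the controls an organization already has and those its systems require. The sketch below is a minimal, assumed model: the control names and their grouping by risk tier are illustrative and not an authoritative or exhaustive statement of the Act's requirements.

```python
# Minimal gap-analysis sketch: compare existing controls against an
# assumed, illustrative set of controls per risk tier.
REQUIRED_CONTROLS = {
    "high": {"risk management", "technical documentation", "data governance",
             "human oversight", "transparency to users", "staff AI literacy"},
    "limited": {"transparency to users", "staff AI literacy"},
    "minimal": {"staff AI literacy"},
}

def gap_analysis(risk_tier: str, existing_controls: set[str]) -> set[str]:
    """Return the controls still missing for a system of the given tier."""
    return REQUIRED_CONTROLS[risk_tier] - existing_controls

missing = gap_analysis("high", {"technical documentation", "staff AI literacy"})
print(sorted(missing))
```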
Additionally, the Act mandates that information regarding limited-risk AI systems must be clearly provided at the first user interaction [10] [11], considering the needs of vulnerable groups [10] [11]. The European Commission will review the list of limited-risk AI systems every four years and develop guidelines for the detection and labeling of artificially generated content [10] [11], focusing on the needs of small and medium enterprises and local authorities [10] [11]. Transparency obligations are essential for the effective implementation of the Digital Services Act [10] [11], particularly for very large online platforms and search engines [10] [11], which must manage systemic risks related to artificially generated content [11]. National authorities will oversee compliance with these transparency requirements [10], with non-compliance potentially resulting in significant administrative fines [10]. The transparency obligations for limited-risk AI systems will take effect from August 2, 2026 [10].
In a related initiative, Denmark has introduced a framework to assist EU member states in utilizing generative artificial intelligence in accordance with the EU AI Act [13]. This initiative [9] [13], led by IT consultancy Netcompany and supported by a coalition of major Danish corporations [13], includes a white paper titled “Responsible Use of AI Assistants in the Public and Private Sector.” The document outlines best practices for firms in deploying AI systems within a regulated environment and emphasizes the importance of delivering secure and reliable services to consumers [13]. This Danish white paper is intended to serve as a model for other countries and businesses aiming to navigate compliance with the EU AI Act effectively [13].
Conclusion
The EU AI Act is a landmark regulatory framework that sets a global precedent for AI governance. It imposes significant obligations on businesses, particularly those involved in high-risk AI applications, and extends its reach beyond EU borders. Organizations must proactively engage in compliance efforts, including conducting gap analyses, training personnel [2], and adhering to the forthcoming General-Purpose AI Code of Practice. The Act’s implementation will be closely monitored by national and EU authorities, with substantial penalties for non-compliance [2]. As the AI landscape continues to evolve, businesses must remain vigilant and adaptable to meet the Act’s requirements and ensure the responsible use of AI technologies.
References
[1] https://www.taylorwessing.com/en/interface/2024/ai-act-sector-focus/the-eu-ai-act-from-an-hr-perspective
[2] https://www.jdsupra.com/legalnews/eu-s-ai-act-ten-facts-for-organisations-7049640/
[3] https://www.lexology.com/library/detail.aspx?g=8b4ef464-4654-47d3-8e4d-f3349ff730c1
[4] https://finance.yahoo.com/news/eu-ai-act-draft-guidance-152616475.html
[5] https://www.softwareimprovementgroup.com/eu-ai-act-summary/
[6] https://digital-strategy.ec.europa.eu/en/faqs/general-purpose-ai-models-ai-act-questions-answers
[7] https://www.engadget.com/ai/the-eu-publishes-the-first-draft-of-regulatory-guidance-for-general-purpose-ai-models-223447394.html
[8] https://www.lexology.com/library/detail.aspx?g=b3f1fede-c2b6-4671-93e1-e4335a9371ed
[9] https://www.pinsentmasons.com/en-gb/out-law/news/eu-ai-act-guidelines-scope-prohibitions-come-early-2025
[10] https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-11-eu-ai-act-what-are-the-obligations-for-the-limited-risk-ai-systems
[11] https://www.jdsupra.com/legalnews/zooming-in-on-ai-11-eu-ai-act-what-are-3403383/
[12] https://www.gunder.com/en/news-insights/insights/client-insight-demystifying-the-eu-ai-act
[13] https://www.cnbc.com/2024/11/13/denmark-lays-out-eu-ai-act-compliance-blueprint-with-microsoft-backing.html