Introduction

The European Union’s AI Act, whose first prohibitions take effect on February 2, 2025, introduces a comprehensive regulatory framework for artificial intelligence systems [5]. This legislation categorizes AI systems based on risk levels and imposes stringent compliance requirements, particularly for high-risk applications [9]. The Act aims to safeguard fundamental rights and ensure the responsible use of AI technologies across various sectors, including healthcare and human resources.

Description

Enforcement of the EU AI Act’s prohibitions on unacceptable-risk AI will commence on February 2, 2025, following the Act’s entry into force on August 1, 2024. This comprehensive framework takes a risk-based approach, categorizing AI systems into four risk levels: unacceptable [2], high [2] [3], limited [2], and minimal [2]. Medical devices that incorporate AI will be classified as “high-risk systems,” necessitating stringent technical compliance measures because of their potential effects on patient health and safety [6]. Eight AI practices classified as posing an unacceptable risk [1], such as emotion recognition and predictive policing [2], will be prohibited under EU law [1]. These prohibitions present significant implementation challenges and complex obligations for businesses operating AI systems within the EU, including those based in the UK [3].
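The four-tier structure described above can be illustrated with a small sketch. The tier names come from the Act itself, but the example systems and the `is_prohibited` helper below are hypothetical illustrations, not legal classifications: determining a real system’s tier requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. predictive policing)
    HIGH = "high"                  # strict compliance obligations (e.g. AI medical devices)
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical examples only -- real classification requires legal analysis.
EXAMPLE_SYSTEMS = {
    "emotion-recognition in the workplace": RiskTier.UNACCEPTABLE,
    "AI-assisted diagnostic medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def is_prohibited(system: str) -> bool:
    """A system in the unacceptable tier may not be placed on the EU market."""
    return EXAMPLE_SYSTEMS.get(system) is RiskTier.UNACCEPTABLE
```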

The effectiveness of the AI Act hinges on the clarity and enforceability of its provisions [4]. Ambiguous language in definitions and prohibitions could create loopholes that undermine the law’s objective of safeguarding fundamental rights and upholding the rule of law [4]. Addressing these issues is crucial to ensure that all AI systems operating within Europe align with its core values.

High-risk AI activities, particularly in recruitment, HR [3] [8], and worker management [3] [6] [9], require compliance from organizations in these sectors. This classification introduces additional legislative requirements for HR departments [3]: AI systems that exhibit bias [3], misrepresent candidates [3], or analyze data inadequately can cause significant harm [3], including hiring errors [3], team reprimands [3], and damage to company culture [3], and can expose the organization to litigation and public relations challenges [3]. Organizations deploying AI in high-risk categories must adhere to stringent regulations [5], including mandates for data transparency [5], human oversight [2] [3] [5] [6], and performance monitoring [5] [6]. Compliance is mandatory [5], particularly in sectors where AI significantly influences decision-making [5].

Furthermore, providers of general-purpose AI models, which can perform complex tasks and may pose systemic risks, must implement enhanced cybersecurity measures and report serious incidents to the EU AI Office [2]. Organizations must also identify “shadow AI” (AI tools in use across various functions) to ensure compliance with stringent regulations concerning data usage, transparency [2] [3] [5] [8] [9], and risk management [3] [6] [9], particularly for high-risk AI systems [9]. As the Act’s requirements will be phased in over the coming years [7], organizations involved in the development [7], provision [4] [6] [7], or deployment of AI within the EU must prepare for compliance. National market surveillance authorities will oversee enforcement and report annually to the European Commission regarding any violations and corrective actions taken [1].

Organizations designated as AI providers will bear responsibility for compliance [3], even if they merely modify existing models with their own datasets [3]. Using purchased AI systems also imposes obligations such as purpose limitation [3], human oversight [2] [3] [5] [6], monitoring of input data [3], record-keeping [3], incident reporting [3], transparency [2] [3] [5] [8] [9], and bias mitigation [3]. Engaging in the prohibited practices can lead to substantial penalties [1], including administrative fines of up to €35 million or 7% of global annual revenue [1], whichever is greater [1]. In the case of medical devices, non-compliance can result in fines of up to 6% of global annual turnover [6], emphasizing the urgency for organizations to prepare [6].
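The “whichever is greater” penalty rule for prohibited practices reduces to simple arithmetic. The figures (€35 million, 7%) are those cited in [1]; the function name and example revenue below are illustrative.

```python
def max_fine_prohibited_practice(global_annual_revenue_eur: float) -> float:
    """Upper bound on the administrative fine for a prohibited-practice
    violation: EUR 35 million or 7% of global annual revenue,
    whichever is greater."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# For a company with EUR 1 billion in global annual revenue,
# 7% (EUR 70 million) exceeds the EUR 35 million floor:
print(max_fine_prohibited_practice(1_000_000_000))  # 70000000.0
```

The fixed floor means the percentage only governs the exposure of firms whose global annual revenue exceeds €500 million (since 7% of €500 million equals the €35 million floor).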

To navigate these requirements effectively, organizations should adopt a practical five-step compliance plan: raising board awareness, creating an AI inventory [8], assessing AI tools [8], reviewing contracts [8], and implementing transparency measures [8]. It is advisable for organizations to appoint a dedicated team to analyze the EU AI Act’s relevance to their operations and utilize tools like the EU compliance checker to assess product risk levels [7]. Conducting an inventory of AI products and systems that fall under the act’s scope is also recommended [7].
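The five-step plan above could be tracked with a minimal inventory structure. The schema below (field names, statuses, the example record) is a hypothetical convenience, not terminology defined by the Act.

```python
from dataclasses import dataclass, field

# The five compliance steps, in the order listed above.
STEPS = [
    "raise board awareness",
    "create an AI inventory",
    "assess AI tools",
    "review contracts",
    "implement transparency measures",
]

@dataclass
class AISystemRecord:
    """One row in an organization's AI inventory (hypothetical schema)."""
    name: str
    vendor: str
    use_case: str
    risk_tier: str = "unassessed"  # e.g. unacceptable / high / limited / minimal
    completed_steps: list = field(default_factory=list)

    def outstanding_steps(self) -> list:
        """Return the compliance steps not yet completed, in plan order."""
        return [s for s in STEPS if s not in self.completed_steps]

record = AISystemRecord("CV screener", "ExampleVendor", "recruitment shortlisting")
record.completed_steps.append("create an AI inventory")
```

Keeping per-system records like this also supports the recommended inventory of in-scope AI products [7] and helps surface “shadow AI” tools adopted outside central oversight.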

Understanding these obligations is essential for organizations to prepare adequately for the impending deadlines, especially given the complexities introduced by the relationship between the AI Act and the General Data Protection Regulation (GDPR), which may pose additional challenges for multinational entities. To manage uncertainty [3], organizations must conduct audits of existing AI systems, adopt conformity-by-design practices for new systems [3], and provide training on ethical AI usage [3]. Involving legal counsel in the compliance process is crucial to mitigate risks associated with non-compliance and to ensure adherence to the evolving regulatory landscape.

Despite the regulatory hurdles [5], the Act is generating new job opportunities in AI compliance and governance [5], with a rising demand for specialists in AI policy [5], risk analysis [5], and compliance roles [5]. The European AI Office is actively recruiting Legal and Policy Officers to enforce the AI Act [5], focusing on shaping AI policy [5], conducting audits [5], and ensuring adherence to regulations [5]. To achieve compliance [3] [5], businesses should invest in AI ethics training [5], establish dedicated governance teams [5], and develop internal risk assessment frameworks [5]. As AI innovation progresses [6], it is imperative for organizations [7], particularly in the healthcare and MedTech sectors, to align with regulatory developments [6], enhancing systems and fostering trust in AI-driven medical products [6].

Conclusion

The EU AI Act represents a significant shift in the regulatory landscape for artificial intelligence, emphasizing the need for compliance and ethical practices. Organizations must navigate complex requirements to avoid substantial penalties and ensure their AI systems align with European values. This legislation not only poses challenges but also creates opportunities for growth in AI governance and compliance roles, underscoring the importance of adapting to evolving regulations in the AI sector.

References

[1] https://gvzh.mt/insights/ai-new-eu-artificial-intelligence-act-prohibited-practices/
[2] https://www.lexology.com/library/detail.aspx?g=935d57fb-0d65-423c-950e-5b3d713e40ac
[3] https://www.personneltoday.com/hr/eu-ai-act-what-hr-needs-to-know/
[4] https://www.liberties.eu/en/stories/ai-paper-consultation/45277
[5] https://t3-consultants.com/2025/01/ai-regulations-explained-how-the-eu-ai-act-will-impact-businesses-and-jobs/
[6] https://www.mfmac.com/insights/healthcare-life-sciences/preparing-for-the-eu-artificial-intelligence-ai-act-key-considerations-for-the-medical-device-industry/
[7] https://about.citiprogram.org/blog/an-overview-of-the-eu-ai-act-what-you-need-to-know/
[8] https://www.jdsupra.com/legalnews/life-with-gdpr-navigating-the-eu-ai-ac-86792/
[9] https://natlawreview.com/article/5-trends-watch-2025-eu-data-privacy-cybersecurity