Introduction

The EU AI Act [2] [3] [5] [6] [8] [9] [10], which entered into force on August 1, 2024 [6], mandates that companies deploying high-risk artificial intelligence (AI) systems within the EU internal market adhere to a comprehensive set of obligations by August 2, 2026. This legislation categorizes AI systems into four risk levels and imposes stringent requirements on high-risk systems to ensure safety, trustworthiness [3], and ethical development while safeguarding fundamental rights and public trust [8].

Description

Companies deploying high-risk artificial intelligence (AI) systems must comply with a range of obligations by August 2, 2026 [5] [10], as set out in Regulation (EU) 2024/1689, known as the EU AI Act [3], which entered into force on August 1, 2024 [3]. The legislation establishes harmonized rules for the development [3], deployment [1] [2] [3] [4] [7] [8] [9], and use of AI systems within the EU internal market [3], categorizing them into four risk levels: unacceptable [8], high [5] [8] [10], limited [5] [8] [10], and minimal [5] [8] [10]. High-risk AI systems [1] [2] [3] [4] [5] [6] [7] [8] [9] [10], which include applications in biometrics [3], critical infrastructure [3], law enforcement [3] [4], education [3] [8], employment [3] [4] [7], essential services [3], migration [1] [3], and the administration of justice [3], are subject to stringent requirements aimed at ensuring their safety and trustworthiness [3]. The Act seeks to promote ethical AI development while safeguarding fundamental rights and public trust [8], and it introduces a transition period of two to three years for compliance, depending on the specific type of AI system.

As part of the compliance framework, each EU Member State is required to establish at least one operational regulatory AI sandbox by August 2, 2026 [1]. These sandboxes provide a controlled environment that fosters innovation [1] by supporting the development [1] [2] [3] [4] [7] [8] [9], training [1] [2] [5] [8] [10], testing [1] [4] [5] [7] [9] [10], and validation of AI systems under regulatory oversight prior to their market introduction [1]. Providers must agree on a sandbox plan with the competent authority, which will offer guidance, supervision [1] [7] [9], and support to identify potential risks [1]. Activities conducted within the sandbox must be documented, culminating in an exit report that outlines results and learning outcomes [1]; providers can use this report to demonstrate compliance during the conformity assessment process [1].

Providers of high-risk AI systems face stringent obligations [5] [10], including:

  1. Registration: Providers must register themselves and their systems in the EU database prior to market placement or service initiation [5] [10], ensuring transparency and access for authorities and the public.

  2. Quality Management System: A documented quality management system must be implemented and maintained to ensure compliance with established technical standards.

  3. Risk Management System: A continuous and iterative risk management process must be established to identify and mitigate known and foreseeable risks throughout the AI system’s lifecycle. This includes a strong emphasis on human oversight to enable intervention in critical decisions, particularly for vulnerable groups, such as individuals under 18, ensuring that decision-making processes are understandable and verifiable [8].

  4. Incident Reporting: Providers are required to report serious incidents to market surveillance authorities once a causal link is established, including any incidents that occur during testing in the sandbox.

  5. Data Governance & Quality: Robust data management practices must be implemented to prevent and mitigate biases, ensuring the use of high-quality [5], diverse [4], and representative data sets for training, validation [1] [2] [3] [10], and testing [1] [5] [10]. Standards for data governance must specifically address risks to health, safety [1] [2] [3] [4] [6] [7] [9] [10], and fundamental rights [1] [2] [3] [4] [6] [8] [9], encompassing system design [2], data collection [2], and privacy measures [2], while protecting data integrity and preventing unauthorized access [8]. An illustrative sketch of one such representativeness check appears after this list.

  6. Documentation & Recordkeeping: Technical documentation is essential for compliance evaluation by national authorities and must be produced before market placement. This documentation should detail the system’s development, training data [2] [5], monitoring processes [2] [4] [8], and cybersecurity measures [2] [5] [10]. Automated record-keeping throughout the lifecycle of high-risk AI systems is necessary for effective post-market surveillance [2]; a sketch of a simple append-only event log also follows this list.

  7. Transparency and Human Oversight: Extensive transparency obligations are mandated to facilitate effective human control, including safe use instructions and information on system accuracy [10], robustness [5] [10], and any associated risks to human health or safety [2]. High-risk systems must allow for human intervention while minimizing health and safety risks [2], with the degree of oversight corresponding to the system’s level of autonomy and context of use [2].

  8. Cybersecurity: AI systems must be designed to ensure appropriate accuracy, robustness [5] [10], and cybersecurity [2] [5] [10].
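
To make the data governance obligation (item 5 above) more concrete, the following is a minimal, illustrative Python sketch of a representativeness check over a training set. The attribute name "age_band", the reference shares, and the 5% tolerance are assumptions chosen for illustration only; the AI Act does not prescribe specific metrics or thresholds, and a provider would define these in its own data governance policy.

```python
# Illustrative only: a minimal representativeness check for a training set,
# assuming records carry a protected attribute such as "age_band".
# Thresholds and attribute names are hypothetical policy choices.
from collections import Counter

def representation_report(records, attribute, reference_shares, tolerance=0.05):
    """Compare the observed share of each group against a reference share.

    Returns (group, observed_share, reference_share, flagged) tuples, where
    flagged is True when the deviation exceeds `tolerance`.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        flagged = abs(observed - expected) > tolerance
        report.append((group, round(observed, 3), expected, flagged))
    return report

if __name__ == "__main__":
    training_records = [
        {"age_band": "18-34"}, {"age_band": "18-34"}, {"age_band": "35-54"},
        {"age_band": "35-54"}, {"age_band": "35-54"}, {"age_band": "55+"},
    ]
    # Reference shares (e.g. drawn from census data) are assumed for illustration.
    reference = {"18-34": 0.30, "35-54": 0.45, "55+": 0.25}
    for group, observed, expected, flagged in representation_report(
        training_records, "age_band", reference
    ):
        status = "REVIEW" if flagged else "ok"
        print(f"{group}: observed={observed}, reference={expected} -> {status}")
```

In practice such checks would run as part of dataset curation and be recorded in the technical documentation, but the metric and tolerance shown here are only one possible design choice.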

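The automated record-keeping described in item 6 can be illustrated with a minimal append-only event log. This is a sketch under assumed field names (model_version, input_ref, outcome) and an assumed file location; the records actually required are those specified by the Act and the system's technical documentation, not this particular format.

```python
# Illustrative only: a minimal append-only event log for a high-risk AI system,
# sketching automated record-keeping that could support post-market surveillance.
# Field names and the log location are assumptions, not terms defined by the Act.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_system_events.jsonl")  # hypothetical log location

def log_event(event_type, model_version, input_ref, outcome, operator=None):
    """Append one timestamped, uniquely identified event as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,      # e.g. "inference", "override", "incident"
        "model_version": model_version,
        "input_ref": input_ref,        # reference to the input data, not the data itself
        "outcome": outcome,
        "operator": operator,          # human reviewer involved, if any
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record["event_id"]

if __name__ == "__main__":
    log_event("inference", "credit-scoring-1.4.2", "application/2026-0001",
              outcome={"score": 0.62, "decision": "refer_to_human"})
```

An append-only, timestamped format of this kind makes it straightforward to reconstruct how the system behaved over its lifecycle, which is the point of the record-keeping obligation.
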
Importantly, these obligations for providers will take effect on August 2, 2026 [7], with the exception of systems already on the market or in service prior to that date [7]. The transition period is set at three years for systems integrated into products already subject to third-party conformity assessment and two years for other high-risk systems [9].

Deployers of high-risk AI systems [2] [4] [5] [6] [7] [10], including employers, also have their own set of obligations [10], such as:

  1. Use Compliance: Ensuring adherence to the provided instructions for high-risk AI systems through technical and organizational measures. Any deviation from these instructions [2], particularly in new use cases or system modifications [2], may reclassify the deployer as a provider [2], imposing additional regulatory obligations [2].

  2. Fundamental Rights Impact Assessment (FRIA): Certain deployers must conduct a FRIA prior to system use to evaluate potential impacts on fundamental rights.

  3. Human Oversight: Competent personnel must be assigned to exercise effective oversight of AI systems; an illustrative sketch of a simple oversight gate appears after this list.

  4. Data Quality: Ensuring that input data is relevant and representative to avoid bias.

  5. Documentation: Retaining logs generated by the AI system is mandatory for accountability and compliance.

  6. Incident Reporting: Deployers must inform providers and authorities of any serious incidents that occur during the use of high-risk AI systems; a sketch of a basic notification record also follows this list.

  7. Risk Management System: A continuous risk management process must be documented throughout the AI system’s lifecycle to ensure ongoing compliance and safety.

  8. Information Obligation for Employers: Employers must inform employees and representatives about the use of high-risk AI systems and their implications.
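
As referenced in item 3 above, the following is a minimal sketch of how a deployer might gate a high-risk system's outputs behind human review. The confidence threshold and the affects_minor flag are hypothetical policy choices made for illustration; the appropriate degree of oversight depends on the system's autonomy and context of use.

```python
# Illustrative only: a minimal human-oversight gate a deployer might place in
# front of a high-risk AI system's output. Threshold and flags are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float          # model output in [0, 1]
    confidence: float     # model's own confidence estimate
    affects_minor: bool   # the affected person is under 18

def route_decision(decision, confidence_threshold=0.85):
    """Return 'auto' only when the system may act without a human; otherwise 'human_review'."""
    if decision.affects_minor:
        return "human_review"    # vulnerable group: always escalate
    if decision.confidence < confidence_threshold:
        return "human_review"    # low confidence: escalate
    return "auto"

if __name__ == "__main__":
    for d in [
        Decision("A-001", score=0.91, confidence=0.95, affects_minor=False),
        Decision("A-002", score=0.40, confidence=0.70, affects_minor=False),
        Decision("A-003", score=0.88, confidence=0.96, affects_minor=True),
    ]:
        print(d.subject_id, "->", route_decision(d))
```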

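The notification duty in item 6 can likewise be illustrated with a simple, hypothetical record a deployer might assemble before informing the provider and the market surveillance authority. Field names and contact addresses are placeholders; the actual reporting formats and deadlines are those set by the Act and the national authorities, and the transmission channel (portal upload, email, or otherwise) is out of scope for this sketch.

```python
# Illustrative only: assembling a serious-incident notification a deployer could
# send to the provider and the market surveillance authority. All field names
# and contacts below are placeholders.
import json
from datetime import datetime, timezone

def build_incident_notification(system_name, incident_description,
                                causal_link_established, affected_persons,
                                provider_contact, authority_contact):
    """Return a JSON-serialisable notification record; sending it is not shown."""
    return {
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "system_name": system_name,
        "incident_description": incident_description,
        "causal_link_established": causal_link_established,
        "affected_persons": affected_persons,
        "recipients": [provider_contact, authority_contact],
    }

if __name__ == "__main__":
    notification = build_incident_notification(
        system_name="recruitment-screening-tool",  # hypothetical system
        incident_description="Systematic rejection of a protected group detected during weekly review.",
        causal_link_established=True,
        affected_persons=42,
        provider_contact="provider-incident-desk@example.com",
        authority_contact="market-surveillance@example.eu",
    )
    print(json.dumps(notification, indent=2))
```
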
Importers and distributors also have compliance obligations [10]. Importers must ensure conformity with the AI Act before market placement [10], verifying compliance with the conformity assessment procedure and CE marking requirements [5]. Distributors must likewise verify compliance before making a system available; if they identify risks to health [7], safety [1] [2] [3] [4] [6] [7] [9] [10], or fundamental rights related to high-risk AI systems, they are required to notify the provider and the relevant authorities.

It is essential for companies involved in the deployment or provision of high-risk AI systems to integrate these requirements into their AI strategy and governance frameworks [7]. Utilizing existing compliance structures [7], such as those related to documentation [7], transparency [2] [5] [6] [7] [8] [10], and risk assessments under the GDPR [7], will be crucial [7]. A comprehensive and cohesive approach to AI governance is necessary to ensure effective risk management while fostering innovation and entrepreneurship [7]. Organizations face a pressing timeline to comply with the EU AI Act [2], which underscores the need for a proactive approach to safeguarding human health [2], safety [1] [2] [3] [4] [6] [7] [9] [10], and fundamental rights amid the rapid advancement of AI technology [2]. Compliance should not compromise quality; entities can achieve both adherence and innovation with the appropriate resources [2].

Conclusion

The EU AI Act represents a significant regulatory framework aimed at ensuring the safe and ethical deployment of high-risk AI systems within the EU. By imposing rigorous compliance requirements, the Act seeks to protect fundamental rights and public trust while fostering innovation. Companies must proactively integrate these obligations into their AI strategies to navigate the evolving landscape of AI technology effectively. The Act underscores the importance of balancing compliance with innovation, ensuring that AI systems contribute positively to society without compromising safety or ethical standards.

References

[1] https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20241023-measures-in-support-of-innovation-in-the-european-unions-ai-act-ai-regulatory-sandboxes
[2] https://www.diligent.com/resources/blog/eu-ai-act-risk-categories
[3] https://eapil.org/2024/10/21/the-eu-ai-act-and-private-international-law-a-first-look/
[4] https://viso.ai/deep-learning/eu-ai-act/
[5] https://www.lexology.com/library/detail.aspx?g=375c21a7-635f-4394-b048-8d75c781d589
[6] https://www.matheson.com/insights/detail/the-eu-ai-act–the-download-for-employers
[7] https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-10-eu-ai-act-what-are-the-obligations-for-high-risk-ai-systems
[8] https://www.secureworld.io/industry-news/navigating-eu-ai-act-compliance
[9] https://ai-watch.ec.europa.eu/news/harmonised-standards-european-ai-act-2024-10-25_en
[10] https://www.jdsupra.com/legalnews/zooming-in-on-ai-10-eu-ai-act-what-are-8392524/