Introduction

The European Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024, is the world’s first comprehensive legal framework for AI [2]. It aims to harmonize AI regulations across EU member states [3], manage the risks AI poses [1], and foster innovation. The Act imposes obligations on actors throughout the AI value chain and applies to all AI systems placed on the market or deployed in the EU, reaching both EU-based and non-EU companies [2]. It adopts a risk-based approach [1] [3], categorizing AI systems into four risk levels with particular attention to high-risk uses. The Act also sets out penalties for non-compliance and must be read alongside other legal frameworks.

Description

On 1 August 2024 [2] [3], the European Artificial Intelligence Act (AI Act) came into force, establishing the world’s first comprehensive legal framework for AI [2]. The Act aims to harmonize AI regulations across EU member states while managing the risks associated with AI and fostering innovation. It imposes obligations on the actors in the AI value chain [2], including providers [2] [3] [5], deployers [1] [2] [3] [4] [6] [8], importers [2], and distributors [2] [3], which the Act refers to collectively as operators [2] [3]. The Act applies to all AI systems placed on the market or deployed in the EU [2], covering non-EU as well as EU-based companies whenever their systems are intended for use in the EU or affect the EU market or its citizens [2] [5].

The AI Act adopts a risk-based approach, categorizing AI systems into four levels: prohibited systems (unacceptable risk) [3]; high-risk systems, which are subject to conformity assessments, in some cases by third parties [3]; limited-risk systems, such as chatbots; and minimal- or no-risk systems, such as spam filters and online shopping recommendations [3]. High-risk sectors such as insurance and banking receive particular emphasis [5] [7], and the Act mandates Fundamental Rights Impact Assessments (FRIAs) for certain high-risk systems. Deployers of high-risk AI systems, including public entities and private organizations providing public services such as education [8], healthcare [6], and justice [6], must conduct a FRIA before first using such a system. This obligation [5] [6] [8], introduced by the European Parliament in June 2023 and set out in Article 27 of the AI Act [6], applies to systems listed under Annex III of the Act [8], with exceptions for safety components in critical infrastructure and for certain financial applications, such as systems used solely to detect financial fraud [8]. The FRIA must be updated if the underlying circumstances change [6].
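
The four-tier taxonomy can be summarized in a short sketch. The tier names and examples below are taken from this article; the enum and the example mapping are purely illustrative, since actual classification turns on Annex III and the Act’s legal definitions rather than labels like these.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk levels, as described above."""
    PROHIBITED = "unacceptable risk"   # banned outright
    HIGH = "high risk"                 # conformity assessment, FRIA, registration
    LIMITED = "limited risk"           # transparency duties, e.g. chatbots
    MINIMAL = "minimal or no risk"     # e.g. spam filters, shopping recommendations

# Illustrative mapping of use cases named in this article to tiers.
# Real classification depends on Annex III, not on keyword labels.
EXAMPLE_TIERS = {
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
    "online shopping recommendations": RiskTier.MINIMAL,
    "creditworthiness evaluation": RiskTier.HIGH,
    "insurance pricing": RiskTier.HIGH,
}
```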

High-risk AI systems must meet strict technical standards for security [3], transparency [1] [3] [5] [7], and accuracy [3]. The high-risk category includes safety components of products governed by EU product safety legislation as well as systems that require third-party conformity assessments [2]. These systems must be registered in an EU database [2], be supported by a quality management system [2] and technical documentation [2], and operate under effective human oversight [2]. The public database is intended to enhance transparency around high-risk AI systems [7].
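
As a rough illustration of how an organization might track these duties internally, here is a minimal compliance-record sketch. The field names are our own shorthand, not terms defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class HighRiskComplianceRecord:
    """Tracks, per system, the high-risk duties listed above."""
    system_name: str
    registered_in_eu_database: bool = False
    quality_management_system: bool = False
    technical_documentation: bool = False
    human_oversight_measures: bool = False

    def outstanding(self) -> list[str]:
        """Return the duties not yet satisfied for this system."""
        duties = {
            "EU database registration": self.registered_in_eu_database,
            "quality management system": self.quality_management_system,
            "technical documentation": self.technical_documentation,
            "effective human oversight": self.human_oversight_measures,
        }
        return [name for name, done in duties.items() if not done]

# Example: a freshly onboarded system has every duty outstanding.
print(HighRiskComplianceRecord("cv-screening").outstanding())
```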

The FRIA comprises three main sections: a descriptive section detailing the system’s intended purposes and the individuals affected; an assessment section evaluating the specific risks of harm to those individuals; and a mitigation section setting out risk-mitigation strategies [4], including human oversight and governance arrangements [4]. Deployers must notify the relevant market surveillance authority of the assessment results unless exempted [4]. Stakeholder consultation is essential during the FRIA process [4] [6] [8], involving representatives of affected groups [4] [8], independent experts [4] [8], and civil society organizations [4] [7]. The AI Office is expected to publish a template to assist deployers in conducting FRIAs, but it has not yet been released [8].
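
The three-part structure lends itself to a simple record type. A minimal sketch, assuming nothing beyond the section breakdown described above; the field names are ours, not the Act’s.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    # Descriptive section: what the system is for and who it affects.
    intended_purpose: str
    affected_individuals: list[str]
    # Assessment section: specific risks of harm to those individuals.
    risks_of_harm: list[str]
    # Mitigation section: strategies, human oversight, governance.
    mitigations: list[str]
    human_oversight: str
    governance_arrangements: str
    # Results generally go to the market surveillance authority, and the
    # assessment must be updated if underlying circumstances change.
    notified_authority: bool = False
```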

Deployers who have previously conducted a Data Protection Impact Assessment (DPIA) under the GDPR can leverage that assessment for the FRIA [4] [8], allowing for concurrent or integrated reporting [4] [6] [8]. While the DPIA focuses on risks related to personal data processing and the rights of data subjects, the FRIA addresses broader risks to the fundamental rights of all individuals affected by the AI system [4]. The scopes of the two assessments largely overlap [4], as high-risk AI systems often involve personal data processing [4].

Non-compliance with DPIA requirements can lead to significant fines under the GDPR [4] [6] [8], and if a DPIA reveals high residual risks [4] [8], the deployer must consult the supervisory authority before proceeding with the data processing [4] [8]. The AI Act has its own penalty mechanism: fines for unacceptable-risk violations can reach 35 million EUR or 7% of a company’s global annual turnover [2], other violations can incur fines of up to 15 million EUR or 3% of turnover [2], and providing misleading information can draw fines of up to 7.5 million EUR or 1% of turnover [2], with reduced penalties for SMEs based on their size and economic viability [2]. In contrast to the DPIA [8], the FRIA serves primarily as a documentation requirement: it cannot by itself prevent the deployment of a high-risk AI system [8], regardless of the risks identified [4]. The European Data Protection Board (EDPB) is developing guidelines to clarify the relationship between DPIAs and FRIAs [4] [6] [8] and their interplay within the broader regulatory framework [8].
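
The tiered fine caps reduce to simple arithmetic. Below is a sketch of the upper bound per tier, assuming the higher of the fixed amount and the turnover percentage applies to larger undertakings and the lower to SMEs, in line with the Act’s penalty provisions; the function and tier labels are illustrative.

```python
def fine_cap_eur(turnover_eur: float, tier: str, is_sme: bool = False) -> float:
    """Upper bound on an AI Act fine for a given violation tier.

    Tiers follow the figures in the text: prohibited-practice violations
    (35M EUR / 7%), other violations (15M EUR / 3%), and misleading
    information (7.5M EUR / 1%). For SMEs the lower of the two figures
    applies; otherwise the higher.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "other": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }
    fixed, pct = tiers[tier]
    turnover_based = pct * turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# Example: a company with 2 billion EUR global annual turnover committing a
# prohibited-practice violation faces a cap of max(35M, 140M) = 140M EUR.
print(fine_cap_eur(2_000_000_000, "prohibited"))  # 140000000.0
```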

By 2 August 2027 [2] [6], the obligations for high-risk AI systems that are safety components of products or that require third-party conformity assessments will take effect [2]. Because the AI Act is horizontal legislation, it must be interpreted alongside other legal frameworks [2], including the GDPR [2], intellectual property rules [2], the NIS-II Directive [2], the Cyber Resilience Act [2], the Digital Markets Act [2], and the Digital Services Act [2]. Concerns have been raised about unresolved technical details and significant loopholes in the Act [7], including exemptions for law enforcement [7], the potential export of banned technologies [7], and loosened restrictions on live facial recognition [7]. A coalition of 149 investors has called for further strengthening of the legislation [7], emphasizing the need for continuous human rights due diligence in the regulation of AI [7]. Civil society organizations have also criticized the initial proposal for failing to adequately address the risks AI poses in the context of migration [7].

The AI Act establishes specific requirements for high-risk and prohibited AI use cases for organizations operating within the EU [1], and may serve as a model for future regulatory developments in other jurisdictions [1], including Australia [1]. For high-risk systems [1] [2] [3] [4] [5] [6] [7] [8], obligations differ between providers and deployers [1]. Providers must inform users when they are interacting with AI and ensure that synthetic content is marked in a machine-readable way and identifiable as AI-generated [1], while deployers of synthetic content must disclose its artificial nature [1].
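
As a toy illustration of the disclosure duty for synthetic content, the sketch below attaches both a human-readable notice and a machine-readable marker to generated text. Real marking techniques (watermarking, provenance metadata) are still being standardized; this function and its metadata format are entirely hypothetical.

```python
import json

def label_synthetic_content(text: str) -> tuple[str, str]:
    """Pair generated text with a disclosure and a machine-readable marker.

    Returns the labeled text plus a JSON sidecar that downstream tools
    could inspect. Both formats are invented for illustration.
    """
    disclosure = "Notice: this content was generated by an AI system."
    metadata = json.dumps({"ai_generated": True, "generator": "example-model"})
    return f"{disclosure}\n\n{text}", metadata
```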

Risk managers should closely examine recruitment practices and any AI applications used to assess creditworthiness or price insurance [1], as these may be classified as high-risk and require additional compliance measures [1]. Organizations should clearly define their objectives for AI use [1], understand how AI fits within their operational framework [1], identify risks that could impede those objectives [1], and implement effective controls to mitigate them [1].

A model risk policy should delineate clear roles and responsibilities [1], ensuring accountability for the oversight and performance outcomes of each model [1]. It should define the intended beneficiaries of each model and ensure equitable outcomes for all affected parties [1]. Rigorous pre-deployment testing is crucial [1], with a focus on data quality [1], transparency [1] [3] [5] [7], and the need for safety features [1]. Deployment methods must be clarified [1], with all roles and responsibilities clearly defined during implementation [1]. Continuous monitoring of each model’s performance and its integration with real-time data sources is vital [1].
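
Such a policy often materializes as a model inventory. A minimal sketch of one inventory entry capturing the elements above; all field names are illustrative, not prescribed by any regulator.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One model inventory entry reflecting the policy elements above."""
    name: str
    owner: str                         # accountable for oversight and outcomes
    intended_beneficiaries: list[str]  # who the model is meant to serve
    pre_deployment_tests: list[str]    # e.g. data quality and safety checks
    deployment_method: str             # how the model reaches production
    monitoring: str                    # where live performance is tracked
```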

For each AI model [1], organizations should document the products and services it supports [1], any applicable regulatory classifications [1], and the minimum regulatory requirements it must meet [1]. AI model risk management should align with existing guidance [1], such as the Prudential Regulation Authority’s supervisory statement on model risk principles [1]. Developing control tests alongside control implementation can strengthen control design [1]. Key control indicators can provide early warning of control performance issues [1], and the results of controls testing should feed into reporting [1].
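
Key control indicators are straightforward to operationalize as threshold checks whose breaches feed into reporting. A minimal sketch; the indicator names and thresholds are invented for illustration.

```python
def kci_alerts(indicators: dict[str, float],
               thresholds: dict[str, float]) -> list[str]:
    """Flag key control indicators that breach their thresholds.

    KCIs provide early warning of control performance issues, as
    described above; breaches would be escalated in reporting.
    """
    return [
        name
        for name, value in indicators.items()
        if value > thresholds.get(name, float("inf"))
    ]

# Example: track the share of model outputs overridden by human reviewers.
alerts = kci_alerts(
    {"human_override_rate": 0.12, "data_drift_score": 0.03},
    {"human_override_rate": 0.10, "data_drift_score": 0.05},
)
print(alerts)  # ['human_override_rate']
```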

Beyond regulatory compliance [1], effective model governance and robust controls assurance are essential for organizations to use AI responsibly [1], maximize value creation [1], and manage the associated risks [1]. Data engineers play a vital role in ensuring compliance [5], conducting thorough risk assessments, maintaining documentation [5], and implementing continuous monitoring [1] [5]. Ongoing training and collaboration with data scientists [5], policymakers [5], and ethicists are essential for navigating the evolving regulatory landscape and promoting responsible AI development [5].

Conclusion

The European AI Act sets a precedent for AI regulation, impacting a wide range of stakeholders and requiring alignment with existing legal frameworks. It emphasizes risk management [1], transparency [1] [3] [5] [7], and accountability [1], particularly for high-risk AI systems [3] [4]. The Act’s implementation will likely influence global AI regulatory trends, prompting organizations to adopt robust compliance and governance practices. As AI technology continues to evolve, ongoing collaboration among industry, policymakers [5], and civil society will be crucial to ensure responsible and ethical AI development.

References

[1] https://www.protechtgroup.com/en-gb/blog/legislating-for-ai-why-the-eu-ai-act-matters-for-you
[2] https://kpmg.com/be/en/home/insights/2024/12/txl-the-ai-act.html
[3] https://verasafe.com/blog/an-introduction-to-the-eu-ai-act/
[4] https://www.aoshearman.com/insights/ao-shearman-on-tech/zooming-in-on-ai-13-eu-ai-act-focus-on-fundamental-rights-impact-assessment-for-high-risk-ai-systems
[5] https://snapanalytics.co.uk/4-key-eu-ai-act-insights-for-data-engineers/
[6] https://www.lexology.com/library/detail.aspx?g=291805ae-410a-451e-8779-f0d093ecd341
[7] https://www.business-humanrights.org/en/latest-news/eu-ai-act/
[8] https://www.jdsupra.com/legalnews/zooming-in-on-ai-13-eu-ai-act-focus-on-5618999/