Introduction

The European Union’s Artificial Intelligence Act (AI Act) establishes a comprehensive risk-based regulatory framework for AI systems, categorizing them into High Risk, Limited Risk, and Minimal Risk [1]. This framework aims to ensure transparency, accountability, and safety in AI applications, with specific obligations tailored to each risk category.

Description

The AI Act classifies AI systems into three categories: High Risk, Limited Risk, and Minimal Risk, each accompanied by requirements proportionate to the associated risk level [1]. Limited-risk AI systems, which include generative AI models, are subject to transparency obligations, particularly where they interact with individuals or generate content, because such systems may pose risks of impersonation or deception [1] [3] [4].

Providers of limited-risk AI systems, such as chatbots and digital assistants, must inform users that they are interacting with an AI, unless this is evident to a reasonably well-informed person from the context [3] [4]. AI systems authorized for law enforcement purposes are exempt from this requirement, provided that adequate safeguards for third-party rights are in place [3] [4]. Additionally, providers of AI systems that produce synthetic content, such as audio, images, video, or text, must ensure that the outputs are marked in a machine-readable format that clearly indicates their artificial origin [4].
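
The Act does not prescribe a specific marking technique; providers may rely on provenance standards such as C2PA content credentials, watermarking, or embedded metadata. As a minimal sketch only, the following Python snippet (using Pillow) embeds an illustrative provenance flag in a PNG file's text metadata; the field names ai_generated and generator are assumptions for the example, not terms defined by the Act.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Illustrative only: embed a machine-readable provenance flag in PNG
    # text metadata. The field names are hypothetical, not standardized.
    image = Image.new("RGB", (256, 256))          # stand-in for an AI-generated image
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")     # machine-readable marking
    metadata.add_text("generator", "example-model-v1")
    image.save("generated_labeled.png", pnginfo=metadata)

    # Confirm the marking can be read back by downstream tools.
    with Image.open("generated_labeled.png") as labeled:
        print(labeled.text.get("ai_generated"))   # -> "true"

Plain metadata of this kind can be stripped when content is re-encoded, which is one reason the Commission's forthcoming guidelines on detecting and labeling artificially generated content (discussed below) are relevant to providers.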

For deep fakes, that is, AI-generated or manipulated content, deployers must disclose the artificial nature of the content; content used for law enforcement purposes is exempt, while for artistic or creative works disclosure must still be made in an appropriate manner [3]. Similarly, entities generating or manipulating text for public dissemination must disclose its artificial origin, with exceptions for law enforcement uses and for content subject to human editorial control [3].

Emotion recognition and biometric categorization systems must inform the individuals exposed to them about their operation and must comply with the GDPR when processing personal data, with exceptions for law enforcement applications [3]. Information regarding limited-risk AI systems must be communicated clearly no later than the first user interaction, taking into account the needs of vulnerable groups [3]. Furthermore, certain AI practices, such as social scoring and particular uses of emotion recognition, are prohibited outright in order to mitigate harms such as algorithmic bias [2] [4].

High-risk AI systems, in contrast, are subject to more stringent requirements, including registration, risk assessment, and regular reporting [1] [3] [4]. Compliance is required throughout the entire supply chain, affecting not only primary AI system providers but all parties involved, including those integrating general-purpose AI and foundation models from third parties [2]. Minimal-risk systems are not bound by any obligations under the Act, although companies may voluntarily adopt additional codes of conduct [1].

The European Commission will review the list of limited-risk AI systems every four years and will develop guidelines for the detection and labeling of artificially generated content, with particular attention to the needs of small and medium-sized enterprises [3]. GDPR transparency requirements apply alongside AI Act obligations whenever personal data is processed [3].

These transparency obligations are also essential for the effective implementation of the Digital Services Act (DSA), particularly for large online platforms, which must manage risks associated with the dissemination of AI-generated content [3]. Non-compliance with the transparency requirements can lead to significant administrative fines, with penalties reaching up to €35 million or 7% of a company’s total worldwide annual turnover, depending on the type of infringement and the size of the company [2] [3]. Businesses operating in the EU must adhere to these regulations and may benefit from additional guidance, such as the National Cyber Security Centre’s recommendations for secure AI system development, in order to promote responsible software development practices [2].
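
To make the penalty ceiling concrete, the short sketch below computes the cap for the most severe infringement tier, assuming the commonly cited reading that the higher of the two amounts (the fixed sum or the turnover percentage) applies to large undertakings; the function name and this reading are illustrative assumptions rather than text quoted from the Act, and lower tiers and SME rules differ.

    # Illustrative sketch: upper bound of the fine for the most severe
    # infringement tier, assuming the higher of EUR 35 million or 7% of
    # worldwide annual turnover applies (lower tiers and SME rules differ).
    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        fixed_cap = 35_000_000
        turnover_cap = 0.07 * worldwide_annual_turnover_eur
        return max(fixed_cap, turnover_cap)

    # Example: for EUR 1 billion in worldwide turnover, the 7% cap (EUR 70M)
    # exceeds the fixed EUR 35M ceiling.
    print(max_fine_eur(1_000_000_000))  # 70000000.0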

Conclusion

The AI Act’s structured approach to categorizing AI systems by risk level ensures that transparency and accountability are prioritized, particularly for systems that could impact public trust and safety. By imposing specific obligations and potential penalties, the Act encourages responsible AI development and deployment, fostering a safer digital environment. The ongoing review and adaptation of guidelines by the European Commission further ensure that the regulatory framework remains relevant and effective in addressing emerging AI challenges.

References

[1] https://www.softwareimprovementgroup.com/eu-ai-act-summary/
[2] https://www.techradar.com/pro/the-eu-ai-act-what-do-cisos-need-to-know-to-strengthen-ai-security
[3] https://www.jdsupra.com/legalnews/zooming-in-on-ai-11-eu-ai-act-what-are-3403383/
[4] https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-11-eu-ai-act-what-are-the-obligations-for-the-limited-risk-ai-systems