Introduction

The EU AI Act, which entered into force in August 2024, establishes a risk-based framework that categorizes AI applications as prohibited, high-risk, or low-risk [1] [2] [3]. For low-risk systems, the Act's central obligation is transparency. LatticeFlow's Compl-AI framework offers an open-source tool for assessing AI models' adherence to these regulations, highlighting areas for improvement in fairness, diversity, and resilience [2].

Description

Low-risk AI systems are those that are neither prohibited nor classified as high-risk under the EU AI Act, which came into effect in August 2024 [1] [2]. The Act's risk-based framework requires developers to align their models with regulatory standards [2]. For low-risk systems, the primary obligation is transparency, which takes specific forms depending on the scenario [1] [3].

When an AI system interacts directly with individuals, the provider must inform them that they are engaging with an AI system [1] [3]. Individuals exposed to emotion-recognition or biometric-categorization systems must likewise be notified [1] [3]. Providers of generative AI systems, including general-purpose models that produce synthetic audio, images, video, or text, must label the output in a machine-readable format indicating that it has been artificially generated or manipulated [1] [3]. The same obligation extends to deployers of AI systems that create or edit deep fakes, who must disclose the artificial nature of the content [1]. A “deep fake” is AI-generated or manipulated media that resembles real entities or events and could mislead viewers about its authenticity [1] [3].
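
The Act does not prescribe a particular labeling format. As a minimal sketch, assuming a simple JSON wrapper (the field names and the `label_synthetic_content` helper below are hypothetical, not a schema mandated by the Act or used by any specific vendor), a provider might attach a machine-readable provenance marker like this:

```python
import json
from datetime import datetime, timezone

def label_synthetic_content(payload: bytes, model_name: str) -> dict:
    """Wrap generated content in an illustrative machine-readable
    provenance label. The EU AI Act requires machine-readable marking
    but does not fix a schema; these fields are assumptions."""
    return {
        "content": payload.decode("utf-8", errors="replace"),
        "provenance": {
            "ai_generated": True,          # discloses artificial origin
            "generator": model_name,       # which model produced it
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    labeled = label_synthetic_content(b"A synthetic news blurb.", "demo-model")
    print(json.dumps(labeled, indent=2))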

Furthermore, AI-generated or AI-edited text published to inform the public on significant matters must be disclosed as such, unless the content has undergone human review as part of a human-controlled editorial process prior to publication [1] [3]. Regulatory oversight of AI systems based on general-purpose models falls to the European Commission's AI Office, while national regulators supervise all other AI systems [1]. Individuals may file complaints with national authorities, although the Act does not provide for individual damages [1].

Non-compliance with the EU AI Act can result in substantial fines: violations involving prohibited AI systems carry penalties of up to 7% of annual global revenue or 35 million euros, whichever is greater; most other violations carry fines of up to 3% of annual global revenue or 15 million euros; and providing false or misleading information to EU authorities can draw fines of up to 1% of annual global revenue or 7.5 million euros [1].
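
Since each penalty tier pairs a revenue percentage with a fixed amount, the maximum exposure reduces to a simple comparison. A minimal sketch follows; the tier table mirrors the figures above, and note that the source states “whichever is greater” explicitly only for the top tier, so applying the same rule to the other tiers is an assumption of this sketch:

```python
# Illustrative only: the three penalty tiers described above, as
# (revenue_fraction, fixed_amount_eur) pairs.
FINE_TIERS = {
    "prohibited_system": (0.07, 35_000_000),   # 7% or EUR 35M
    "other_violation":   (0.03, 15_000_000),   # 3% or EUR 15M
    "false_information": (0.01, 7_500_000),    # 1% or EUR 7.5M
}

def max_fine_eur(tier: str, annual_global_revenue_eur: float) -> float:
    """Return the maximum fine: the greater of the revenue percentage
    and the fixed amount for the given tier (assumed for all tiers)."""
    pct, fixed = FINE_TIERS[tier]
    return max(pct * annual_global_revenue_eur, fixed)

# Example: EUR 2B global revenue, prohibited-system violation.
print(max_fine_eur("prohibited_system", 2_000_000_000))  # 140000000.0
```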

In addition to these regulatory requirements, LatticeFlow has developed Compl-AI, a pioneering open-source framework for evaluating AI models' compliance with the EU AI Act [2]. Compl-AI maps regulatory requirements to concrete technical measures and provides a benchmarking suite for large language models (LLMs), enabling developers to assess their technology's compliance [2]. Its evaluations have highlighted deficiencies in fairness, diversity, and resilience against cyberattacks, particularly in models with fewer parameters [2].
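
The requirement-to-measure mapping can be pictured as a lookup from legal principles to benchmark identifiers. The sketch below is hypothetical; the principle names and benchmark identifiers are illustrative inventions for this article, not Compl-AI's actual schema:

```python
# Hypothetical mapping from EU AI Act principles to technical benchmarks,
# in the spirit of the requirement-to-measure mapping described above.
# All names here are illustrative, not the framework's real identifiers.
REQUIREMENT_TO_BENCHMARKS: dict[str, list[str]] = {
    "transparency":           ["self_disclosure", "watermark_detection"],
    "fairness_and_diversity": ["demographic_bias", "representation_gap"],
    "cyberattack_resilience": ["prompt_injection", "jailbreak_resistance"],
}

def benchmarks_for(requirement: str) -> list[str]:
    """Look up the technical benchmarks that evidence a legal requirement."""
    return REQUIREMENT_TO_BENCHMARKS.get(requirement, [])

print(benchmarks_for("transparency"))  # ['self_disclosure', 'watermark_detection']
```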

The Compl-AI framework evaluates LLMs across 27 categories, including harmful instructions, truthfulness, and reasoning, producing per-category scores that reflect each model's strengths and weaknesses [2]. Many models perform inconsistently, particularly on general knowledge and reasoning, and fairness metrics score below 50% [2]. The developers of Compl-AI suggest that the industry's focus on enhancing model capabilities has overshadowed compliance with regulatory requirements [2]. The framework also exposes significant performance gaps in cyberattack resilience and fairness, and current benchmarks for privacy and copyright compliance remain limited, indicating a need for broader evaluation methods [2]. Compl-AI is intended to evolve alongside updates to the EU AI Act, serving as a starting point for monitoring and improving AI model compliance [2].
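
A harness consuming such per-category scores might average them and flag weak areas. A minimal sketch, assuming scores normalized to [0, 1], example category names, and an illustrative 0.5 threshold that echoes the sub-50% fairness finding above:

```python
# Illustrative per-category compliance scores in [0, 1]; the category
# names and values are assumptions for this sketch, not real results.
scores = {
    "harmful_instructions": 0.92,
    "truthfulness": 0.71,
    "reasoning": 0.58,
    "fairness": 0.46,
}

THRESHOLD = 0.5  # illustrative flag level, echoing the 50% fairness finding

def summarize(scores: dict[str, float], threshold: float) -> tuple[float, list[str]]:
    """Return the mean score and the categories falling below the threshold."""
    mean = sum(scores.values()) / len(scores)
    weak = sorted(name for name, s in scores.items() if s < threshold)
    return mean, weak

mean, weak = summarize(scores, THRESHOLD)
print(f"mean={mean:.2f}, below threshold: {weak}")  # flags 'fairness'
```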

LatticeFlow advocates the adoption and expansion of the Compl-AI framework within the AI community; it is freely available and open source [2].

Conclusion

The EU AI Act’s implementation underscores the importance of transparency and compliance in AI systems, with significant penalties for non-compliance. LatticeFlow’s Compl-AI framework provides a valuable resource for developers to ensure adherence to these regulations, highlighting critical areas for improvement. As AI technology continues to evolve, ongoing assessment and adaptation of compliance tools like Compl-AI will be essential in maintaining ethical and responsible AI development.

References

[1] https://www.jdsupra.com/legalnews/the-eu-ai-act-part-four-low-risk-ai-5301774/
[2] https://www.allaboutai.com/ai-news/new-llm-framework-benchmarks-ai-compliance-with-eu-ai-act/
[3] https://www.cyberlawmonitor.com/2024/10/16/the-eu-ai-act-part-four-low-risk-ai-systems-and-enforcement/