Introduction

The AI Act (Regulation 2024/1689) is a comprehensive legislative framework established by the European Union to regulate the development, deployment [1] [3] [6], and use of artificial intelligence systems [6]. It aims to create harmonized rules across the EU, addressing various risks associated with AI technology [6]. The Act categorizes AI systems into different risk levels and sets forth obligations for providers, with a focus on ensuring safety, transparency [1] [6], and adherence to fundamental rights [1].

Description

The Commission has published a report analyzing stakeholder feedback from two public consultations regarding the AI Act (Regulation 2024/1689) [4], which entered into force on August 1, 2024 [3], with its first obligations applying from February 2, 2025. This comprehensive regulation aims to establish harmonized rules for the development [6], deployment [1] [3] [6], and use of artificial intelligence systems within the EU [6], addressing the various risks associated with AI technology [6]. The consultations focused on defining AI systems and identifying prohibited AI practices [4]. The report, prepared by the Centre for European Policy Studies for the EU AI Office [4], includes non-binding guidelines aimed at assisting providers and stakeholders in applying the AI Act effectively [4].

The analysis reveals that industry stakeholders made up the majority of responses [4], while citizen participation was limited [4]. Respondents emphasized the need for clearer definitions of technical terms like “adaptiveness” and “autonomy” to avoid inadvertently regulating conventional software [4]. An AI system is defined as a machine-based system designed to operate with varying levels of autonomy [3], capable of inferring outputs from inputs to achieve specific objectives [3]. This definition covers both the pre-deployment and post-deployment phases [3] and is deliberately flexible enough to accommodate evolving technology [3]. Autonomy indicates a system’s ability to function independently of human control [3], while adaptiveness allows an AI system to modify its behavior based on new data or experiences [3].

The guidelines on prohibited practices address issues such as harmful manipulation [4], social scoring [1] [4], and real-time remote biometric identification (RBI) in public areas for law enforcement [5]. Unacceptable risk systems [1], including those for social scoring and biometric identification [1], are prohibited under the AI Act. The use of RBI systems is governed by Article 5(1)(h) of the AI Act, with exceptions based on public interest determined by member states through local legislation [5]. The guidelines provide legal clarifications and practical examples to aid compliance [4], including the importance of conducting Fundamental Rights Impact Assessments (FRIAs) to evaluate the potential impact of high-risk AI systems on fundamental rights [5].

Significant concerns were raised regarding prohibited practices [4], with stakeholders requesting concrete examples of what is considered prohibited [4]. The guidelines on the definition of AI systems are intended to help providers determine whether a software system qualifies as an AI system [4], thereby facilitating compliance with the regulations [4]. Notably, not all AI systems are subject to regulatory obligations under the AI Act; only those classified as high-risk or prohibited face legal requirements [3].

Obliged entities must implement a risk management system to identify and mitigate risks [1], including conducting FRIAs for high-risk systems, especially those used by public authorities [1]. Effective human oversight is essential [1], requiring clear user instructions and the ability for operators to understand system outputs [1], emphasizing explainability and transparency [1]. Before marketing high-risk AI systems [1], entities must register them in the EU database and affix a CE marking to indicate compliance with EU standards [1]. Continuous monitoring mechanisms are necessary to assess deployed AI systems’ performance [1], with obligations to report issues or non-compliance to national authorities [1].
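The obligations above follow a rough sequence: pre-market steps (risk management, FRIA, oversight documentation, registration, CE marking) followed by ongoing monitoring. As a minimal sketch, these steps can be modeled as a simple checklist; the class and field names are my own, chosen to mirror the steps listed, and this is not an exhaustive reading of the Act:

```python
from dataclasses import dataclass


@dataclass
class HighRiskComplianceChecklist:
    """Illustrative checklist of the provider obligations summarized above
    (a simplified sketch, not a legal interpretation of the AI Act)."""
    risk_management_system: bool = False      # identify and mitigate risks
    fria_completed: bool = False              # Fundamental Rights Impact Assessment
    human_oversight_documented: bool = False  # clear instructions, explainable outputs
    eu_database_registered: bool = False      # registration before marketing
    ce_marking_affixed: bool = False          # indicates conformity with EU standards
    post_market_monitoring: bool = False      # continuous performance assessment

    def ready_for_market(self) -> bool:
        # Pre-market steps only; monitoring is an ongoing post-deployment duty.
        return all([
            self.risk_management_system,
            self.fria_completed,
            self.human_oversight_documented,
            self.eu_database_registered,
            self.ce_marking_affixed,
        ])
```

A checklist like this is only a starting point; the actual conformity-assessment procedure depends on the system's Annex classification and applicable harmonized standards.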

In addition, the AI Office is preparing a new voluntary code of practice for the industry, aimed at providing guidelines for the riskiest AI models [7], which is expected to be released shortly [7]. This code will clarify compliance requirements under the AI Act for providers of general-purpose AI systems such as ChatGPT [7]. General-purpose AI (GPAI) models and systems will be governed by a tailored regulatory framework under the AI Act [2], with provisions coming into effect on August 2, 2025 [2]. A consultation is currently open until May 22, 2025, to prepare non-binding guidelines aimed at clarifying the obligations of GPAI model providers [2]. Both sets of guidelines and the forthcoming code are expected to evolve in response to practical experiences and emerging use cases, emphasizing the need for safeguards and conditions when applying exemptions to prohibited practices [5]. Providers are also required to monitor and update their AI systems continuously [1], taking appropriate measures if misuse is detected [1]. While the ban on certain AI practices took effect on February 2, 2025 [1], penalties for breaches of Article 5 will be enforced starting August 2, 2025 [1], with fines potentially reaching €35 million or 7% of total worldwide annual turnover [1].
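The Article 5 penalty ceiling cited above is the greater of the two figures: €35 million or 7% of total worldwide annual turnover. A minimal sketch of that arithmetic (the function name and turnover figures are illustrative):

```python
def max_article5_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a fine for breaching Article 5 of the AI Act:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)


# For a firm with EUR 1 billion turnover, 7% (EUR 70 million) exceeds the flat cap.
assert max_article5_fine(1_000_000_000) == 70_000_000.0
# For a smaller firm, the EUR 35 million figure is the binding ceiling.
assert max_article5_fine(100_000_000) == 35_000_000.0
```

The turnover-based ceiling therefore dominates only for firms whose worldwide annual turnover exceeds €500 million.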

The AI Act aims to establish uniform regulations for AI technology within the EU [1], categorizing systems into four risk levels (unacceptable [1], high [1], limited [1] [4], or minimal) to ensure safe introduction and promote consumer trust [1]. The Act applies to both EU entities and non-EU providers whose activities impact the EU [1], necessitating the appointment of authorized representatives for non-EU entities [1]. Obligations for general-purpose AI models will be enforced from August 2, 2025 [1]. Financial institutions utilizing AI for credit scoring and insurance risk assessment must adhere to strict requirements regarding data quality [1], documentation [1], transparency [1] [6], human oversight [1], and cybersecurity [1]. They must also consider the interaction of the AI Act with other regulatory frameworks [1], such as anti-money laundering directives [1].
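The four-tier model above can be summarized as a simple mapping from risk tier to regulatory consequence. The example use cases below are illustrative assumptions on my part (apart from social scoring and credit scoring, which the text above discusses), not classifications taken from the Act:

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited under Article 5"
    HIGH = "strict obligations (risk management, registration, CE marking)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"


# Simplified illustration only; real classification depends on the Act's
# annexes and the Commission's guidelines.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring of natural persons": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}
```

In practice, classification is a legal determination, not a lookup: the same underlying model can fall into different tiers depending on its intended purpose and deployment context.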

The AI Act emphasizes the promotion of trustworthy AI and adherence to fundamental rights and ethical principles [1], impacting AI technology used for risk management [1], fraud prevention [1], and anti-money laundering [1]. Organizations must monitor regulatory developments and assess how these laws interact with frameworks aimed at preventing financial crime [1]. The AI Act represents a significant advancement in promoting responsible AI use [1]. By embracing trustworthy AI principles and ensuring compliance [1], businesses can adapt to regulatory changes and leverage AI ethically to enhance efficiencies in risk screening [1], fraud prevention [1], and anti-money laundering efforts [1].

Conclusion

The AI Act represents a significant step forward in the regulation of artificial intelligence within the European Union. By establishing a comprehensive framework that categorizes AI systems based on risk and sets forth clear obligations for providers, the Act aims to ensure the safe and ethical use of AI technologies. It emphasizes transparency, human oversight [1], and the protection of fundamental rights, thereby fostering consumer trust and promoting responsible AI innovation. As the AI landscape continues to evolve, the Act’s guidelines and codes of practice will adapt to address emerging challenges and opportunities, ensuring that AI technologies are used in a manner that benefits society as a whole.

References

[1] https://www.moodys.com/web/en/us/insights/ai/ai-governance-navigating-eu-compliance-standards.html
[2] https://www.cullen-international.com/news/2025/05/-INFOGRAPHIC–EU-AI-Act-cheat-sheet–general-purpose-AI-models.html
[3] https://natlawreview.com/article/understanding-scope-artificial-intelligence-ai-system-definition-key-insights
[4] https://digital-strategy.ec.europa.eu/en/library/european-commission-releases-analysis-stakeholder-feedback-ai-definitions-and-prohibited-practices
[5] https://natlawreview.com/article/european-commissions-guidance-prohibited-ai-practices-unraveling-ai-act
[6] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5196856
[7] https://www.politico.eu/article/gpai-code-of-practice-to-come-in-weeks-ai-office-says/