Introduction

On February 4, 2025, the European Commission released draft guidelines to facilitate the implementation of EU Regulation 2024/1689, known as the Artificial Intelligence Act [3]. The Act is the first comprehensive framework for AI governance in Europe and takes a risk-based approach, categorizing AI systems from minimal to unacceptable risk [2]. Although non-binding and pending formal adoption, the guidelines aim to clarify legal concepts and offer practical use cases, promoting consistent and effective application across the European Union [8].

Description

On February 4, 2025, the European Commission published draft guidelines integral to the implementation of EU Regulation 2024/1689, known as the Artificial Intelligence Act [3]. This regulation establishes the first comprehensive framework for AI governance in Europe, adopting a risk-based approach that categorizes AI systems from minimal to unacceptable risk [2]. While the guidelines are non-binding and pending formal adoption [8], they aim to clarify legal concepts and provide practical use cases informed by stakeholder input [6], thereby promoting consistent and effective application of the Act across the European Union [7]. The Act's initial provisions, covering the definition of AI systems, AI literacy obligations, and a limited set of prohibited AI applications deemed to pose unacceptable risks, became effective on February 2, 2025 [3] [4] [5].

The AI Act sorts AI systems into several risk levels, including prohibited systems, high-risk systems, and systems subject to transparency obligations, with the aim of fostering innovation while safeguarding health, safety, and fundamental rights [2] [4] [7] [9]. Article 5 of the AI Act specifically prohibits practices that are incompatible with EU fundamental principles: harmful manipulation, social scoring, exploitation of vulnerabilities, predicting an individual's risk of committing a crime based solely on profiling, and real-time remote biometric identification [2] [7] [8]. This prohibition applies to both providers and users of such systems [3]. The guidelines offer concrete examples of prohibited practices, such as AI systems that infer employees' emotions or assess an individual's risk of committing a crime, and AI-enabled "dark patterns" that manipulate people into actions they would not otherwise take [3] [5]. These clarifications reflect the EU's commitment to fostering a safe and ethical AI environment.

The Act bans AI systems that score individuals' social behavior and then apply that score in unrelated contexts, for example setting insurance premiums or assessing creditworthiness based on unrelated personal characteristics [3]. AI-enabled scoring remains permitted where it offers privileges to online shoppers based on their purchase history, and individual user ratings are likewise exempt from the prohibition [3].

The use of subliminal techniques, or the exploitation of individual vulnerabilities, to distort behavior is prohibited [3]. Examples include AI in games that encourages excessive play among children and scams targeting older individuals [3]. The ban does not extend to AI systems that neither manipulate users nor cause significant harm [3].

The Act prohibits building facial recognition databases through untargeted scraping of images from the internet or CCTV footage, although scraping non-facial data remains allowed [3]. Facial image databases not used to recognize persons, such as those assembled for AI model training, are also exempt [3].

Emotion recognition in workplaces and educational settings is generally banned, including systems that track employee emotions or assess student engagement; emotion recognition for medical and safety purposes remains permitted [3]. Additionally, categorizing individuals by sensitive attributes on the basis of biometric data is forbidden, particularly for targeting political messages or advertisements, though the guidelines clarify that technical categorization necessary for commercial services falls outside this ban [3].

To mitigate the risks associated with prohibited AI practices, companies should proactively identify, assess, and document the AI systems used in their operations, particularly those that may fall under the prohibited categories [3] [5] [9]. Given the ambiguity of these prohibitions, it is crucial for companies to record their rationale for concluding that a specific AI use does not violate Article 5 [9]. The same assessment can also surface high-risk AI systems and systems subject to transparency obligations under Article 50 of the AI Act [9].
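As an illustration only, the inventory-and-assessment step described above could be modeled as a simple internal register. The class, field, and function names below are hypothetical and are not prescribed by the Act or the guidelines; this is a sketch of one way to keep the documentation machine-checkable:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"       # Article 5 practices
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency"   # Article 50 obligations
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    category: RiskCategory
    # Documented reasoning that this use does not violate Article 5.
    article5_rationale: str = ""

def needs_escalation(register: list[AISystemRecord]) -> list[AISystemRecord]:
    """Flag systems requiring legal review: prohibited uses, or any
    record lacking a documented Article 5 rationale."""
    return [r for r in register
            if r.category is RiskCategory.PROHIBITED or not r.article5_rationale]
```

Under this sketch, an emotion-tracking HR tool would be flagged for review, while a chatbot with a recorded Article 5 rationale would not.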

Companies developing or deploying AI systems must evaluate them thoroughly for compliance with these prohibitions, integrating organizational and technical measures from the design phase onward, including preliminary impact assessments that cover both data protection and the additional risk dimensions identified by the AI Act [2]. Providers are responsible for ensuring their systems are not likely to be used for prohibited purposes and must implement safeguards against foreseeable misuse [3]. They are expected to exclude prohibited practices explicitly in their terms and to provide clear usage instructions [3]. Compliance is continuous, requiring ongoing monitoring and updates to AI systems; a provider that becomes aware of misuse for prohibited purposes is expected to take appropriate corrective measures [3].

Violations of the Act’s rules on prohibited use cases can result in significant penalties, reaching up to 7% of global annual turnover or €35 million, whichever is higher [1] [5]. From February 2, 2025, violations may also expose companies to civil, administrative, or criminal liability under other EU or Member State laws, such as product liability or general tort law [3] [9]. In addition, processing personal data in the context of prohibited AI practices may violate the GDPR, as EU data protection authorities have argued [9].

While the guidelines convey the Commission’s interpretation of the prohibitions, authoritative interpretation is reserved for the Court of Justice of the European Union (CJEU) [7]. The Commission plans to review the guidelines in light of practical implementation experience, enforcement actions by market surveillance authorities, and CJEU rulings, updating them as necessary to ensure compliance with the AI Act [6] [8]. EU Member States are required to designate oversight bodies by August 2, 2025 [1].

Businesses are encouraged to adopt AI governance frameworks aligned with the AI Act’s requirements, implement continuous risk assessments, invest in personnel training, and establish documentation mechanisms for compliance [2]. The guidelines serve as a reference for adapting to the new rules, underscoring the need to balance technological innovation with the protection of fundamental rights in the use of AI systems [2].
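As a minimal numeric illustration of the "whichever is higher" penalty ceiling described above (a sketch only; actual fines depend on further factors set out in the Act):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice fines: the higher of
    7% of global annual turnover or EUR 35 million."""
    return max(0.07 * global_annual_turnover_eur, 35_000_000.0)

# A firm with EUR 1 billion turnover faces a ceiling of EUR 70 million;
# below EUR 500 million turnover, the EUR 35 million floor dominates.
```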

Conclusion

The introduction of the AI Act and its accompanying guidelines marks a significant step in establishing a structured approach to AI governance in Europe. By categorizing AI systems based on risk and setting clear prohibitions, the Act aims to protect fundamental rights while fostering innovation. Companies are urged to align their practices with these regulations to avoid severe penalties and ensure ethical AI usage. The ongoing review and adaptation of these guidelines will be crucial in maintaining their relevance and effectiveness in the rapidly evolving AI landscape.

References

[1] https://techcrunch.com/2025/02/04/eu-puts-out-guidance-on-uses-of-ai-that-are-banned-under-its-ai-act/
[2] https://www.lexology.com/library/detail.aspx?g=05cf5dcd-2d51-4dfe-aaf8-c35f958c9e2a
[3] https://www.jdsupra.com/legalnews/eu-commission-issues-guidelines-on-8943884/
[4] https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application
[5] https://www.siliconrepublic.com/machines/ai-act-eu-guidelines-prohibition
[6] https://digital-strategy.ec.europa.eu/en/news/first-rules-artificial-intelligence-act-are-now-applicable
[7] https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
[8] https://www.privacylaws.com/news/eu-issues-guidelines-on-prohibited-ai-practices/
[9] https://www.lw.com/en/insights/upcoming-eu-ai-act-obligations-mandatory-training-and-prohibited-practices