Introduction

The European Commission has released draft guidelines to clarify the definition of AI systems under the AI Act (Regulation (EU) 2024/1689) [2]. These guidelines aim to ensure a consistent application of the AI Act across the EU by providing legal interpretations and practical examples for stakeholders. Although non-binding, they are a valuable resource for companies to assess compliance and will influence national authorities in implementing the Act.

Description

Under Article 3(1) of the AI Act, an AI system is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments [1] [2]. Recital 12 offers non-binding interpretative guidance on this definition, which the Commission guidelines further elucidate [1]. The guidelines call for a case-by-case analysis of a system’s features throughout its lifecycle, clarifying that certain systems, particularly those with limited inference capabilities, may fall outside the definition of AI systems [3].

The guidelines identify seven key components of the AI system definition:

  1. Machine-based systems: AI systems must integrate hardware and software components, emphasizing their computational nature [1].

  2. Degree of autonomy: AI systems should operate with some independence from human involvement, excluding those requiring full manual control [1].

  3. Adaptiveness after deployment: While not mandatory, AI systems may exhibit self-learning capabilities that allow their behavior to change during use [1].

  4. Objective-driven operation: AI systems are designed to achieve explicit or implicit objectives, which are internal to the system and distinct from its intended purpose [1].

  5. Inference capability: The central element of the definition is a system’s ability to infer how to generate outputs from the inputs it receives, going beyond operations defined solely by human-written rules; this capability distinguishes AI systems from traditional software [1] (see the illustrative sketch after this list).

  6. Output generation: AI systems produce outputs such as predictions, content, recommendations, or decisions [1] [2].

  7. Influence on environments: Those outputs can affect physical and virtual environments, meaning AI systems are active rather than purely passive components [1].
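
To make the inference element in item 5 more concrete for technically minded readers, the following minimal Python sketch contrasts a fully human-defined decision rule with one learned from data. The credit-scoring scenario, data, and threshold values are entirely hypothetical and carry no legal weight; whether any real system meets the definition remains a case-by-case legal assessment.

    # Hypothetical illustration only; not a legal test under the AI Act.

    def rule_based_credit_check(income: float, debt_ratio: float) -> bool:
        """Traditional software: the decision rule is fully human-defined.
        The guidelines suggest such rule-bound systems fall outside the
        AI system definition."""
        return income > 50_000 and debt_ratio < 0.4  # fixed, hand-written rule

    def train_perceptron(samples, labels, epochs=100, lr=0.01):
        """A minimal learned model: the decision rule is inferred from data
        rather than written by a human -- the hallmark of the inference
        element."""
        w0, w1, b = 0.0, 0.0, 0.0
        for _ in range(epochs):
            for (x0, x1), y in zip(samples, labels):
                pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
                error = y - pred
                w0 += lr * error * x0
                w1 += lr * error * x1
                b += lr * error
        return w0, w1, b

    # Hypothetical training data: (income in kEUR, debt ratio) -> approved?
    samples = [(60, 0.2), (30, 0.8), (80, 0.1), (20, 0.9)]
    labels = [1, 0, 1, 0]
    weights = train_perceptron(samples, labels)
    print("learned decision rule:", weights)  # derived from data, not hand-coded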

The AI Act also includes a general exclusion for AI systems released under free and open-source licenses, although this exclusion does not apply if the systems are placed on the market or put into service as high-risk AI systems or fall under Article 5 (prohibited practices) or Article 50 (transparency obligations) [2]. Key definitions within the Act include “placing on the market,” which refers to the first making available of an AI system on the EU market, and “putting into service,” which refers to the supply of a system for its first use [2]. The term “use” is interpreted broadly to encompass any deployment or integration of the system after it has been placed on the market or put into service [2].

Operators of AI systems are categorized as providers and deployers. Providers are those who develop AI systems or place them on the market, while deployers are entities that use these systems under their own authority, except where the use is personal and non-professional [2]. Providers must ensure their systems are not used for prohibited practices and must inform deployers about the human supervision required [3]. Deployers, in turn, may not use AI systems in ways that circumvent the safeguards established by providers [3]. Because an operator may hold more than one role for a given AI system, and responsibilities differ according to role and degree of control, companies should carefully evaluate whether specific AI applications are prohibited under the Act [2].

Organizations are encouraged to assess their AI systems in light of this definition and the accompanying guidelines [1], bearing in mind that authoritative interpretations of the AI Act are reserved for the Court of Justice of the European Union (CJEU) [2]. Article 5 of the AI Act, which prohibits specific AI practices, applies from February 2, 2025, with significant penalties for non-compliance [1] [3]. The Prohibited Practices Guidelines offer operational clarity on these prohibitions, although their full impact remains to be seen [3]. The two sets of guidelines are complementary, providing essential detail for compliance with the AI Act and identifying areas that require further clarification [3].

Conclusion

The draft guidelines by the European Commission are instrumental in providing clarity and consistency in the application of the AI Act across the EU. They serve as a crucial resource for companies to ensure compliance and will significantly influence national authorities in the Act’s implementation. As the AI landscape evolves, these guidelines will help shape the regulatory environment, ensuring that AI systems are developed and deployed responsibly and ethically.

References

[1] https://www.jdsupra.com/legalnews/eu-commission-clarifies-definition-of-3940335/
[2] https://www.jdsupra.com/legalnews/eu-commission-publishes-guidelines-on-9450260/
[3] https://www.gide.com/en/news-insights/publication-of-two-sets-of-guidelines-in-connection-with-the-ai-act/