Introduction
The Commission Guidelines on the Definition of an AI System [1] [3], issued under Regulation (EU) 2024/1689 (the AI Act) [2], aim to clarify the distinction between traditional software and AI systems as defined in the Act, whose first rules began to apply on February 2, 2025. These guidelines assist stakeholders in determining whether a software system qualifies as an AI system [2] [3], facilitating the application and enforcement of the AI Act [2]. However, the guidelines have been criticized for failing to draw clear distinctions and for creating potential regulatory inconsistencies.
Description
The recently issued Commission Guidelines on the Definition of an AI System [1], adopted under Regulation (EU) 2024/1689 [2], aim to clarify the distinction between traditional software and AI systems [1] as outlined in the AI Act [3], whose first provisions began to apply on February 2, 2025. These non-binding guidelines assist stakeholders in determining whether a software system qualifies as an AI system [2] [3], facilitating the application and enforcement of the AI Act [2]. Given the diverse nature of AI systems, the guidelines emphasize that each system must be assessed on its specific characteristics rather than against an exhaustive list [2]. The EU acknowledges that there can be no automatic determination of what constitutes AI, highlighting the inherent ambiguity in the definition.
The AI Act adopts a lifecycle-based approach, defining an AI system as a machine-based system capable of operating with varying levels of autonomy, adapting after deployment, and inferring outputs from inputs [2]. This definition spans two main phases, a pre-deployment building phase and a post-deployment use phase, and not all elements need to be present in both [2]. AI systems are characterized by their reliance on hardware and software components, including processing units, memory, storage devices, and computer code [2]. The level of autonomy is crucial for assessing risk and compliance obligations, as AI systems can range from requiring full human involvement to operating fully autonomously [2].
Adaptiveness, while not mandatory, refers to an AI system’s capacity for self-learning after deployment, allowing its behavior to change over time [2]. A key distinguishing feature of AI systems is their ability to infer outputs from inputs, setting them apart from traditional rule-based software [2]. This inference primarily occurs during the use phase, while the building phase focuses on developing the models or algorithms [2].
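To make the inference point concrete, here is a minimal, hypothetical sketch in Python (using numpy; the house-price scenario and all figures are invented for illustration and do not come from the guidelines). The first function is traditional rule-based software: its output follows instructions a human wrote. The second derives its parameters from example data in a building phase and then infers outputs for unseen inputs in a use phase.

    import numpy as np

    def rule_based_price(size_sqm: float) -> float:
        # Traditional software: the output follows explicitly programmed rules.
        if size_sqm < 50:
            return 100_000.0
        return 100_000.0 + (size_sqm - 50) * 2_000.0

    # "Building" phase: estimate the model's parameters from example data.
    sizes = np.array([30.0, 45.0, 60.0, 80.0, 120.0])
    prices = np.array([95_000.0, 110_000.0, 125_000.0, 170_000.0, 250_000.0])
    slope, intercept = np.polyfit(sizes, prices, deg=1)  # parameters fitted, not hand-coded

    # "Use" phase: the fitted model infers outputs for inputs it has never seen.
    def learned_price(size_sqm: float) -> float:
        return slope * size_sqm + intercept

    print(rule_based_price(70.0))  # 140000.0, from the hand-written rule
    print(learned_price(70.0))     # a figure inferred from data rather than from a rule

Both functions turn a number into a number; under the Act’s definition, only the second involves inference of outputs from inputs, which is precisely the boundary the guidelines try, and arguably struggle, to pin down.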
However, the guidelines face criticism for failing to draw convincing distinctions [1], focusing heavily on differentiating information-processing techniques without addressing the fundamental similarities in the underlying computing operations [1]. They struggle to define when a system transitions from basic computation to AI-driven inference, leaving unanswered questions about how to classify systems that blend traditional algorithms with machine learning components [1]. Excluding regression models from the AI definition while including deep learning models that perform similar functions appears arbitrary and raises concerns about inconsistent regulatory enforcement [1].
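The sketch below, a hypothetical Python example using scikit-learn, illustrates that criticism: a plain linear regression and a small neural network fitted to the same data produce practically the same predictions, yet on the reading criticized in [1] only the latter would clearly count as an AI system. The dataset, model settings, and classification comments are illustrative assumptions, not statements taken from the guidelines.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    # Synthetic data: a noisy linear relationship between one input and one output.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 1))
    y = 3.0 * X.ravel() + 5.0 + rng.normal(0, 0.5, size=200)

    # Regression model: treated in the criticism [1] as falling outside the AI definition.
    reg = LinearRegression().fit(X, y)

    # Small neural network: a technique squarely within the definition.
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

    X_new = np.array([[2.5], [7.5]])
    print(reg.predict(X_new))  # roughly [12.5, 27.5]
    print(net.predict(X_new))  # comparable outputs from the "AI" technique

On that reading, the regulatory outcome turns on the technique used to fit the function rather than on what the deployed system actually does.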
This classification issue creates a regulatory ‘cliff edge,’ imposing burdens on developers and users [1]. The guidelines suggest that systems with greater adaptability and autonomy pose higher regulatory risks, yet the current framework fails to align regulation with actual risk, relying instead on arbitrary thresholds based on complexity [1]. Furthermore, the guidelines complement the AI Act’s rules on prohibited AI practices [3]; the Act seeks to balance innovation with the protection of health, safety, and fundamental rights by sorting AI systems into risk categories, including prohibited and high-risk systems [3].
Compliance deadlines for certain prohibited use cases have recently come into effect [4], and AI developers must carefully assess whether their software systems fall within the Act’s scope to avoid potential fines of up to 7% of global annual turnover for violations [4]. Ultimately, the guidance raises more questions than it resolves, failing to clarify what distinguishes AI from non-AI, where the line between basic data processing and inference lies, and at what point an optimization algorithm becomes classified as AI [1]. The attempt to define AI appears to draw boundaries where none naturally exist, leaving the regulatory landscape ambiguous [1].
Conclusion
The Commission Guidelines on the Definition of an AI System [1] [3], while intended to provide clarity, have sparked debate over their effectiveness in distinguishing AI from traditional software. The lack of clear distinctions and potential inconsistencies in regulatory enforcement pose challenges for developers and users. As further AI Act compliance deadlines approach, stakeholders must navigate these ambiguities to ensure adherence and avoid significant penalties. The ongoing discourse highlights the complexities of regulating AI and the need for a more precise framework to address the evolving technological landscape.
References
[1] https://www.jdsupra.com/legalnews/how-many-neurons-must-a-system-compute-4546923/
[2] https://www.lexology.com/library/detail.aspx?g=b02dd4cb-defb-4a2f-ba5c-d4be5d5eb90c
[3] https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application
[4] https://techcrunch.com/2025/02/06/eu-details-which-systems-fall-within-ai-acts-scope/




