Introduction

On 6 June 2025 [5], the European Commission initiated a public consultation to guide the implementation of the AI Act [1] [3] [6] [7], the EU's comprehensive regulatory framework for artificial intelligence, with a focus on high-risk AI systems [3]. The consultation is pivotal for operationalizing the Act's risk-based framework before the compliance deadlines in 2026 [5].

Description

On 6 June 2025 [5], the European Commission launched a public consultation to guide the implementation of the AI Act [1] [2] [6] [8], which entered into force on 1 August 2024 and establishes a comprehensive regulatory framework for high-risk artificial intelligence systems. The initiative is crucial for operationalizing the Act's risk-based framework ahead of the compliance deadlines in 2026 [5]: the requirements for high-risk AI systems are expected to take full effect by August 2026, with a gradual application period extending until August 2027. A broad range of stakeholders is invited to contribute, including AI developers [1] [2] [5] [7] [8], users [1] [2] [4], businesses [2] [5] [7] [8], public authorities [1] [2] [7] [8], researchers [2] [4] [7], civil society organizations [2] [5], and citizens.

The consultation aims to collect practical examples of AI systems and to clarify how the rules for high-risk systems apply [1]. Article 6 of the AI Act defines two categories of high-risk AI systems: those used as safety components of products governed by EU harmonization legislation [1] [2] [3] [4] [5] [6] [7] [8], and those listed in Annex III that pose significant risks to health [4], safety, or fundamental rights [1] [2] [4] [6] [7] [8] in critical areas such as employment [2] [4] [5], education [2] [3], biometric identification [3] [6], law enforcement [2] [3] [5], and access to essential services [2]. This classification determines which AI systems qualify as high-risk, based on their significance for product safety under the Union's harmonized legislation and their potential impact on individuals' health, safety, or fundamental rights in specific use cases [7].

Participants are encouraged to comment on the application of key obligations [5], including data governance, transparency [2] [3], human oversight [3] [5], risk management [3] [5], quality management systems [3], and conformity assessments [3], as well as on the allocation of responsibilities along the AI development and deployment chain [5]. Concerns have been raised that the Act's implementation could be delayed if the necessary standards and guidelines are not established in time [8], with industry representatives advocating a mechanism to postpone deadlines if the conditions for compliance are not in place [8].

The consultation is open until 18 July 2025 [1] [2] [3] [5]. The feedback collected will inform two guidance documents, one on the classification of high-risk systems and one on the corresponding compliance obligations [5], both scheduled for publication by 2 February 2026 [5], providing clarity ahead of the full application of the high-risk rules in August 2026 [5]. The feedback will also be used to assess whether updates to Annex III or to the list of prohibited AI practices are necessary [5], and the consultation underlines the responsibilities of both providers and deployers in ensuring compliance and oversight [3]. Contributions may be published [2], respondents may choose to remain anonymous [2], and a summary of the results will be released on the basis of aggregated data [2]. The consultation represents a significant step in shaping the future regulatory landscape for AI in the EU [3].

Conclusion

The public consultation on the AI Act is a critical step in shaping the regulatory landscape for AI in the EU. By gathering input from a wide range of stakeholders, the European Commission aims to ensure that the implementation of the AI Act is both effective and comprehensive. The feedback will play a crucial role in refining the classification of high-risk systems and the associated compliance obligations, ultimately contributing to a safer and more transparent AI environment in the EU.

References

[1] https://www.lexisnexis.co.uk/legal/news/commission-seeks-views-on-high-risk-ai-classification-obligations
[2] https://cadeproject.org/updates/the-eu-commission-opens-consultation-on-high-risk-ai-systems-under-the-ai-act/
[3] https://www.gamingtechlaw.com/2025/06/defining-high-risk-ai-systems-under-the-ai-act-consultation-now-open/
[4] https://www.actuia.com/en/news/the-european-commission-launches-a-public-consultation-on-high-risk-artificial-intelligence-systems/
[5] https://ai-regulation.com/european-commission-launches-public-consultation-on-high-risk-ai-sysstems/
[6] https://www.publicconsultation.ie/haveyoursay/public-consultation-on-implementing-the-ai-act-for-high-risk-ai-systems
[7] https://digital-strategy.ec.europa.eu/en/news/commission-launches-public-consultation-high-risk-ai-systems
[8] https://www.theaiforum.org/news-were-reading/european-commision-opens-consultation-on-ai-act