Introduction
The European Commission has launched a public consultation to gather insights from various stakeholders on the regulation and oversight of high-risk AI products. This initiative is crucial for ensuring product safety and protecting fundamental rights within the EU.
Description
The European Commission has initiated a public consultation to gather input from a diverse range of stakeholders, including developers, users, public authorities, academia, civil society, and citizens, on the oversight of high-risk AI products [1][2]. The consultation seeks practical examples of high-risk AI systems, such as those used in credit decisioning and fraud detection, and aims to clarify how the regulations governing these systems apply [2]. These systems are central to product safety under EU law and carry significant implications for health, safety, and fundamental rights [2]. The consultation is particularly relevant for med-tech firms and financial sector entities operating within the EU.
The consultation aims to identify issues that require clarification in the Commission’s guidelines on the classification of high-risk AI systems [3]. Systems classified as high-risk are subject to stringent requirements, including risk assessments, data governance, and human oversight [4]. Stakeholders are encouraged to provide feedback on responsibilities and obligations along the AI value chain, particularly on fairness evaluations and the prevention of bias in model development. This includes scrutinizing feature selection and training datasets to ensure that AI applications do not disproportionately affect protected demographic groups.
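The fairness evaluations mentioned above can be illustrated with a minimal sketch of one common check, demographic parity, which compares favourable-outcome rates across demographic groups. The metric, data, and group labels below are hypothetical examples for illustration; the consultation does not prescribe any particular fairness metric.

```python
# Hypothetical sketch of a demographic-parity check, assuming binary
# model decisions (1 = favourable, e.g. credit approved) and a group
# label per decision. Not an official or prescribed EU AI Act metric.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in favourable-outcome rates
    between any two demographic groups (0 = perfectly equal rates)."""
    counts = {}
    for outcome, group in zip(outcomes, groups):
        favourable, total = counts.get(group, (0, 0))
        counts[group] = (favourable + outcome, total + 1)
    rates = {g: f / t for g, (f, t) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative credit decisions for two hypothetical groups "A" and "B":
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
group_ids = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, group_ids)
print(f"Demographic parity gap: {gap:.2f}")  # → Demographic parity gap: 0.20
```

In practice, a regulator or deployer would compute such a metric on real decision logs, alongside checks on feature selection and training-data composition, and investigate any gap that exceeds an agreed threshold.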
Furthermore, the consultation addresses the need for enhanced transparency and communication from AI model vendors, especially regarding general-purpose AI tools integrated into business processes [4]. Compliance challenges extend to synthetic data generation techniques used in AI model development, which require careful governance and oversight [4]. The consultation is open until 18 July 2025 [2], and contributions received will shape future guidelines and help ensure effective oversight of high-risk AI systems.
Conclusion
The outcomes of this consultation will significantly influence the future regulatory landscape for high-risk AI systems in the EU. By engaging a wide array of stakeholders, the European Commission aims to enhance the safety, fairness, and transparency of AI technologies [4], ultimately safeguarding public interests and fundamental rights.
References
[1] https://www.bioworld.com/articles/720751-medical-ai-left-out-of-eus-proposal-to-relax-high-risk-ai-mandates
[2] https://www.lexisnexis.co.uk/legal/news/commission-seeks-views-on-high-risk-ai-classification-obligations
[3] https://digital-strategy.ec.europa.eu/en/consultations/targeted-stakeholder-consultation-classification-ai-systems-high-risk
[4] https://www.garp.org/risk-intelligence/artificial-intelligence/eu-ai-act-250606