Introduction

The European Commission’s Artificial Intelligence Office has initiated a Multi-Stakeholder Consultation to develop compliance guidance following the implementation of the EU Artificial Intelligence Act (AI Act), the first comprehensive AI law globally [8]. This consultation aims to define AI systems and identify prohibited AI applications, informing guidelines and a General-Purpose AI Code of Practice.

Description

On November 13, 2024, the Commission’s Artificial Intelligence Office launched a Multi-Stakeholder Consultation aimed at developing compliance guidance for the EU Artificial Intelligence Act (AI Act), recognized as the first comprehensive AI law globally [8]. The Act came into force on August 1, 2024; while most of its obligations take effect in August 2026, certain high-risk AI systems face requirements starting August 2, 2025 [4], and systems deemed to pose “unacceptable risk” will be banned from February 2, 2025 [4]. The consultation focuses on two key areas: defining what constitutes an AI system and identifying AI applications that should be prohibited [3]. Contributions will inform the Commission’s guidelines [2] on the provisions that take effect on February 2, 2025, six months after the Act’s entry into force [2].

The first part of the consultation seeks to establish a precise definition of an AI system, soliciting input from businesses [3], academia [1] [3], civil society [1] [3], and the AI industry [3]. Participants are asked to evaluate the significance of various elements in the definition [3], including the characteristics of machine-based systems designed to operate autonomously, their capacity for generating outputs such as predictions, recommendations [3], or decisions based on input data [8], and their intended objectives [3]. The AI Office is particularly seeking input on the elements of the AI system definition elaborated in Recital 12 of the AI Act and is requesting examples of software systems or programming approaches that do not qualify as AI systems.

The second part addresses the identification of AI uses that may be prohibited if deemed harmful [3]. Stakeholders are invited to suggest clarifications regarding these bans to ensure the legislation is comprehensive and clear [3]. Prohibited practices under Article 5(1) of the AI Act include unacceptable social scoring [2], individual crime risk assessment and prediction [2], emotion recognition in workplace and educational settings [2], manipulative practices [8], unauthorized facial image scraping [8], exploitation of vulnerable individuals [8], and detrimental categorization of people [8]. The AI Office emphasizes that these prohibitions apply only when the AI system is placed on the market [2], put into service [2], or used by a provider or deployer [2].

In conjunction with this effort, independent experts have presented the first draft of the General-Purpose AI Code of Practice [7], which will be discussed with approximately 1,000 stakeholders [7]. This iterative drafting process [7], facilitated by the European AI Office [7], marks a significant milestone: it comprises four drafting rounds and is due to conclude by April 2025 [7]. The initial draft incorporates contributions from providers of general-purpose AI models and considers international approaches [7], serving as a foundation for further refinement [7]. The Code of Practice is expected to guide compliance with the AI Act’s requirements, alleviating some regulatory burdens for organizations that adhere to it.

Stakeholders [1] [2] [6] [7] [8] [9], including AI system providers [1], businesses [1] [3] [6] [8], national authorities [1] [5], academia [1] [3], research institutions [1], and civil society [1] [3], are invited to contribute their insights [1]. The feedback collected will inform both the Commission’s guidelines on the definition of AI systems and prohibited practices [1], expected to be published in early 2025 [1], and the development of the General-Purpose AI Code of Practice. The Code will establish clear objectives [7], measures [6] [7], and key performance indicators (KPIs) to guide the development and deployment of trustworthy general-purpose AI models [7]; it will address transparency and copyright-related rules for AI model providers [7] and detail a taxonomy of systemic risks and corresponding mitigation measures for advanced models [7].

The consultation seeks practical examples from stakeholders to clarify the application and use cases of AI practices [1]. It will remain open for four weeks, concluding on December 11, 2024 [1]. Next week [7], the Chairs will engage with stakeholders [7], representatives of EU Member States [7], and international observers in dedicated working group meetings to discuss the draft Code [7]. Participants will have opportunities to provide verbal feedback [7], and written feedback will be collected through a dedicated platform [7]. The drafting principles emphasize proportionality of measures to risks [7], consideration of provider size [7], and simplified compliance options for SMEs and start-ups [7]; they also reflect exemptions for open-source model providers and the need to balance clear requirements with the flexibility to adapt to technological advances [7].

The AI Office is mandated to develop Codes of Practice by May 2025 [8], while the Commission will provide further clarity through guidelines and Delegated Acts [8]. Stakeholders are encouraged to participate in the consultation exercise to clarify whether their software falls under the EU AI Act and to understand the implications of the legislation on their AI system uses [8].

Member States are required to designate three types of authorities for the implementation of the EU AI Act: market surveillance authorities [5], notifying authorities [1] [5], and public authorities responsible for enforcing fundamental rights obligations related to high-risk AI systems [5]. The market surveillance authority ensures that only products compliant with EU law are available on the market [5], while the notifying authority oversees the assessment and monitoring of conformity assessment bodies that conduct third-party evaluations [5]. Both the market surveillance and notifying authorities must operate independently and possess sufficient resources. By August 2, 2025 [4] [5], all Member States must establish or designate these competent authorities [5]; Malta is currently the only Member State to have designated both its notifying and market surveillance authorities [5].

The European Union has also released its first draft of regulatory guidance for general-purpose AI (GPAI) models [9], which is set to be finalized by May 2025 [9]. These guidelines address key areas such as transparency [9], copyright compliance [9], risk assessment [2] [9], and technical/governance risk mitigation [9]. Companies utilizing significant computing power for their AI models [9], including OpenAI [1], Google [9], Meta [9], Anthropic [9], and Mistral [9], are expected to adhere to these regulations [9].

The draft emphasizes the importance of transparency in AI development [9], requiring companies to disclose information about the web crawlers used for training their models [9], a critical issue for copyright holders [9]. The risk assessment section aims to mitigate potential cyber offenses [9], discrimination [9], and loss of control over AI systems [9]. AI developers are expected to implement a Safety and Security Framework (SSF) to manage risks proportionately to their systemic impact [9], which includes protecting model data [9], establishing failsafe access controls [9], and conducting ongoing effectiveness assessments [9]. The governance aspect mandates accountability within organizations [9], necessitating continuous risk evaluations and the involvement of external experts when necessary [9].

Non-compliance with the AI Act could result in significant penalties [9], including fines of up to €35 million or up to seven percent of a company’s worldwide annual turnover [9], whichever is greater [9]. Stakeholders are encouraged to provide feedback on the draft guidelines by November 28 to inform the next iteration [9]. Additionally, Member States were expected to publish a list of authorities responsible for fundamental rights by November 2, 2024 [5], though only a few have done so to date [5]. The European Artificial Intelligence Board will oversee the Act’s implementation [4], ensuring consistent application across Member States and providing technical guidance [4].
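As a minimal illustration of how the penalty ceiling described above combines its two components, the applicable maximum is the greater of the flat €35 million cap and seven percent of worldwide annual turnover. The function name and figures below are illustrative only; the authoritative rules are set out in the Act itself.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Illustrative ceiling for AI Act prohibited-practice fines:
    the greater of EUR 35 million or 7% of worldwide annual turnover.
    (A sketch of the rule summarized in the text, not legal advice.)"""
    FLAT_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# For a company with EUR 1 billion turnover, the 7% component
# (EUR 70 million) exceeds the flat cap and therefore applies:
print(max_fine_eur(1_000_000_000))
```

For smaller companies, where seven percent of turnover falls below €35 million, the flat cap is the binding ceiling instead.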

Conclusion

The Multi-Stakeholder Consultation and the development of the General-Purpose AI Code of Practice represent significant steps in the EU’s efforts to regulate AI comprehensively. These initiatives aim to ensure that AI systems are developed and deployed responsibly, with clear guidelines and compliance measures. The outcomes of this consultation will have far-reaching implications for AI stakeholders, shaping the future landscape of AI regulation in the European Union.

References

[1] https://digital-strategy.ec.europa.eu/en/news/commission-launches-consultation-ai-act-prohibitions-and-ai-system-definition
[2] https://legacy.dataguidance.com/news/eu-ai-office-requests-comments-ai-act-prohibitions-and
[3] https://www.digit.fyi/eu-asks-for-input-on-ai-bans-and-definitions/
[4] https://www.gunder.com/en/news-insights/insights/client-insight-demystifying-the-eu-ai-act
[5] https://artificialintelligenceact.eu/national-implementation-plans/
[6] https://www.privacyrules.com/understanding-the-eu-ai-act-compliance-strategies-and-industry-insights/
[7] https://digital-strategy.ec.europa.eu/en/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts
[8] https://www.pinsentmasons.com/en-gb/out-law/news/eu-ai-act-guidelines-scope-prohibitions-come-early-2025
[9] https://www.engadget.com/ai/the-eu-publishes-the-first-draft-of-regulatory-guidance-for-general-purpose-ai-models-223447394.html