Introduction

The European Union’s AI Office has unveiled the initial draft of its General-Purpose AI (GPAI) Code of Practice [7], a pivotal component of its strategy to ensure adherence to the forthcoming EU AI Act. This draft [1] [5] [7] [8] [9] [10] [12], released on November 14, 2024 [10], aims to guide the responsible development and deployment of trustworthy GPAI models [3], aligning with EU principles and values [7]. It addresses challenges such as bias, misinformation [6] [9] [10] [11], and misuse [9], while establishing a governance framework for AI within the EU [6] [11].

Description

The drafting process has reached a significant milestone [1], with independent experts from four thematic working groups preparing this initial version [1], which incorporates contributions from providers of general-purpose AI models and considers international approaches [1]. The draft serves as a foundation for further refinement [1], inviting feedback to shape subsequent iterations [1]. It outlines guiding principles and objectives [1], aiming to clarify the final Code’s potential form and content [1], including transparency and copyright-related rules for providers [1] [3], as well as a taxonomy of systemic risks and mitigation measures for advanced models that may pose systemic risks [1].

The AI Act mandates the development of codes of practice that take account of international approaches [8]; the draft Code is intended to facilitate compliance with those obligations. It specifically addresses both general-purpose AI models and those classified as having systemic risk: models that could significantly impact public health [8], safety [1] [5] [6] [7] [8] [9] [10] [11] [12], fundamental rights [8], or society because of their high-impact capabilities [8]. Systemic risk is characterized by the potential for negative effects that can propagate across the value chain [8], and a model is deemed to have systemic risk if it possesses high-impact capabilities [8], as determined by the European Commission [8].
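Under the AI Act, high-impact capabilities are presumed when the cumulative compute used for training exceeds 10^25 floating-point operations. The sketch below illustrates that threshold using the common 6 × parameters × tokens approximation for training FLOPs; the model names and figures are hypothetical, not drawn from any actual designation.

```python
# Rough check of estimated training compute against the AI Act's
# 10^25 FLOP presumption threshold for systemic risk.
# Uses the standard 6 * N * D heuristic for dense transformer training:
# FLOPs ~ 6 * parameters * training tokens. All figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6 * N * D heuristic."""
    return 6 * params * tokens

# Hypothetical models: (parameter count, training tokens)
candidates = {
    "hypothetical-70b": (70e9, 15e12),
    "hypothetical-400b": (400e9, 15e12),
}

for name, (params, tokens) in candidates.items():
    flops = estimated_training_flops(params, tokens)
    presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> presumed systemic risk: {presumed}")
```

Note that this metric captures training compute only; as discussed under Risk Categories below, the draft flags inference-time compute as a dimension the training-compute threshold does not reflect [8].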

Compliance with the Code offers providers a safe harbor against potential violations of the EU AI Act until final regulations are released [6] [11], potentially reducing regulatory scrutiny and penalties [12]. Key principles of the Code include:

  • Transparency: Providers must maintain comprehensive documentation about their models [11], including details on design [10], intended tasks [6] [10] [11], origin of training data [12], collection methods [12], acceptable use policies [10] [11], and associated risks [6] [9] [11] (see the documentation sketch after this list). They must also ensure compliance with copyright law throughout the lifecycle of the GPAI model [7], which includes conducting reasonable copyright due diligence before contracting for data sets and implementing measures to prevent copyright infringement in downstream applications [7]. Transparency reports detailing purposes [12], data sources [7] [10] [12], and compliance measures must be accessible to stakeholders and regulators [12]. The Code also calls for a clear restatement of the public transparency obligation in Article 53(1)(d) of the AI Act, so that transparency measures can be effectively assessed.

  • Acceptable Use Policies: Clear guidelines for users must be established, outlining permitted and prohibited uses [6] [11].

  • Public Trust: Providers are encouraged to disclose relevant information publicly when feasible.

  • Risk Categories: Risk assessments should align with evolving categories, including cyber threats, manipulation [12], loss of control [11], and large-scale discrimination [11]. Businesses must categorize and address these risks effectively [12], with particular attention to the implications of inference-time compute on risk categorization [8], as higher inference compute may correlate more with risk than training compute alone [8].

  • Proportional Mitigation Measures: Safeguards should correspond to the severity and likelihood of identified risks, with risk mitigation strategies tailored to the provider’s size and resources, ensuring flexibility for smaller entities and open-source models [6].

  • Executive and Board-Level Responsibilities: Senior management must take ownership of risk management, potentially through dedicated risk committees [5] [12]. This includes assigning responsibility and resources at the executive level [5], establishing board-level oversight of systemic risks [5], and conducting periodic assessments of the Safety and Security Framework (SSF). Adequate resources must be allocated for managing systemic risks associated with AI systems [2], and governance frameworks should be regularly reviewed and updated to address evolving challenges [2].

  • Decision Protocols: Procedures must be established for evaluating whether to proceed with, halt [12], or modify AI systems based on risk assessments [12]. This includes implementing serious incident reporting procedures [5], ensuring whistleblower protections [5] [10], and maintaining appropriate public transparency regarding systemic risks [5].
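To make the transparency items above concrete, the following is a minimal sketch of how a provider might organize the required model documentation as machine-readable metadata. The schema and field names are hypothetical, chosen to mirror the items listed above; the draft Code does not prescribe a format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class GPAIModelDocumentation:
    """Hypothetical schema mirroring the draft Code's transparency items."""
    model_name: str
    design_summary: str                 # architecture and key design choices
    intended_tasks: list[str]           # tasks the model is intended to perform
    training_data_origin: str           # provenance of training data
    data_collection_methods: list[str]  # how the data was gathered
    acceptable_use_policy: str          # permitted and prohibited uses
    known_risks: list[str]              # documented risks and limitations
    copyright_due_diligence: str        # summary of copyright compliance steps
    training_cutoff_date: str           # ISO date; relevant to opt-out handling

    def transparency_report(self) -> str:
        """Serialize for stakeholders and regulators."""
        return json.dumps(asdict(self), indent=2)

# All values below are illustrative.
doc = GPAIModelDocumentation(
    model_name="example-gpai-model",
    design_summary="Decoder-only transformer, 70B parameters.",
    intended_tasks=["text generation", "summarization"],
    training_data_origin="Licensed corpora and filtered public web crawl.",
    data_collection_methods=["web crawling", "dataset licensing"],
    acceptable_use_policy="https://example.com/acceptable-use",
    known_risks=["bias", "misinformation"],
    copyright_due_diligence="Pre-contract due diligence on dataset sources.",
    training_cutoff_date="2024-06-30",
)
print(doc.transparency_report())
```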

GPAI models identified as posing systemic risks will face additional compliance standards [6] [11], requiring robust SSFs to manage risks throughout the model lifecycle [11]. These frameworks must include risk management policies and forecasts of when models may trigger systemic risk indicators [8]. Continuous evaluations [6] [11], including adversarial testing [6] [11], will be necessary for risk identification [11], while comprehensive documentation and independent expert evaluations will be required for risk assessment [11].
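The draft does not prescribe a technical form for these forecasts. A minimal sketch of the underlying idea, with hypothetical indicator names and threshold values, is a pre-committed set of trigger levels checked against the latest evaluation results:

```python
# Hypothetical sketch of checking evaluation results against pre-committed
# systemic risk indicator thresholds. Indicator names and values are
# illustrative only; the draft Code does not define them.

risk_indicator_thresholds = {
    "cyber_offense_eval": 0.60,  # fraction of offensive-security tasks solved
    "autonomy_eval": 0.40,       # self-directed task completion rate
}

latest_eval_results = {
    "cyber_offense_eval": 0.35,
    "autonomy_eval": 0.45,
}

def triggered_indicators(results: dict, thresholds: dict) -> list[str]:
    """Return indicators whose latest score meets or exceeds its threshold."""
    return [name for name, score in results.items()
            if score >= thresholds.get(name, float("inf"))]

for indicator in triggered_indicators(latest_eval_results, risk_indicator_thresholds):
    # Crossing a threshold would invoke the decision protocols above:
    # escalate to board-level oversight, report, or pause deployment.
    print(f"Systemic risk indicator triggered: {indicator}")
```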

Model providers are also responsible for identifying and complying with rights reservations under Article 4(3) of Directive (EU) 2019/790 [4]. This requires recognizing machine-readable identifiers that allow right holders to opt out of AI training [4]. The measures currently proposed for this purpose have been criticized as inadequate [4]: they rely on robots.txt [4], which cannot express an opt-out from text and data mining (TDM) as such, let alone from specific applications of TDM [4]. Moreover, robots.txt can only be set by website controllers [4], who may not be the right holders [4], and it is unsuitable for content not primarily distributed online [4], such as music or audiovisual works [4]. AI model providers must therefore respect multiple forms of machine-readable rights reservations and consider the implications of opt-outs in their copyright compliance policies. An effective opt-out should require the removal of opted-out works from training datasets [4], but not from already trained models [4], as the latter is technically infeasible [4]. Instead, providers should document and communicate training dates [4], after which new opt-outs need not be honored [4].
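As an illustration of why robots.txt is considered a blunt instrument here, the standard-library check below can only answer whether a given crawler user agent may fetch a URL; it cannot express "index for search but do not mine for TDM", identify the right holder, or cover offline works. The user-agent string and the opt-out date handling are hypothetical, sketched from the draft's suggested approach to training dates.

```python
import urllib.robotparser
from datetime import date

# robots.txt answers only "may this user agent fetch this URL?" -- it
# cannot distinguish TDM from other crawling, nor speak for right holders
# who do not control the site. The user agent below is hypothetical.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
may_crawl = rp.can_fetch("HypotheticalTDMBot", "https://example.com/article")

# Illustrative opt-out timing per the draft's approach: the provider
# documents and communicates a training date; opt-outs registered before
# it require removal from training datasets, later ones apply only to
# future training runs (not to the already trained model).
TRAINING_DATE = date(2024, 6, 30)  # documented, communicated training date

def opt_out_requires_dataset_removal(opt_out_date: date) -> bool:
    """True if the opted-out work must be removed from training datasets."""
    return opt_out_date <= TRAINING_DATE

print(may_crawl, opt_out_requires_dataset_removal(date(2024, 5, 1)))
```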

Compliance with the Code will also involve embedding systemic risk considerations into organizational decision-making processes [11]. A group of nearly 1,000 stakeholders [11], including EU Member State representatives and industry experts [6], will refine the draft through discussions in dedicated working group meetings [11], with feedback guiding further drafting rounds [6]. Key insights from these discussions will be presented to the full Plenary [1]. Participants have received the draft through a dedicated platform and have two weeks to submit written feedback [1]. The Chairs may adjust the draft based on this feedback [1], ensuring that measures and key performance indicators (KPIs) are proportionate to risks and consider the size of the model provider, while allowing simplified compliance options for SMEs and start-ups [1]. The Code will also reflect exemptions for providers of open-source models and emphasize a balance between clear requirements and flexibility to adapt to technological advancements [1].

A finalized Code is expected by May 1, 2025, with provisions for general-purpose AI models taking effect in August 2025 [8]. If the Code is not finalized by then, or is found lacking [12], the European Commission may implement common rules instead [12]. Stakeholder engagement is crucial during the consultation period [12], which ends on December 11, 2024.

Conclusion

This initiative reflects the EU’s commitment to proactively regulating foundational AI technologies [11]. Businesses are urged to audit their AI practices for alignment with the draft provisions [12], engage in the feedback process to influence the final Code [12], strengthen governance frameworks for AI risk management [12], maintain detailed documentation of compliance efforts [12], and stay informed about potential updates to obligations [12]. Early adoption of the Code can simplify compliance efforts and position businesses as leaders in responsible AI development [12].

References

[1] https://formiventos.com/2024/11/22/first-draft-of-the-general-purpose-ai-code-of-practice-published/
[2] https://www.lexology.com/library/detail.aspx?g=ca726aed-713f-4e31-9365-6367352ab17a
[3] https://www.matheson.com/insights/detail/the-eu-artificial-intelligence-act—where-are-we-now
[4] https://communia-association.org/2024/12/04/our-analysis-of-the-1st-draft-of-the-general-purpose-ai-code-of-practice/
[5] https://www.aoshearman.com/insights/ao-shearman-on-data/european-commission-publishes-first-draft-of-gpai-code-of-practice
[6] https://perkinscoie.com/insights/update/european-ai-office-publishes-first-draft-general-purpose-ai-code-practice
[7] https://www.lexology.com/library/detail.aspx?g=99a0b8e9-0cb0-41ca-b136-6a2333e1264f
[8] https://www.thecybersolicitor.com/p/the-eus-code-of-practice-for-general
[9] https://www.eweek.com/news/eu-ai-act-drafts-rules-for-ai-models/
[10] https://www.jdsupra.com/legalnews/european-commission-releases-first-3130837/
[11] https://www.jdsupra.com/legalnews/european-ai-office-publishes-first-8378571/
[12] https://www.taylorwessing.com/en/insights-and-events/insights/2024/11/the-general-purpose-ai-code-of-practice