Introduction
The second draft of the General-Purpose AI Code of Practice [5] [7] [10] [11], developed by the European AI Office in alignment with the EU’s Artificial Intelligence Act, has been released [5] [7] [12]. This draft aims to guide providers of general-purpose AI models in complying with the EU AI Act, which establishes a risk-based framework effective from August 1, 2024. The Code addresses transparency [5], copyright compliance [1] [2] [3] [5] [8] [9], risk assessment [1] [7] [8] [9], and governance [3] [5] [7] [9], with a focus on balancing innovation and fundamental rights.
Description
Independent experts have presented the second draft of the General-Purpose AI Code of Practice [5] [7] [11], developed by the European AI Office in alignment with the EU’s Artificial Intelligence Act. This updated version aims to guide providers of general-purpose AI models (GPAI models) in complying with the EU AI Act, which establishes a risk-based framework, effective from August 1, 2024, imposing varying requirements based on the societal risk associated with different AI technologies. The draft incorporates feedback from nearly 1,000 stakeholders, including representatives from EU Member States and international observers [7], as well as contributions from business [2], academia [3] [6] [9], and civil society [3] [6] [9] [10], with approximately 430 submissions received to date [6]. It reflects discussions from Working Group meetings and written submissions on the initial draft published in November 2024 [7]. The Code serves as a guiding document for GPAI model providers to demonstrate compliance with the AI Act throughout their models’ lifecycle [7], particularly for models expected to be released after May 2, 2025 [5], when the new rules come into effect.
The second draft emphasizes transparency in AI decision-making processes and requires providers to maintain comprehensive technical documentation regarding the intended use, design specifications [8], and deployment of their GPAI models [3]. This documentation must be accessible to the AI Office and downstream users to promote public transparency [3]. Providers are also required to publish their copyright compliance measures, establish a contact point for copyright complaints [1] [9], and document data sources and authorizations for lawful data usage [1]. Additionally, the draft highlights compliance with EU copyright law, including guidance on Text and Data Mining (TDM) [3], copyright compliance policies [1] [2] [3] [5] [9], and the use of compliant crawlers in accordance with the Robots Exclusion Protocol (robots.txt) [3]. A ‘comply or explain’ mechanism is established [1] [9], allowing providers to demonstrate compliance through alternative means if they do not adhere to the Code [9]. Signatories of the Code will establish internal policies to ensure compliance with Union copyright law [6], allowing GPAI models lawful access to copyright-protected content in accordance with Article 4(3) of Directive (EU) 2019/790 [6]. An Acceptable Use Policy (AUP) will be defined [6], outlining acceptable behavior and aligning with the capabilities of the GPAI models [6]. The draft introduces refined obligations [5], including the requirement for providers to create and provide technical documentation detailing their models’ intended tasks [9], architecture [9], and training data to the AI Office and relevant authorities [9]. Additionally, it anticipates a proposal from the AI Office in early 2025 for a summary template of training data content [7].
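The robots.txt compliance referred to above can be checked programmatically. The following is a minimal sketch using Python’s standard-library `urllib.robotparser`; the user-agent token and URLs are hypothetical, and a production crawler would also fetch and cache each site’s live robots.txt rather than parse an inline string.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler token; a real GPAI data crawler would publish its own.
USER_AGENT = "ExampleGPAIBot"

def may_fetch(robots_txt: str, url: str) -> bool:
    """Return True if the given robots.txt rules permit USER_AGENT to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(USER_AGENT, url)

# Example policy: all agents are barred from /private/.
robots = """\
User-agent: *
Disallow: /private/
"""

print(may_fetch(robots, "https://example.com/articles/1"))    # True
print(may_fetch(robots, "https://example.com/private/data"))  # False
```

Checking `can_fetch` before every request is the core of what the draft calls a “compliant crawler”: the disallow rules are a machine-readable expression of a rights holder’s TDM opt-out.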
The Code addresses systemic risk assessment and mitigation measures for advanced GPAI models that may pose systemic risks [7], as outlined in the AI Act [7]. Providers must establish a Safety and Security Framework (SSF) to detail risk management policies [9], continuously assess risks throughout the model’s lifecycle [9], and categorize identified risks by severity. Systemic risks identified in the Code include cybersecurity threats [8], loss of control [3] [8], automated AI use in research [3], manipulation [3] [8], and large-scale discrimination [3] [8], among others. Providers are required to conduct due diligence on third-party data sets [1], avoid the ‘overfitting’ of models [1], and draft and regularly update Safety and Security Reports (SSRs) documenting risk assessments and mitigation strategies [9]. They must manage these risks at all organizational levels [8] [12], including executive and board oversight [12], conduct regular assessments [8], and engage independent experts for evaluations [8] [12]. Processes for reporting serious incidents and whistleblower protections are also required [8], with a focus on public transparency through the publication of safety frameworks and risk assessments [8].
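The severity categorization an SSF calls for can be pictured as a simple risk register. This is an illustrative sketch only: the severity tiers, field names, and mitigations below are hypothetical, not terminology prescribed by the Code; only the risk category names come from the draft.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical severity tiers; the Code does not prescribe these labels.
class Severity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class SystemicRisk:
    name: str
    severity: Severity
    mitigations: list[str]

# Illustrative register built from risk categories named in the draft;
# severities and mitigations are invented for the example.
register = [
    SystemicRisk("cybersecurity threats", Severity.HIGH, ["red-teaming", "access controls"]),
    SystemicRisk("loss of control", Severity.CRITICAL, ["capability evaluations"]),
    SystemicRisk("large-scale discrimination", Severity.HIGH, ["bias audits"]),
]

# A Safety and Security Report might surface the highest-severity risks first.
for risk in sorted(register, key=lambda r: r.severity.value, reverse=True):
    print(f"{risk.severity.name}: {risk.name} -> {', '.join(risk.mitigations)}")
```

The point of the structure is that each identified risk carries both a severity ranking and its linked mitigations, so assessments and mitigation adequacy can be reviewed together across the model lifecycle.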
The structure of the Code has been revised to clarify objectives [7], commitments [5] [6] [7] [9], and measures [1] [4] [7] [8], and includes preliminary examples of Key Performance Indicators (KPIs) [7], with requirements scaled in proportion to the size of AI providers [5]. The draft aims to balance clear commitments with the flexibility to adapt to evolving technology and emphasizes the need for further development of AI governance and risk management ecosystems [7]. The governance provisions require resource allocation at the board and executive levels for GPAI models posing systemic risk [6], facilitating independent expert assessments throughout the model lifecycle [6], which may include independent testing of model capabilities and reviews of systemic risks and mitigation adequacy [6].
Critics, including industry associations, have raised concerns about potential overreach and the rushed timeline for stakeholder feedback [5]. Civil society organizations have acknowledged improvements in copyright compliance but caution against extensive downstream compliance requirements [5]. Additionally, there are calls for the draft to address risks related to fundamental rights more comprehensively, ensuring that mitigations do not infringe on rights such as freedom of expression [4]. Some experts view the draft as a significant opportunity to create a flexible framework that minimizes legal risks while protecting fundamental rights [5]. The Code focuses on determining risk thresholds [6], continuous monitoring for emerging risks [6], and evaluating the effectiveness of risk mitigation measures [6]. Builders and deployers of GPAI models must ensure their models do not exceed maximum risk thresholds and must manage access control and model autonomy levels [6].
The final version of the Code is anticipated in Spring 2025 [8] [12], with upcoming workshops scheduled with AI model providers and Member State representatives [7]. The ongoing work aims to ensure coherence and clarity in the Code’s provisions [7], aligning with the principles of proportionality and reflecting a commitment to collaborative policymaking. The iterative drafting process highlights the need for a balanced approach to AI governance that safeguards fundamental rights without hindering innovation [5].
As the EU positions itself as a leader in global AI governance [5], the implications of the Code extend beyond regional standards [5], potentially influencing international AI regulatory measures [5]. The emphasis on transparency and ethical considerations aims to enhance public trust in AI technologies [5], addressing concerns related to privacy and the ethical use of AI in decision-making [5]. The Code’s development underscores the importance of establishing a regulatory framework that fosters responsible innovation while ensuring compliance with legal and ethical standards [5]. Adherence to the Code will serve as a means of demonstrating compliance with the general-purpose AI-related provisions of the AI Act [12], non-compliance with which carries significant penalties, potentially reaching €35 million or 7% of a company’s annual turnover, whichever is higher [12].
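The penalty ceiling is the greater of the two figures, so the flat €35 million cap binds only below a certain turnover. A quick calculation makes this concrete; the turnover figures are illustrative only.

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine: EUR 35 million or 7% of annual
    turnover, whichever is higher (illustrative calculation only)."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# At EUR 1 billion turnover, 7% (EUR 70 million) exceeds the flat cap.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
# At EUR 100 million turnover, the EUR 35 million figure is the higher one.
print(max_ai_act_fine(100_000_000))    # 35000000.0
```

The crossover sits at EUR 500 million in turnover, where 7% exactly equals the €35 million flat amount.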
To improve the Code [10], it is essential to provide clear and actionable compliance guidelines that are feasible given current technological capabilities while remaining adaptable to ongoing advancements in AI [10]. The Code should align with international standards [10], such as the G7 Code of Conduct on AI [10], to minimize regulatory fragmentation and avoid imposing unique European requirements [10]. Additionally, the Code should streamline requirements to prevent adding to the existing regulatory burdens on technology development and adoption [10], respecting the obligations set forth in the AI Act and other EU laws [10]. Security [1] [3] [6] [9] [10] [12], confidentiality [10], and protection of intellectual property and trade secrets must be prioritized [10], with information sharing limited to necessary parties [10]. Any public disclosure requirements should align with the legal stipulations of the AI Act [10]. Collaboration among policymakers [10], industry leaders [10], and civil society is crucial to address shared challenges and develop practical guidelines for responsible AI use [10].
Conclusion
The second draft of the General-Purpose AI Code of Practice represents a significant step in aligning AI development with the EU’s regulatory framework. By emphasizing transparency [5] [7] [8], risk management [1] [2] [3] [6] [7] [9], and copyright compliance [1] [2] [3] [5] [7] [9], the Code seeks to balance innovation with the protection of fundamental rights. As the EU aims to lead in global AI governance [5], the Code’s implications may extend internationally, influencing AI regulatory measures worldwide. The ongoing development process highlights the importance of collaboration among stakeholders to create a practical and adaptable framework for responsible AI use.
References
[1] https://www.lexology.com/library/detail.aspx?g=c6fab898-949f-4afb-924b-230dce9fcdee
[2] https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/
[3] https://www.lexology.com/library/detail.aspx?g=69779dd8-81c2-4d1c-8ee3-7b1ad4c999dd
[4] https://cdt.org/insights/cdts-first-contribution-to-the-code-of-practice-process-on-gpai-models/
[5] https://opentools.ai/news/eu-unveils-second-draft-of-general-purpose-ai-code-of-practice
[6] https://see40.org/the-ai-code-of-practice-for-general-purpose-ai-models-latest-developments-in-ai-regulation/
[7] https://digital-strategy.ec.europa.eu/en/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts
[8] https://www.rpclegal.com/snapshots/technology-digital/winter-2024/eu-publishes-draft-code-for-general-purpose-ai-models/
[9] https://technologyquotient.freshfields.com/post/102jqlt/eu-ai-act-unpacked-19-general-purpose-ai-code-of-practice-an-overview
[10] https://www.technet.org/media/the-eu-ai-code-of-practice-needs-a-makeover/
[11] https://www.freevacy.com/news/european-commission/eu-ai-office-publishes-second-draft-of-gpai-code-of-practice/6037
[12] https://www.lexology.com/library/detail.aspx?g=200e5835-1f6a-4693-a40f-d4d64ed5611d