Introduction

The European AI Office is developing the General-Purpose AI Code of Practice, which is intended to align with the EU AI Act. The Code is designed to guide providers of general-purpose AI models in complying with EU AI regulations, particularly for models identified as posing systemic risks [2]. The third draft of the Code [3] [4] [11], published on 11 March 2025, refines commitments and addresses key issues such as copyright, transparency [1] [2] [3] [5] [7] [9] [11], and safety [2] [5]. However, concerns have been raised about the drafting process and the potential weakening of protections for fundamental rights.

Description

The European AI Office has been actively developing the General-Purpose AI Code of Practice (the “Code”) [7], with the third draft published on 11 March 2025. This draft refines commitments to better align with the EU AI Act [1], which was finalized on 13 June 2024 and published in the Official Journal on 12 July 2024 [3]. It addresses key copyright aspects and is structured into four parts: commitments, transparency [1] [2] [3] [5] [7] [9] [11], copyright [1] [4] [5] [6] [7] [8] [9] [11], and safety and security [2] [5] [11]. The Code serves as a guideline for providers of general-purpose AI models (GPAI models) in meeting their obligations under the EU AI Act, particularly for models identified as posing systemic risks [2], with compliance required by 2 August 2025 [4]. Non-compliance could result in fines of up to 3% of annual global turnover or 15 million euros [4], as well as a potential ban on the model [4].

Concerns have been raised regarding the drafting process [8], which is led by academics who may lack sufficient legal expertise and experience in technical standardization. Critics argue that the effort has not adequately involved standardization experts, input from the European Parliament [8], or oversight from Member States [8]. They characterize it as a top-down approach that seeks to regulate GPAI models within a constrained timeframe [8], despite ongoing informal discussions on GPAI standards in ISO/IEC and CEN/CENELEC [8]. Additionally, modifications in the third draft aim to reduce the regulatory burden on GPAI model providers, which could negatively impact transparency [11], risk assessments [2] [5] [8] [11], and copyright protections [11], potentially undermining fundamental rights [11].

Significant changes have been made to the obligations regarding risk assessment and mitigation [5], which critics say have weakened protections for fundamental rights [5]. The mandatory risks to assess [5], categorized under the “selected” systemic risks taxonomy [5], now primarily address existential risks [5], while discrimination has been moved to the optional risks list [5], along with other fundamental rights risks such as privacy harms and the dissemination of child sexual abuse material or non-consensual intimate imagery [5]. The draft suggests that GPAI model providers should evaluate these optional risks only when they relate to high-impact capabilities [5]. Critics also emphasize that the draft has eroded transparency provisions and weakened requirements for third-party evaluations of systemic risks [11], allowing companies to exempt themselves from external assessments on the basis of internal evaluations [11]. They further argue that a standardized risk assessment framework is needed to ensure comprehensive risk mitigation [11], particularly regarding the real-world applications of models [11].

The draft also introduces requirements that extend beyond the obligations set out in the AI Act [8], including a role for “external evaluators” in systemic risk assessments before GPAI models are placed on the market [8], whereas the AI Act itself mandates only adversarial testing [8]. Conversely, other critics argue that the safety and security chapter relies on existing practices and voluntary obligations and may therefore not introduce new standards, and that GPAI providers should not be left to determine on their own which systemic risks to assess [11].

In addition to these changes, the third draft aims to enhance transparency and practicality by incorporating feedback on the previous draft [7]. It eliminates key performance indicators (KPIs) entirely, sharpens reporting commitments [7], and introduces a user-friendly Model Documentation Form for providers [7]. A new transparency measure requires providers to disclose external inputs used in the development or use of GPAI models with systemic risk (GPAISR) [7].

On copyright, GPAI model developers are now obligated to ascertain whether protected content was collected appropriately and to mitigate the risk of copyright infringement in downstream AI systems [8], obligations not stipulated in the AI Act or in the Copyright Directive 2019/790 [8], which focuses on primary liability [8]. Providers must also identify and comply with machine-readable rights reservations from rightsholders for content used in text and data mining [4], a requirement applicable to both EU and non-EU training activities [4]. They are expected to exclude known piracy websites from their crawling activities and to publicly disclose their compliance measures [7]. Significantly, providers are required to take reasonable measures to inform affected rightsholders about the web crawlers they use and those crawlers’ robots.txt features [9], as well as the steps taken to comply with rights reservations under Article 4(3) of Directive (EU) 2019/790 during crawling [9]. Providers should also take steps to prevent the memorization of training content so that their models do not generate copyright-infringing outputs, prohibit such uses in their acceptable use policies [7], and designate a contact point for rightsholders [4] [7]. However, the copyright section raises concerns about the use of metadata for rights management [11], which could undermine privacy and create inequities in content protection [11].
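To make the crawler-side obligations above more concrete, the sketch below shows one way a provider’s data-collection pipeline might check a site’s robots.txt before fetching content for text and data mining. This is a minimal illustration under stated assumptions, not a prescribed implementation: the crawler name ExampleTDMBot is hypothetical, and the sketch treats a robots.txt Disallow rule for that user agent as a machine-readable rights reservation, whereas the Code and Article 4(3) of Directive (EU) 2019/790 leave the exact mechanism open.

# Minimal sketch: honouring robots.txt before crawling for text and data mining (TDM).
# Assumptions (not prescribed by the Code or the Directive): the crawler identifies
# itself as "ExampleTDMBot" (a hypothetical, publicly documented name) and treats a
# Disallow rule for that user agent as a machine-readable rights reservation.

import urllib.robotparser
from urllib.parse import urlparse, urlunparse

USER_AGENT = "ExampleTDMBot"  # hypothetical crawler name

def is_crawl_allowed(url: str, user_agent: str = USER_AGENT) -> bool:
    """Return True only if the site's robots.txt permits fetching this URL."""
    parts = urlparse(url)
    robots_url = urlunparse((parts.scheme, parts.netloc, "/robots.txt", "", "", ""))
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetch and parse the site's robots.txt
    except OSError:
        # If robots.txt cannot be retrieved, err on the side of not crawling.
        return False
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    for candidate in ("https://example.com/articles/1", "https://example.com/private/page"):
        status = "fetch for TDM" if is_crawl_allowed(candidate) else "skip (reservation or no access)"
        print(f"{status}: {candidate}")

In practice, a provider would likely also need to recognise reservation mechanisms beyond robots.txt, such as rights-management metadata, which is precisely where critics see risks to privacy and to equitable content protection [11].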

The third draft also reflects a growing recognition of the importance of free and open-source software (FOSS) for the EU’s digital sovereignty and the development of European IT businesses. Notably, Article 2(12) of the AI Act exempts AI systems released under free and open-source licenses from regulation unless they are classified as high-risk or fall under specific articles [3]. Monetized AI components, however, do not qualify for these exemptions [3], and recitals 102, 103, and 104 outline reduced regulatory burdens for FOSS [3], particularly concerning transparency obligations [3].

While the Code represents progress in assisting providers with compliance [7], it still lacks clarity on how to handle datasets containing infringing content and on strategies for compensating creators for their intellectual property [7]. Critics have nonetheless noted some improvements in the draft, including enhanced provisions for external assessment and a greater emphasis on risk acceptability by model providers [5]. The success of the Code will depend on its alignment with the AI Act [10], clarity [1] [7] [8] [10] [11], proportionality [1] [10], and practicality [7] [10], principles that should guide the experts [10], the AI Office [1] [3] [5] [9] [10], and EU Member States in finalizing the document [10].

The draft Code of Practice is set for one more round of review before the final version is released by 2 May 2025 [5]. The AI Office and the AI Board will assess the draft [5], and the European Commission will ultimately decide whether to approve it through an implementing act or to establish common rules for GPAI model providers by 2 August 2025 [5]. Additionally, the European Commission is developing a network of model evaluators to determine how general-purpose AI models with systemic risk should be assessed in accordance with the AI Act and the GPAI Code of Practice [5]. A thorough review by the Commission and the AI Board is essential to ensure that the proposed measures are necessary and do not exceed the provisions of the AI Act [8]; failure to do so could undermine the political compromise achieved in the AI Act and lead to an unconstitutional overreach of the Commission’s powers [8]. Timely access to essential deliverables [10], including the AI Office’s guidelines on GPAI model rules and a detailed summary of training data [10], will be crucial for companies to evaluate the overall framework and make informed compliance decisions before committing to the Code or pursuing alternative paths [10]. Ultimately, the objective is to create a balanced [10], practical [1] [7] [10] [11], and flexible Code of Practice that not only assists companies in complying with the AI Act but also enhances their capacity to develop and implement AI [10], ensuring Europe remains competitive on a global scale [10].

Conclusion

The development of the General-Purpose AI Code of Practice is a significant step towards ensuring compliance with the EU AI Act. While the third draft introduces important refinements and addresses key issues, concerns remain about the drafting process and the potential weakening of protections for fundamental rights. The Code's success will hinge on its alignment with the AI Act [10], its clarity [1] [7] [8] [10] [11], and its practicality [7] [10], and a thorough review by the European Commission and the AI Board is essential to ensure that the proposed measures are necessary and do not exceed the provisions of the AI Act [8]. The objective, ultimately, is a balanced [10], practical [1] [7] [10] [11], and flexible Code of Practice that helps companies comply with the AI Act while strengthening their capacity to develop and implement AI [10], keeping Europe competitive on a global scale [10].

References

[1] https://www.mhc.ie/latest/insights/recent-ai-act-guidance-what-you-need-to-know
[2] https://digitalpolicyalert.org/event/28440-european-commission-published-third-draft-of-general-purpose-artificial-intelligence-code-of-practice-including-testing-requirements
[3] https://openforumeurope.org/understanding-the-ai-act-open-source-key-updates-march-2025/
[4] https://news.bloomberglaw.com/us-law-week/eu-ai-act-guidelines-draft-hones-copyright-specifications
[5] https://cdt.org/insights/cdt-europes-ai-bulletin-march-2025/
[6] https://www.jdsupra.com/legalnews/eu-ai-office-publishes-third-draft-of-5443618/
[7] https://copyrightblog.kluweriplaw.com/2025/04/04/second-and-third-drafts-of-the-general-purpose-ai-code-of-practice-have-been-released/
[8] https://verfassungsblog.de/when-guidance-becomes-overreach-gpai-codeofpractice-aiact/
[9] https://communia-association.org/2025/04/04/3rd-draft-of-the-gpai-code-of-practice/
[10] https://project-disco.org/european-union/how-to-finalise-the-gpai-code/
[11] https://blog.witness.org/2025/03/eu-ai-act-ensuring-rights-disclosure/