Introduction

The third draft of the EU General-Purpose AI (GPAI) Code of Practice has been released [1] [7] [8], marking a significant step in the EU’s efforts to regulate AI technologies. This draft enters its final consultation phase [1], with completion expected in May 2025. It aims to establish a comprehensive framework for AI governance, focusing on transparency [2] [3] [7], risk management [3] [4] [7], and safety [8], particularly for models posing systemic risks [6].

Description

The revised Code features a streamlined structure that consolidates key commitments on transparency, copyright compliance [3], and risk management for all general-purpose AI model providers [1] [7], along with 16 commitments on safety and security for providers of models classified as posing systemic risk. The Code emphasizes implementation measures centered on risk assessment and compliance mechanisms, ensuring governance keeps pace with advancements in AI technology [5]. Notably, it mandates regular Safety and Security Model Reports to document compliance [5] and introduces a user-friendly Model Documentation Form to help providers document their models and meet their transparency obligations.

The Code aims to ensure positive social and economic outcomes from powerful AI models, which underpin numerous applications, including popular systems such as ChatGPT, while managing their associated risks [4]. For AI models identified as carrying systemic risk [1], the Code imposes stricter requirements [1], including risk assessments [1], model evaluations [1] [5] [7], incident reporting [1] [5] [7], and whistleblowing mechanisms. Companies developing these systemic-risk models are required to engage external independent experts for evaluation [5]. While the latest draft dilutes some safeguards compared with earlier versions [4], narrowing the scope of risks and reducing whistleblower protections [4], it also strengthens risk management, for instance by making it harder for providers to exclude themselves from independent assessments and by introducing a mechanism for pre-deployment information sharing [4].

The AI Office plans to issue further guidance to clarify responsibilities across the AI value chain [1], particularly for downstream actors who modify or fine-tune existing models [1]. The draft has elicited mixed reactions from industry stakeholders: while some view these measures as crucial for AI safety [1], others worry that ambiguity around the definition of “systemic risk” could lead to inconsistent enforcement and burdens on AI companies [1]. The fast-moving nature of AI technology makes it challenging to balance regulatory oversight with the need for innovation [1]. The Chairs of the Code stress that regulation must remain flexible to keep pace with technological advancements [1], although some industry representatives caution that frequent regulatory changes may create uncertainty for businesses investing in AI development [1].

In addition to the Code of Practice [1], the AI Office is developing a public summary template for training data transparency and has committed to publishing further guidance on key issues [1], including clarifying the definition of general-purpose AI models [1] [7], establishing provider and downstream actor responsibilities [1], and determining the applicability of rules to models launched before August 2025 [1]. Stakeholders are invited to submit written feedback on the third draft until Sunday, 30 March 2025 [1], through an interactive website and dedicated workshops [8], with additional discussions planned through working groups [1]. Civil society organizations and downstream AI users are also encouraged to participate [1], potentially enriching the perspectives that will shape the final draft [1].

Stakeholder engagement is central to the drafting process [7], with around 1,000 participants, including EU Member State representatives and international observers, contributing to the discussions [7]. The process is designed to be inclusive and transparent [6], involving a diverse range of stakeholders, including model providers [6], industry organizations [1] [6] [7], civil society [1] [6] [7], and academia [6]. Alongside the written feedback now being collected [7], workshops will be organized for general-purpose AI model providers, Member State representatives [6] [7], and civil society organizations to facilitate targeted discussions in line with a tentative timeline [7], with the AI Office maintaining an open, collaborative environment throughout [6].

The EU’s regulatory efforts in AI have faced criticism [1], particularly from US officials and industry leaders concerned about overregulation [1]. At the Paris AI Action Summit [1], US Vice President JD Vance cautioned that stringent regulations could hinder innovation [1], contrasting the EU’s approach with the US administration’s focus on “AI opportunity.” European AI firms [1], including Mistral [1], have also voiced concerns about the growing legislative burden in Europe [1]. While the current draft largely reflects existing safety practices and voluntary obligations [4], functioning more as a compliance tool for companies [4], it represents progress in AI governance by streamlining compliance and giving consumers and businesses clarity about how AI products are tested and made safe [4]. Future iterations of the Code will need to be more ambitious to address the evolving impacts of AI technologies effectively [4].

However, the finalization of the Code has been delayed by at least one month as the European Commission seeks to incorporate stakeholder feedback and ensure legal robustness [2]. A coalition of 15 European rightsholder organizations has raised concerns that the current draft contradicts existing copyright law [2], urging a thorough revision to protect intellectual property rights and uphold the integrity of the AI Act [2]. The EU’s proactive governance approach sets the stage for a balance between innovation and ethical accountability in AI [3] and underscores the need for global collaboration toward a unified vision of responsible AI regulation.

Conclusion

The EU’s GPAI Code of Practice represents a pivotal effort to regulate AI technologies, balancing innovation with ethical accountability [3]. While the draft has faced criticism and calls for further refinement, it marks progress in establishing a framework for AI governance. The ongoing consultation and stakeholder engagement process will be crucial in shaping a robust and effective final version, with implications for global AI regulation and collaboration.

References

[1] https://www.computing.co.uk/news/2025/legislation-regulation/third-draft-of-general-purpose-ai-code-of-practice-published
[2] https://www.euronews.com/next/2025/02/20/drafting-of-ai-code-of-practice-faces-at-least-one-month-delay
[3] https://www.cyberpeace.org/resources/blogs/draft-eu-ai-rules-code-of-practice-for-general-purpose-ai
[4] https://www.adalovelaceinstitute.org/news/gpai-code-of-practice/
[5] https://www.techuk.org/resource/eu-s-ai-code-of-practice-third-draft.html
[6] https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
[7] https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts
[8] https://www.lexisnexis.co.uk/legal/news/commission-experts-publish-third-draft-of-general-purpose-ai-code-of-practice