Introduction
The AI Act’s Code of Practice (CoP) is a pivotal regulatory instrument designed to align technical requirements with regulatory expectations for General-Purpose AI (GPAI) models. Spearheaded by the European Artificial Intelligence Office, the CoP aims to ensure compliance, ethical use [3], and transparency in AI development, and it will significantly affect tech firms [3], particularly those based in the US. The CoP is integrated into the enforcement structure for GPAI models [1], with the Commission considering adherence to relevant codes when determining penalties for non-compliance [1]. Its development involves multiple stakeholders and is shaped by lobbying, political tensions [1], and ambiguities in the text [1]. The AI Act also introduces a structured framework for classifying AI systems by risk level [2], which is vital for compliance and the protection of human rights [2].
Description
The AI Act’s Code of Practice (CoP) is designed to incorporate lessons from previous co-regulatory experiences [1], establishing a defined role for the Commission and stakeholders [1]. The European Artificial Intelligence Office is spearheading the development of the CoP for providers of General-Purpose AI (GPAI) models [2], which is essential for aligning technical requirements with regulatory expectations [2]. The AI Office plays a central role in the CoP’s development [1], ensuring compliance with obligations related to information updates [1], training data summaries [1], and systemic risk management in GPAI models. Assessments of existing AI models have revealed that none fully comply with the EU AI Act [2], highlighting the need for a rigorous technical interpretation of the Act to assess compliance comprehensively [2]. This framework emphasizes ethical use and transparency [3], significantly impacting tech firms [3], particularly those based in the US, such as Google, Microsoft [3], and OpenAI [2] [3], which may face a complex regulatory landscape affecting their market access and innovation strategies [3].
The CoP is integrated into the enforcement structure for GPAI models [1], with the Commission considering adherence to relevant codes when determining penalties for non-compliance [1]. Both GPAI model providers and the AI Office have the capacity to influence compliance and enforceability [1], although the AI Office must navigate political pressures and industry dynamics [1]. Challenges arise from conflicting stakeholder interests and potential political resistance to over-regulation [1], which may result in confusion and duplicated efforts [1]. The regulator can introduce procedural instruments to enhance legal certainty [1], allowing the addressees of the regulation to establish standards through private entities [1]. A co-regulation strategy is advantageous for assessing innovation-related risks [1], enabling tailored approaches based on specific contexts [1].
The development of the GPAI CoP is uncertain [1], influenced by lobbying [1], political tensions [1], and ambiguities in the text [1]. Certain provisions lack clear definitions [1], particularly regarding adequacy decisions and general validity [1]. When the AI Office considers a code adequate [1], signatories can use this as a presumption of conformity [1], while non-signatories must demonstrate equivalent safeguards [1]. If a code is deemed inadequate [1], the Commission may create common rules for all providers [1]. The “general validity” clause introduces ambiguity regarding the broader applicability of common rules [1], with its precise meaning undefined [1]. Both the AI Act and GDPR empower the Commission to grant general validity to codes [1], but their enforcement mechanisms differ significantly [1]. The GPAI CoP could adopt a mechanism similar to one found in German labor law [1], where sectoral agreements gain binding force through state endorsement [1].
The Commission’s ability to grant general validity to the CoP is a significant regulatory tool [1], especially against non-signatories [1]. If major GPAI developers choose not to sign [1], it could create an uneven competitive landscape [1]. The Commission may limit general validity to jurisdictional effects [1], but this appears inconsistent with the regulatory framework [1]. The AI Office must balance regulatory objectives with stakeholder concerns regarding compliance and fairness [1]. Providers may rely on codes to demonstrate compliance until harmonized standards are published [1], suggesting that co-regulatory efforts could be superseded by formal standards [1]. The Commission must navigate these dynamics to create a clearer interpretive framework [1]. A robust CoP may be more progressive than a standardization framework that struggles to keep pace with innovation [1].
The CoP development involves multiple stakeholder groups with varying priorities [1]. The AI Office has established working groups to manage these perspectives [1], but a modular approach to the CoP could enhance effectiveness [1], facilitating targeted engagement [1], accelerating consensus-building [1], and providing compliance flexibility [1]. A dedicated Intellectual Property Code is warranted due to complexities surrounding copyright compliance in AI training [1]. Additionally, a separate Transparency Code is necessary to address contentious transparency requirements among stakeholders [1]. Risk mitigation strategies for systemic-risk GPAI models require a nuanced approach [1], balancing self-assessment and mandatory audits [1].
The ongoing debate reflects misunderstandings about the CoP’s role [1], which is to facilitate compliance [1], support best practices [1], and allow for voluntary enhancements [1]. The strategic value of the CoP lies in establishing industry-wide best practices that exceed the AI Act’s baseline requirements [1]. The future of the GPAI CoP hinges on balancing legal certainty with policy decisions made by the AI Office [1]. The Commission’s power to extend the code’s scope serves as a regulatory tool to address non-signatories and prevent market distortions [1]. The CoP should primarily facilitate compliance while fostering innovation in governance practices [1].
Industry stakeholders express caution against exceeding the AI Act’s requirements [1], while civil society advocates for stronger measures [1]. The AI Office must build provider confidence in the CoP as a reliable compliance mechanism [1]. If the CoP is not finalized [1], the Commission may need to establish common rules [1], complicating compliance for providers [1]. The code could be enforced under the Unfair Commercial Practices Directive to ensure adherence to self-imposed codes of conduct [1]. The Commission has not yet issued further guidance but has indicated that it intends to do so [1]. The general-validity mechanism for GDPR codes of conduct has not yet been used, but it serves as a potential model for the AI Act [1].
The GDPR mandates a submission procedure for EU-wide codes [1], requiring national authority involvement [1]. The Commission adopted an implementing decision on a standardization request to CEN-CENELEC [1], but the request did not cover GPAI models [1]. The process is structured around four working groups: Transparency and Copyright [1], Categorization of Risk and Assessment [1], Identification of Mitigation Measures [1], and Governance and Internal Risk Assessment [1]. As the AI Act (AIA) establishes a framework for AI governance [3], its extraterritorial reach mandates compliance from any company offering AI services in the EU [3], potentially setting a precedent for global AI governance similar to the GDPR [3]. This presents an opportunity for the US and EU to collaborate on AI standards [3], fostering innovation while ensuring responsible deployment [3]. The AIA represents a significant regulatory effort [3], but its impact on innovation remains uncertain [3], raising concerns about competitiveness and international cooperation in the evolving AI landscape [3].
The Act introduces a structured framework for classifying AI systems by risk levels [2], which is vital for compliance and the protection of human rights [2]. It categorizes AI systems into four risk categories: unacceptable [2], high [2], limited [1] [2], and minimal [2], each with specific compliance requirements [2]. High-risk systems [2], in particular [2], require human oversight to prevent the automation of critical decisions [2]. The Act mandates clear disclosure of AI-generated content [2], the design of AI models to prevent illegal content generation [2], and the publication of summaries of copyrighted data used for training [2]. High-impact AI models must undergo thorough evaluations [2], and any serious incidents must be reported to the European Commission [2]. Additionally, AI-generated content must be labeled to inform users of its origin [2].
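To make the tiered structure concrete, the following minimal Python sketch encodes the four risk categories and attaches the example obligations mentioned above. The tier names follow the Act’s classification, but the tier-to-obligation mapping is a simplified illustration for this text, not a complete statement of the Act’s requirements.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories in the AI Act's classification framework."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical, simplified mapping of tiers to example obligations drawn from
# the paragraph above; the Act's actual requirements are far more detailed.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["placing on the market is prohibited"],
    RiskTier.HIGH: [
        "human oversight of critical decisions",
        "thorough evaluation before deployment",
        "serious-incident reporting to the European Commission",
    ],
    RiskTier.LIMITED: [
        "disclose that content is AI-generated",
        "label AI-generated content for users",
    ],
    RiskTier.MINIMAL: ["no additional obligations beyond existing law"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations associated with a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```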
National authorities are tasked with creating testing environments that simulate real-world conditions [2], which is crucial for innovation and compliance [2]. This is particularly relevant for companies operating within the EU [2], such as OpenAI and Google [2], as they navigate the complexities of the EU AI Act [2]. Current compliance evaluations often rely on simple questionnaires to assess AI system risks [2]. In contrast [2], a more effective approach involves detailed technical requirements mapping and an open-source benchmarking suite for quantitative self-assessments [2]. This suite allows developers to access and enhance benchmarking tools [2], facilitating a thorough evaluation of models based on critical areas such as robustness [2], safety [2], and fairness [1] [2] [3].
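To illustrate the requirements-mapping idea, the sketch below shows, under assumed names and numbers, how per-benchmark results might be aggregated into one score per technical requirement (robustness, safety, fairness) for a quantitative self-assessment. The benchmark names, groupings, and example scores are hypothetical and do not reflect any particular suite’s actual API or metrics.

```python
from statistics import mean

# Hypothetical mapping from technical requirements to benchmark names.
# A real suite maps each requirement to many concrete benchmarks; these names are invented.
REQUIREMENT_BENCHMARKS = {
    "robustness": ["typo_perturbation", "paraphrase_consistency"],
    "safety": ["harmful_prompt_refusal", "toxicity_avoidance"],
    "fairness": ["demographic_parity_gap", "stereotype_bias"],
}


def self_assessment(scores: dict[str, float]) -> dict[str, float]:
    """Aggregate per-benchmark scores (0.0-1.0, higher is better)
    into one average score per technical requirement."""
    report = {}
    for requirement, benchmarks in REQUIREMENT_BENCHMARKS.items():
        available = [scores[b] for b in benchmarks if b in scores]
        report[requirement] = mean(available) if available else float("nan")
    return report


# Example scores a developer might obtain by running the suite against one model.
example_scores = {
    "typo_perturbation": 0.71,
    "paraphrase_consistency": 0.64,
    "harmful_prompt_refusal": 0.88,
    "toxicity_avoidance": 0.93,
    "demographic_parity_gap": 0.55,
    "stereotype_bias": 0.49,
}

if __name__ == "__main__":
    for requirement, score in self_assessment(example_scores).items():
        print(f"{requirement}: {score:.2f}")
```

Without established compliance thresholds, aggregate scores like these support relative comparisons between models rather than definitive compliance determinations, as the next paragraph notes.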
While the benchmarking suite provides quantitative assessments of large language models (LLMs), the lack of established compliance standards limits the ability to make definitive qualitative statements about models’ compliance with the EU AI Act [2]. Smaller models tend to score poorly on technical robustness and safety [2], and many examined models struggle with diversity [2], non-discrimination [2], and fairness [1] [2] [3], often due to an overemphasis on model capabilities [2]. The EU AI Act is expected to encourage providers to adopt a more balanced approach to LLM development [2], addressing previously neglected aspects [2]. The methodology and results of this technical interpretation of the Act as applied to LLMs are relevant to the ongoing efforts to develop the CoP for general-purpose AI models [2], as outlined by the Act [2].
Current benchmarking efforts have limitations [2], focusing on horizontal coverage rather than vertical depth [2]. Future work should enhance the depth of benchmarking for individual technical requirements to construct a comprehensive compliance evaluation suite [2]. The existing benchmarks may yield inconclusive results [2], underscoring the need for improvement to meet regulatory standards effectively [2]. An important next step involves expanding the scope to cover other AI systems beyond LLMs [2], addressing the unique challenges specific to various model types and applications [2].
Conclusion
The AI Act’s Code of Practice (CoP) is a critical regulatory tool that aims to align technical requirements with regulatory expectations, ensuring compliance [1] [2], ethical use [3], and transparency in AI development. Its development involves navigating complex stakeholder dynamics and political tensions, with significant implications for tech firms, particularly those based in the US. The CoP’s integration into the enforcement structure for GPAI models and its potential to influence global AI governance underscore its importance. However, the CoP’s impact on innovation and competitiveness remains uncertain, necessitating careful consideration of its implementation and enforcement.
References
[1] https://ai-regulation.com/gpai-cop-hidden-policy-choices/
[2] https://www.restack.io/p/ai-research-platforms-tools-answer-artificial-intelligence-eu-act
[3] https://blogs.law.ox.ac.uk/oblb/blog-post/2025/03/eu-ai-act-global-game-changer-or-roadblock-innovation-us