Introduction
The European Union’s AI Act, which entered into force in August 2024 [2] [5] [7], establishes a regulatory framework for general-purpose AI (GPAI) models, distinguishing the obligations of GPAI model providers from those applying to other AI systems. The framework aims to ensure compliance, transparency [2] [5] [8], and safety in the deployment and use of AI technologies, with specific guidelines and timelines for implementation.
Description
Providers of general-purpose AI (GPAI) models placed on the market before August 2, 2025, have until August 2, 2027, to ensure compliance with the AI Act [3] [4] [9]. The guidelines specify that “placing a model on the market” includes making it available through an API or software library, or integrating it into applications or services [9]. On April 22, 2025 [7] [8], the AI Office within the EU Commission initiated a multi-stakeholder consultation to develop these guidelines [8], aiming to clarify the definition of GPAI models and the compliance role of the European AI Office. Stakeholders are invited to provide feedback by May 22, 2025, to refine the guidelines [8] [9], which will address interpretative challenges in applying the requirements for GPAI model providers [8].
The AI Act, which entered into force in August 2024 [2], differentiates between the obligations of GPAI model providers and those applying to other AI systems [7], making it essential for developers and integrators to determine whether they qualify as providers [7]. A GPAI model is characterized as an AI model that can perform a wide range of tasks [7], is trained on extensive datasets through large-scale self-supervision [8], and can be integrated into various downstream applications [5]. An example is ChatGPT, which generates human-like text [8]. A proposed threshold based on computational resources holds that a model trained using more than 10^22 floating-point operations (FLOP) and capable of generating text or images is presumed to be a GPAI model [7]. If a modification to a GPAI model exceeds approximately 3 x 10^21 FLOP, the modifying company must create separate technical documentation and assess the modified model’s compliance with GPAI requirements [1]. This requirement also applies to companies that license third-party models and make significant modifications to them [1].
If fine-tuning results in a model that surpasses the systemic-risk threshold of 10^25 FLOP [1], the modifying company must adhere to the requirements for GPAI models with systemic risk (GPAI-SR) [1], which include risk assessments [1], adversarial testing [8], cybersecurity measures [7] [8], and notification of serious incidents to the AI Office. Modifications may be made by the original provider or by downstream modifiers [7], with specific thresholds dictating when a modification constitutes a new GPAI model [7]. The AI Office acknowledges that FLOP is an imperfect measure and is considering alternative metrics for assessing a model’s generality and capabilities [5].
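Taken together, the thresholds in the two preceding paragraphs amount to a simple first-pass screen. The Python sketch below is purely illustrative: the constants reflect the figures cited above [1] [7], while the function and variable names are hypothetical and correspond to no official tooling; it is not a compliance determination, not least because the AI Office itself regards FLOP as an imperfect measure [5].

```python
# Illustrative first-pass screen based on the FLOP thresholds discussed
# above. Hypothetical code, not official tooling or legal advice.

GPAI_PRESUMPTION_FLOP = 1e22  # training compute above which a text/image model is presumed GPAI [7]
MODIFICATION_FLOP = 3e21      # modification compute above which separate documentation is required [1]
SYSTEMIC_RISK_FLOP = 1e25     # cumulative compute above which GPAI-SR obligations apply [1]


def presumptive_obligations(training_flop: float, modification_flop: float = 0.0) -> list[str]:
    """Return the obligations presumptively triggered by the stated thresholds."""
    obligations = []
    if training_flop > GPAI_PRESUMPTION_FLOP:
        obligations.append("presumed GPAI model: provider obligations apply")
    if modification_flop > MODIFICATION_FLOP:
        obligations.append("significant modification: separate technical "
                           "documentation and compliance assessment required")
    if training_flop + modification_flop > SYSTEMIC_RISK_FLOP:
        obligations.append("systemic risk (GPAI-SR): risk assessments, adversarial "
                           "testing, cybersecurity measures, incident notification")
    return obligations


# e.g. fine-tuning a licensed third-party model whose cumulative compute
# ends up just above the systemic-risk threshold
print(presumptive_obligations(training_flop=9.9e24, modification_flop=2e23))
```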
Providers of GPAI models that present systemic risk are subject to additional obligations [7], including conducting model evaluations and ensuring transparency toward AI system integrators. The guidelines also define what it means to place a GPAI model on the market and specify exemptions for certain open-source releases [7], with eligibility depending in part on whether personal data is collected [5]. Compliance with the proposed code for GPAI providers is not mandatory [5], but adherence may enhance transparency and trust with the Commission and stakeholders [5]. The AI Office has released the third draft of the Code of Practice, intended to assist GPAI model providers in meeting the AI Act’s requirements [2], particularly in the areas of transparency [2], copyright [2] [5] [8], and risk management [2]. This draft features a more streamlined structure and refined commitments based on feedback on previous drafts [2]. Key commitments cover safety and security measures [6] and apply to all general-purpose AI models [2], with an exemption for open-source models [2]. The final draft of the EU Code of Practice for GPAI models is anticipated soon [3], providing a framework for companies to demonstrate compliance with the EU AI Act [3] and aiming to enhance legal certainty in a rapidly evolving regulatory landscape [3].
In parallel [4], the European Data Protection Board (EDPB) is collaborating with the EU AI Office to draft guidelines on the relationship between the AI Act and EU data protection laws [4]. Both the European Data Protection Supervisor and the EDPB have published their 2024 annual reports [4]: the former details its efforts to ensure compliance with the AI Act as part of its ongoing work program [4], while the latter includes opinions on the use of personal data in training AI models [4]. Critics argue, however, that the current draft of the Code includes complex and prescriptive requirements that may hinder its adoption and exceed the AI Act’s scope [3], and that a simpler regulatory framework is essential to foster innovation and competitiveness in Europe [3]. The European Commission has prioritized the simplification of legislation [3], particularly for strategic technologies like AI [3], as outlined in the recent AI Continent Action Plan [3]. The success of the AI Act hinges on the practicality of its rules [3], making the finalization of the Code a critical step [3].
Developing internal AI policies is essential for managing AI responsibly [1], including defining approval procedures for AI use cases [1]. Contractual agreements with AI suppliers should be reviewed to ensure sufficient rights to train and use AI [1] and to obtain the information needed for compliance with the AI Act [1]. The obligations for GPAI model providers take effect on August 2, 2025 [7], with a focus on collaborative enforcement [7], particularly for models deemed to present systemic risk [3] [7]. The AI Office recognizes that some providers may struggle to achieve timely compliance and encourages those planning to launch a GPAI model after that date to communicate their compliance strategies [9].
A separate guide on conformity assessment under the AI Act includes a flowchart to help identify whether an AI system is subject to conformity assessment (CA) obligations [10], detailing steps such as risk classification and determining who is responsible for conducting the CA [10]. CAs encompass a framework of assessments, requirements, and documentation obligations [10]. Providers must evaluate the risks associated with their AI systems, implement necessary features such as event recording and human oversight [10], and comply with documentation obligations, including technical documentation [1] [8] [10]. The guide emphasizes the importance of standardization and harmonized standards in facilitating the CA process [10], and notes that AI systems developed within regulatory sandboxes or certified under cybersecurity schemes may benefit from a presumption of conformity with certain AI Act requirements [10].
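As a rough illustration of that decision flow, the Python sketch below encodes the broad logic described above. It assumes, as the AI Act provides, that CA obligations attach to high-risk systems; the function name, inputs, and returned messages are hypothetical simplifications, not a reproduction of the guide’s actual flowchart.

```python
# Hypothetical sketch of the conformity assessment (CA) decision flow
# described above. Simplified for illustration; not the official
# flowchart and not legal advice.

def ca_path(risk_class: str,
            in_regulatory_sandbox: bool = False,
            cybersecurity_certified: bool = False) -> str:
    """Return a rough indication of a system's CA obligations."""
    if risk_class != "high":
        # CA obligations under the AI Act attach to high-risk systems
        return "no CA obligation under this simplified screen"
    if in_regulatory_sandbox or cybersecurity_certified:
        # Sandboxed or certified systems may enjoy a presumption of
        # conformity with certain AI Act requirements
        return "CA required; presumption of conformity may cover some requirements"
    return ("CA required: risk evaluation, event recording, human oversight, "
            "technical documentation, and ongoing lifecycle monitoring")


print(ca_path("high", in_regulatory_sandbox=True))
```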
Ongoing compliance is crucial [10], as CAs are not a one-time requirement [10]: providers must establish monitoring systems to ensure that essential requirements continue to be met throughout the lifecycle of high-risk AI systems [10]. Companies involved in developing or using GPAI models are advised to review the guidelines and consider participating in the public consultations to provide feedback before the final draft is released. Signatories to the GPAI Code of Practice can expect their adherence to be a focus of the Commission’s enforcement activities [5], and commitments made in the Code may influence the severity of penalties for non-compliance [5]. A comparative report also maps the EU AI Act’s GPAI Code of Practice against the practices of leading AI companies [6], facilitating dialogue between regulators and GPAI model providers by documenting evidence of existing practices [6].
Conclusion
The EU AI Act represents a significant step towards regulating AI technologies, particularly GPAI models, to ensure safety [1], transparency [2] [5] [8], and compliance. By establishing clear guidelines and timelines, the Act aims to foster innovation while maintaining rigorous standards. The ongoing collaboration between regulatory bodies and stakeholders is crucial for refining these guidelines and ensuring their practical implementation. As the regulatory landscape evolves, the successful adoption of the AI Act will depend on striking a balance between comprehensive regulation and the flexibility needed to accommodate technological advancements.
References
[1] https://eaccny.com/news/member-news/wilson-sonsini-eu-ai-office-clarifies-key-obligations-for-ai-models-becoming-applicable-in-august/
[2] https://www.actuia.com/en/news/code-of-good-practices-for-general-purpose-ai-the-quest-for-a-delicate-balance/
[3] https://www.itic.org/news-events/techwonk-blog/a-proinnovation-code-of-practice-for-europes-ai-continent-ambitions
[4] https://www.techuk.org/resource/dispatch-from-brussels-updates-on-eu-tech-policy-may-2025.html
[5] https://www.pinsentmasons.com/out-law/news/eu-clarify-ai-act-scope-gen-ai
[6] https://paperswithcode.com/paper/existing-industry-practice-for-the-eu-ai-act
[7] https://artificialintelligenceact.eu/providers-of-general-purpose-ai-models-what-we-know-about-who-will-qualify/
[8] https://www.jdsupra.com/legalnews/eu-commission-seeks-stakeholder-7799362/
[9] https://www.jdsupra.com/legalnews/eu-ai-office-clarifies-key-obligations-7899754/
[10] https://fpf.org/blog/fpf-and-onetrust-launch-updated-conformity-assessment-under-the-eu-ai-act-guide-and-infographic/