Introduction
The European Union’s AI Act establishes a comprehensive legal framework for General-Purpose AI (GPAI) models, focusing on transparency, safety [6] [10] [11] [14], and accountability [10]. It categorizes AI systems based on risk and sets specific obligations for providers, with a strict implementation timeline. The Act aligns with other EU regulations and introduces a Code of Practice for GPAI models, impacting providers globally.
Description
The EU AI Act establishes a comprehensive legal framework for providers of General-Purpose AI (GPAI) models [11], with specific obligations taking effect on August 2, 2025. This framework enhances transparency, safety [6] [10] [11] [14], and accountability in AI systems across the EU, categorizing AI systems by risk [11] and focusing in particular on models with systemic capabilities [11], including high-risk models [11]. Providers must prepare extensive technical documentation [11], comply with copyright-related rules [11], and implement quality assurance mechanisms well ahead of the enforcement deadline.
The implementation timeline is strict [11], requiring immediate action from organizations [11]. Providers of GPAI models placed on the EU market before August 2, 2025 must achieve compliance by August 2, 2027. The Act aligns with other EU regulations on cybersecurity [5], chips [5], and digital services [5], aiming for strategic autonomy and a level playing field [5]. The European Commission has published the Code of Practice for GPAI Models, establishing a voluntary framework for GPAI providers to align with the transparency [6], copyright [1] [3] [5] [6] [10] [11] [13] [14], and systemic risk requirements under the AI Act [6]. This Code applies globally to GPAI providers whose outputs are utilized within the EU and includes documentation tools and risk protocols, although some prescriptive elements have raised concerns within the industry.
Key implementation tools [6], including a mandatory template for public summaries of training content [2] [12], are still pending [6]. The guidelines published by the European Commission clarify obligations under the AI Act [13], addressing key areas such as the definition of GPAI models, provider responsibilities [1] [2] [4] [13], open-source exemptions [2] [3], and enforcement considerations [2]. A GPAI model is defined as one that demonstrates significant generality and can competently perform a wide range of tasks [1], trained with more than 10^23 floating-point operations (FLOP) of compute, a threshold raised from the previously proposed 10^22 FLOP [2]. This adjustment results in fewer models qualifying as GPAI, as models limited to narrow tasks are excluded [2], while those demonstrating substantial generality may still be classified as GPAI [2]. Models whose training compute exceeds 10^25 FLOP are presumed to carry systemic risk [1], necessitating notification to the European Commission’s AI Office [1]. Organizations can contest this presumption by demonstrating that their model does not present systemic risk [9], although the burden of proof lies with them.
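These compute criteria lend themselves to a rough first-pass triage, though under the guidelines they are indicative and must be combined with a qualitative assessment of generality. The minimal Python sketch below illustrates the thresholds; the function and constant names are hypothetical, and nothing here substitutes for a case-by-case legal determination.

    # Thresholds from the AI Act and the final GPAI guidelines.
    GPAI_INDICATIVE_THRESHOLD_FLOP = 1e23   # indicative criterion for GPAI status
    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25     # presumption of systemic risk

    def classify_model(training_compute_flop: float, significant_generality: bool) -> str:
        """Rough first-pass triage of a model against the compute criteria."""
        if training_compute_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
            # Presumed systemic risk: the provider must notify the AI Office
            # and may rebut the presumption, bearing the burden of proof.
            return "GPAI with presumed systemic risk"
        if significant_generality and training_compute_flop > GPAI_INDICATIVE_THRESHOLD_FLOP:
            return "GPAI"
        if significant_generality:
            # Below the compute threshold, substantial generality can still
            # lead to GPAI classification on a case-by-case basis.
            return "possible GPAI (case-by-case assessment)"
        return "likely not GPAI (narrow-task model)"

    print(classify_model(3e25, True))  # -> GPAI with presumed systemic risk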
The guidelines establish clear technical criteria for identifying GPAI models and outline obligations for providers, including lifecycle responsibilities that begin at the pre-training stage and extend through all development phases, including post-market modifications [13]. Providers must maintain comprehensive records of their models’ development and testing processes [8], publish training data summaries [5] [13], and ensure copyright compliance [13]. While open-source model providers must adhere to copyright obligations and publish training data summaries [5], they may be exempt from certain documentation requirements unless their models present systemic risks. This exemption is lost, however, if the provider monetizes the model [7], which includes offering paid technical support or distributing the model through monetized platforms [7].
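To make these record-keeping duties concrete, here is a minimal Python sketch of a documentation record a provider might maintain across a model’s lifecycle. The field names are hypothetical and do not mirror the official Annex XI template; they simply group the items named above (development records, training data summary, copyright policy).

    from dataclasses import dataclass, field

    @dataclass
    class GPAIModelDocumentation:
        """Illustrative lifecycle record; field names are hypothetical."""
        model_name: str
        training_compute_flop: float
        training_data_summary_url: str   # public summary of training content
        copyright_policy_url: str        # copyright compliance policy
        development_log: list[str] = field(default_factory=list)
        test_reports: list[str] = field(default_factory=list)

        def add_lifecycle_event(self, event: str) -> None:
            # Lifecycle duties run from pre-training through post-market
            # modifications, so every phase appends to the same record.
            self.development_log.append(event)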
The emergence of Shadow AI poses risks [5], as unauthorized use of AI tools by employees may lead to compliance issues under the EU AI Act [5]. Providers of advanced models that present systemic risks [1] [4] [10] [12], specifically those exceeding 10^25 FLOP in training compute [10], will face additional requirements [1] [10] [11] [12], including comprehensive risk assessments [11] [13], mitigation strategies [6] [13], and notification of relevant information to the AI Office if their model is presumed to have high-impact capabilities [2]. The Act categorizes GPAI models by risk [5], with systemic risk triggering increased documentation and oversight [5]; because these obligations scale with training compute, they may also incentivize businesses to minimize energy use [5].
The GPAI Code of Practice emphasizes a principles-based approach to obligations, particularly concerning copyright [9] [14], and GPAI providers must be cautious about data scraping practices to avoid violating these obligations [14]. Providers who adhere to this Code will experience reduced burdens and increased legal certainty [10], although signing it does not guarantee automatic compliance [13]. The AI Office will monitor adherence and may impose scrutiny on non-signatories [13].
From August 2, 2025 [1] [2] [3] [4] [6] [7] [9] [10] [11] [14], providers must fulfill transparency and copyright obligations when introducing GPAI models to the EU market [10]. The actor that makes a model available to downstream actors on the EU market is considered its provider and must fulfill GPAI obligations [13]. Downstream actors incorporating GPAI models into AI systems may also assume provider responsibilities [13], particularly if the modified model is later placed on the EU market [13]. However, not all modifications to GPAI models trigger provider obligations for downstream actors; where they do, only specific obligations related to documentation and training data summaries apply [13]. Modifications to systemic-risk GPAI models, by contrast, require full compliance with all relevant obligations [13], including managing risks, reporting incidents [13], and ensuring cybersecurity [13].
Signatories of the Code are required to document information related to each GPAI model for ten years and must provide necessary information to the AI Office and downstream providers [14]. The Code defines systemic risks and mandates the establishment of a comprehensive Safety and Security Framework [14], including risk assessments and mitigation measures for models deemed to have systemic risks [14]. Signatories must also report serious incidents to the AI Office [14], including cybersecurity breaches and harm to individuals [14].
An enforcement timeline for GPAI models has been established [10], with full enforcement powers available from August 2, 2026 [13], allowing the Commission to impose fines for non-compliance [4] of up to €15 million or 3% of global annual turnover, whichever is higher [1]. The AI Office may support new entrants [13], particularly those developing systemic-risk models [13], and will coordinate the development of consistent standards in response to the evolving evaluation ecosystem [13]. Stakeholders will be invited to contribute to updates through consultations and workshops [13], and providers are encouraged to review their obligations [4], assess risks [4], prepare for compliance [4] [6] [11] [13], and engage with the AI Office [4]. These guidelines, developed through public consultation [4], reflect the Commission’s interpretation and will guide enforcement [4], enabling stakeholders across the AI value chain to innovate with clarity and confidence.
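The fine ceiling is simple arithmetic; the brief Python sketch below illustrates it, assuming the “whichever is higher” rule and a hypothetical function name.

    def max_gpai_fine_eur(global_annual_turnover_eur: float) -> float:
        """Upper bound on GPAI non-compliance fines: the higher of
        EUR 15 million or 3% of total worldwide annual turnover."""
        return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

    # A provider with EUR 2 billion in turnover faces up to EUR 60 million.
    print(f"{max_gpai_fine_eur(2_000_000_000):,.0f}")  # -> 60,000,000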
The guidelines also outline various scenarios that constitute the “placing on the market” of GPAI models [7], including availability through APIs [7], app stores [7], cloud services [7], and other means [7]. A significant legal fiction is established: if a company develops a GPAI model without placing it on the market but integrates it into an AI system that is placed on the market, the model itself is deemed to have been placed on the market as well [7].
The concept of a “lifecycle” for GPAI models is introduced [7], indicating that a model remains the same throughout its lifecycle [7], even if modified after its initial training [7]. This is crucial for grandfathering provisions concerning models modified after August 2, 2025 [7], provided they were placed on the market before that date [7].
In contrast [7], downstream actors who fine-tune a GPAI model become providers themselves if their modifications significantly alter the model’s capabilities or systemic risk [7], particularly if the compute used for the modification exceeds one-third of the original model’s training compute [1] [7]. If the original training compute is unknown [7], downstream providers can use the established GPAI thresholds to assess their obligations [7].
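This one-third rule also reduces to a compute comparison, sketched below in Python under stated assumptions: the function name is hypothetical, and the fallback to one-third of the 10^23 FLOP indicative threshold when the original compute is unknown is a simplification of the guidelines’ approach.

    ONE_THIRD = 1.0 / 3.0
    GPAI_INDICATIVE_THRESHOLD_FLOP = 1e23  # fallback basis when compute is unknown

    def downstream_becomes_provider(modification_compute_flop: float,
                                    original_training_compute_flop: float | None) -> bool:
        """Compute test for downstream modifiers of a GPAI model."""
        if original_training_compute_flop is not None:
            return modification_compute_flop > ONE_THIRD * original_training_compute_flop
        # Assumption: when the original figure is unknown, fall back to
        # one-third of the indicative GPAI threshold.
        return modification_compute_flop > ONE_THIRD * GPAI_INDICATIVE_THRESHOLD_FLOP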
Recent changes compared to earlier approaches include clarifications on GPAI models with systemic risk [7], the treatment of modifications by original providers [7], and a more precise definition of monetization [7]. Effective governance requires a combination of technical compliance [11], organizational capabilities [11], and strategic planning [11], as organizations must engage with GPAI model providers to ensure compliance across the AI value chain [11], particularly regarding fundamental rights impact assessments [11]. The transition from legislation to practical implementation presents challenges [11], including technical complexity and the need for international coordination [11]. The Act’s technology-neutral approach allows for adaptation to emerging AI capabilities while maintaining core protection principles [11], setting global precedents for AI governance and influencing international regulatory development [11].
Conclusion
The EU AI Act’s comprehensive framework for GPAI models significantly impacts AI governance, emphasizing transparency [1] [5] [10], safety [6] [10] [11] [14], and accountability [10]. By aligning with other EU regulations and establishing a global Code of Practice, the Act influences international standards. Providers must navigate complex compliance requirements, balancing innovation with regulatory obligations, while the Act’s technology-neutral approach ensures adaptability to future AI advancements.
References
[1] https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250724-european-commission-issues-guidelines-for-providers-of-general-purpose-ai-models
[2] https://www.lexology.com/library/detail.aspx?g=a69576af-f56b-4951-935c-d75230e12f64
[3] https://www.paulweiss.com/insights/client-memos/eu-commission-publishes-guidelines-on-general-purpose-ai-obligations-as-well-as-training-data-disclosure-template-further-clarity-as-the-countdown-to-enforcement-begins
[4] https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers
[5] https://martel-innovate.com/news/2025/08/04/eu-ai-act-takes-effect-compliance-guide/
[6] https://perkinscoie.com/insights/update/delayed-eu-code-practice-provides-compliance-framework-general-purpose-ai-models
[7] https://conventuslaw.com/report/taking-the-eu-ai-act-to-practice-how-the-final-gpai-guidelines-shape-the-ai-regulatory-landscape/
[8] https://artificialintelligenceact.eu/article/53/
[9] https://www.jdsupra.com/legalnews/european-commission-publishes-3513388/
[10] https://digital-strategy.ec.europa.eu/en/news/eu-rules-general-purpose-ai-models-start-apply-tomorrow-bringing-more-transparency-safety-and
[11] https://digital.nemko.com/insights/eu-ai-act-rules-on-gpai-2025-update
[12] https://digital-strategy.ec.europa.eu/en/factpages/general-purpose-ai-obligations-under-ai-act
[13] https://artificialintelligenceact.eu/gpai-guidelines-overview/
[14] https://www.paulweiss.com/insights/client-memos/eu-commission-publishes-its-code-of-practice-for-general-purpose-ai-what-you-need-to-know