Introduction
The rapid advancement of artificial intelligence, particularly of General-Purpose AI (GPAI) models, means that models receive frequent and significant updates that can alter their capabilities and risk profiles [2]. These updates, often implemented without adequate oversight, can lead to unintended behaviors that pose risks to downstream applications [2]. The European Union (EU) has responded by establishing comprehensive regulations to ensure the safe and responsible development and deployment of GPAI models.
Description
Technological advances in artificial intelligence, particularly in GPAI models, have produced a stream of significant updates that can alter a model's capabilities and risk profile [2]. These updates often occur without sufficient oversight, potentially resulting in unintended behaviors that disrupt downstream applications [2]. For instance, a healthcare startup integrating a GPAI model may face serious consequences if an update causes its assistant to give harmful advice, forcing a product withdrawal on safety grounds [2].
In 2024, while only a few new foundation models were released, numerous updates were shipped, affecting billions of users [2]. These updates, though necessary for performance enhancements and bug fixes, can also introduce new risks and vulnerabilities [2]. The absence of rigorous testing for substantial modifications, comparable to the standards applied in high-risk industries such as aviation or healthcare, raises concerns about the reliability of GPAI models [2]. The EU has established comprehensive obligations for GPAI, including requirements for documentation, risk mitigation, and security measures, with codes of practice still under development [1] [4].
A recent analysis of 143 changelogs from major AI providers found that a significant percentage of updates could increase systemic risk, while only a small fraction focused on safety improvements [2]. The potential for updates to create novel failure modes underscores the need for a more nuanced approach to risk management [2]. The EU AI Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal [4]. Unacceptable systems are prohibited outright, while high-risk systems, such as those used in biometrics and critical infrastructure, must meet stringent requirements including risk assessments and human oversight [1] [4]. GPAI models that pose systemic risk, particularly those with extensive usage in the EU, are subject to additional obligations [1] [3] [4] [7].
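To make the tiering concrete, here is a minimal Python sketch of how a compliance team might triage use cases into the Act's four tiers. It is illustrative only: the domain list, the prohibited-practice flag, and the function names are our own assumptions, not criteria taken from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # stringent requirements apply
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # no additional obligations

# Illustrative domain list only; real classification requires legal
# analysis of the Act's annexes, not a keyword lookup.
HIGH_RISK_DOMAINS = {"biometrics", "critical_infrastructure",
                     "employment", "law_enforcement"}

def triage_use_case(domain: str, prohibited_practice: bool) -> RiskTier:
    """Rough first-pass triage of a use case into a risk tier."""
    if prohibited_practice:        # e.g. social scoring
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    # Defaulting the remainder to minimal is a simplification: limited-risk
    # transparency duties attach only to specific system types.
    return RiskTier.MINIMAL
```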
To comply with the Act, providers must maintain detailed technical documentation of their models' training, testing, and evaluation processes, which is essential for regulatory scrutiny and for compliance by downstream deployers [1] [2] [7]. They are also required to publish summaries of training datasets and to ensure adherence to copyright law, adopting responsible development practices that include explicit copyright-compliance policies [1]. Continuous risk assessment is mandated, requiring providers to monitor model performance and update evaluations as new use cases arise [5]. Engagement with rights holders is emphasized, including a dedicated point of contact for copyright-related concerns [1].
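A hypothetical record type can make these documentation duties concrete. The sketch below is a minimal Python data structure for tracking the required artifacts; the field names are our invention, not the Act's official documentation template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelTechnicalDocumentation:
    """Hypothetical compliance record; field names are illustrative,
    not the Act's official documentation template."""
    model_name: str
    version: str
    release_date: date
    training_process: str                 # architecture, data, compute, methodology
    testing_process: str                  # benchmarks and red-teaming performed
    evaluation_results: dict[str, float]  # metric name -> score
    training_data_summary: str            # public summary of training datasets
    copyright_policy_url: str             # published copyright-compliance policy
    rights_holder_contact: str            # point of contact for copyright concerns

    def is_complete(self) -> bool:
        """Cheap completeness check before handing the record to reviewers."""
        return all([self.training_process, self.testing_process,
                    self.evaluation_results, self.training_data_summary])
```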
Enhanced obligations apply to GPAI models classified as posing systemic risk: providers must develop a Risk Taxonomy Roadmap classifying potential harms and implement a Safety and Security Framework for ongoing risk assessment [1]. They must also commission external evaluations before market entry and continuously thereafter, alongside mechanisms for reporting serious incidents and protecting whistleblowers [1]. Open-source GPAI models are exempt from some obligations unless classified as systemic-risk models, and AI systems in the research phase are generally exempt until market placement [3] [7]. Because the Act's definition of GPAI models lacks precise criteria, accompanying guidelines introduce training-compute thresholds to determine which models qualify and to clarify when a downstream actor becomes a provider [7].
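As an illustration of how such compute thresholds operate, the sketch below encodes the figures commonly cited in this context: roughly 10^23 training FLOPs as the guidelines' indicative marker of GPAI status and 10^25 FLOPs as the Act's presumption of systemic risk. Treat both numbers and the function itself as assumptions to be checked against the current legal texts; compute is only a presumption trigger, and the Commission can designate models on other grounds.

```python
# Threshold figures as commonly cited for the Act and its guidelines;
# verify against the current legal texts before relying on them.
GPAI_INDICATIVE_FLOPS = 1e23   # guidelines' indicative marker of GPAI status
SYSTEMIC_RISK_FLOPS = 1e25     # the Act's presumption of systemic risk

def classify_by_compute(training_flops: float) -> str:
    """Rough classification by cumulative training compute. Compute is only
    a presumption trigger; designation on other grounds can override it."""
    if training_flops >= SYSTEMIC_RISK_FLOPS:
        return "GPAI with systemic risk (enhanced obligations presumed)"
    if training_flops >= GPAI_INDICATIVE_FLOPS:
        return "GPAI (baseline obligations)"
    return "below the indicative GPAI threshold"

# Example: a model trained with ~3e25 FLOPs is presumed systemic-risk.
print(classify_by_compute(3e25))
```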
Non-compliance with GPAI obligations can create significant liability [4]. Although the draft GPAI Code of Practice is currently voluntary, declining to adopt it may carry negative legal and regulatory consequences: the proposal suggests that non-adherence could shift the burden of proof onto companies, requiring them to demonstrate compliance with safety, transparency, and governance standards in enforcement actions or litigation [4]. This would effectively elevate the Code of Practice to a baseline standard for responsible AI development and deployment, particularly for GPAI providers [4]. As the Code is finalized and the AI Act's next milestones approach, proactive engagement and cross-functional compliance efforts are essential for organizations operating in, or serving customers in, the EU [6].
To address the risks associated with GPAI updates, providers should establish clear thresholds for when an update necessitates a new risk assessment, develop standardized documentation requirements, and implement monitoring frameworks that track the impact of updates over time [2]. Transitional provisions give existing GPAI models until 2 August 2027 to comply [7]. The European AI Office will supervise providers, although enforcement mechanisms and cooperation with national authorities remain underdeveloped [7]. This approach aims to foster consumer trust and support a dynamic business environment that encourages innovation while ensuring safety in a rapidly evolving AI landscape [2]. Organizations that address these requirements proactively will be better positioned in a regulated AI environment, ready for the upcoming deadlines and equipped with robust governance frameworks that prioritize trustworthiness [5]. Given the dynamic nature of AI technology, ongoing adaptation and legal vigilance will remain necessary, particularly concerning intellectual-property rights in training data and the criteria that trigger regulatory obligations [7].
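A minimal sketch of such a monitoring framework, assuming higher-is-better safety metrics and a 5% regression tolerance (both our assumptions), might gate each update on a comparison of pre- and post-update evaluation scores:

```python
# Hypothetical regression gate for model updates: if any tracked safety
# metric (higher is better) degrades by more than the tolerance, the
# update is flagged for a fresh risk assessment before release.
TOLERANCE = 0.05  # maximum relative degradation; an assumed value

def metrics_requiring_reassessment(baseline: dict[str, float],
                                   candidate: dict[str, float]) -> list[str]:
    """Return the metrics that regressed past the tolerance."""
    flagged = []
    for metric, old_score in baseline.items():
        new_score = candidate.get(metric, 0.0)
        if old_score > 0 and (old_score - new_score) / old_score > TOLERANCE:
            flagged.append(metric)
    return flagged

baseline = {"refusal_accuracy": 0.97, "safe_advice_rate": 0.99}
candidate = {"refusal_accuracy": 0.90, "safe_advice_rate": 0.99}  # post-update
if flagged := metrics_requiring_reassessment(baseline, candidate):
    print(f"New risk assessment required; regressed metrics: {flagged}")
```

In practice, the metric set, tolerance, and escalation path would come from the provider's Safety and Security Framework rather than hard-coded constants.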
Conclusion
The evolving landscape of GPAI models presents both opportunities and challenges. While technological advancements offer enhanced capabilities, they also introduce new risks that necessitate stringent oversight and regulation. The EU’s comprehensive framework aims to mitigate these risks by enforcing rigorous standards and practices. Organizations that align with these regulations will not only ensure compliance but also foster trust and innovation in the AI sector. As the field continues to evolve, ongoing vigilance and adaptation will be crucial to maintaining a balance between innovation and safety.
References
[1] https://www.medialaws.eu/balancing-ai-innovation-and-risk-inside-the-gpai-code-of-practice/
[2] https://oecd.ai/en/wonk/proportional-oversight-for-ai-model-updates-can-boost-ai-adoption
[3] https://www.lexology.com/library/detail.aspx?g=16f29be0-6218-4061-85b7-215e9d2832d7
[4] https://www.kslaw.com/news-and-insights/transatlantic-ai-governance-strategic-implications-for-us-eu-compliance
[5] https://eyre.ai/gpai-model-obligations-ai-act/
[6] https://cdp.cooley.com/the-eu-ai-act-key-milestones-compliance-challenges-and-the-road-ahead/
[7] https://www.osborneclarke.com/insights/european-commission-clarify-providers-obligations-general-purpose-ai-models-under-eu-ai