Introduction
The European Union (EU) is set to implement a comprehensive governance framework for artificial intelligence (AI) through the EU AI Act. This legislation aims to standardize AI regulation across all 27 EU Member States, ensuring safety, transparency, and accountability in AI systems [1] [7]. The Act introduces a structured approach to categorizing AI systems by risk and establishes new governance rules and obligations for General Purpose AI (GPAI) models.
Description
By 2 August 2025, the governance framework for the EU AI Act must be in place, including EU-level advisory bodies and national competent authorities [2] [3]. The European AI Board, composed of representatives of the EU Member States, is already operational [3]. The Commission has initiated the establishment of a Scientific Panel of independent experts through an open call for expressions of interest, and stakeholders from civil society, academia, and industry can apply to join the AI Act Advisory Forum [3].
By the same date, the EU must also establish an AI Office to strengthen its expertise and capabilities in artificial intelligence [2]. Supported by the Member States, the Office will carry out coordination, advisory, and oversight functions under the AI Act [2] [3]. Member States must designate their national competent authorities, notify the Commission, and publish the authorities' contact details by the deadline [2].
The Act, effective from 1 August 2024, introduces comprehensive regulation of artificial intelligence across all 27 EU Member States, categorizing AI systems into four risk tiers: unacceptable, high, limited, and minimal [1] [4] [5] [7]. New governance rules and obligations for General Purpose AI (GPAI) models on the market take effect, supported by a voluntary Code of Practice that emphasizes transparency, safety, and security [1] [7]. Organizations developing or deploying GPAI models should prioritize participation in this Code, prepare technical documentation, and establish a compliance framework [4]. Providers of generative AI models will be held accountable, and enterprise end-users will need to examine their value chains and third-party risk management practices [1].

The GPAI Code of Practice, published on 10 July 2025, is currently under review by Member States and the Commission [1] [5]. Following its endorsement, organizations will have a limited window to meet the incoming obligations [5]. The Code requires that training data be sourced lawfully and that providers document and disclose their training processes while assessing potential harms associated with their models. GPAI models launched after 2 August 2025 must comply with the new rules by 2 August 2026, while models already on the market before that date must achieve compliance by 2 August 2027 [8]. The Act aims to standardize AI security across the EU, promoting a security-by-design approach that integrates security considerations throughout an AI system's lifecycle [1]. However, compliance challenges may arise because no existing technical specifications define the required cybersecurity measures [1].
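As a rough illustration of this transition timeline, the sketch below maps a GPAI model's market-entry date to the compliance deadline stated above. It is a minimal Python sketch assuming only the dates given in this section; the function name and the handling of the cutoff date are illustrative, not part of the Act.

```python
from datetime import date

# Cutoff and deadlines as stated in this section (illustrative only,
# not legal advice; models entering the market exactly on the cutoff
# date are treated here as already on the market).
GPAI_CUTOFF = date(2025, 8, 2)

def gpai_compliance_deadline(market_entry: date) -> date:
    """Map a GPAI model's market-entry date to its compliance deadline."""
    if market_entry > GPAI_CUTOFF:
        return date(2026, 8, 2)  # launched after 2 August 2025
    return date(2027, 8, 2)      # already on the market before the cutoff

print(gpai_compliance_deadline(date(2025, 9, 1)))  # 2026-08-02
print(gpai_compliance_deadline(date(2024, 6, 1)))  # 2027-08-02
```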
High-risk AI systems, such as those used to screen job applicants, must adhere to stringent legal requirements regarding safety, transparency, and human oversight [1] [6] [7] [8]. The Commission has also issued Guidelines on GPAI models, and the AI Office will provide a final compliance template for GPAI providers [5]. National competent authorities will be tasked with the implementation, supervision, and enforcement of AI regulation: they will have the authority to investigate compliance, designate notified bodies for pre-market approvals, and create AI regulatory sandboxes that particularly support innovation by small and medium-sized enterprises [3]. EU Member States are required to empower these authorities by the specified deadline, and the Commission will publish a list of them on a dedicated webpage in due course [3]. Non-compliance with the EU AI Act can result in significant penalties, including fines of up to 7% of a company's global turnover [1] [7]. Although not all enforcement authorities are operational yet, companies are advised to prepare now for enforcement action [1].
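To make the scale of that penalty ceiling concrete, here is a back-of-the-envelope calculation of the 7% cap. This is a hypothetical sketch based only on the percentage figure cited above; the Act also pairs percentage caps with fixed-amount ceilings, which are omitted here, and the function name is my own.

```python
def max_fine_eur(global_turnover_eur: float, rate: float = 0.07) -> float:
    """Upper bound of an AI Act fine as a share of worldwide annual turnover."""
    return global_turnover_eur * rate

# Example: a firm with EUR 10 billion in global turnover could face
# a penalty of up to EUR 700 million under the 7% ceiling.
print(f"EUR {max_fine_eur(10_000_000_000):,.0f}")  # EUR 700,000,000
```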
Additionally, high-risk AI systems used by public sector organizations must be fully compliant by 2 August 2030, and AI systems that are components of specific large-scale EU IT systems and were placed on the market before 2 August 2027 must comply by 31 December 2030 [8]. The absolute prohibition of unacceptably risky AI systems is particularly significant: it protects fundamental rights and shields vulnerable individuals from practices such as manipulative techniques and social scoring [6]. Companies developing or using AI must also ensure their staff possess adequate AI literacy to meet the Act's evolving requirements. The EU AI Act is expected to set a global standard for AI governance, marking a pivotal moment in the responsible development of AI technology [7]. Comprehensive support is available to help organizations navigate the compliance process, optimize resource allocation, and mitigate risks associated with the upcoming regulatory landscape [4]. Immediate action is necessary to ensure compliance with the new regulations [4].
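The staggered deadlines above can be hard to track at a glance. The sketch below collects them into a single lookup; the category keys are my own shorthand, not terminology from the Act, and the dates are taken directly from this section.

```python
from datetime import date

# Transitional deadlines collected from this section (illustrative).
TRANSITIONAL_DEADLINES = {
    "gpai_new": date(2026, 8, 2),                 # GPAI launched after 2 Aug 2025
    "gpai_existing": date(2027, 8, 2),            # GPAI on the market before then
    "high_risk_public_sector": date(2030, 8, 2),  # public-sector high-risk systems
    "large_scale_eu_it": date(2030, 12, 31),      # legacy large-scale EU IT components
}

def days_remaining(category: str, today: date) -> int:
    """Days left until the transitional deadline for a category."""
    return (TRANSITIONAL_DEADLINES[category] - today).days

print(days_remaining("high_risk_public_sector", date(2025, 8, 2)))  # 1826
```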
Conclusion
The EU AI Act represents a significant step forward in the regulation of artificial intelligence, setting a precedent for global AI governance. By establishing a robust framework for AI oversight, the Act aims to ensure the safe and ethical development of AI technologies. Organizations must act promptly to align with the new regulations, which will not only enhance AI security and transparency but also foster innovation and trust in AI systems across the EU. The Act’s implementation will likely influence AI governance worldwide, promoting a standardized approach to AI safety and accountability.
References
[1] https://www.itpro.com/business/policy-and-legislation/the-second-enforcement-deadline-for-the-eu-ai-act-is-approaching-heres-what-businesses-need-to-know-about-the-general-purpose-ai-code-of-practice
[2] https://www.alexanderthamm.com/en/blog/eu-ai-act-timeline/
[3] https://digital-strategy.ec.europa.eu/en/news/eu-rules-governance-start-apply
[4] https://digital.nemko.com/insights/eu-ai-act-rules-on-gpai-2025-update
[5] https://natlawreview.com/article/eu-ai-act-compliance-deadline-august-2-2025-looming-general-purpose-ai-models
[6] https://www.cio.com/article/4032894/analysis-of-the-european-ai-regulation-one-year-after-its-entry-into-force.html
[7] https://gulfnews.com/world/europe/ai-law-kicks-in-what-you-need-to-know-about-the-new-european-ai-act-1.500221588
[8] https://www.techrepublic.com/article/news-eu-ai-act-gpai-models/