Introduction
In the first quarter of 2025 [1], significant regulatory and policy developments concerning artificial intelligence (AI) emerged at the state [2], federal [1] [3], and international levels [1]. These developments center on the implementation of new regulations, the establishment of oversight bodies, and the introduction of legislative measures to address the challenges and opportunities presented by AI technologies.
Description
In the first quarter of 2025 [1], significant regulatory and policy developments regarding artificial intelligence (AI) emerged at the state [1], federal [1] [3], and international levels [1]. The European Union’s Artificial Intelligence Act (EU AI Act) [1] [2], which officially entered into force in August 2024 [2], has begun its phased implementation, imposing obligations on operators of “high-risk” AI systems [1]. The first set of regulatory requirements took effect on February 2, 2025 [1], prohibiting certain AI practices [2], particularly those involving manipulative techniques or the prediction of criminal behavior [2], and restricting the use of AI to infer emotions in workplace settings [2]. These measures also include mandates for AI literacy and a ban on AI systems deemed to pose “unacceptable risk.” By August 2, 2025, additional obligations will apply to providers of general-purpose AI (GPAI) models, requiring them to maintain detailed technical documentation [1], ensure compliance with EU copyright law [2], report serious incidents [1], and assess systemic risks associated with their models [2]. An AI Office and a European Artificial Intelligence Board will oversee enforcement at the EU level [2], with national authorities designated in each member state [2].
In the United States [1], the federal government has stepped back from pursuing comprehensive AI legislation, and efforts toward federal privacy legislation have similarly subsided [3]. The Trump administration has prioritized AI development over safety protocols [1], issuing an Executive Order on January 23, 2025 [1] that suspended several Biden-era AI policies and initiated an AI Action Plan, for which public input has been solicited [1]. Subsequent policy memoranda from the Office of Management and Budget aim to implement this Executive Order with respect to agency use and procurement of AI technologies [1]. A separate Executive Order established an AI Education Task Force to develop a Registered Apprenticeship program for AI-related occupations [1].
At the state level [1], over 550 AI-related bills have been introduced across 45 states and Puerto Rico in the first quarter of 2025 [1], addressing topics such as consumer protection [1], transparency [1] [2], and algorithmic discrimination [1] [3]. While many proposals focus on specific AI applications [1], at least eight states are considering comprehensive frameworks for “high-risk” AI systems [1]. California’s experience illustrates the challenges of regulating AI [1], as Governor Newsom vetoed a bill mandating safety testing but later commissioned a study on AI risks [1].
Virginia and Colorado have both advanced bills addressing algorithmic discrimination [1]: Virginia’s bill was vetoed, while Colorado’s was enacted but remains subject to amendments [1]. The Colorado AI Act [1], effective February 1, 2026 [1] [2], imposes requirements on developers of “high-risk AI systems” to protect consumers from algorithmic discrimination [1]. However, amendments introduced on April 28, 2025, aim to reduce obligations for developers, deployers [3], and vendors of high-risk AI systems [1] [3]. Notably, an exception is made for developers of AI systems with “open model weights,” provided they implement specific technical and administrative measures to prevent the systems from being used in consequential decision-making [3]. The existing duty of care to protect consumers from the risks of algorithmic discrimination would be eliminated [3], and reporting obligations for developers would be lessened [3].
Governor Polis has called for further legislative refinement to balance consumer protection with innovation [1]. Under the proposed amendments, compliance deadlines would be extended to January 1, 2027, with additional extensions for smaller businesses [3]. The definition of “extensive profiling” would be removed [3], refocusing the law on significant consumer decisions [3], and obligations to evaluate specific risks related to physical or biological identification would be eliminated [3]. The scope of significant decisions would be narrowed to the provision or denial of specific goods and services [3], excluding advertising [3].
Both states define “high-risk” AI systems as those that significantly influence consequential decisions [1]; Colorado’s Act allows a broader interpretation of what constitutes “assisting” in such decisions [1], while the Virginia AI Act proposed a more restrictive definition [1]. As the AI industry evolves [1], state legislatures will continue to debate and refine AI legislation [1], with Colorado’s and Virginia’s laws serving as influential models for other states [1].
Conclusion
The regulatory and policy developments in AI during the first quarter of 2025 highlight a global effort to address the complexities and potential risks associated with AI technologies. The EU’s comprehensive approach, the US federal and state-level initiatives, and the specific legislative actions in states like Colorado and Virginia underscore the importance of balancing innovation with consumer protection. These measures are likely to influence future AI legislation and set precedents for other jurisdictions as they navigate the evolving landscape of AI regulation.
References
[1] https://www.jdsupra.com/legalnews/ai-quarterly-update-recent-ai-6278881/
[2] https://www.bsr.org/en/blog/the-eu-ai-act-where-do-we-stand-in-2025
[3] https://natlawreview.com/article/states-shifting-focus-ai-and-automated-decision-making