Introduction
Congress is currently considering a proposal for a decade-long moratorium on state-level regulations concerning artificial intelligence (AI) and automated decision-making systems. The initiative, part of a budget reconciliation bill introduced by House Republicans, aims to establish uniformity in AI regulation across the United States, countering the fragmented state-by-state approach [7]. However, it has sparked significant debate over its potential impact on existing and future state legislation, particularly in states such as California and Colorado [3], which have been at the forefront of AI regulation.
Description
Congress is considering a decade-long moratorium on state-level regulations concerning artificial intelligence (AI) and automated decision-making systems, a provision included in a budget reconciliation bill proposed by House Republicans. The legislation would invalidate more than 20 existing California laws related to AI and halt 30 current bills aimed at regulating the technology, including measures requiring transparency in AI-driven decisions in healthcare and employment [9]. Introduced by Congressman Brett Guthrie, the moratorium seeks to create uniformity in AI regulation across the country in place of the fragmented state-by-state approach [7] [9]. Critics, including California Attorney General Rob Bonta and a coalition of 40 attorneys general [4], argue that it threatens essential protections against discrimination and misuse of AI technologies and could stifle legislative efforts in states such as California and Colorado [3] [7] [9].
California’s ongoing efforts to regulate AI would be directly affected by the potential federal moratorium. State Senator Josh Becker has been active in drafting laws that require transparency from AI developers, particularly regarding generative AI [10]. The California Assembly has advanced the AI Copyright Transparency Act, spearheaded by Assemblymember Rebecca Bauer-Kahan, to the Senate [2] [8] [9]. The bill requires developers of generative AI systems to document any copyrighted materials used in training their models and provides a mechanism for copyright holders to verify whether their work was used [8]. Developers must post this documentation on their websites by January 1, 2026 [1], and establish a publicly accessible platform through which copyright owners can request information about the materials used [6]. They must respond to such requests within 30 days, detailing the copyrighted materials likely present in their datasets [1]. Rights owners may bring civil actions against non-compliant developers [1] [6], although developers who make all of their training data publicly available at no cost are exempt [1]. Separately, the California Privacy Protection Agency has warned that the proposed moratorium could undermine rights and privacy protections established by California voters [7], particularly those related to opting out of automated decision-making and ensuring transparency in how personal data is used [9].
The California Assembly Judiciary Committee has also discussed proposed legislation concerning copyright law and its implications for AI training, emphasizing the need to balance transparency with the protection of proprietary information [5]. Key concerns were raised about the significance of copyright law for various stakeholders, including creators and businesses [5]. One representative expressed skepticism about the intent of certain bills, arguing that they may not genuinely promote transparency and that whether the use of copyrighted material for AI training constitutes fair use should be resolved by the courts [5]. In addition, Assembly Bill 2013 mandates disclosures about the datasets used for AI training, including the presence of copyrighted material [5]; amendments have been made to safeguard sensitive information amid warnings that excessive disclosure could jeopardize trade secrets [5].
If enacted, the moratorium could stifle legislative efforts across the country, as nearly 600 draft bills aimed at regulating AI are currently under consideration in 45 states [7] [9]. Critics argue that the provision may be an attempt by companies and lobbyists to undercut California’s leadership in AI regulation, and that it could obstruct the California Privacy Protection Agency’s efforts to regulate automated decision-making and enforce laws against deepfakes [7]. The breadth of the moratorium also raises concerns about unintended consequences, especially as many companies now identify themselves as AI firms; if, for instance, financial services companies use AI in ways that result in discriminatory practices, states may be unable to intervene [3].
Although the moratorium would expire after 10 years, it could significantly hinder regulatory initiatives in the interim [7]. The legality of a blanket moratorium on state regulation remains uncertain, and the bill would still allow states to enforce certain laws aimed at improving government efficiency [7]. A bipartisan House AI task force struggled to reach agreement on AI regulation [7] [9], and its discontinuation raises further concerns about the federal legislative process [3]; in the absence of federal action, states have taken independent steps in response to incidents of harm involving AI technologies [7].
Proponents of the moratorium argue that inconsistent state regulations could hinder US competitiveness in AI and that excessive regulation could stifle innovation [7]. The proposal aligns with broader efforts by certain political figures to limit AI regulation, with supporters emphasizing the need for a balanced approach that fosters industry growth while ensuring safety and accountability [7]. Critics, however, are skeptical that comprehensive federal policy will be developed quickly, noting that no such legislation has been introduced in the current Congress [3] [7] [9]. The ongoing debate highlights the tension between state and federal approaches to AI regulation, with some lawmakers arguing that a patchwork of state laws could hinder innovation and competitiveness [10].
Conclusion
The proposed moratorium on state-level AI regulations has significant implications for the future of AI governance in the United States. While it aims to create a unified regulatory framework, it also risks undermining state efforts to address AI-related challenges, particularly in areas like discrimination and privacy. The debate underscores the complex balance between fostering innovation and ensuring accountability, with potential consequences for both state and federal legislative processes. As the discussion continues, the outcome will likely shape the trajectory of AI regulation and its impact on various stakeholders across the nation.
References
[1] https://calmatters.digitaldemocracy.org/bills/ca_202520260ab412
[2] https://www.transparencycoalition.ai/news/pope-leo-xiv-is-already-addressing-artificial-intelligence-heres-what-he-actually-said
[3] https://timesofsandiego.com/politics/2025/05/16/congressional-bill-would-block-california-other-states-from-regulating-ai/
[4] https://oag.ca.gov/news/press-releases/attorney-general-bonta-congress-california-must-retain-its-ability-protect
[5] https://www.citizenportal.ai/articles/3204736/California/
[6] https://www.transparencycoalition.ai/news/ai-legislative-update-may-16-2025
[7] https://calmatters.org/economy/technology/2025/05/state-ai-regulation-ban/
[8] https://www.transparencycoalition.ai/news/california-assembly-approves-ai-copyright-transparency-act-bill-now-moves-to-senate
[9] https://www.kqed.org/news/12040476/californians-would-lose-ai-protections-under-bill-advancing-in-congress
[10] https://themarkup.org/artificial-intelligence/2025/05/16/congress-moves-to-cut-off-states-ai-regulations