Introduction
In recent years, various states have enacted legislation to address the challenges and risks associated with artificial intelligence (AI) technologies. These laws focus on transparency, consumer protection [2] [5], and compliance [2] [3] [5], reflecting a growing concern over AI’s impact on society. However, a recent federal proposal seeks to impose a moratorium on state and local AI regulations, sparking significant debate over the balance between national uniformity and state autonomy in AI governance.
Description
Legislation has been enacted in various states to address the harms and risks associated with AI technologies [7], with numerous AI-related bills focusing on transparency, consumer protections [2] [5] [7], and compliance inspections of high-risk AI systems [2]. Key areas of focus include protections against AI-generated explicit material [7], deep fakes [7], spam communications [7], AI disclosure requirements [7], and identity protection for AI-generated content [7]. Notably, Colorado has enacted a comprehensive AI law that addresses issues such as discrimination and transparency [2]. In 2025, California revised its rules to exempt behavioral advertising profiling from risk assessment and compliance requirements [3], allowing businesses to include pre-use notices within existing collection notices [3]. Arkansas introduced two laws mandating that public entities create comprehensive AI usage policies and granting ownership rights over content generated by generative AI to the individuals who provide the input [3], provided there are no copyright infringements [3]. Kentucky’s SB 4 requires the Commonwealth Office of Technology to establish policy standards for AI use [3].
Maryland’s HB 956 formed a working group to study private sector AI usage and recommend regulatory standards [3], while Montana’s SB 212 ensures that government restrictions on private ownership of computational resources are narrowly tailored and mandates risk management policies for AI-controlled critical infrastructure [3]. Utah’s new regulations require consumer-facing generative AI services to disclose AI interactions and introduce specific rules for AI-supported mental health chatbots [3], including bans on advertising during interactions and on sharing users’ personal information [3]. West Virginia’s HB 3187 creates a task force to identify economic opportunities in AI and recommend best practices for public sector use while protecting individual rights and consumer data [3]. These laws have been developed with input from consumers [7], industry stakeholders [4] [5] [7], and advocates [3] [7], reflecting careful consideration of the potential risks [7].
Recently, the US House of Representatives passed a bill imposing a 10-year moratorium on state and local regulations concerning artificial intelligence [5], which has raised bipartisan concerns among several senators, including Republicans Josh Hawley and Marsha Blackburn [6]. This legislation [1] [5], part of H.R. 1 [5], would prevent states from regulating AI [2], including areas such as chatbots, social media [2], deepfakes [2] [4] [5], and medical software [2], and it nullifies existing state protections against AI discrimination [5], algorithmic bias [5], and deepfake technology [5]. While there is a limited exception for criminal enforcement [5], the bill eliminates civil enforcement mechanisms that states have developed to safeguard citizens from AI-related harms [5]. Proponents argue that existing state laws on unfair practices [2], consumer privacy [2], and discrimination can adequately address AI-related issues [2], asserting that a uniform national framework is necessary to prevent a fragmented regulatory environment that could undermine the competitiveness of US tech firms [4], particularly against international rivals like China [4]. However, critics warn that blocking state and local action could create an unregulated environment for AI development and deployment [2], hindering states’ ability to protect residents from rapidly evolving AI threats, including deepfake scams [4], algorithmic discrimination [4], and job displacement [4].
A coalition of over 260 state legislators from all 50 states, led by South Carolina Rep. Brandon Guffey and South Dakota Sen. Liz Larson [4], has expressed opposition to the proposed moratorium, emphasizing the necessity of state autonomy in policymaking [4]. They advocate for the ability to implement AI regulations tailored to the specific needs of their communities [4], especially in light of Congressional inaction [7]. Additionally, a bipartisan coalition of 40 state attorneys general has urged Congress to reject the moratorium and instead establish a comprehensive regulatory framework [6], emphasizing concerns that the current proposal does not provide adequate protections against the risks associated with AI [6]. Legal experts have pointed out that the bill lacks the typical preemption language found in federal legislation [5], which usually clarifies the authority to supersede state laws [5], raising significant concerns for businesses about how the measure would apply to AI deployments and the risks involved [5]. In the interim [7], businesses are advised to adhere to existing state regulations [7], as enforcement by State Attorneys General is expected to remain a priority [7].
While the moratorium is anticipated to ignite significant debate, its passage into law appears unlikely in the near term [2]. Any Senate-approved bill will also require approval from the House [6], where it previously passed narrowly [6]. Notably, some House members, including Rep. Marjorie Taylor Greene [4] [5] [6], have expressed opposition to the moratorium [6], indicating they would not support the bill if the moratorium provision remains [6]. Meanwhile, multiple federal AI bills are under consideration [2], which include proposals for new studies [2], disclosure standards [2], and increased research and development funding [2], reflecting the complexities seen with state privacy laws and the ongoing discussions surrounding federal preemption. Several states [3] [4], including California [4], Colorado [2] [4], and Utah [4] [5], have enacted significant laws governing AI’s use in the commercial sector [4], while others have introduced narrower regulations [4], and many governors have established AI task forces to develop best practices for AI deployment [4]. Organizations are advised to track the bill’s progress in the Senate and uphold ethical AI practices despite the anticipated reduction in oversight [5].
Conclusion
The ongoing debate over AI regulation highlights the tension between the need for a cohesive national strategy and the importance of state-level autonomy to address local concerns. While the proposed federal moratorium seeks to create a uniform regulatory environment, it faces significant opposition from state legislators and attorneys general who argue for tailored solutions. As the legislative landscape continues to evolve, businesses and policymakers must navigate the complexities of AI governance to ensure both innovation and protection against potential risks.
References
[1] https://natlawreview.com/article/state-lawmakers-oppose-proposed-10-year-freeze-ai-laws-regulations
[2] https://www.kiplinger.com/politics/how-will-state-laws-hurt-future-of-ai
[3] https://www.lexology.com/library/detail.aspx?g=1fe5d98d-1e21-402a-a940-aea5f5290571
[4] https://statescoop.com/state-lawmakers-push-back-federal-proposal-limit-ai-regulation/
[5] https://www.businesstechweekly.com/technology-news/house-passes-ai-regulation-moratorium-implications-for-state-protections-and-oversight/
[6] https://www.cnet.com/tech/services-and-software/how-a-proposed-moratorium-on-state-ai-rules-could-affect-you/
[7] https://www.jdsupra.com/legalnews/bipartisan-coalition-of-state-ags-1989542/