Introduction

The landscape of artificial intelligence (AI) regulation in the United States is rapidly evolving, with various states enacting laws to address specific harms and ensure responsible AI deployment. These regulations focus on issues such as deepfakes, child sexual abuse material (CSAM) [4], and discriminatory practices [7], while also establishing frameworks for innovation and compliance.

Description

Most US state AI laws focus on specific harms [4], such as deepfakes and CSAM [4]. Texas has enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) [1] [6], which takes effect on January 1, 2026 [6]. This comprehensive legislation establishes extensive requirements for AI developers and deployers [6], including categorical restrictions on certain AI systems and mandates for disclosing AI usage when interacting with consumers [6]. TRAIGA defines AI systems broadly [6], encompassing any machine-based system that generates outputs based on inputs [6], and aims to prevent discriminatory and manipulative uses of AI while ensuring that Texas residents receive clear notifications when engaging with these systems [6]. Notably, the act applies to developers and deployers operating in Texas [6], including government entities [1] [6], but excludes hospital districts and higher education institutions [6]. It also exempts individuals acting in a personal context, as opposed to a commercial or employment one, from its disclosure requirements.
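
As a purely illustrative example of what the consumer-disclosure obligation could look like in practice, the following sketch uses a hypothetical chatbot handler to surface a plain-language AI notice before a consumer interaction begins; the wording, trigger conditions, and function names are assumptions, not statutory text.

```python
# Illustrative sketch only: TRAIGA's actual disclosure wording, timing, and
# scope come from the statute; everything below is a hypothetical example.

AI_DISCLOSURE = (
    "You are interacting with an artificial intelligence system, not a human."
)


def start_consumer_session(send_message, uses_ai: bool = True) -> None:
    """Open a consumer-facing session, disclosing AI use up front.

    `send_message` is a stand-in for whatever channel delivers text to the
    consumer (chat widget, SMS gateway, voice prompt, etc.).
    """
    if uses_ai:
        # Deliver the notice before any substantive AI-driven interaction.
        send_message(AI_DISCLOSURE)


if __name__ == "__main__":
    start_consumer_session(print)  # Prints the disclosure to stdout.
```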

TRAIGA prohibits specific practices [1], including the use of AI to manipulate behavior [1], government use of AI to assign social scores [1], unlawful discrimination [1], infringement of constitutional rights [1], and capture of biometric data without consent [1]. The act introduces a regulatory sandbox program to foster innovation and responsible AI deployment [1], allowing businesses to test innovative AI systems for a 36-month period without prior licensing, provided they receive state approval and submit relevant information about the development [5], training [5] [7], and testing of the AI system [5]. This program protects participants from punitive actions during the testing phase [1]. Additionally, TRAIGA establishes the Texas Artificial Intelligence Council to address ethical [1], privacy [1] [2] [3], and public safety concerns related to AI [1].

Texas had already adopted statutes in 2023 addressing the development and deployment of AI systems for specific purposes. California, for its part, has enacted requirements for AI vendors to publicly disclose training data [7], effective January 1, 2026 [1] [6] [7], and has passed several other AI-related bills [2], including the Defending Democracy from Deepfake Deception Act and the California AI Transparency Act [2], which mandates disclosure of AI-generated content [2]. The Health Care Services: Artificial Intelligence Act requires healthcare providers to disclose the use of generative AI in patient communications [2]. The California Senate is also advancing further legislation targeting chatbot marketing practices and enhancing the regulatory framework for AI systems [2].

Colorado has passed comprehensive legislation mandating employer risk management policies for AI in employment decisions [7], with civil liability provisions for violations [7], effective February 1, 2026 [1] [7]. The Colorado AI Act [1] [2], set to take effect in 2026 [2], establishes duties for developers and deployers of high-risk AI systems [2]; the Colorado Attorney General is authorized to implement it, and violations are classified as unfair or deceptive trade practices [2]. Illinois has prohibited AI discrimination based on protected classes [7], while New York’s Senate has passed bills requiring audits of AI systems used in employment decisions [7], building on audit requirements in effect in New York City since 2023 [7]. Connecticut has proposed similar audit mandates [7].

The classification of AI hiring platforms as “agents” disrupts the traditional approach to algorithmic employment decision-making [7], allowing for direct liability under federal anti-discrimination statutes [7]. This legal framework emphasizes the function delegated to the agent rather than the method of execution [7], meaning that the technology industry cannot evade liability by automating discriminatory processes [7]. The lack of unified federal AI restrictions has altered the compliance landscape for multi-state employers [7], leading to significant variations in AI compliance requirements across jurisdictions [7]. This fragmentation necessitates the development of multiple compliance strategies tailored to specific state requirements [7].

Existing laws [2], such as privacy and intellectual property statutes [2], continue to govern AI-related issues [2]. The California Consumer Privacy Act regulates automated decision-making [2], while Illinois's Biometric Information Privacy Act allows for significant damages for violations [2]. Organizations operating across multiple jurisdictions must prepare for staggered implementation dates [7]: October 1, 2025 for California's FEHA regulations [7]; January 1, 2026 for generative AI disclosure [1] [6] [7]; and February 1, 2026 for Colorado's risk management framework [7]. The evolving landscape of AI employment regulation necessitates adaptive compliance frameworks that address new requirements while preserving operational efficiency [7].
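
To make that staggered timeline easier to operationalize, a minimal sketch follows; the entries simply restate the effective dates cited above, the labels are informal shorthand rather than official statute names, and the tracking approach itself is an illustrative assumption, not a prescribed compliance method.

```python
from datetime import date

# Effective dates restated from the compliance timeline above; labels are
# informal shorthand, not official statute names.
EFFECTIVE_DATES = {
    "California FEHA AI regulations": date(2025, 10, 1),
    "California generative AI training-data disclosure": date(2026, 1, 1),
    "Texas TRAIGA": date(2026, 1, 1),
    "Colorado AI Act risk management framework": date(2026, 2, 1),
}


def upcoming_obligations(as_of: date, horizon_days: int = 180) -> list[str]:
    """Return requirements taking effect within `horizon_days` of `as_of`."""
    return sorted(
        name
        for name, effective in EFFECTIVE_DATES.items()
        if 0 <= (effective - as_of).days <= horizon_days
    )


if __name__ == "__main__":
    # Example: list everything taking effect in the six months after Sept 1, 2025.
    for item in upcoming_obligations(date(2025, 9, 1)):
        print(item)
```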

In July 2025 [3], Congress rejected a federal proposal that would have imposed a ten-year ban on state-level regulation of artificial intelligence [3], leaving states' authority over AI oversight intact [3]; the US Senate had weighed such a moratorium on enforcement of state AI laws before ultimately rejecting it [4]. In total, over 40 state AI bills were introduced in 2023 [2], with Connecticut and Texas among the states adopting new statutes. To navigate this evolving regulatory environment [3], companies should take proactive steps: map their AI systems thoroughly to understand how they are used [3], monitor state laws to anticipate legislative changes [3], integrate compliance into their design processes [3], and consider implementing bias audits and clear disclosures [3]. Engaging legal counsel early in the process is crucial [3], as the landscape of AI regulation is changing rapidly [3].
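
As one concrete illustration of what a bias audit might compute, the sketch below derives group selection rates and adverse-impact ratios, a metric commonly compared against the EEOC's four-fifths rule of thumb; the group labels, sample data, and threshold are illustrative assumptions, and any real audit must follow the methodology the applicable law or regulation prescribes.

```python
from collections import Counter

# Common rule-of-thumb threshold for adverse impact; not a legal test by itself.
FOUR_FIFTHS_THRESHOLD = 0.8


def adverse_impact_ratios(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute each group's selection rate divided by the highest group's rate.

    `decisions` is a list of (group_label, was_selected) pairs, e.g. the
    output of an AI screening tool over a hiring cycle.
    """
    totals, selected = Counter(), Counter()
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}


if __name__ == "__main__":
    # Made-up data: flag any group whose impact ratio falls below 0.8.
    sample = (
        [("group_a", True)] * 40 + [("group_a", False)] * 60
        + [("group_b", True)] * 25 + [("group_b", False)] * 75
    )
    for group, ratio in adverse_impact_ratios(sample).items():
        flag = "review" if ratio < FOUR_FIFTHS_THRESHOLD else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```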

Conclusion

The evolving regulatory landscape for AI in the United States presents both challenges and opportunities for businesses and policymakers. As states continue to enact diverse AI laws, organizations must develop adaptive compliance strategies to navigate varying requirements. This dynamic environment underscores the importance of proactive engagement with legal counsel and the integration of compliance measures into AI system design and deployment. The ongoing developments in AI regulation will significantly impact innovation, privacy [1] [2] [3], and ethical considerations, shaping the future of AI governance in the US.

References

[1] https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250721-texas-enacts-new-ai-law
[2] https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states
[3] https://www.fourscorelaw.com/resources/congress-blocks-ai-regulation-ban-what-it-means-for-businesses-and-the-rise-of-state-level-ai-laws
[4] https://www.jdsupra.com/legalnews/ai-law-center-july-2025-updates-8331919/
[5] https://natlawreview.com/article/americas-ai-action-plan-national-security-imperative
[6] https://www.beneschlaw.com/resources/texas-joins-growing-state-by-state-ai-regulation-in-enacting-comprehensive-ai-system-law.html
[7] https://www.linkedin.com/pulse/ai-employment-compliance-how-state-regulations-kayne-mcgladrey-dpasc/