Introduction
The landscape of artificial intelligence (AI) regulation in the United States is undergoing significant changes, with states like Colorado and California taking the lead in developing frameworks to address the challenges posed by AI technologies. These efforts focus on mitigating risks associated with high-stakes AI applications, ensuring fairness, and enhancing oversight.
Description
State-level AI regulation in the US is evolving, with notable developments in Colorado and California [1] [2]. Colorado's Artificial Intelligence Act (CAIA), effective February 1, 2026, regulates high-risk AI systems that make consequential decisions affecting individuals in critical areas such as education, employment, and healthcare [1] [2] [3]. The law mandates comprehensive documentation, risk management programs, impact assessments, and ongoing monitoring to prevent algorithmic discrimination, and it requires compliance reviews at least annually and after any significant modification to an AI system [1] [2].
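The CAIA review cadence described above can be sketched in a few lines. This is an illustrative sketch only, not statutory text: the function name, parameters, and the 365-day approximation of "at least annually" are assumptions made for the example.

```python
# Illustrative sketch of the CAIA review cadence: a compliance review is
# required at least annually and after any significant modification.
# All names here are hypothetical, not drawn from the statute itself.
from datetime import date, timedelta

def next_review_due(last_review: date, modified_since_review: bool) -> date:
    """Earliest date by which the next compliance review is required."""
    if modified_since_review:
        # A significant modification triggers an immediate review.
        return date.today()
    # Otherwise the annual cadence applies (365 days as an approximation).
    return last_review + timedelta(days=365)

print(next_review_due(date(2026, 2, 1), False))  # 2027-02-01
```

In practice an organization's review schedule would be driven by counsel's reading of the statute; the point of the sketch is only that both triggers (the calendar and the modification event) must be tracked.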
In California, lawmakers faced setbacks on AI regulation in 2024. One significant bill, which would have required employers to notify and potentially accommodate workers when AI is used in critical hiring or employment decisions, was rejected [3]. A second proposal, SB 1047, would have required developers of high-risk AI models to conduct safety tests and implement shutdown mechanisms to mitigate critical harms; Governor Newsom vetoed it in September [3]. SB 1047 sought to regulate AI systems based on computational power and cost, specifically targeting models that require over 10^26 floating-point operations and exceed $100 million in cloud computing expenses [1] [2]. The bill would have imposed additional oversight, including mandatory third-party audits and shutdown capabilities, while prohibiting uses that could pose an unreasonable risk of critical harm, such as mass casualties or major cyberattacks [1] [2].
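The SB 1047 coverage criteria above reduce to simple threshold arithmetic, which the following sketch makes concrete. The function and constant names are illustrative assumptions, not definitions from the bill text.

```python
# Hypothetical sketch of the SB 1047-style coverage test described above.
# The two thresholds come from the vetoed bill; everything else here
# (names, structure) is illustrative only.

FLOP_THRESHOLD = 1e26          # training compute, floating-point operations
COST_THRESHOLD = 100_000_000   # cloud computing expenses, USD

def would_be_covered(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would have crossed both SB 1047 thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD

# A frontier-scale training run versus a smaller one:
print(would_be_covered(3e26, 250_000_000))  # True
print(would_be_covered(5e24, 30_000_000))   # False
```

Note the capability-based design: coverage turns on scale of the model's training, not on how the model is used, which is the contrast with Colorado's use-case approach drawn below.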
Looking ahead, future AI regulation in California is expected to evolve toward a framework that balances computational capabilities with specific use cases, focusing on the risks posed by AI in high-stakes environments [1] [2]. In New York, lawmakers attempted to address perceived shortcomings of existing regulations in 2024 but were unsuccessful [3]. As a result, 2025 is expected to bring an increase in lawsuits and agency actions against employers over their use of AI in hiring and workplace practices [3].
Businesses and legal counsel can learn from these regulatory approaches: Colorado’s focus on bias mitigation in critical sectors and California’s emphasis on oversight for large-scale models [2]. Organizations that adopt both perspectives—ensuring safety and fairness in AI applications while managing broader risks—will be better positioned for compliance as new regulations emerge [2].
Conclusion
The evolving regulatory landscape in states like Colorado and California highlights the growing recognition of the need for robust AI governance. These efforts underscore the importance of addressing both the ethical and operational challenges posed by AI technologies. By focusing on bias mitigation, safety, and oversight [2] [3], these states are setting precedents that could influence future national and international AI regulatory frameworks. Businesses that proactively align with these emerging standards will likely find themselves better equipped to navigate the complexities of AI compliance and innovation.
References
[1] https://www.jdsupra.com/legalnews/two-paths-to-ai-regulation-capability-6044925/
[2] https://ourtakeonai.bakerbotts.com/post/102js1w/two-paths-to-ai-regulation-capability-vs-use-case-in-state-level-approaches
[3] https://www.fisherphillips.com/en/news-insights/comprehensive-review-of-ai-workplace-law-and-litigation-as-we-enter-2025.html