Introduction

The Colorado Artificial Intelligence Act (CAIA) [2] [4], also known as SB-205, is the first comprehensive state-level framework for governing artificial intelligence in the United States. Signed into law on May 17, 2024, and set to take effect on February 1, 2026 [1] [3], the Act aims to prevent algorithmic discrimination, particularly in critical sectors such as healthcare and employment, by imposing governance and transparency obligations on developers and deployers of high-risk AI systems.

Description

The CAIA requires developers and deployers of high-risk AI systems to implement an AI governance program [1]. This program must include safety testing, thorough documentation [1], harm-mitigation strategies [1], and reasonable care to protect consumers from algorithmic discrimination [2]. Healthcare providers [5], in particular, must create risk-management frameworks and continuously evaluate their AI applications [5], especially in areas such as billing, scheduling [5], and clinical decision-making [5], to ensure compliance with anti-discrimination standards [5]. Companies classified as “developers” must adhere to specific guidelines; although not mandatory, maintaining a risk-management policy that conforms to ISO/IEC 42001 or aligns with the NIST AI RMF can provide an affirmative defense against potential violations [4].

Businesses must assess their risk tolerance and the potential harms associated with high-risk AI systems [1], which can significantly influence consequential decisions [2]. Compliance with industry standards is essential [1]: employers classified as deployers must conduct annual impact assessments, implement risk-management policies [2] [4], and notify consumers when high-risk AI systems are used. Comprehensive documentation is also required to determine whether a system qualifies as a high-risk AI system [4], including details about its purpose, intended and prohibited uses [4], expected benefits [4], outputs [4], foreseeable uses [4], limitations [1] [4] [5], risks of algorithmic discrimination [2] [3] [4] [5], and mitigation measures [4]. Developers must additionally disclose training data and document bias-mitigation efforts prior to deployment.
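As one purely illustrative way to keep track of the documentation fields listed above, an organization might maintain a structured record per system. The field names below are assumptions mirroring this summary, not statutory text, and the Act does not prescribe any particular data format.

```python
from dataclasses import dataclass


@dataclass
class HighRiskAISystemRecord:
    """Illustrative record of CAIA-style documentation fields.

    The fields mirror the items listed in this summary; the Act
    itself does not mandate this (or any) schema.
    """
    purpose: str
    intended_uses: list[str]
    prohibited_uses: list[str]
    expected_benefits: list[str]
    outputs: list[str]
    foreseeable_uses: list[str]
    limitations: list[str]
    discrimination_risks: list[str]
    mitigation_measures: list[str]

    def missing_fields(self) -> list[str]:
        """Return the names of documentation fields left empty,
        preserving declaration order."""
        return [name for name, value in vars(self).items() if not value]
```

A record with empty fields would surface them via `missing_fields()`, which could feed an internal compliance checklist; whether such a checklist satisfies the statute is a question for counsel.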

Transparency is a key requirement: employers must provide information about adverse decisions influenced by AI and give affected individuals the opportunity to correct their data and appeal outcomes. Providers must inform patients when AI systems are used in consequential decisions and clarify the AI’s role in any adverse outcome [5]. Any known [4], suspected [4], or imminent instances of algorithmic discrimination must be reported to the Colorado Attorney General within the statutory timeframe. Enforcement of the Act rests with the Colorado Attorney General [5]; the Act provides no private right of action for consumers [5].

In light of ongoing debate, Governor Jared Polis has expressed concern about balancing consumer protection with innovation, urging stakeholders to refine the bill over the next two years [3]. On May 5, 2025, he and other officials requested that the effective date be delayed to January 2027 [3], coinciding with the indefinite postponement of an amendment bill [3]. However, attempts to delay the CAIA’s implementation through unrelated legislation were unsuccessful [3].

Organizations are advised to begin preparing for compliance now [3], focusing on policy development [3], impact assessments [2] [3] [5], and engagement with AI auditors [3]; these assessments are crucial for ensuring the reliability of AI systems and minimizing the risk of algorithmic discrimination [3]. Exemptions exist for smaller organizations [2], notably those with fewer than 50 employees that do not train AI systems on their own data [2], though the specific statutory criteria must be reviewed to determine eligibility [2]. Engaging an AI governance professional is recommended both before and during the operation of AI systems [1], to help ensure adherence to the evolving regulatory landscape [1].

Conclusion

The Colorado Artificial Intelligence Act (CAIA) is set to significantly impact AI governance in the United States by establishing a robust framework to prevent algorithmic discrimination. Its implementation will require organizations to adopt comprehensive governance measures, thereby promoting transparency and accountability in AI systems. As the regulatory landscape evolves, businesses must prepare to comply with these new standards, ensuring that AI technologies are deployed responsibly and ethically.

References

[1] https://www.jdsupra.com/legalnews/the-building-blocks-for-artificial-8827499/
[2] https://ilgdenver.com/2025/07/navigating-colorados-new-artificial-intelligence-act-caia/
[3] https://natlawreview.com/article/will-colorados-historic-ai-law-go-live-2026-its-fate-hangs-balance-2025
[4] https://blog.stackaware.com/p/colorado-artificial-intelligence-act-sb-205-developer-high-risk-artificial-intelligence-systems
[5] https://www.simbo.ai/blog/assessing-ai-applications-in-healthcare-operations-a-guide-for-providers-to-meet-colorado-ai-act-requirements-2269633/