Introduction

On May 17, 2024 [4] [10], Colorado enacted Senate Bill 24-205 [3], a pioneering piece of legislation aimed at regulating the use of artificial intelligence (AI) in significant decision-making processes. This law [1] [5] [10], effective February 1, 2026 [1] [4] [10], positions Colorado as a leader in AI regulation, particularly concerning “high-risk” AI systems in sectors such as employment, housing [1] [3] [4] [6], credit [2] [4], education [4], and healthcare [4] [10]. The legislation seeks to prevent algorithmic discrimination and ensure transparency and accountability in AI applications.

Description

Senate Bill 24-205, signed into law on May 17, 2024 [4] [10], establishes comprehensive regulations for the use of artificial intelligence (AI) in consequential decisions, particularly in employment [4] [6], housing [1] [3] [4] [6], credit [2] [4], education [4], and healthcare [4] [10]. It makes Colorado one of the first states to regulate the AI industry, focusing on “high-risk” AI systems that significantly influence such decisions. Taking effect on February 1, 2026, the law adds a new section to the Colorado Consumer Protection Act [10] aimed at preventing algorithmic discrimination in these critical areas [10].

The law mandates that developers of high-risk AI systems conduct thorough impact assessments and publicly disclose the types of AI systems utilized in Colorado, along with their roles in decision-making processes [3]. Developers must document intended use cases [9], data summaries and training data [9], model limitations [9], intended benefits and outputs [9], evaluation methods [9], usage guidelines [9], and the measures taken to mitigate risks of algorithmic discrimination [9]. Any substantial modification to a high-risk AI system necessitates an updated impact assessment [10], which must disclose the system’s purpose [10], intended use cases [7] [9], known risks [7] [8] [10], the categories of data processed [7], and post-deployment monitoring measures [7] [8]. All impact assessments must be retained for a minimum of three years [8], with annual reviews mandated to ensure compliance with anti-discrimination measures [10].
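As a purely hypothetical illustration (not legal guidance), the disclosure items the statute lists for an impact assessment, together with the three-year retention and annual-review rules, could be organized as a simple compliance record. All names here are invented for this sketch:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative record mirroring the disclosure items the law lists
    for a high-risk AI system; field names are hypothetical."""
    system_purpose: str
    intended_use_cases: list
    known_risks: list
    data_categories_processed: list
    monitoring_measures: list
    completed_on: date

    RETENTION_YEARS = 3  # assessments must be kept at least three years

    def retention_deadline(self) -> date:
        # Earliest date the record could be discarded under the 3-year rule.
        return self.completed_on.replace(
            year=self.completed_on.year + self.RETENTION_YEARS
        )

    def annual_review_due(self, today: date) -> bool:
        # Annual reviews are mandated; flag assessments older than a year.
        return (today - self.completed_on).days >= 365
```

A deployer-side tool might walk such records to flag assessments whose annual review is overdue or whose retention window has not yet elapsed; the actual content and timing requirements would, of course, come from the statute and any rules the Attorney General issues.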

Deployers of AI systems face stricter obligations, reflecting a trend towards regulating technology deployment [9]. They must inform consumers when AI is involved in significant decision-making processes, including employment [4] [6], loans [1] [5] [7], and housing [1]. Deployers are required to outline personnel roles in risk management, describe risk management techniques [9], consider organizational complexity [9], evaluate real-world benefits and risks [9], maintain transparency [4] [9], monitor system performance [9], and conduct impact assessments for significant modifications [9]. They must also allow consumers to appeal AI-driven decisions and request human reviews when feasible [9]. If a developer identifies potential algorithmic discrimination [9], they must report it to the Attorney General and other deployers within 90 days [9].

Businesses deploying or developing AI systems intended for consumer interaction must disclose this engagement to consumers, identifying any “differential treatment or impact” across a broad range of protected categories [6]. This imposes significant transparency, analysis, and documentation burdens on employers [6]. Insurers and related entities can achieve compliance by adhering to existing laws governing the use of external consumer data, algorithms, and predictive models as regulated by the commissioner of insurance [2]. Financial institutions, including banks and credit unions, comply with the act if they are subject to state or federal regulatory oversight of high-risk systems, provided that the applicable guidance meets specified criteria [2].

Consumers have the right to appeal AI-generated decisions and to opt out of personal data processing. The law requires that notices regarding adverse decisions made by AI tools detail the reasons for the decision, the AI tool’s impact [6], the data used [6], and the sources of that data [6]. Notifications must be clear [7], accessible in all languages used by the deployer [6], and in formats accommodating disabilities [6]. Additionally, consumers are granted rights to correct erroneous data and to receive information about their rights and the purpose of AI-driven decisions.
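To make the notice requirements concrete, here is a hypothetical sketch (illustrative names only, not a compliance tool) of assembling an adverse-decision notice that refuses to omit any of the content items the law describes:

```python
# Content items the law requires in an adverse-decision notice;
# the field names themselves are invented for this sketch.
REQUIRED_FIELDS = (
    "decision_reasons",       # why the adverse decision was made
    "ai_tool_contribution",   # how the AI tool influenced the decision
    "data_used",              # the data the tool processed
    "data_sources",           # where that data came from
    "correction_rights",      # how the consumer can correct erroneous data
    "appeal_instructions",    # how to appeal and request human review
)

def build_notice(**fields: str) -> dict:
    """Return a notice record, raising if any required item is missing."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"notice is missing required content: {missing}")
    return dict(fields)
```

A real implementation would also have to satisfy the format requirements noted above, such as plain-language phrasing, availability in all languages the deployer uses, and accessibility for people with disabilities.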

The law exempts small businesses with fewer than 50 employees from certain obligations [5], while still requiring direct reporting to the Colorado Attorney General [5]. Because the law focuses on the outcomes of AI decisions rather than intent [5], concerns have been raised about compliance burdens [5]. Governor Jared Polis has expressed worry that the law could hinder innovation and competitiveness [5], urging refinements to key definitions and the compliance structure before the law’s effective date [5]. Stakeholders agree that clarifications are needed [5], particularly regarding notification and documentation requirements [5], which are crucial for employers seeking to ensure compliance and mitigate litigation risk [5].

The definition of “consequential decisions” is a key area of discussion [5], as it determines which AI-driven processes are regulated [5]. Employers seek clarity to align their AI use in HR functions with legal obligations [5]. The scope of exemptions for mid-sized businesses is also under negotiation [5], as current thresholds may impose disproportionate burdens [5]. Timing and scope of impact assessments are contentious [5], with stakeholders debating when assessments should occur and what documentation is necessary [5]. The interconnected nature of the law’s provisions complicates potential revisions [5], as changes in one area can affect others [5], impacting compliance and operational realities for businesses [5].

The definition of “algorithmic discrimination” has been criticized for vagueness [5], making compliance determination challenging for businesses [5]. Risk management requirements for AI deployers are also debated [5], particularly regarding documentation and oversight levels [5]. The Attorney General is granted rule-making authority to implement and enforce the act [2], with violations classified as unfair and deceptive trade practices under the Colorado Consumer Protection Act [2]. Reporting obligations to the Attorney General are under scrutiny [5], with concerns about trade secret exposure versus the need for transparency [5].

Controversial issues include the right to cure before enforcement actions and the extent of trade secret protections [5]. The role of the Attorney General in enforcement is debated [5], with proposals for expanded oversight versus limiting discretion to reduce uncertainty [5]. Other contentious topics include consumer rights to appeal AI decisions [5], potential modifications to small business exemptions [5], and the possibility of delaying implementation for better preparation [5]. Specific website disclosures are mandated [7], detailing the types of AI systems in use and how algorithmic discrimination risks are managed [7] [8], although these requirements do not apply to deployers with fewer than 50 full-time employees who do not use their own data for training [7].

As the law’s enforcement date approaches, legislative changes are anticipated [5], though the specifics remain uncertain [5]. Ongoing discussions are recommended to address concerns and balance consumer protections with business feasibility [5]. Employers should assess AI usage [5], conduct risk assessments [5] [6] [9], review contracts with AI vendors [5], stay informed on legislative developments [5], and develop compliance plans to prepare for potential obligations [5].

The burdens imposed by the law may deter employers from using AI tools for hiring and other purposes [6], especially given the challenges of determining the residency of individuals affected by AI tool use [6]. The detailed notice requirements and the need to explain AI tools in plain language present practical difficulties [6], alongside the costs associated with compliance [6]. The right to opt out of profiling could limit the efficiencies gained from AI [6], necessitating the development of parallel non-AI systems [6]. The individual appeal processes could overwhelm hiring operations [6], as every unsuccessful applicant might seek a human review [6]. The implications of this legislation may prompt other states to consider similar laws, particularly in light of global interest in risk-based AI regulation models [6].

Conclusion

The enactment of Senate Bill 24-205 marks a significant step in AI regulation, with Colorado setting a precedent for other states. While the law aims to protect consumers and ensure fairness in AI-driven decisions, it also presents challenges for businesses in terms of compliance and operational adjustments. As the implementation date approaches, ongoing dialogue and potential legislative refinements will be crucial to balancing innovation with consumer protection. The law’s impact may extend beyond Colorado, influencing AI regulatory frameworks nationwide and globally.

References

[1] https://coloradosun.com/2024/05/18/colorado-artificial-intelligence-law-signed/
[2] https://leg.colorado.gov/bills/sb24-205
[3] https://www.denver7.com/news/politics/colorado-gov-polis-signs-ai-bill-several-others-into-law
[4] https://www.seyfarth.com/news-insights/colorado-governor-signs-broad-ai-bill-regulating-employment-decisions.html
[5] https://www.jdsupra.com/legalnews/colorado-s-ai-task-force-warns-of-5719997/
[6] https://www.littler.com/publication-press/publication/colorados-landmark-ai-legislation-would-create-significant-compliance
[7] https://natlawreview.com/article/colorados-historic-sb-24-205-concerning-consumer-protections-interactions-ai-signed
[8] https://www.healthlawadvisor.com/colorado-sb-24-205-on-the-verge-of-addressing-ai-risk-with-sweeping-consumer-protection-law
[9] https://www.lumenova.ai/blog/colorado-senate-bill-24-205-algorithmic-discrimination-protection/
[10] https://natlawreview.com/article/colorado-sb-24-205-addressing-ai-risk-sweeping-consumer-protection-law