Introduction

The deployment and regulation of High-Risk Artificial Intelligence (AI) Systems are critical to ensuring fairness and transparency, particularly in sectors affecting Colorado residents. Key obligations include conducting evaluations to prevent algorithmic discrimination, informing residents about AI usage, and maintaining compliance with legal requirements [1].

Description

Deployers of High-Risk Artificial Intelligence Systems must conduct annual evaluations to prevent Algorithmic Discrimination and exercise reasonable care to mitigate known or foreseeable risks associated with these systems. Before using such systems to make significant decisions affecting Colorado residents—such as those related to employment, lending, and healthcare [2]—Deployers must inform residents of the deployment, the system’s purpose, and contact information [1]. Residents must also be informed of their right to opt out of data processing for profiling under the Colorado Privacy Act [1].

If a decision adversely impacts a resident, the Deployer must disclose the rationale behind the decision, including the system’s contribution and the data sources used [1]. Residents must have the opportunity to correct inaccurate data and to appeal adverse decisions, with human review of the appeal unless such review is deemed not in the resident’s best interest [1].

Deployers are also obligated to maintain and update a public statement on their websites describing the types of High-Risk AI Systems in use, their risk management strategies for Algorithmic Discrimination, and the data collected and used [1]. Industry representatives have emphasized the need for clearer guidelines on AI usage disclosures and risk assessments, highlighting the importance of addressing AI risks while fostering innovation. The law also outlines specific steps developers can take to establish a rebuttable presumption of reasonable care in consumer protection [3].

In cases of Algorithmic Discrimination, Deployers must notify the Colorado Attorney General within 90 days of discovery [1]. The Attorney General holds exclusive enforcement authority under the Colorado AI Act, with violations classified as unfair trade practices [1]. Compliance must be demonstrated through documentation and adherence to recognized risk management frameworks [1]. Developers and Deployers may assert affirmative defenses if they proactively address and mitigate risks, including by discovering and remedying violations while remaining compliant with an approved AI risk management framework [1].

State officials recognize the concerns of small businesses regarding the law’s implications while emphasizing the necessity of consumer protection in the rapidly evolving AI landscape. Ongoing discussions among stakeholders aim to refine the law, ensuring it aligns with both industry needs and future national regulations.

Conclusion

The regulation of High-Risk AI Systems in Colorado underscores the balance between innovation and consumer protection. By mandating transparency, accountability, and risk management, the law seeks to safeguard residents while accommodating industry growth [1]. Ongoing dialogue among stakeholders is crucial to refining these regulations, ensuring they remain effective and relevant in the face of technological advancements.

References

[1] https://www.jdsupra.com/legalnews/understanding-the-colorado-ai-act-9361626/
[2] https://coloradosun.com/2024/10/22/task-force-talks-revisions-to-colorados-controversial-artificial-intelligence-law/
[3] https://kjk.com/2024/10/24/the-current-and-evolving-landscape-of-ai-in-the-united-states-whats-next/