Introduction

The Virginia High-Risk Artificial Intelligence Developer and Deployer Act [3] [4] [5] (HB 2094) represents a significant legislative effort to regulate high-risk AI systems, focusing on consumer protection and addressing algorithmic discrimination. Passed by the Virginia General Assembly and awaiting the Governor’s signature, the Act would make Virginia the second state, after Colorado, to enact comprehensive AI legislation of this kind, though with a narrower scope than Colorado’s framework.

Description

The Virginia High-Risk Artificial Intelligence Developer and Deployer Act, known as HB 2094, was passed by the Virginia General Assembly on February 20, 2025, and is currently pending the signature of Governor Youngkin. This legislation establishes a comprehensive regulatory framework for high-risk AI systems, emphasizing consumer protection and targeting systems that autonomously make or significantly influence consequential decisions without adequate human oversight [1]. In doing so, Virginia becomes the second state to enact a law addressing algorithmic discrimination [4], following Colorado [3] [4], although the scope of Virginia’s Act is narrower than Colorado’s [3] [4].

The Act defines high-risk AI systems as those intended to autonomously render consequential decisions [5], requiring that the AI output serve as the “principal basis” for such decisions [5], a more industry-friendly stance than Colorado’s broader definition. It categorizes high-risk AI systems as those that significantly impact decisions affecting education, employment [1] [2] [3] [6], financial services [2] [6], government services [3] [6], healthcare [2] [3] [6], housing [2] [3] [6], insurance [2] [6], and legal services [2] [3] [6]. Notably, the Virginia Bill expands this definition to include decisions related to parole [6], probation [6], and marital status [3] [6], although government entities making such decisions may be excluded from its scope [6]. Additionally, it excludes models used for development [3] [6], prototyping [6], and research prior to deployment [6].
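The definition above amounts to a two-part test: the system must drive a decision in an enumerated consequential domain, and the AI output must be the principal basis for that decision, with pre-deployment research models carved out. A minimal sketch of that test, with illustrative domain names and logic (this is an assumption for exposition, not legal advice or statutory text):

```python
# Illustrative sketch of the HB 2094 high-risk test as described above.
# The domain list and function logic are assumptions for exposition only.

CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial services", "government services",
    "healthcare", "housing", "insurance", "legal services",
    "parole", "probation", "marital status",  # Virginia-specific additions
}

def is_high_risk(domain: str, is_principal_basis: bool,
                 pre_deployment_research: bool = False) -> bool:
    """Return True if a use case would plausibly qualify as high-risk:
    the AI output is the principal basis for a consequential decision,
    and the model is not merely in development, prototyping, or research."""
    if pre_deployment_research:
        return False  # pre-deployment research models are excluded
    return is_principal_basis and domain.lower() in CONSEQUENTIAL_DOMAINS
```

Note that the “principal basis” prong means a system that merely informs a human decision, without driving it, would fall outside the definition, which is the industry-friendly narrowing relative to Colorado.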

Developers and deployers of high-risk AI systems must implement safeguards against algorithmic discrimination [5], which involves unlawful differential treatment based on protected characteristics such as age [2], color [2], disability [2], and ethnicity [2]. They are required to maintain documentation on various topics [3] [6], including system limitations, data used [2] [3] [6], and measures to mitigate algorithmic discrimination [3] [6]. Transparency is crucial; developers must disclose their use of high-risk AI to consumers, including the purpose [2], intended benefits [2], and foreseeable harmful uses [2]. If a developer learns of potential algorithmic discrimination [6], they must notify the Virginia attorney general and all known deployers within 90 days [6], a requirement not mirrored in Colorado’s legislation.

Both the Virginia and Colorado acts require disclosures to consumers regarding their interaction with AI systems, particularly high-risk ones [1] [3] [6]. The Virginia Act defines “consumer” as a Virginia resident [6], excluding those acting in a commercial or employment context [1] [6]. If a high-risk AI system makes an adverse decision [3] [6], the deployer must inform the consumer of the reasons for the decision [6], the AI system’s contribution [6], and the data processed [6], along with the right to appeal and correct inaccuracies [3] [6].

Impact assessments must include performance metrics [6], known limitations [2] [6], and the methods used to evaluate performance [2]. While the Colorado Act requires deployers to report discovered algorithmic discrimination to the attorney general within 90 days [6], the Virginia Bill imposes no analogous deployer-reporting requirement [2] [3]. Enforcement of each act rests with the respective state attorney general, and neither law provides a private right of action [6]. Violations of the Virginia Act can result in civil penalties of $1,000 per violation and $10,000 for willful violations, while the Colorado Act imposes penalties of up to $20,000 per violation. The Virginia Bill is set to take effect on July 1, 2026.

To prepare for compliance [3] [6], businesses should inventory current and planned AI use cases [6], assess their roles as developers or deployers [3] [6], and determine which systems qualify as high-risk [6]. Required disclosures should be prepared [6] and impact assessments documented [6], potentially leveraging existing data protection assessment processes [6]. Deployers must implement a risk management policy and program [2], review high-risk AI systems for algorithmic discrimination [2], and retain impact assessments for a specified duration [2], with the Virginia Act imposing longer retention requirements than Colorado’s [2]. Finally [6], an AI risk management program and supporting policies should be drafted and implemented [6], using frameworks such as the NIST AI RMF or ISO/IEC 42001 [6], both of which are viewed favorably under the two acts [6].
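The workflow above (inventory use cases, record roles, flag high-risk systems, and track the required artifacts) can be sketched as a simple gap analysis. All names, fields, and the choice of which obligations attach to which role are illustrative assumptions, not a reading of the statute:

```python
# Hypothetical compliance-gap sketch for the workflow described above.
# Field names and role-to-obligation mapping are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    role: str                       # "developer", "deployer", or "both"
    high_risk: bool
    disclosures_prepared: bool = False
    impact_assessment_done: bool = False
    risk_program_in_place: bool = False

def compliance_gaps(inventory: list[AIUseCase]) -> dict[str, list[str]]:
    """Map each high-risk use case to its outstanding compliance items."""
    gaps: dict[str, list[str]] = {}
    for uc in inventory:
        if not uc.high_risk:
            continue  # the obligations below attach to high-risk systems
        missing = []
        if not uc.disclosures_prepared:
            missing.append("consumer disclosures")
        if not uc.impact_assessment_done:
            missing.append("impact assessment")
        if uc.role in ("deployer", "both") and not uc.risk_program_in_place:
            missing.append("risk management policy and program")
        if missing:
            gaps[uc.name] = missing
    return gaps
```

For example, a deployer running a high-risk resume-screening tool with none of the artifacts in place would show all three items outstanding, while a low-risk chatbot would not appear in the gap report at all.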

Conclusion

The Virginia High-Risk Artificial Intelligence Developer and Deployer Act underscores the growing importance of regulating AI technologies to prevent algorithmic discrimination and protect consumers. By establishing a robust framework, Virginia aims to ensure that AI systems are developed and deployed responsibly, with transparency and accountability at the forefront. Businesses must adapt to these regulations by implementing comprehensive risk management strategies and maintaining thorough documentation to comply with the new legal requirements.

References

[1] https://natlawreview.com/article/virginia-poised-become-second-state-enact-comprehensive-ai-legislation
[2] https://www.jdsupra.com/legalnews/virginia-legislature-passes-second-us-7683331/
[3] https://practicalprivacy.wyrick.com/blog/patchwork-2-0-faqs-on-emerging-us-state-ai-laws-and-what-to-do-about-them
[4] https://www.jdsupra.com/legalnews/virginia-s-high-risk-ai-developer-and-7508875/
[5] https://www.jdsupra.com/legalnews/virginia-legislature-passes-high-risk-9847746/
[6] https://www.jdsupra.com/legalnews/patchwork-2-0-faqs-on-emerging-us-state-1467847/