Introduction
Virginia Governor Glenn Youngkin's veto of the High-Risk Artificial Intelligence Developer and Deployer Act highlights the ongoing debate over how to regulate AI technologies. The proposed legislation aimed to protect residents from algorithmic discrimination and to ensure accountability for high-risk AI systems [1], but concerns about its potential impact on smaller companies and economic growth led to its rejection.
Description
Virginia Governor Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) on March 24, 2025, citing concerns that the bill could create a burdensome regulatory framework for smaller companies [1]. The legislation aimed to protect residents from algorithmic discrimination and to ensure accountability for high-risk AI systems, defined as those making consequential decisions affecting access to essential services such as education, employment, and healthcare [1] [4].
The proposed law sought to impose regulatory measures similar to those in Colorado’s AI Act, placing obligations on both developers and deployers of high-risk AI systems [1] [4]. These obligations included conducting AI impact assessments, maintaining detailed technical documentation, and adopting risk management protocols [4], as well as ensuring that AI outputs are identifiable to consumers [1]. The bill would also have required AI developers and deployers to implement safeguards against algorithmic discrimination and to disclose information about their AI systems, including intended benefits, uses, and limitations [1] [2] [3].
Governor Youngkin argued that the bill’s rigid framework failed to accommodate the rapid evolution of AI technology and could hinder economic growth in Virginia, particularly for startups and smaller firms [1]. He emphasized that existing laws already protect consumers against discriminatory practices, suggesting that current regulations on discrimination, privacy, data usage, and defamation could adequately address potential AI-related harms [1] [2] [4].
Despite the veto of HB 2094, Virginia is not without AI governance. The state has established baseline standards for AI use in government through Executive Order No. 30, which mandates compliance with AI policy standards published by the Virginia Information Technologies Agency (VITA) [2]. These standards, released in June 2024, require that AI technologies used by state agencies, including those supplied by external vendors, adhere to guidelines focused on transparency, risk mitigation, and data protection, drawing on ethical principles to prevent bias and privacy harms [2]. The Executive Order also created the Artificial Intelligence Task Force, composed of various stakeholders, to develop additional guidelines for responsible AI use and to provide ongoing recommendations [2].
In Colorado, although the state’s AI Act was signed into law, Governor Polis called for further assessment and revisions [4]. The AI Impact Task Force identified areas for clarification and improvement, categorizing them into four groups ranging from minor changes to issues requiring further stakeholder engagement [4]. Key open questions include how to define “consequential decisions” and “algorithmic discrimination,” as well as whether to add an opportunity to cure instances of non-compliance [4].
Texas is also considering legislation on high-risk AI systems, having recently modified its proposed Responsible AI Governance Act [4]. The revised bill reduces regulatory requirements for private-sector companies while still addressing concerns about AI discrimination: it prohibits certain discriminatory use cases but does not impose affirmative obligations on companies to prevent such effects [3]. Like Utah’s AI legislation, the Texas bill requires notice when individuals interact with AI in government contexts, and it prohibits the intentional development of AI systems designed to incite harm or criminal activity [4]. The bill is currently pending in the House Committee [4].
While the veto means Virginia will not implement the broad AI statute proposed in HB 2094, companies supplying AI services to state agencies must still comply with VITA’s standards, which indirectly impose governance expectations on private-sector businesses through public contracts [2]. Even absent HB 2094’s specific requirements, companies in Virginia remain subject to existing laws and regulations governing AI-driven activities, including antidiscrimination laws, consumer protection statutes, and data privacy regulations such as the Virginia Consumer Data Protection Act [2]. Consequently, businesses could face liability for biased outcomes or unfair practices resulting from their AI tools, regardless of the vetoed legislation [1] [2] [4].
Conclusion
The veto of the High-Risk Artificial Intelligence Developer and Deployer Act underscores the challenge of balancing innovation with regulation. While the proposed legislation was intended to safeguard against algorithmic discrimination, concerns about its impact on economic growth and smaller companies led to its rejection. Nonetheless, AI use in Virginia remains governed by existing laws and standards that aim to ensure the technology is used responsibly and ethically. Ongoing legislative developments across other states reflect the complexity and importance of establishing effective governance frameworks for emerging technologies.
References
[1] https://www.davispolk.com/insights/client-update/virginia-governor-vetoes-ai-bill-targeting-algorithmic-discrimination
[2] https://www.jdsupra.com/legalnews/virginia-governor-vetoes-artificial-6335065/
[3] https://www.ciodive.com/news/Virginia-AI-bill-veto-HB2094-Youngkin-policy-regulation/743518/
[4] https://www.jdsupra.com/legalnews/us-state-ai-legislation-virginia-vetoes-2892430/