Introduction

Virginia Delegate Michelle Lopes Maldonado has introduced a comprehensive legislative package of four bills designed to enhance safety, transparency, and fairness in artificial intelligence (AI) [2]. The bills address AI disclosure, training data transparency, consumer opt-out options, safety testing, and deepfakes [2]. Notably, HB 2250 is recognized as the first AI safety bill in the nation to introduce consumer protection measures [2]. The package also addresses algorithmic discrimination in critical areas such as housing, healthcare, and employment [2] [3].

Description

Virginia Delegate Michelle Lopes Maldonado has introduced a comprehensive package of four bills aimed at enhancing safety, transparency, and fairness in artificial intelligence [2]. These measures include requirements for AI disclosure, training data transparency, consumer opt-out options, safety testing, and regulations concerning deepfakes [2]. Notably, HB 2250 is recognized as the first AI safety bill in the nation to introduce consumer protection measures such as ‘Do Not Train’ data designations, Training Data Verification Requests, and Training Data Deletion Requests [2]. The legislation also addresses algorithmic discrimination, particularly in significant decisions related to housing, healthcare, and employment [2] [3].

Maldonado, an attorney with a background in technology, has collaborated with various stakeholders, including industry representatives and civil society, to refine these bills, which she hopes to advance during Virginia’s legislative session [2] [3]. The bills have received conceptual endorsement from the legislative Joint Commission on Technology and Science, which emphasizes responsible development practices [3]. Developers will be required to conduct quality assurance to ensure their AI systems do not marginalize individuals, and any substantial modification to an AI model must be accompanied by an impact assessment to prevent negative outcomes [3]. Specifically, developers of high-risk AI systems must update their disclosures and documentation within 90 days of any intentional and substantial modification to the system [1].

The Transparency Coalition endorses these initiatives as vital for safeguarding personal data and human-created content [2]. HB 2250 would mandate that developers disclose information about the training data used in AI models, following California’s recent adoption of a similar measure (AB 2013) [2]. Virginia’s HB 2250, along with Washington State’s proposed HB 1168, aims to provide comparable protections for citizens [2]. Additionally, all parties involved in developing and using high-risk AI systems must disclose the rationale behind adverse consequential decisions, detailing the AI system’s contribution, the data processed, and the sources of that data [1]. Consumers will have the right to correct inaccuracies or appeal adverse decisions, and public-facing disclosures must be provided when consumers interact with these AI systems [1].

Furthermore, HB 2121, known as the Digital Content Authenticity and Transparency Act, requires AI developers to embed provenance data in AI-generated or AI-modified images and to provide public access to this information [2]. The four bills are:

  • HB 2121: Digital Content Authenticity and Transparency Act – Mandates the application of provenance data to AI-generated digital content and public access to this data [2].

  • HB 2250: Consumer Opt-Out Artificial Intelligence Training Data Act – Allows consumers to opt out of having their personal data used for AI training and requires developers to disclose training data information on their websites.

  • HB 2094: High-Risk Artificial Intelligence Developer and Deployer Act – Obligates developers of high-risk AI systems to disclose intended use, limitations, risks of algorithmic discrimination, performance evaluations, and risk mitigation strategies [1] [2] [3].

  • HB 2124: Synthetic Digital Content Act – Expands defamation laws to cover synthetic digital content, classifying the use of AI-derived content for fraud as a Class 1 misdemeanor [2]. It also permits individuals depicted in synthetic content to pursue civil action against offenders and directs the Attorney General to examine enforcement of related laws [2].

Maldonado has pointed to existing harms, such as the exclusion of women and people of color from algorithmic outputs in hiring and housing lotteries, and advocates diagnostics to evaluate AI screening systems for equitable results [3]. She stresses the importance of proactive measures in technology development, drawing lessons from the consequences of inaction on social media [3]. The legislation does not ban AI use but requires developers to ensure their systems function correctly before public deployment, aiming to strengthen quality assurance processes without stifling innovation [3].

The Virginia General Assembly has temporarily recessed due to a water crisis affecting Richmond [2]. The proposed AI legislation, set to take effect in July 2026, shares similarities with Colorado’s AI Act, which takes effect in February 2026 [1]. Both frameworks regulate developers and deployers of high-risk AI systems, mandating detailed disclosures regarding intended use, limitations, and measures to mitigate risks of algorithmic discrimination [1] [2]. Enforcement will be managed by the Office of the Attorney General, with no private right of action for consumers [1]. Virginia is also considering two separate bills that would impose different regulatory requirements on public- and private-sector entities involved with high-risk AI systems, encouraging organizations to establish governance and compliance programs to strengthen their compliance posture [1].

Conclusion

The introduction of these bills by Delegate Maldonado represents a significant step toward ensuring responsible AI development and deployment. By addressing key issues such as data transparency, consumer rights, and algorithmic discrimination [1] [2] [3], the legislation aims to protect citizens while fostering innovation. The proposed measures, if enacted, would set a precedent for other states to follow, potentially leading to a more standardized approach to AI regulation across the United States.

References

[1] https://wp.nyu.edu/compliance_enforcement/2025/01/09/sweeping-ai-legislation-under-consideration-in-virginia/
[2] https://www.transparencycoalition.ai/news/virginia-files-nations-first-ai-do-not-train-data-bill-as-2025-session-convenes
[3] https://pluribusnews.com/news-and-events/qa-the-virginia-lawmaker-behind-ai-anti-discrimination-legislation/