Introduction
The Texas Responsible AI Governance Act [3] [6], also known as Texas HB 1709, establishes a comprehensive regulatory framework for high-risk AI systems [7], with particular attention to employment contexts [6]. It covers consequential decision-making in areas such as healthcare [3], financial services [3], and criminal justice [3], with a focus on preventing AI-powered discrimination [2] [5]. The Act introduces new categories of regulated entities and mandates detailed impact assessments and transparency measures. Similar legislative efforts are underway in Virginia, reflecting a growing trend in state-level AI regulation.
Description
Texas HB 1709 [2] [4] [5] [7], formally titled the Texas Responsible AI Governance Act, establishes a comprehensive regulatory framework for high-risk AI systems [7], particularly in employment contexts [6], while also addressing consequential decision-making in areas such as healthcare, financial services [3], and criminal justice [3]. The Act emphasizes preventing AI-powered discrimination and introduces a new category of regulated entities called distributors, defined as parties who make AI systems available for commercial purposes [4]. Distributors must exercise reasonable care to ensure compliance with established standards before market entry and must withdraw or disable any AI system they know, or have reason to believe, is non-compliant.
The legislation mandates that developers and deployers conduct detailed annual impact assessments evaluating risks related to algorithmic discrimination, cybersecurity vulnerabilities [3], and transparency [3] [4]. Employers using AI for employment decisions must inform individuals that they are interacting with these systems and explain the purpose and nature of significant decisions made by AI [6]. Additionally, employers are required to establish an AI governance and risk-management framework [6], provide adequate training for users [6], allocate the resources needed for compliance [6], and perform due diligence on AI vendors and developers to prevent algorithmic bias [6].
Developers and deployers of AI systems must maintain detailed records of training data and model limitations, surpassing the requirements of the EU AI Act [2], and are required to update their disclosures and documentation within 90 days of any intentional and substantial modifications [9]. The Act also emphasizes transparency, fairness [2] [3] [4] [5] [7], and accountability [4] [7], particularly in AI-driven hiring practices, and prohibits specific uses of AI [4], such as manipulating or classifying individuals based on sensitive characteristics [5], social scoring [2] [4], unauthorized identification from public images [2], and the creation of AI-generated sexual deepfakes.
Virginia lawmakers are advancing parallel legislation to regulate high-risk AI systems [8], addressing public safety and privacy concerns in critical sectors such as healthcare, law enforcement [1] [4] [8], education [1], and employment [1] [3] [6]. Among the proposed measures, Virginia HB 2094 creates a new regulated category, integrators [4], defined as parties who integrate AI systems into software applications for market distribution [4]. Integrators must adopt acceptable use policies to mitigate known risks of algorithmic discrimination and must disclose to deployers any modifications made to the AI system [4]. Furthermore, developers of generative AI systems would be required to watermark their outputs [4], and the bill emphasizes compliance and enforcement [8], with civil penalties for noncompliance [1] [8].
Another significant proposal, House Bill 2121 [1] [8], requires AI developers to document and publicly disclose their technology’s origin and development history [1], ensuring public accessibility of this information [8]. This legislation incorporates feedback from civil society and extends responsibilities beyond developers to distributors and integrators [8]. Additionally, HB 2250 allows consumers to opt out of the use of their personal information [1]. All parties involved in the development and use of high-risk AI systems would be required to disclose the rationale behind adverse consequential decisions [9], including the AI system’s contribution to the decision [9], the data processed [9], and the sources of that data [9]. Consumers would have the opportunity to correct inaccuracies or appeal adverse decisions [9], and public-facing disclosures must be provided when individuals interact with these AI systems [9].
Regulatory activity has intensified [4], with enforcement actions against companies engaging in discriminatory practices or unfair conduct related to AI [4]. Regulators at both state and federal levels emphasize that existing laws are sufficient to protect consumers without the need for AI-specific legislation [4]. Lawmakers are also considering a bill requiring political ads to disclose AI usage [8]. Collaboration with other states aims to create cohesive legislation as federal policies on AI remain pending [1] [8].
The urgency of enacting durable AI laws is underscored by recent developments in the European Union [8], which has enacted significant AI regulations [8]. Companies incorporating AI into their operations should pay close attention to regulatory guidance [4], focusing on the principles of transparency [4], accountability [4] [7], and fairness [4], as they navigate the evolving landscape of AI legislation. Additionally, Senate Bill 1214 would set requirements for high-risk AI systems used by public entities [1] [8], with the state’s chief information officer responsible for policy creation [1] [8]. The governor’s executive order last year established standards for AI use and created a task force to guide policymakers in developing responsible AI practices [1], emphasizing the need to protect national security and citizen data [1].
Proactive measures [7], such as tracking legislative progress and evaluating AI tools [7], can help employers prepare for potential implementation and promote ethical AI practices [7]; the Act could also set a precedent for similar regulations in other states [7]. Concerns have been raised that excessive regulation could stifle AI innovation, highlighting the need for a balanced approach that addresses risks while fostering development. Organizations involved with high-risk AI systems should review the proposed requirements and develop a governance and compliance program to strengthen their compliance posture [9], including building infrastructure for AI system impact assessments [9], annual reviews [9], public disclosures [1] [4] [9], and timely reporting of indicators of algorithmic discrimination [9]. If an AI system qualifies for an exemption [9], appropriate documentation should be prepared to substantiate that exemption [9].
Conclusion
The Texas Responsible AI Governance Act and similar legislative efforts in Virginia represent a significant shift towards comprehensive AI regulation, emphasizing transparency [7], accountability [4] [7], and fairness [4]. These regulations aim to prevent discrimination and ensure ethical AI practices, setting a precedent for future legislation. While there are concerns about potential overregulation stifling innovation, a balanced approach is necessary to address risks while fostering AI development. Organizations must stay informed and proactive in compliance to navigate this evolving regulatory landscape effectively.
References
[1] https://virginiamercury.com/2025/01/13/these-bills-would-regulate-high-risk-artificial-intelligence-use-in-virginia/
[2] https://www.aol.com/finance/texas-legislature-consider-tough-ai-111202285.html
[3] https://www.jdsupra.com/legalnews/what-does-the-2025-artificial-4599728/
[4] https://www.jdsupra.com/legalnews/the-year-ahead-in-artificial-3295378/
[5] https://www.yahoo.com/news/texas-legislature-consider-tough-ai-111202614.html
[6] https://natlawreview.com/article/texas-responsible-ai-governance-act-and-its-potential-impact-employers
[7] https://www.forbes.com/sites/alonzomartinez/2025/01/17/texas-hb-1709-the-ai-law-every-employer-needs-to-know-about/
[8] https://www.whro.org/virginia-government/2025-01-13/these-bills-would-regulate-high-risk-artificial-intelligence-use-in-virginia
[9] https://wp.nyu.edu/compliance_enforcement/2025/01/09/sweeping-ai-legislation-under-consideration-in-virginia/