Introduction

The Texas Responsible AI Governance Act (TRAIGA) [2] [3] [6] [10], also known as HB 1709, was filed by Texas State Representative Giovanni Capriglione on December 23, 2024. This legislation aims to establish a comprehensive regulatory framework for high-risk artificial intelligence (AI) systems, focusing on preventing AI-powered discrimination and promoting transparency, accountability [1] [8], and fairness. TRAIGA aligns with global trends in AI regulation, drawing inspiration from the European Union’s AI Act and Colorado’s AI Act.

Description

On December 23, 2024 [2] [4] [9], Texas State Representative Giovanni Capriglione filed the Texas Responsible AI Governance Act (TRAIGA) [2], also known as HB 1709, which would establish a comprehensive regulatory framework for high-risk artificial intelligence (AI) systems. This state-level initiative aims to prevent AI-powered discrimination while emphasizing transparency, accountability [1] [8], and fairness in the deployment of such systems. TRAIGA adopts a risk-based regulatory approach similar to the European Union’s AI Act and has drawn comparisons to Colorado’s AI Act, reflecting a growing trend in AI regulation.

The Act imposes obligations on developers [2] [4] [9], deployers [1] [2] [3] [4] [5] [6] [7] [8] [9] [10], and distributors of specific high-risk AI systems [4] [5] [7], particularly concerning their impact on significant decision-making areas [2], including employment-related actions such as hiring, performance evaluations [2] [4] [8] [9], compensation [2] [4] [9], disciplinary measures, and termination processes [4] [9]. High-risk AI systems are defined as those that significantly influence such decisions, as well as those involved in healthcare, financial services [5] [7], and criminal justice [5] [7].

Key requirements of the Act include ensuring human oversight by qualified individuals [9], reporting discrimination risks to the Artificial Intelligence Council within 10 days [2] [9], and conducting regular compliance evaluations to mitigate algorithmic discrimination [9]. Developers and deployers must conduct detailed impact assessments evaluating risks related to algorithmic discrimination [5], cybersecurity vulnerabilities [5] [7], and transparency [5] [7] [8]. They must also maintain detailed records of training data [3] [10], including metrics on a model’s accuracy [10], explainability [6] [10], reliability [6] [10], and security [6] [10]. Deployers are required to suspend non-compliant systems and notify developers of any issues that arise [9], to conduct annual impact assessments, and to reassess compliance within 90 days of any significant modification [6]. Furthermore, they must inform consumers when AI significantly influences decisions affecting them [10].

Employers are required to inform individuals of an AI system’s purpose, the fact that it may be used to make consequential decisions [2], the nature of those decisions [2], and the factors influencing them [2] [8], and to provide contact information for the deployer [2]. They must establish clear policies covering the system’s uses, decision-making processes [2] [3] [4] [5] [7], and training requirements [2]. Additionally, developers and deployers must exercise “reasonable care” to protect consumers from algorithmic discrimination and must disclose model limitations [3] [6] [10].

TRAIGA explicitly prohibits the use of AI for subliminal manipulation [3] [6], social scoring [3] [6] [10], inferring personal characteristics from biometric data [3] [6] [10], identifying individuals from publicly available images [3], and creating sexual deepfakes [3] [10]. These provisions have raised free-speech concerns [3], even among supporters of the legislation.

To prepare for compliance with the Act [2], employers are encouraged to proactively create an AI governance and risk-management framework, allocate resources for oversight [2] [8], and perform due diligence on AI vendors [2] [9]. The Texas attorney general is tasked with enforcement, with fines of up to $200,000 per violation and daily administrative fines of $40,000 for ongoing infractions [10]. Consumers would have the right to appeal adverse AI-driven decisions that negatively affect their health [3] [6] [10], safety [3] [6], and rights [3], although the bill does not grant them a private right to sue for violations [3] [10].

The establishment of the Texas AI Council is a significant aspect of the Act: the Council would explore AI applications in state governance and develop ethical standards for AI [10]. It would consist primarily of public members with relevant expertise and would operate under the governor’s office. Additionally, the Act proposes a regulatory “sandbox” to facilitate innovation while ensuring compliance with established standards [5] [7]. The legislation is expected to be a focal point in the Texas legislative session beginning on January 14, 2025 [9], amid ongoing debate about its potential impact on AI development and regulation, and it could serve as a model for future AI-related laws [1]. Employers are encouraged to monitor the bill’s progress and assess their AI tools and processes to ensure compliance and promote ethical AI practices [1].

Conclusion

TRAIGA represents a significant step in AI regulation, aiming to balance innovation with ethical considerations. By establishing a robust framework for high-risk AI systems [1], the Act seeks to mitigate discrimination and enhance transparency and accountability. Its influence may extend beyond Texas, potentially serving as a model for future AI legislation [1]. As the legislative session approaches, stakeholders are advised to stay informed and prepare for compliance to foster responsible AI development.

References

[1] https://www.forbes.com/sites/alonzomartinez/2025/01/17/texas-hb-1709-the-ai-law-every-employer-needs-to-know-about/
[2] https://www.klgates.com/The-Texas-Responsible-AI-Governance-Act-and-Its-Potential-Impact-on-Employers-1-13-2025
[3] https://www.aol.com/finance/texas-legislature-consider-tough-ai-111202285.html
[4] https://www.jdsupra.com/legalnews/the-texas-responsible-ai-governance-act-3939079/
[5] https://www.jdsupra.com/legalnews/what-does-the-2025-artificial-4599728/
[6] https://www.inkl.com/news/why-a-texas-ai-bill-is-shaping-up-as-the-next-battleground-over-u-s-ai-policy
[7] https://www.littler.com/publication-press/publication/what-does-2025-artificial-intelligence-legislative-and-regulatory
[8] https://www.timesofai.com/news/texas-ai-governance-act-impact-on-employers/
[9] https://natlawreview.com/article/texas-responsible-ai-governance-act-and-its-potential-impact-employers
[10] https://www.yahoo.com/news/texas-legislature-consider-tough-ai-111202614.html