Introduction

State legislators across the United States are increasingly focused on regulating artificial intelligence (AI), with significant attention on the Texas Responsible AI Governance Act (TRAIGA, or HB 1709) [1] [2] [5] [7] [8]. Introduced by Republican House Representative Giovanni Capriglione [2], the bill aims to establish a comprehensive regulatory framework for high-risk AI systems across sectors including employment, healthcare, financial services, and criminal justice [3] [5] [8]. TRAIGA seeks to govern the development, deployment, and oversight of AI systems in Texas [4], introducing stringent requirements for high-risk AI applications [1].

Description

State legislators across the US are actively proposing and enacting laws to regulate AI [8], with particular focus on the Texas Responsible AI Governance Act (TRAIGA, or HB 1709). Introduced by Republican House Representative Giovanni Capriglione [2], the bill establishes a comprehensive regulatory framework for high-risk AI systems across sectors including employment, healthcare, financial services, and criminal justice [3] [5] [8]. TRAIGA governs the development, deployment, and oversight of AI systems in Texas [1] [2] [4], adopting a risk-based governance model similar to the EU AI Act that imposes obligations on developers, deployers, and distributors of high-risk AI systems [5] [6] [8]. The act defines “high-risk” broadly, encompassing systems involved in significant decision-making while addressing risks such as data misuse and algorithmic discrimination [8].

To prevent discrimination against protected classes, the act requires distributors of high-risk AI systems to exercise “reasonable care” and establishes a negligence standard for holding developers accountable for harm caused by their systems [6]. It prohibits certain harmful AI applications, including social scoring and manipulative outputs [6], and bans AI systems deemed to pose “unacceptable risk,” such as those that identify emotions or capture biometric identifiers without consent [8]. These prohibited use cases overlap with those in the EU AI Act [4]. While the attorney general would be the law’s primary enforcer [1] [8], private litigants have a limited right to sue over violations involving banned AI systems [8].

Generative AI developers must maintain detailed records of the datasets used to train their models [8], conduct mandatory risk assessments [5], and implement risk management plans [1]. They must also assess impacts, evaluate accuracy, and withdraw non-compliant systems from use until compliance is restored [2]. TRAIGA establishes the Texas Artificial Intelligence Council under the Office of the Governor to oversee compliance, develop ethical guidelines [2], and issue regulations [1]; the act also includes provisions for investigating corporate influence on regulatory development [4]. Key compliance documentation requirements, such as high-risk reports and annual assessments, may create substantial administrative burdens for companies, potentially slowing AI development in Texas [1]. Penalties for non-compliance range from $50,000 to $200,000 per violation, with potential daily fines, and state agencies are authorized to suspend or revoke licenses for violations of TRAIGA [4].

The law exempts many small businesses and allows limited experimentation through a regulatory sandbox program [8], which resembles the EU AI Act’s regulatory sandboxes. Registered AI developers may test their systems under fewer restrictions for a limited time, receiving temporary immunity from compliance actions, though they must submit detailed reports on their projects [2]. The council can revoke sandbox protections if a project poses public harm or fails to meet reporting obligations [2]. TRAIGA also seeks to address workforce shortages in AI-related fields by promoting career preparedness and enhancing competition among Texas-based AI companies [4]. Companies under investigation would have a 30-day period to cure alleged violations before enforcement actions can begin [8].

In late 2024, Texas enacted additional legislation regulating the use of AI systems, which mandates a risk identification and management policy, requires semi-annual impact assessments, and calls for disclosure and analysis of associated risks [7]. That law also requires transparency measures and human oversight in specific situations, with provisions taking effect on September 1, 2025 [7]. Critics note that while TRAIGA aims to address algorithmic discrimination, existing state and federal anti-discrimination laws already cover this ground [1]. TRAIGA’s trajectory remains uncertain, as the legislative process often narrows a bill’s scope, as happened with the Colorado AI Act [8]. The act represents a significant development in the US regulatory landscape for AI, with its ultimate impact contingent on approval and implementation [4]. Critics also argue that the bill’s timing is troubling: Texas is courting major investments in AI infrastructure, such as the Stargate Project, which these regulatory hurdles could jeopardize [1]. The rapid pace of technological advancement further complicates effective regulation, risking a scenario in which foreign companies set the standards for AI development and potentially hindering domestic progress and leadership in the AI sector [1].

Conclusion

The Texas Responsible AI Governance Act (TRAIGA) represents a significant step in the regulation of AI systems, aiming to mitigate the risks posed by high-risk AI applications. While it seeks to prevent discrimination and ensure accountability, the act may impose substantial administrative burdens on companies, potentially slowing AI development in Texas [1]. The additional legislation enacted in 2024 further underscores the state’s commitment to regulating AI, although the rapid pace of technological advancement continues to challenge effective regulation [1]. TRAIGA’s ultimate impact will depend on its implementation and on Texas’s ability to balance regulatory oversight with fostering innovation and attracting investment in AI infrastructure.

References

[1] https://www.forbes.com/sites/jamesbroughel/2025/01/26/texass-left-turn-on-ai-regulation/
[2] https://www.deeplearning.ai/the-batch/texas-introduces-landmark-bill-to-regulate-ai-development-and-use/
[3] https://nquiringminds.com/ai-legal-news-summaries/9744509b770d333535dec8772a864605/
[4] https://www.lumenova.ai/blog/texas-responsible-ai-governance-act-breakdown/
[5] https://www.jdsupra.com/legalnews/what-does-the-2025-artificial-4599728/
[6] https://www.enz.ai/post/texas-considers-comprehensive-ai-regulation
[7] https://natlawreview.com/article/states-ring-new-year-proposed-ai-legislation
[8] https://www.jdsupra.com/legalnews/the-texas-responsible-ai-governance-act-8599844/