Introduction
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), set to take effect on January 1, 2026, marks a pioneering step in AI regulation by a traditionally conservative state [1] [4] [6]. The legislation, known as HB 149, aims to balance AI safety, security, and transparency with the promotion of technological innovation and private investment, while protecting residents from potential AI-related harms [2] [3].
Description
Texas has enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which takes effect on January 1, 2026 [1] [2] [6]. Identified as HB 149, the legislation represents a significant advance in AI safety, security, and transparency, and is the first law of its kind adopted by a red state [3]. TRAIGA aims to regulate AI systems while promoting technological innovation and attracting private industry investment, all while safeguarding residents from potential harms associated with AI technologies.
The Act applies to both public and private sector AI applications, imposing stricter requirements on state agencies [2]. It prohibits the development and deployment of AI systems that intentionally discriminate against protected classes, manipulate behavior to incite self-harm or violence, or infringe upon constitutional rights [2] [6]. It also bans the creation or distribution of AI systems intended for child exploitation or unlawful deepfakes [2]. State government agencies must notify individuals when they interact with AI systems, although private companies are not subject to similar notification requirements [2]. The law further updates consent requirements for the collection of biometric data, clarifying that individuals do not consent merely because their data is publicly available online, and restricts the use of AI for biometric identification without consent where it infringes on constitutional rights [6].
TRAIGA introduces a regulatory sandbox program that allows businesses to test AI systems in a controlled environment for up to 36 months without full regulatory compliance, provided they submit quarterly performance reports [2] [6]. This approach fosters innovation while maintaining oversight. The Texas Artificial Intelligence Advisory Council will oversee ethical AI development, provide guidance on regulation, and offer training for government use of AI systems [6]. Enforcement falls to the Texas Attorney General, who may impose penalties for violations, including fines and injunctive relief [6]. Notably, private companies are granted a 60-day cure period to rectify violations before penalties are enforced, reflecting a more lenient regulatory approach for the private sector [2].
TRAIGA emphasizes specific prohibited uses rather than relying solely on comprehensive risk assessments, highlighting how traditional regulatory frameworks can be adapted to technologies that operate at machine speed and influence human agency and choice [5]. While the Act addresses intentional harmful uses, it raises complex questions about AI systems that affect decision-making in ways that may not align with established regulatory categories [5].
Initially introduced in December 2024, the original draft of TRAIGA proposed a comprehensive regulatory framework similar to the Colorado AI Act and the EU AI Act, focusing on “high-risk” AI systems and imposing substantial requirements and liabilities on private sector developers and deployers [1]. By March 2025, however, the bill had been amended to significantly narrow its scope, with many of the original draft’s stringent requirements either removed or limited to governmental entities [1].
A significant factor influencing TRAIGA’s future is a proposed federal budget provision that could impose a ten-year moratorium on state and local AI regulations unless they accelerate AI deployment [2]. If enacted, this federal measure could override TRAIGA and similar state initiatives, potentially diminishing its impact [2] [3]. If federal preemption does not occur, TRAIGA may serve as a model for other states in AI governance [2], offering valuable insights into the challenges of applying legal frameworks to technologies that disrupt conventional notions of agency, intent, and choice [5].
Conclusion
TRAIGA’s enactment signifies a critical development in AI governance, particularly within a conservative state. Its success or failure could influence future state and federal AI regulations. If not preempted by federal law, TRAIGA may become a template for other states [2], providing a framework for balancing innovation with ethical considerations in AI deployment. The Act’s focus on specific prohibitions and its regulatory sandbox approach highlight the complexities and potential of regulating rapidly evolving technologies.
References
[1] https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law
[2] https://www.lexology.com/library/detail.aspx?g=e1af0ffc-6177-43b0-89d3-6b051e159be4
[3] https://www.transparencycoalition.ai/news/historic-moment-gov-abbott-signs-texas-responsible-ai-governance-act-traiga-into-law
[4] https://insider.govtech.com/texas/news/texas-gov-signs-multiple-ai-bills-despite-federal-moratorium
[5] https://natlawreview.com/article/texas-enacts-responsible-ai-governance-act
[6] https://www.nelsonmullins.com/insights/alerts/privacyanddatasecurityalert/all/texas-legislature-passes-house-bill-149-to-regulate-ai-use