Introduction

The “Texas Responsible Artificial Intelligence Governance Act” (TRAIGA) is a legislative initiative aimed at regulating high-risk artificial intelligence (AI) systems in Texas. Introduced by State Representative Giovanni Capriglione, the Act seeks to establish ethical and practical guidelines for AI development and deployment, focusing on transparency, accountability [2], and the prevention of bias and discrimination. It positions Texas as a leader in AI governance, potentially serving as a model for other states [2].

Description

Texas is currently advancing the “Texas Responsible Artificial Intelligence Governance Act” (TRAIGA), introduced by State Rep. Giovanni Capriglione [5]. This legislation aims to regulate the development and deployment of high-risk artificial intelligence systems that affect critical areas such as education, employment [1] [3] [4] [6] [7], and healthcare [3]. TRAIGA establishes ethical and practical guidelines [2], emphasizing transparency [2], accountability [2], and the prevention of bias and discrimination. It explicitly excludes certain technologies, such as pattern detection tools and calculators [4], from its scope [7].

Key definitions within the Act include “deployer,” referring to businesses in Texas that utilize high-risk AI systems [4], and “developer,” pertaining to those who create or significantly modify such systems [4] [6]. Understanding these roles is essential for determining compliance obligations under the Act [4] [6]. The definition of high-risk AI systems encompasses any AI tool that influences employment decisions [1], potentially affecting all Texas employers using AI in HR processes [1].

Developers are mandated to disclose known limitations [4], performance metrics [4] [6] [7], and potential risks related to algorithmic discrimination and misuse of personal data [4] [6]. They must implement a formal risk management policy prior to deployment and maintain comprehensive records of their training datasets [4] [7], particularly for generative AI. Additionally, developers are required to conduct annual reviews of their high-risk AI systems to ensure compliance with anti-discrimination measures and to produce detailed risk reports. The Act also requires ongoing monitoring for algorithmic discrimination, along with cybersecurity safeguards [1] and transparency measures [1].
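To make these duties concrete, a developer's compliance team might track them as a simple checklist. The sketch below is purely illustrative: it assumes nothing beyond the obligations summarized above, and every class and field name in it is hypothetical rather than taken from the bill's text.

```python
from dataclasses import dataclass

# Hypothetical checklist of the developer duties described above.
# Field names are illustrative, not drawn from the Act itself.
@dataclass
class DeveloperComplianceRecord:
    known_limitations_disclosed: bool = False       # disclose known limitations
    performance_metrics_disclosed: bool = False     # disclose performance metrics
    discrimination_risks_disclosed: bool = False    # risks of algorithmic discrimination / data misuse
    risk_management_policy_in_place: bool = False   # formal policy before deployment
    training_data_records_maintained: bool = False  # records of training datasets
    annual_review_completed: bool = False           # yearly anti-discrimination review

    def outstanding_items(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [name for name, done in vars(self).items() if not done]

record = DeveloperComplianceRecord(known_limitations_disclosed=True)
print(record.outstanding_items())
```

A real compliance program would of course rest on the enacted statutory text, not a checklist like this; the point is only that each duty in the paragraph above is discrete and auditable.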

Deployers have a duty to exercise reasonable care to protect consumers from foreseeable risks of algorithmic discrimination. They must oversee human involvement in AI-driven decisions [3], promptly report any suspected non-compliance, and ensure that human oversight is maintained for consequential decisions made by AI systems. Furthermore, deployers are required to conduct semi-annual impact assessments, focusing on algorithmic discrimination risks and mitigation strategies [4] [6], and must clarify how the system’s use aligns with the developer’s intended purpose after any significant modifications.
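The semi-annual assessment cadence above is the kind of deadline a deployer would need to schedule. As a minimal sketch, assuming only the roughly six-month interval mentioned in the summaries (the exact statutory period and any grace rules are not specified here), that schedule could be computed as:

```python
from datetime import date, timedelta

# Assumed interval: roughly six months, per the "semi-annual" cadence above.
ASSESSMENT_INTERVAL_DAYS = 183

def next_assessment_due(last_assessment: date) -> date:
    """Date by which the next impact assessment should be completed."""
    return last_assessment + timedelta(days=ASSESSMENT_INTERVAL_DAYS)

def is_overdue(last_assessment: date, today: date) -> bool:
    """True if the deployer has missed the semi-annual window."""
    return today > next_assessment_due(last_assessment)

print(next_assessment_due(date(2025, 1, 1)))  # 2025-07-03
```

Whether the enacted law measures the period in days, months, or reporting cycles would determine the real calculation; this only illustrates the recurring nature of the obligation.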

The Act prohibits deceptive manipulation of human behavior [6], social scoring based on social behavior [6], unauthorized collection of biometric identifiers [6], and the inference of sensitive personal attributes or emotions from biometric data without consent. It also bans AI systems that present “unacceptable risks,” including those that produce deepfakes related to child sexual abuse material or prohibited intimate imagery.

To foster innovation while maintaining oversight [2], TRAIGA introduces an Artificial Intelligence Regulatory Sandbox Program [2], allowing companies in the testing phase to apply for temporary exemptions from certain regulatory requirements [2]. The Act promotes workforce development through educational initiatives and training programs to prepare individuals for an AI-driven economy [2]. It includes provisions to protect free speech by ensuring that AI systems used for content moderation are transparent and unbiased [2].

Consumers are granted rights under the Act [4] [6] [7], allowing them to pursue declaratory or injunctive relief against developers or deployers for violations [6]. The Texas Attorney General is empowered to investigate and enforce compliance [4], with administrative penalties for breaches [4]. Additionally, the Act proposes amendments to the Texas Data Privacy and Security Act to include AI-specific regulations and establishes an AI workforce grant program along with a new advisory “AI council.” Exemptions are provided for small businesses and open-source AI developers who take measures to prevent high-risk uses and publicly disclose their AI system’s technical architecture [5].

Businesses utilizing AI systems in Texas should closely monitor the Act’s legislative progress to ensure compliance if it is enacted [4] [6], especially given the current lack of federal regulation, which has prompted state-level initiatives [2]. This comprehensive [5], risk-based regulatory framework, which follows the model of the Colorado AI Act, positions Texas as a leader in AI governance and a potential template for ethical AI development and deployment across the United States. Heading into 2025 [1], TRAIGA marks a significant development in AI regulation [1], providing mechanisms for both governmental and private enforcement [1].

Conclusion

The Texas Responsible Artificial Intelligence Governance Act represents a significant step forward in the regulation of AI technologies, with the potential to influence AI governance across the United States [3]. By establishing a comprehensive framework that addresses ethical concerns and promotes transparency, the Act aims to mitigate risks associated with AI while fostering innovation. Its implementation could serve as a benchmark for other states, highlighting the importance of state-level initiatives in the absence of federal regulation. As businesses and developers adapt to these new requirements, TRAIGA may pave the way for more responsible and ethical AI practices nationwide.

References

[1] https://www.jdsupra.com/legalnews/regulating-artificial-intelligence-in-3528213/
[2] https://www.reformaustin.org/texas-legislature/texas-moves-to-rein-in-artificial-intelligence/
[3] https://dig.watch/updates/new-ai-governance-law-proposed-in-texas
[4] https://www.lexology.com/library/detail.aspx?g=e4342012-95a7-4029-b08a-2238c493f859
[5] https://www.statesman.com/story/business/technology/2024/12/24/texas-bill-1709-artificial-intelligence-governance-act-filing-giovanni-capriglione/77190821007/
[6] https://www.jdsupra.com/legalnews/texas-considers-comprehensive-ai-bill-3086325/
[7] https://natlawreview.com/article/texas-considers-comprehensive-ai-bill