Introduction

The progression of AI-related legislation across several US states highlights a growing focus on transparency, security [3] [4], and consumer protection [1] [3]. California, Texas [1] [2] [3], Nebraska [3], and New York are actively advancing bills that address these issues, with particular attention to algorithmic discrimination, the protection of minors [3], and consumer privacy [1].

Description

AI-related legislation is progressing in California [3], Texas [1] [2] [3], Nebraska [3], and New York [1] [3], with bills addressing transparency [3], security [3] [4], the protection of minors [3], algorithmic discrimination, and consumer privacy.

In California [1] [2] [3] [4], lawmakers are focusing on AI regulation [1], with several bills introduced this year aimed at enhancing AI system transparency and user security. Notably, three bills are currently on hold in the state Senate due to fiscal considerations [4]. These include the AI Abuse Protection Act (SB 11), sponsored by Sen. Angelique Ashby [2], which would classify AI-generated or manipulated content under existing right of publicity and false impersonation laws [4] and require technology providers to inform consumers of potential legal liabilities [4]. Another bill [4], the Cybersecurity Upgrade for AI (SB 468), introduced by Sen. Josh Becker [2], would require deployers of AI systems that process personal information to implement specific measures to secure that data [2], aligning with existing state and federal laws on personal information protection [2]; it was approved by the Senate Judiciary Committee and referred to the Senate Appropriations Committee for a hearing. The third, the Human Oversight of AI in Critical Infrastructure Act (SB 833), would require that human oversight be maintained for AI systems controlling essential services [4], including transportation [4], energy [4], and emergency services [4]. All three bills have been placed in the suspense file by the Senate Appropriations Committee [4] pending analysis of their financial implications [4]; the committee must act by May 23, or the bills will have to be reintroduced in the 2026 session [4].

Separately, the California Privacy Protection Agency (CPPA) is advancing formal rulemaking to update the state's privacy rules and create regulations on automated decision-making technology [1], specifically targeting AI [1], which could impose significant obligations on businesses that use the technology [1].

In Texas [1], Rep. Giovanni Capriglione (R) [1] has introduced a revised version of the Responsible Artificial Intelligence Governance Act (TRAIGA), which, following industry pushback, has been narrowed to cover AI systems used by government agencies [3]. The bill is now under review in the Senate, along with amendments proposed by Robert Rodriguez (D) that would scale back its stricter provisions while preserving protections against algorithmic discrimination.

In Nebraska [3], three bills focused on safeguarding children from AI and social media are nearing final votes [3]. These include measures requiring parental consent for minors on social media [3], restrictions on electronic device use in schools [3], and design requirements for online services to mitigate psychological harm [3].

In New York [1] [3], Assemblymember Alex Bores (D) has released a detailed AI safety bill (NY AB 6453) [1], inspired by California's previous efforts [1], particularly Sen. Scott Wiener's (D) AI safety bill (CA SB 1047) [1], which was vetoed by Gov. Gavin Newsom (D) [1]. New York legislation would also mandate parental consent for minors interacting with AI chatbots and establish liability for harmful information provided by these platforms [3]. In addition, the Artificial Intelligence Training Data Transparency Act is under consideration [3], emphasizing the need for transparency in AI training data [3].

Beyond these states, legislation addressing AI in health care is being introduced across the country [1], underscoring the importance of protecting consumer privacy and ensuring quality of care as AI advances in disease diagnosis and drug discovery [1]. The trend of comprehensive consumer protection AI bills also continues [1]: Connecticut's failed SB 2 served as a model for Colorado's recently passed law (CO SB 205) [1], which takes effect in February 2026 [1].

Conclusion

The legislative efforts in these states underscore the increasing recognition of the need for robust AI governance frameworks. These initiatives aim to balance innovation with the protection of individual rights and societal interests, setting a precedent for future AI-related policies. As AI technology continues to evolve, the implications of these legislative measures will be significant in shaping the landscape of AI deployment and its integration into various sectors.

References

[1] https://www.multistate.ai/updates
[2] https://www.transparencycoalition.ai/news/ai-bill-update-may-2-2025
[3] https://www.transparencycoalition.ai/news/ai-legislative-update-may-9-2025
[4] https://www.transparencycoalition.ai/news/california-ai-bills