Introduction
On June 6, 2025, California's legislative session marked a significant milestone in the regulation of artificial intelligence (AI) technologies, advancing multiple AI-related bills focused on transparency and risk management [2]. These legislative efforts aim to address challenges posed by AI across sectors, including content authenticity, whistleblower protection, and oversight of critical infrastructure [2].
Description
On June 6, 2025, California's legislative session saw significant progress on multiple AI-related bills focused on transparency and risk management [2]. The state Assembly passed a bill mandating the labeling of AI-generated versus authentic content, while the Senate approved a bill protecting whistleblowers at AI companies who report critical risks [2]. Three other AI bills also advanced, addressing AI-generated media, chatbots, and oversight of critical infrastructure [2].
Assembly Bill 53, sponsored by Assemblymember Buffy Wicks, requires large online platforms to label AI-generated content and allows users of devices that capture images or audio to apply digital signatures to authentic material [2]. The bill has moved to the Senate for further consideration [2]. It also mandates that providers of generative artificial intelligence systems offer a free AI detection tool that meets specific criteria, including the ability to output system provenance data [1].
Sen. Scott Wiener's bill aims to protect whistleblowers at developers of foundational AI models, allowing them to disclose critical risks to authorities without fear of retaliation [2]. The bill has passed the Senate and is now under review in the Assembly [2].
The AI Copyright Protection Act, AB 412, sponsored by Assemblymember Rebecca Bauer-Kahan, has also passed the Assembly and is awaiting action in the Senate [2].
Sen. Angelique Ashby's SB 11 proposes that AI-generated images and videos be subject to existing publicity rights and false impersonation laws [2]. The bill is ready for a vote in the Assembly [2].
SB 243, sponsored by Sen. Steve Padilla, mandates that AI platforms remind minors that they are interacting with chatbots; it has passed the Senate [2].
Sen. Jerry McNerney's SB 833 requires human oversight of AI systems that control critical infrastructure; it has also passed the Senate and is now in the Assembly [2].
Separately, SB 420 would require developers or deployers of high-risk automated decision systems to conduct an impact assessment prior to public release or deployment [1]. The assessment must be submitted to a state agency and kept confidential [1]. Developers must also provide a copy of the assessment to the Attorney General or Civil Rights Department within 30 days of a request, and that copy will likewise remain confidential [1]. The Attorney General or Civil Rights Department is authorized to bring civil actions to enforce compliance, and developers may rectify violations within 45 days by submitting a written statement under penalty of perjury [1]. The bill expands the scope of perjury offenses and establishes a state-mandated local program [1]. It also prohibits state agencies from contracting with individuals who have violated specific civil rights laws regarding high-risk automated decision systems unless they certify compliance with those laws [1].
In Connecticut, SB 2, an omnibus AI regulatory proposal, failed to advance before the legislative deadline [2].
In New York, two bills are under consideration: AB 6578, the Artificial Intelligence Training Data Transparency Act, which remains in committee, and SB 5668, which would require parental consent for minors interacting with chatbots and establish liability for misleading information provided by chatbots; the latter is pending a floor vote [2].
Conclusion
The legislative advancements in California reflect a proactive approach to managing the complexities of AI technologies. By addressing issues such as content authenticity, whistleblower protection, and oversight of critical infrastructure, these bills aim to foster a safer and more transparent AI environment [2]. The implications of these efforts extend beyond California, potentially influencing AI regulatory frameworks in other states and setting a precedent for future AI governance.
References
[1] https://calmatters.digitaldemocracy.org/bills/ca_202520260sb420
[2] https://www.transparencycoalition.ai/news/ai-legislative-update-june-6-2025