Introduction
In response to the absence of a federal regulatory framework for artificial intelligence (AI), California and other states are proactively advancing legislation to govern AI technologies. This movement reflects a growing trend among states to establish comprehensive guidelines and regulations to address the challenges and opportunities presented by AI.
Description
During the state legislative season, California legislators are advancing several AI-related bills ahead of their summer recess, reflecting a broader trend among states to establish frameworks governing AI in the wake of a failed federal regulatory moratorium [1].
SB 11, authored by Sen. Angelique Ashby, aims to subject AI-generated images and videos to the state’s right of publicity law and criminal false impersonation statutes. It has passed the Senate and is currently in the Assembly, having received unanimous approval from the Assembly Committee on Privacy & Consumer Protection [1].
AB 853, introduced by Assm. Buffy Wicks, mandates that large online platforms label AI-generated content and allows users of devices that capture images or audio to apply digital signatures to authentic material. This bill has passed the Assembly and is now in the Senate Appropriations Committee after amendments [1]. Additionally, the California AI Transparency Act requires creators of generative AI systems with over 1,000,000 monthly users to provide a free AI detection tool, enabling users to determine whether content was generated or altered by AI, along with any detected system provenance data [3].
Assm. Rebecca Bauer-Kahan’s AB 412, the AI Copyright Protection Act, seeks to establish a framework for copyright owners to determine whether their works were used to train generative AI models. It has passed the Assembly and is currently paused in the Senate as a two-year bill [1].
The LEAD for Kids Act (AB 1064), also by Assm. Bauer-Kahan, proposes the creation of an AI standards board to evaluate and regulate AI technologies for children, emphasizing transparency and privacy. This bill has passed the Assembly and is in the Senate Appropriations Committee [1].
SB 243, sponsored by Sen. Steve Padilla, requires AI platforms to remind minors that chatbots are not human. It has passed the Senate and is currently in the Assembly Judiciary Committee [1].
SB 833, introduced by Sen. Jerry McNerney, mandates human oversight for AI systems controlling critical infrastructure. This bill has passed the Senate and is now in the Assembly Appropriations Committee [1].
SB 53, sponsored by Sen. Scott Wiener, aims to protect whistleblowers in AI development while establishing a pioneering transparency requirement for major AI companies [1] [4]. The bill mandates that these companies publicly disclose their safety and security protocols, including risk assessments and safety testing plans, and report critical safety incidents to the California Attorney General within 15 days [4]. It also requires a transparency report for each major new model, outlining safety tests and the rationale for its release [2]. The bill includes provisions for “CalCompute,” a public cloud compute cluster at the University of California designed to provide free and low-cost access to advanced AI models and tools for startups and researchers [4]. Major AI developers, including Meta, Google, OpenAI, and Anthropic, have committed to conducting safety testing and implementing robust safety protocols, which SB 53 codifies to ensure industry-wide accountability [2] [4]. Additionally, the bill enhances whistleblower protections by requiring companies to establish anonymous channels through which employees can report legal violations or catastrophic-risk concerns [2], and it establishes civil penalties for violations without creating new liabilities for harms caused by AI systems [4].
In Michigan, Rep. Sarah Lightner introduced HB 4668, which requires large AI model developers to establish safety protocols to prevent critical risks, and HB 4667, which proposes new criminal penalties for using AI to commit crimes. Both bills are currently with the Judiciary Committee [1] [2].
Under AB 853, starting January 1, 2028, manufacturers of capture devices, including cameras, mobile phones, and voice recorders, must allow users to include a latent disclosure in content captured by their devices, conveying information such as the manufacturer’s name. Existing law already requires covered providers to include a latent disclosure, one that is difficult to remove, in AI-generated content. The legislation also prohibits GenAI system hosting platforms from offering systems that lack permanent disclosures in modified content, and it bars providers or distributors of software or online services from offering tools designed to remove latent disclosures. The bill’s provisions are declared severable [3].
Conclusion
The legislative efforts in California and other states mark a significant step toward establishing a robust regulatory environment for AI technologies. These initiatives aim to enhance transparency, protect consumer rights, and ensure the safe and ethical development of AI systems [4]. As states continue to lead in this domain, their approaches could set precedents for future federal regulation, ultimately shaping the landscape of AI innovation and application.
References
[1] https://www.transparencycoalition.ai/news/ai-legislative-update-july-18-2025
[2] https://carnegieendowment.org/research/2025/07/state-ai-law-whats-coming-now-that-the-federal-moratorium-is-dead?lang=en
[3] https://calmatters.digitaldemocracy.org/bills/ca_202520260ab853
[4] https://sd11.senate.ca.gov/news/senator-wiener-expands-ai-bill-landmark-transparency-measure-based-recommendations-governors