Introduction
Recent legislative sessions have seen significant developments in AI-related bills across several US states, including California [5], Michigan [2] [4] [5], and New York [1]. These efforts reflect a growing focus on establishing frameworks for AI governance that address safety [2], transparency [1] [3] [4] [5], and ethical concerns associated with AI technologies [1].
Description
During the recent legislative session, significant developments occurred on AI-related bills in California, Michigan, and New York [1] [2] [4] [5]. In California, several bills are advancing through the Legislature [5]. SB 11 would subject AI-generated images and videos to the state’s right of publicity law and criminal false impersonation statutes [5]. AB 853 would require large online platforms to label AI-generated content and require manufacturers of devices that capture images or audio to offer digital signatures for authentic material [5]. AB 412, the AI Copyright Protection Act, seeks to establish a framework for copyright owners to determine whether their works were used to train generative AI models [5]. The LEAD for Kids Act (AB 1064) proposes an AI standards board to evaluate and regulate AI technologies for children, with an emphasis on transparency and privacy [5]. SB 243 would require AI platforms to remind minors that chatbots are not human [5], while SB 833 would mandate human oversight for AI systems controlling critical infrastructure [5].
SB 53, known as the Transparency in Frontier Artificial Intelligence Act, has been amended to strengthen safety and security protocols for large developers of AI foundation models [1] [3] [5]. The bill requires these developers to publish detailed safety protocols, including testing procedures and risk assessments for catastrophic risks, defined as those that could lead to significant loss of life or substantial financial damage [3]. It would make California the first state to impose significant transparency requirements on leading AI developers, including OpenAI, Google, Anthropic, and xAI [1] [5]. Notably, SB 53 does not hold AI model developers liable for harms caused by their models, so it avoids imposing burdens on startups and researchers who use or refine those models [1]. The bill includes whistleblower protections for employees who identify critical risks associated with AI technologies, preventing retaliation against those who report violations or risks [1] [3]. It also requires developers to report critical safety incidents to the Attorney General within 15 days and prohibits misleading statements about risk management [3]. The bill is currently under review by the California State Assembly Committee on Privacy and Consumer Protection [1].
In Michigan, the AI Safety and Security Transparency Act (HB 4668) has been introduced, which would require large developers to implement safety protocols for foundation models, conduct annual audits, and provide whistleblower protections [2] [4] [5]. The bill introduces third-party compliance audits but does not mandate incident reporting [4]. State lawmakers have also introduced legislation addressing the nonconsensual creation and distribution of deepfake images and videos, which can misrepresent individuals in sensitive contexts [2]. This legislation includes criminal penalties and civil action provisions, as Michigan currently lacks laws preventing the sharing of such deepfake content without consent [2]. The bills received strong support in the House and are now under consideration in the Senate, with advocacy from various children’s and civil society groups that oppose any federal moratorium on AI regulation [2]. HB 4668 shares similarities with New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the legislature [5]. The RAISE Act includes incident reporting requirements and imposes deployment restrictions on models that pose significant risks [4]; it also requires large AI developers to publish safety reports, and Governor Kathy Hochul is currently considering the bill [1].
These state-level efforts reflect a renewed focus on establishing frameworks for AI governance, particularly following the recent failure of a federal AI regulatory moratorium [4]. Michigan legislators have expressed concern about potential federal restrictions on state-level AI regulation, sending a bipartisan letter to the Michigan Congressional Delegation urging the federal government to protect state AI initiatives [2]. As these legislative initiatives progress, they may influence broader discussions on AI governance at both the state and federal levels, potentially leading to a patchwork of regulations or a convergence on harmonized standards [4]. The outcomes will be closely monitored by stakeholders including industry, civil society, and other states [2] [4].
Conclusion
The legislative initiatives in California, Michigan, and New York underscore a proactive approach to AI governance, focusing on transparency, safety, and ethical considerations [1] [2] [3] [4] [5]. These efforts may set precedents for other states and influence federal policy, potentially leading to a more cohesive regulatory environment. The evolving landscape of AI legislation will be pivotal in shaping the future of AI development and deployment, with significant implications for industry practices, consumer protection [1], and technological innovation.
References
[1] https://techcrunch.com/2025/07/09/california-lawmaker-behind-sb-1047-reignites-push-for-mandated-ai-safety-reports/
[2] https://gophouse.org/posts/michigan-legislators-look-for-answers-on-concerning-ai-language-in-federal-bill
[3] https://www.multistate.ai/updates/vol-69
[4] https://carnegieendowment.org/research/2025/07/state-ai-law-whats-coming-now-that-the-federal-moratorium-is-dead?lang=en
[5] https://www.transparencycoalition.ai/news/ai-legislative-update-july-11-2025