Introduction
In response to growing concerns surrounding the development and use of artificial intelligence (AI), Michigan State Representative Sarah Lightner has introduced two bipartisan bills [1]. These legislative measures aim to enhance safety, accountability, and legal compliance in AI systems [1] [2], reflecting a broader trend among states to regulate AI technologies.
Description
Michigan State Rep. Sarah Lightner introduced two bipartisan AI-related bills on June 24, aimed at enhancing safety and accountability in the development and use of AI systems [1]. The AI Safety and Security Transparency Act (HB 4668) addresses significant risks associated with foundation models by mandating that developers of large AI models establish, implement, and publicly disclose comprehensive safety and security protocols to mitigate “critical risk,” defined as serious harm or death affecting more than 100 individuals or damages exceeding $100 million [1] [3]. The legislation specifically targets companies that have invested at least $100 million in total in training AI models, with a minimum of $5 million spent on a single AI model within the past year [4]. It requires these companies to conduct thorough risk assessments, implement necessary safeguards, and engage third-party auditors for annual compliance checks evaluating adherence both to legal requirements and to their Safety and Security Protocol [1]. Noncompliance could result in civil fines of up to $1 million per violation [2]. The bill also strengthens whistleblower protections by requiring companies to establish anonymous internal reporting channels through which employees can raise concerns about legal violations or potential catastrophic risks, and by prohibiting retaliation against whistleblowers [4].
Furthermore, HB 4667 introduces new criminal penalties for the use of AI in committing crimes, classifying the development, possession, or use of AI systems with criminal intent as a felony carrying a mandatory 8-year sentence, covering applications such as voice replication for fraud [1]. The proposal parallels existing laws that impose stricter penalties for using firearms during felonies [1]. Michigan’s legislative efforts reflect a growing trend among states such as New York and California to enforce safety protocols for large AI models, aiming to prevent significant harm and ensure public safety in the rapidly evolving landscape of artificial intelligence [1]. The Michigan bills are still in their early stages, however, and face challenges due to ongoing partisan conflicts and a missed budget deadline [2].
Conclusion
The introduction of these bills signifies a proactive approach by Michigan to address the potential risks associated with AI technologies. By establishing stringent safety and security measures, as well as imposing penalties for misuse, the state aims to safeguard public welfare and foster responsible AI development. These efforts contribute to a national dialogue on AI regulation, highlighting the importance of legislative action in managing the complexities of emerging technologies.
References
[1] https://www.transparencycoalition.ai/news/michigan-lawmaker-brings-the-raise-acts-ai-safety-measures-to-lansing
[2] https://news.bgov.com/states-of-play/michigan-bill-to-restrict-ai-faces-gridlock-states-of-play
[3] https://www.transparencycoalition.ai/news/ai-legislative-update-july-11-2025
[4] https://carnegieendowment.org/research/2025/07/state-ai-law-whats-coming-now-that-the-federal-moratorium-is-dead?lang=en