Introduction
On June 15, 2025, New York's legislature passed the Responsible AI Safety and Education Act (RAISE Act) [10]. This pioneering bill establishes legal obligations for developers of high-risk AI systems [10] and would make New York the first US state to impose mandatory transparency standards on advanced AI models. The act targets systems that require substantial computing resources and have the potential to cause catastrophic harm.
Description
The RAISE Act, passed by New York lawmakers on June 15, 2025 [7] [9] [10], establishes legal obligations for developers of high-risk AI systems [10]. The legislation would make New York the first US state to impose legally mandated transparency standards on advanced artificial intelligence models, particularly those that require over $100 million in computing resources to train and have the potential to cause catastrophic harm, including significant loss of life or economic damage [10].
Having passed both the New York State Senate and Assembly [6], the RAISE Act is set to take effect on January 1, 2026, pending the governor's signature [4]. Governor Kathy Hochul has expressed support for the initiative, although negotiations over technical amendments may delay full implementation [2]. The act represents a significant advance for the AI safety movement, aiming to mitigate risks posed by AI technologies [5] [8], including their potential misuse to create biological weapons or carry out automated criminal activity [5].
To ensure compliance [8], the RAISE Act requires developers of advanced "frontier models" (AI systems that meet specific computational thresholds and incur significant training costs) to submit comprehensive safety and transparency plans before releasing those systems. The plans must include internal test results [10], safeguards against critical harm, defined as serious injury or death to 100 or more people or damages exceeding $1 billion, and public summaries [3] [6]. Developers must also undergo annual independent audits [10], retain detailed records for five years [9], and disclose the computing power used to train their models [10]. A serious incident [10], such as unusual AI behavior or theft of a model [5], must be reported within 72 hours to the New York Attorney General and federal security agencies [10].
The act imposes substantial civil penalties for violations [10], with fines of up to $30 million for repeat offenses [3], enforced by the New York Attorney General [1] [3] [8] [10]. Notably, the act provides no private right of action: individuals cannot sue over violations. While the RAISE Act resembles California's SB 1047 in some respects [1], it was designed to avoid stifling innovation among startups and academic researchers [1]: it does not mandate a "kill switch" in AI models, and it does not hold companies accountable for critical harms related to the post-training of frontier AI models [1] [8]. Universities and research institutions are exempt from the regulations [8], and the high computational cost threshold is designed to keep smaller companies outside its requirements [8].
Beyond the RAISE Act, New York lawmakers are considering a separate bill that would require state agencies to disclose their use of AI, with the goal of mitigating bias and preventing civil rights violations. As the regulatory landscape grows increasingly fragmented [2], with states exploring different AI regulatory frameworks [2], there are concerns that federal action may preempt or conflict with emerging state laws. Proponents argue that the RAISE Act will not hinder innovation [1], despite critics' concerns about regulatory burden. If enacted [1] [3], the act could play a crucial role in shaping future national regulation of the most dangerous AI systems, positioning New York as a leader in AI accountability [4].
However, the act's comprehensive approach to safety and transparency may inadvertently create bureaucratic inefficiencies [9]: the five-year record-keeping requirement, for instance, raises questions about whether such paperwork actually improves safety. Compliance costs may also be underestimated [9]: initial compliance for frontier model companies is projected to require between 1,070 and 2,810 hours, effectively a full-time employee [9], with ongoing annual burdens ranging from 280 to 1,600 hours [9]. This wide range underscores the uncertainty surrounding the act and similar legislation [9], and suggests that the rapidly evolving AI market needs laws focused on effective risk mitigation rather than mere regulatory compliance [9].
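The hour estimates above can be put in rough perspective by converting them to full-time-employee (FTE) equivalents. The sketch below assumes the common US convention of 2,080 hours per FTE-year (40 hours/week for 52 weeks); that figure is an assumption for illustration, not a number from the act or its sources.

```python
# Convert the cited compliance-hour ranges into FTE-year equivalents.
FTE_HOURS_PER_YEAR = 2080  # assumed standard US FTE-year (40 h/week x 52 weeks)

def fte_equivalent(hours: float) -> float:
    """Express a compliance-hour estimate as a fraction of one FTE-year."""
    return hours / FTE_HOURS_PER_YEAR

# Ranges cited in the article (City Journal estimates [9]).
ranges = {
    "initial compliance": (1070, 2810),
    "ongoing annual compliance": (280, 1600),
}

for label, (low, high) in ranges.items():
    print(f"{label}: {fte_equivalent(low):.2f} to {fte_equivalent(high):.2f} FTE-years")
```

Under this assumption, initial compliance works out to roughly 0.5 to 1.35 FTE-years, which is consistent with the article's claim that it effectively requires a dedicated full-time employee at the upper end.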
Conclusion
The RAISE Act marks a pivotal step in AI regulation, setting a precedent for transparency and safety in high-risk AI systems. While it aims to mitigate potential harms and foster accountability, the act also raises concerns about compliance costs and bureaucratic challenges. As New York leads the way, the implications of this legislation will likely influence future national and state-level AI regulations, balancing innovation with the need for robust safety measures.
References
[1] https://techcrunch.com/2025/06/13/new-york-passes-a-bill-to-prevent-ai-fueled-disasters/
[2] https://www.fisherphillips.com/en/news-insights/new-york-poised-to-enact-first-of-its-kind-ai-safety-law.html
[3] https://www.transparencycoalition.ai/news/guide-to-the-raise-act-the-new-york-responsible-ai-safety-and-education-bill
[4] https://www.fingerlakes1.com/2025/06/17/ai-safety-bills-head-to-hochul-as-new-york-leads-on-regulation/
[5] https://natlawreview.com/article/new-york-advances-frontier-ai-bill
[6] https://www.transparencycoalition.ai/news/ai-legislative-update-june-20-2025
[7] https://www.wxxinews.org/new-york-public-news-network/2025-06-16/ny-lawmakers-want-to-avoid-critical-harm-by-ai-the-feds-may-block-their-rules
[8] https://www.joneswalker.com/en/insights/blogs/ai-law-blog/new-york-advances-frontier-ai-bill.html?id=102khdz
[9] https://www.city-journal.org/article/new-york-raise-act-artificial-intelligence-safety
[10] https://www.bubecklaw.com/privacyspeak/new-york-passes-landmark-ai-safety-bill-heads-to-governor