Introduction

The Responsible AI Safety and Education Act (RAISE Act, S 6953B), passed by New York state lawmakers on June 12, 2025, introduces comprehensive regulation of frontier AI models within the state [1] [5]. The legislation, which is pending the governor’s signature [1] [8] [9], aims to establish safety and accountability standards for large AI developers, focusing on preventing critical harm and ensuring responsible AI deployment [2] [4] [6].

Description

On June 12, 2025, New York state lawmakers passed the Responsible AI Safety and Education Act (RAISE Act, S 6953B) [3] [5] [9], which is pending the governor’s signature and would take effect 90 days after being signed into law. This bipartisan legislation introduces significant regulations for AI frontier models developed, deployed, or operating within New York state [5] [7] [8] [9], specifically targeting “large developers”: entities that have trained at least one frontier model [1] [8]. A “frontier model” is defined as an AI model trained using more than 10^26 computational operations at a compute cost exceeding $100 million, or a model derived from such a model through knowledge distillation at a compute cost of more than $5 million [9].
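
To make these definitional thresholds concrete, here is a minimal sketch in Python; the function and variable names are hypothetical, and only the numeric cutoffs come from the Act as summarized above:

    # Illustrative sketch of the Act's definitional thresholds.
    # Names and structure are hypothetical; only the numeric cutoffs
    # come from the Act as summarized above.

    FRONTIER_OPS_THRESHOLD = 10**26          # training compute, in operations
    FRONTIER_COST_THRESHOLD = 100_000_000    # $100 million in compute costs
    DISTILLATION_COST_THRESHOLD = 5_000_000  # $5 million for knowledge distillation

    def is_frontier_model(training_ops, compute_cost_usd,
                          distilled_from_frontier=False, distillation_cost_usd=0):
        """A model qualifies if trained with more than 10^26 operations at a
        compute cost over $100 million, or if distilled from a frontier model
        at a cost over $5 million."""
        trained = (training_ops > FRONTIER_OPS_THRESHOLD
                   and compute_cost_usd > FRONTIER_COST_THRESHOLD)
        distilled = (distilled_from_frontier
                     and distillation_cost_usd > DISTILLATION_COST_THRESHOLD)
        return trained or distilled

    def is_large_developer(models):
        """An entity is a 'large developer' if it has trained at least one
        frontier model."""
        return any(is_frontier_model(**m) for m in models)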

The RAISE Act aims to implement basic safeguards for AI systems and to prevent “critical harm,” defined as an incident resulting in the serious injury or death of 100 or more individuals, or in more than $1 billion in damages to rights in money or property [1]. This includes the potential misuse of AI models to create weapons of mass destruction, or models acting autonomously in ways that would constitute crimes under New York law. The Act prohibits the deployment of large AI models in New York if they pose an “unreasonable” risk of critical harm, and it grants substantial authority to state executive agencies, particularly the attorney general [2], to determine what constitutes an “unreasonable risk.”
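
As a rough illustration (not a statement of the legal standard), the quantitative part of the “critical harm” definition reduces to a simple disjunction; the names below are hypothetical:

    # Hypothetical sketch of the "critical harm" thresholds described above;
    # either condition alone is sufficient.

    CASUALTY_THRESHOLD = 100               # serious injuries or deaths
    DAMAGE_THRESHOLD_USD = 1_000_000_000   # $1 billion to rights in money or property

    def meets_critical_harm_threshold(casualties, damages_usd):
        return casualties >= CASUALTY_THRESHOLD or damages_usd > DAMAGE_THRESHOLD_USD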

To mitigate risks associated with frontier models, the Act mandates that developers establish a written AI governance framework, including comprehensive risk management policies and procedures for reviewing AI use and reporting concerns [9]. Developers must publish detailed Safety and Security Protocols (SSPs) outlining their risk assessment and mitigation strategies, security measures, and safety testing plans [7]. They must conspicuously publish a redacted version of these protocols and retain an unredacted version for the duration of the model’s deployment plus five years (a simple date calculation, sketched below). Additionally, developers must designate senior personnel responsible for compliance and maintain detailed records of safety tests and their results so that they can be replicated.
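
The retention sketch below assumes the five-year clock runs from the date the model’s deployment ends; that timing, and the names used, are assumptions for illustration:

    from datetime import date

    RETENTION_YEARS = 5  # retain unredacted protocols for deployment plus five years

    def retention_end(deployment_end):
        """End of the retention period for the unredacted protocol, assuming
        the five years run from the date deployment ends (Feb 29 rolls to Mar 1)."""
        try:
            return deployment_end.replace(year=deployment_end.year + RETENTION_YEARS)
        except ValueError:  # Feb 29 in a non-leap target year
            return date(deployment_end.year + RETENTION_YEARS, 3, 1)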

The legislation holds the largest AI developers accountable to standards that align with their existing commitments to safety [5], requiring them to create and share safety protocols before publicly deploying AI models and to refrain from deploying models that pose an unreasonable risk of critical harm [5]. An annual independent safety review is also mandated, with any resulting modifications to the protocols published [9].

In the event of a safety incident, defined as critical harm or evidence of an increased risk of critical harm [8], developers must report it to the attorney general (AG) and the Division of Homeland Security and Emergency Services (DHSES) within 72 hours, detailing the incident’s date, the reasons it qualifies as a safety incident, and a short description [8] [9]. Reportable incidents include unauthorized access to model weights and incidents resulting in more than 100 deaths or over $1 billion in economic damages [7]. The Act includes a carveout for accredited colleges and universities conducting academic research, but if the intellectual property rights to a frontier model are transferred, the new owner becomes a “large developer” subject to the Act’s requirements [9].
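
The 72-hour window is a simple deadline calculation; the sketch below assumes the clock starts when the developer discovers the incident (an assumption, with illustrative names):

    from datetime import datetime, timedelta, timezone

    REPORTING_WINDOW = timedelta(hours=72)

    def reporting_deadline(discovered_at):
        """Latest time to notify the AG and DHSES, assuming the 72-hour
        window runs from discovery of the safety incident."""
        return discovered_at + REPORTING_WINDOW

    # An incident discovered at noon UTC on July 1 must be reported by
    # noon UTC on July 4.
    deadline = reporting_deadline(datetime(2025, 7, 1, 12, 0, tzinfo=timezone.utc))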

A developer’s liability for critical harm caused by an intervening human actor is limited unless the developer’s own activities contributed significantly to harm that was foreseeable and preventable [9]. The AG and DHSES are the exclusive enforcement authorities, with civil penalties ranging from $5 million to $15 million for a first offense and up to $30 million for subsequent violations, along with potential injunctive or declaratory relief [9].

Critics, including representatives of the Software & Information Industry Association (SIIA), have raised concerns that the legislation’s broad scope and vague definitions could create a false sense of security, impose burdens on AI development, and drive innovation away from New York State [2] [3] [6]. They argue that the Act misplaces accountability by targeting AI models rather than the individuals who misuse them [3], and they advocate instead for collaborative market solutions that prioritize both safety and innovation [4]. Overall, the RAISE Act seeks to establish a baseline for safety and accountability in light of recent trends in AI model deployment [5], while raising questions about the responsiveness and accountability of AI governance in New York [2]. The governor has until December 31, 2025, to negotiate potential revisions to the Act [7].

Conclusion

The RAISE Act represents a significant step towards regulating AI technologies in New York, aiming to balance innovation with safety and accountability. While it sets a precedent for AI governance, it also raises concerns about its potential impact on AI development and innovation within the state. The Act’s implications will depend on its implementation and the response from both the government and the AI industry.

References

[1] https://natlawreview.com/article/new-york-passes-responsible-ai-safety-and-education-act
[2] https://reason.org/commentary/new-yorks-raise-act-expands-executive-power-over-ai-at-the-expense-of-legislative-oversight/
[3] https://www.newsbreak.com/the-national-law-review-329296004/4082991287721-the-coming-battle-over-new-york-s-raise-act
[4] https://www.siia.net/siia-submits-veto-request-on-new-yorks-raise-act/
[5] https://www.transparencycoalition.ai/news/new-yorks-raise-act-authors-give-their-pitch-as-gov-hochul-considers-the-bill
[6] https://privacy-daily.com/article/2025/07/09/seeking-a-veto-siia-says-new-york-ai-bill-would-create-false-sense-of-security-2507090050?BC=bc_6875ce1d23d4e
[7] https://carnegieendowment.org/research/2025/07/state-ai-law-whats-coming-now-that-the-federal-moratorium-is-dead?lang=en
[8] https://www.hunton.com/privacy-and-information-security-law/new-york-passes-the-responsible-ai-safety-and-education-act
[9] https://www.lexology.com/library/detail.aspx?g=44c200bf-f734-4d02-b108-200a8f88914e