Introduction
On June 15, 2025, the New York State Senate passed the Responsible AI Safety and Education (RAISE) Act [4] [6], a pioneering legislative effort to establish legal responsibilities for developers of high-risk artificial intelligence systems. The act, set to take effect on January 1, 2026 [4], targets advanced AI models with the potential to cause significant harm, particularly those trained with compute costs exceeding $100 million [5]. By excluding startups and small companies [5], the legislation seeks to encourage innovation while regulating major tech firms such as OpenAI and Google.
Description
The RAISE Act represents a significant advancement for the AI safety movement by formalizing regulations to manage the risks of unchecked AI development. The legislation aims to protect individuals from AI-related harms by preventing “critical harm,” defined as the serious injury or death of 100 or more people or damages exceeding $1 billion, particularly in the context of potential misuse for chemical, biological, radiological, or nuclear attacks [6] [8].
Under the RAISE Act, developers of frontier AI models must implement comprehensive safety and transparency plans before releasing those models [1] [2] [5] [6] [7] [8]. These plans must include internal test results, safeguards against critical harm, and public summaries [4]. Developers are also required to undergo annual independent audits and to disclose the computing power used for training [4]. In the event of a serious incident, such as misuse or unauthorized access, developers must report it to the New York Attorney General and federal security agencies within 72 hours [4].
The New York Attorney General will enforce the law [6], imposing civil penalties starting at 5% of compute costs for initial violations and rising to 15% for repeat offenses, with fines of up to $30 million for serious violations [8]. Notably, the legislation does not provide a private right of action, does not mandate a “kill switch” for AI models, and does not hold companies that post-train (fine-tune) frontier AI models accountable for critical harms [2].
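To make the tiered penalty structure concrete, the following minimal Python sketch applies the percentages and the $30 million ceiling as summarized above to a hypothetical compute cost. It is an illustration of the figures cited in [8], not the statute’s actual calculation; the function name, the way the dollar cap is applied, and the example numbers are assumptions made purely for illustration.

```python
def estimated_penalty(compute_cost_usd: float, repeat_violation: bool = False) -> float:
    """Illustrative estimate using the percentages and cap summarized above.

    Assumes 5% of compute costs for an initial violation, 15% for repeat
    violations, capped at $30 million; the real statutory formula may differ.
    """
    rate = 0.15 if repeat_violation else 0.05
    cap = 30_000_000  # upper bound cited for serious violations
    return min(compute_cost_usd * rate, cap)


# Hypothetical model trained with $150M in compute (above the $100M coverage threshold)
print(estimated_penalty(150_000_000))                         # 7500000.0
print(estimated_penalty(150_000_000, repeat_violation=True))  # 22500000.0
```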
While the RAISE Act aims to balance innovation with public safety [7], concerns have been raised about potential bureaucratic inefficiencies, particularly the five-year record-keeping requirement and a separate “reasonableness” standard for model deployment that may duplicate what independent audits already cover [1]. Critics argue that the act imposes significant compliance costs and regulatory uncertainty, particularly on small AI developers, and could hinder innovation in the state [3]. This contrasts with a similar bill in California that was vetoed over concerns about its impact on the tech sector [3].
The act attempts to tackle multiple aspects of AI regulation, including corporate transparency, employee protections, and liability, within a single framework [1]. This approach risks prioritizing compliance over actual safety outcomes: initial compliance is projected to require between 1,070 and 2,810 hours of work, effectively necessitating a full-time employee (roughly 0.5 to 1.4 employee-years, assuming a standard 2,080-hour work year), while ongoing compliance is expected to be lower, at 280 to 1,600 hours annually [1].
Several other AI-related bills were introduced but did not progress before the legislative session adjourned [6]. These include proposals requiring provenance data for synthetic content, transparency and safety protocols for training AI models, and disclosure requirements for advertisements using synthetic performers [6]. Additional bills aimed to regulate AI in legal document production, impose liability on chatbots for misinformation, and require notices about the accuracy of AI-generated information [6]. Another proposal sought to prevent algorithmic discrimination in high-risk AI systems through regular audits and enforcement by the Attorney General [6].
The RAISE Act was passed without public hearings, prompting widespread opposition from industry groups, including a coalition letter from the Chamber of Progress highlighting technical concerns with the bill [3]. However, it has garnered support from prominent AI figures such as Geoffrey Hinton and Yoshua Bengio, reflecting a view among AI safety advocates that formal regulation is needed to manage AI risks [5].
Conclusion
The implications of the RAISE Act extend beyond New York, potentially influencing AI regulatory efforts in other states [7]. Proponents argue that increased transparency could foster consumer trust and ethical practices in AI development, while opponents fear that stringent regulation could hinder competitiveness and innovation [7]. The act is currently awaiting Governor Kathy Hochul’s decision by July 18, 2025. Despite concerns that regulatory pressure could deter developers from offering advanced AI models in New York, supporters maintain that the act’s requirements are manageable and, given the state’s significant economic presence, should not stifle innovation [2]. If enacted, it would be the first legally binding AI safety law in the US [2] [5] [7], representing a crucial step in AI governance.
References
[1] https://www.city-journal.org/article/new-york-raise-act-artificial-intelligence-safety
[2] https://techcrunch.com/2025/06/13/new-york-passes-a-bill-to-prevent-ai-fueled-disasters/
[3] https://progresschamber.org/new-york-legislature-fast-tracks-bill-that-would-send-ai-innovators-packing/
[4] https://www.bubecklaw.com/privacyspeak/new-york-passes-landmark-ai-safety-bill-heads-to-governor
[5] https://zugtimes.com/new-york-passes-groundbreaking-raise-act-to-regulate-advanced-ai-systems/
[6] https://www.transparencycoalition.ai/news/as-new-york-state-legislature-adjourns-one-ai-law-remains-standing
[7] https://opentools.ai/news/new-yorks-raise-act-a-bold-move-to-rein-in-ai-or-a-stifling-shackle
[8] https://www.transparencycoalition.ai/news/guide-to-the-raise-act-the-new-york-responsible-ai-safety-and-education-bill