Introduction
As of 2025 [1] [3], the evolving landscape of artificial intelligence (AI) presents significant legal challenges for employers, particularly with respect to workplace regulation [3]. With several states proposing AI-related legislation that could affect hiring and employment decisions [3], it is crucial for employers to stay informed and act proactively [3].
Description
In New York [1] [2] [3] [5], the New York Artificial Intelligence Consumer Protection Act (NY AICPA) is set to take effect on January 1, 2027 [1]. The legislation would establish a comprehensive framework for employers that use AI in hiring [1], promotions [1] [3] [4], and other consequential employment decisions [1]. Adopting a risk-based approach similar to Colorado's [1] [3], the NY AICPA imposes compliance obligations on both developers and deployers of high-risk AI systems [1].
Deployers are required to exercise reasonable care to mitigate risks of algorithmic discrimination [1], which includes conducting independent third-party audits for bias and governance [1]. They must provide detailed documentation outlining the AI system’s intended uses [1], known risks [1], and governance parameters [1]. Additionally, a risk management policy must be established in alignment with the NIST AI Risk Management Framework [1], and an annual impact assessment is required to evaluate the AI system’s risks and effectiveness [1].
When AI is employed for consequential decisions affecting individuals [1], deployers must notify consumers [1], disclose the purpose of the AI system [1], and provide contact information [1]. In cases of adverse decisions [1], they are obligated to explain the rationale and allow consumers to correct any inaccuracies in their data and appeal the decision [1].
New York City Local Law 144, in effect since July 5, 2023 [4] [5], requires employers and employment agencies to conduct bias audits and notify applicants when using automated employment decision tools (AEDTs) [4]. The law applies to AEDTs that substantially assist or replace discretionary decision-making in employment [5]. Employers must notify candidates of the use of an AEDT [4] [5], including how to request reasonable accommodations [5], at least 10 business days before the tool is used [5]. The notice must identify the job qualifications and characteristics the AEDT will assess [5]. A bias audit must be conducted by an independent third party at least annually [5], and the results must be made publicly available [5]. Non-compliance carries fines of $500 for a first violation and up to $1,500 for each subsequent violation [5]. Concerns have been raised, however, about the law's limited enforcement to date, which may affect future compliance and monitoring.
In a significant step toward transparency, New York will require businesses to disclose when mass layoffs are linked to the adoption of AI [2], making it the first state to impose such a requirement [6]. The initiative [2] [6], announced by Governor Hochul [2], aims to improve understanding of AI's economic impact on employment, although defining what constitutes an AI-related layoff remains a challenge [2]. The state's Department of Labor will determine the specifics of the requirement [2], which builds on existing WARN Act provisions mandating advance notice of mass layoffs that affect 25 or more employees at a single location.
In addition to the notification requirement [6], New York plans to enhance opportunities in the AI sector through various executive actions [6], including providing stipends for college students taking online AI-related courses and a $20 million initiative to support minority startups in collaboration with IBM and Armory Square Ventures [6]. The state will also offer increased training for small businesses to adopt AI technologies and enhance state workers’ skills in this area [6]. Furthermore, New York has secured funding for a supercomputer to be established at the University at Buffalo [6], aimed at advancing AI research [6].
Additionally, the Responsible AI Safety and Education Act (RAISE Act) is under consideration [1], which would require developers to create safety plans for AI systems and may offer protections for whistleblowers reporting issues with AI models [1]. Other pending legislation would mandate impact assessments and written notifications to employees regarding AI usage [1], further regulating the collection and use of employee data through AI systems [1].
As these bills remain under consideration [3], employers are advised to begin formulating comprehensive AI governance strategies [3]. A proactive stance not only prepares organizations for compliance but also reflects a commitment to ethical AI practices [3], which are increasingly important to stakeholders and regulatory bodies [3]. Legal experts suggest the transparency initiative could deter employers from misrepresenting the reasons for layoffs and encourage investment in retraining employees for new roles as technology evolves [2]. In 2025 [4], lawsuits and regulatory actions concerning AI in hiring and workplace practices are expected to increase [4], with more states likely to implement regulations similar to those in Illinois and Colorado [4], emphasizing bias audits and transparency [4]. Other states are exploring comparable measures to understand AI's workforce impact [2], including task-force studies and funding for research [2], and may model their approaches on New York's emphasis on transparency and accountability [2].
Conclusion
The introduction of AI-related legislation across various states, particularly in New York, underscores the growing need for employers to adapt to new regulatory landscapes. These measures aim to ensure transparency, accountability, and fairness in AI applications within the workplace. As AI continues to evolve, employers must develop robust governance strategies to comply with emerging regulations and demonstrate a commitment to ethical AI practices. This proactive approach will not only facilitate compliance but also foster trust among stakeholders and mitigate potential legal challenges.
References
[1] https://natlawreview.com/article/states-ring-new-year-proposed-ai-legislation
[2] https://news.bloomberglaw.com/daily-labor-report/ais-power-to-replace-workers-faces-new-scrutiny-starting-in-ny
[3] https://www.jdsupra.com/legalnews/states-ring-in-the-new-year-with-9970231/
[4] https://nquiringminds.com/ai-legal-news/Evolving-State-Regulations-on-AI-in-Employment-Highlight-Discrimination-Concerns/
[5] https://www.lexology.com/library/detail.aspx?g=9f2912d5-1174-4c5f-9315-f1826c7483c7
[6] https://news.bloomberglaw.com/ip-law/businesses-would-report-ai-layoffs-to-new-york-under-hochul-plan