Introduction

The integration of AI technologies across various sectors, including hiring [1] [7], lending [2] [4] [5], healthcare [3] [4] [5] [7], and advertising [4] [5], necessitates adherence to anti-discrimination [3] [4] [5], consumer protection [3] [4] [5], and privacy laws [5] [7]. Recent legislative developments in states like New York and California highlight the growing regulatory focus on AI’s impact on employment and decision-making processes. These initiatives aim to enhance transparency [1] [4] [5], accountability [2] [4], and fairness in AI applications, while addressing potential risks such as job displacement and algorithmic bias.

Description

AI-driven decisions in sectors such as hiring [5], lending [2] [4] [5], healthcare [3] [4] [5] [7], and advertising must comply with anti-discrimination [4] [5], consumer protection [3] [4] [5], and privacy laws [5] [7]. California’s Attorney General has emphasized that AI systems operating in these sensitive areas are prohibited from denying care, overriding the judgment of medical professionals [5], or imposing discriminatory barriers to access [5], and that companies can be held liable for biased [5], deceptive [5], or harmful outcomes generated by their AI systems [5].

New York Governor Kathy Hochul has expanded the Worker Adjustment and Retraining Notification (WARN) Act to require employers to disclose when mass layoffs are linked to the adoption of AI technologies, making New York the first state to apply WARN in this context and aiming to improve transparency about AI’s economic impact on employment [1] [5]. Because current data on AI-driven job displacement is largely anecdotal [1], the initiative is particularly timely. New York’s WARN Act mandates 90 days’ notice for significant layoffs [5], and the new requirements will extend that disclosure to AI-driven reductions [5]. Employers will need to report AI-related job cuts [5], although the specific timeline for this obligation has yet to be determined [5].

In addition to the WARN Act, New York legislation (SB 1169) aims to restrict the use of automated decision-making systems by state agencies [6]. AI companies would be required to engage independent auditors to assess their systems for algorithmic bias in sectors such as employment [6], banking [1] [5] [6], and government services [6]. The measure represents a significant step in regulating automated decision-making and permits state residents to sue tech companies for violations [6], despite anticipated opposition from the tech industry [6].

Implementation of these initiatives will depend on the New York Department of Labor, which already collects extensive information from employers regarding WARN notices [1]. Defining what constitutes an AI-related layoff, however, presents challenges [1]. New York’s stricter WARN requirements trigger notice at a lower layoff threshold than the federal standard [1], and the state Labor Department’s reputation for thorough WARN enforcement may deter employers from misrepresenting the reasons for layoffs [1].
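To make the auditing requirement concrete: independent bias audits of the kind SB 1169 contemplates typically quantify disparate impact across demographic groups. A minimal sketch of one widely used metric, the “four-fifths rule” from EEOC selection guidance, is shown below; the group names and counts are purely illustrative, and a real audit would use far richer methods than this single ratio.

```python
# Hypothetical sketch of a disparate-impact check an auditor might run.
# Four-fifths rule: adverse impact is flagged when any group's selection
# rate falls below 80% of the most-favored group's rate.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: (impact_ratio, passes_threshold)} for each group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # Impact ratio compares each group's rate to the most-favored group's.
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

if __name__ == "__main__":
    # Illustrative hiring outcomes, not drawn from any real audit.
    sample = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, (ratio, passes) in four_fifths_check(sample).items():
        print(f"{group}: impact ratio {ratio:.2f}, passes 80% rule: {passes}")
```

In this sketch, group_b’s selection rate (0.30) is 62.5% of group_a’s (0.48), so it falls below the 80% threshold and would be flagged for further review.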

To ensure compliance [3] [5] [7], businesses should conduct thorough audits of their AI systems to identify risks under anti-discrimination, consumer protection [3] [4] [5], and privacy laws [5] [7]. Organizations must prioritize careful consideration and cross-departmental collaboration as they navigate the complexities of AI tools in performance management [7]. Companies in New York should prepare for the forthcoming AI layoff reporting requirements [5], and strengthening transparency and explainability in AI usage for employment [5], healthcare [3] [4] [5] [7], and financial decisions is essential [5].

The California AI Transparency Act further underscores the need for transparency in AI decision-making [3], mandating the development of AI detection tools and disclosure of AI usage [3]. It also mandates transparency in the training data for generative AI systems [7], requiring developers to provide detailed information about datasets [7], including their sources [7], purposes [7], and any modifications made [7].

Additionally, investing in employee reskilling and workforce planning will help organizations adapt to AI’s transformative effects [5], easing both regulatory pressure and talent shortages [5]. Comprehensive employee training on AI-related regulations is crucial [3], as is monitoring AI regulation in other states [5], since New York and California are leading the way in establishing a robust legal framework for responsible AI use.

Proposed regulations under the California Consumer Privacy Act focus on Automated Decision-Making Technology (ADMT) [7], requiring documentation and risk assessments for high-risk AI systems that affect employment [7], education [4] [5] [7], and healthcare [4] [5] [7]. Consumer rights related to these systems include the right to notice [7], explanation [2] [4] [7], correction [7], and appeal [7], alongside a duty of care to prevent discrimination [7].

Hochul’s proposal is seen as a potential first step for states to address worker-displacement risks and to encourage businesses to invest in retraining and upskilling employees, fostering a collaborative environment where workers feel valued and informed [2]. This proactive approach emphasizes preparing existing staff for new roles that utilize AI technologies rather than relying solely on external hires for AI-related positions [2]. Legal challenges over the use of personal data in AI systems are anticipated [7], along with heightened scrutiny from state attorneys general and industry regulators [7], particularly given the notable rise in class actions related to data breaches and privacy violations.

Conclusion

The evolving regulatory landscape in states like New York and California underscores the critical need for transparency, accountability [2] [4], and fairness in AI applications. These legislative measures aim to mitigate risks such as job displacement and algorithmic bias, while promoting responsible AI use. Businesses must adapt by conducting thorough audits [3], investing in employee reskilling [4] [5], and ensuring compliance with emerging regulations. As AI continues to transform various sectors, proactive measures will be essential in fostering a collaborative environment where both organizations and employees can thrive.

References

[1] https://news.bloomberglaw.com/daily-labor-report/ais-power-to-replace-workers-faces-new-scrutiny-starting-in-ny
[2] https://www.forbes.com/sites/philkirschner/2025/01/15/did-ai-cause-those-layoffs-ny-employers-may-have-to-disclose/
[3] https://www.archyde.com/ai-under-watch-new-developments-in-new-york-and-california-push-businesses-toward-ai-transparency-and-compliance-fisher-phillips/
[4] https://www.fisherphillips.com/en/news-insights/ai-under-watch-new-developments-in-new-york-and-california.html
[5] https://www.jdsupra.com/legalnews/ai-under-watch-new-developments-in-new-1551635/
[6] https://news.bloomberglaw.com/esg/ai-systems-to-be-independently-audited-under-new-york-measure
[7] https://www.jacksonlewis.com/insights/year-ahead-2025-tech-talk-ai-regulations-data-privacy