Introduction
California is taking significant steps to regulate the use of artificial intelligence (AI) and automated decision systems (ADS) in the workplace. These measures focus on preventing discrimination, ensuring transparency, and maintaining human oversight in employment decisions [5]. The regulations, set to take effect beginning in 2025, aim to address the potential biases and legal risks associated with AI-driven hiring and management tools [1] [2] [4] [5] [6] [7] [8].
Description
California is implementing significant regulatory measures governing the use of artificial intelligence (AI) and automated decision systems (ADS) in the workplace, with a particular focus on AI-driven hiring and management tools [1] [8]. New civil rights regulations, expected to take effect on July 1, 2025, aim to prevent discrimination based on protected characteristics such as race, gender, age, disability, or religion [4] [5] [6] [7]. These regulations do not prohibit the use of AI tools outright, but they make it illegal to employ any system that produces discriminatory outcomes [6].
Employers must provide at least 30 days’ written notice to employees, applicants, and contractors before implementing any ADS, including AI-driven tools such as facial and emotion recognition, and must disclose all such tools in use [1] [4] [8]. Human oversight is required: employers may not rely solely on AI for significant employment decisions such as hiring or firing [1] [8]. Key legislation includes Assembly Bills (AB) 1221 and 1331, which target workplace surveillance technologies [8]. AB 1221 sets requirements for data handling by vendors and mandates notice to employees about monitoring, while AB 1331 restricts the use of various tracking tools, particularly during off-duty hours or in private areas [4] [8].
As AI becomes more prevalent in recruitment—through automated resume filters, personality assessments, and video interview analysis—employers must be aware of compliance obligations under laws including Title VII and the Fair Employment and Housing Act (FEHA) [2]. On March 21, 2025, California’s Civil Rights Council adopted regulations that apply existing anti-discrimination laws to AI tools in employment, increasing the burden on employers to demonstrate that they test for and mitigate bias [1] [4] [8]. Employers using automated decision-making systems must retain records of AI-driven decisions for at least four years [8], and third-party AI vendors can be considered agents of the employer under the FEHA [1] [8].
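In practice, the four-year retention requirement means capturing each AI-assisted decision in a durable, auditable record. The sketch below is illustrative only: it assumes an append-only JSON-lines log, and every field name (`applicant_id`, `tool_name`, `human_reviewer`, and so on) is a hypothetical choice, not a schema prescribed by the regulations.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ADSDecisionRecord:
    """One AI-assisted employment decision, kept for the retention period."""
    applicant_id: str
    tool_name: str        # which ADS produced the recommendation
    decision: str         # e.g. "advance" or "reject"
    score: float          # raw model output, retained for later bias audits
    human_reviewer: str   # the regulations require human oversight
    timestamp: str        # UTC timestamp of the decision

def log_decision(path: str, record: ADSDecisionRecord) -> None:
    """Append the record as one JSON line; retain files at least four years."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage with made-up data:
record = ADSDecisionRecord(
    applicant_id="A-1042",
    tool_name="resume-screener-v2",
    decision="advance",
    score=0.87,
    human_reviewer="hr.manager@example.com",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
log_decision("ads_decisions.jsonl", record)
```

An append-only log of this shape keeps the raw score alongside the human reviewer, which supports both the record-retention mandate and later disparity analysis; whether it satisfies a specific statute is a question for counsel.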
Employers must ensure that AI tools are transparent, explainable, and validated for adverse impact on protected groups [2]. The Equal Employment Opportunity Commission (EEOC) and the California Civil Rights Department may treat reliance on non-transparent or unvalidated AI as a breach of anti-discrimination laws [2]. The definition of “Automated Decision System” is broad, encompassing any computational process that aids or replaces human decision-making in employment contexts, including tools for resume scanning and predictive performance analytics [1] [4] [8]. Certain AI practices are explicitly banned, including those that infer protected characteristics, conduct predictive behavioral analysis, retaliate against employees exercising legal rights, or set pay based on discriminatory individualized data [1] [8]. Workers are granted rights to access and correct data used by ADS and to appeal AI-driven decisions to a human reviewer [1] [8]. The legislation also includes anti-retaliation clauses and enforcement provisions [8].
The proposed “No Robo Bosses Act” (SB 7) would impose strict oversight on automated decision systems to mitigate workplace discrimination, while raising employer concerns about business efficiency. The bill would place compliance responsibilities on employers using these systems as well as on developers of AI tools, with potential penalties reaching $25,000 per violation [2]. It introduces requirements for the use of ADS in employment-related decisions, ensuring that AI assists rather than replaces human judgment [5]. Key provisions safeguard worker privacy by prohibiting the collection of sensitive personal information through ADS, which could limit employers’ ability to use these systems effectively [5]. Employers must provide written notice to workers when ADS is used in decision-making, detailing the data collected and its intended use, thereby promoting transparency [5]. The legislation also addresses workplace safety by banning ADS that could violate labor laws or health standards and by requiring compliance with existing safety regulations [5].
To comply with these regulations [2], employers should conduct internal audits of recruitment and HR tools that incorporate automation or AI [2]. They should obtain documentation from vendors confirming that their tools are validated and tested for bias [2]. Additionally, training for HR and hiring managers on the appropriate use of AI in decision-making is essential [2], along with updating privacy and hiring policies to clarify the use of these tools and the options available to candidates [2].
California’s Fair Employment and Housing Act establishes the Civil Rights Department, which is responsible for enforcing the act through civil actions [3]. Separately, the Department of Technology is tasked with completing a comprehensive inventory of high-risk automated decision systems proposed for use by state agencies by September 1, 2024 [3]. Proposed legislation would further regulate the development and deployment of ADS that make consequential decisions: starting January 1, 2027, deployers must disclose information to individuals affected by such decisions, offer opt-out opportunities, allow appeals, and submit the ADS to third-party audits subject to specific requirements [3].
Recent legal actions underscore the risks associated with AI in the workplace [7]. In 2023, the EEOC reached a $365,000 settlement with a company whose AI system disproportionately screened out older female and male applicants [7]. The ongoing case of Mobley v. Workday, Inc. further exemplifies the legal risks of AI in hiring: the plaintiff alleges that Workday’s AI tools disproportionately rejected older, Black, and disabled applicants, raising concerns under anti-discrimination laws [1] [4] [6] [7] [8]. The lead plaintiff, Derek Mobley, claims he faced repeated rejections after applying to over 100 jobs through Workday’s systems [6]. The case has been allowed to proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), highlighting the potential for widespread liability for both employers and AI vendors when discriminatory outcomes arise from automated hiring [4] [6]. Employers can be held liable for the discriminatory effects of third-party algorithms used in their hiring processes even if they did not develop the AI tools themselves [6]. Regular analysis of hiring data is recommended to identify unexplained disparities by age, race, or gender, which may indicate potential legal exposure [6].
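One common starting point for the kind of disparity analysis recommended above is the EEOC’s four-fifths (80%) rule: a selection rate for any group below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The sketch below is a minimal illustration using made-up applicant and hire counts; a real audit would also consider statistical significance, sample size, and legal review.

```python
def selection_rates(applicants: dict, hires: dict) -> dict:
    """Selection rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_flags(applicants: dict, hires: dict) -> dict:
    """Flag groups whose selection rate falls below 80% of the best rate.

    Illustrative application of the EEOC four-fifths rule; not a
    substitute for a full statistical or legal adverse-impact analysis.
    """
    rates = selection_rates(applicants, hires)
    best = max(rates.values())
    return {g: (rate / best) < 0.8 for g, rate in rates.items()}

# Hypothetical hiring data by age band (all numbers are invented):
applicants = {"under_40": 200, "40_and_over": 150}
hires = {"under_40": 60, "40_and_over": 27}
flags = four_fifths_flags(applicants, hires)
# under_40 rate = 0.30; 40_and_over rate = 0.18; 0.18 / 0.30 = 0.6 < 0.8,
# so the 40_and_over group is flagged for potential adverse impact.
```

Running this check periodically over logged hiring decisions is one concrete way to surface the “unexplained disparities” the commentary warns about before they become litigation.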
California is establishing a comprehensive framework for regulating automated hiring and management tools [8], holding them to the same standards as human decision-makers [1] [4] [8]. Employers must stay informed about these developments [8], as new legislative measures and regulations could impose additional responsibilities [8], and existing litigation underscores the accountability of both employers and AI vendors for discriminatory outcomes [8]. While AI has the potential to enhance hiring efficiency [6], it must be employed responsibly and equitably [6]. Companies are encouraged to proactively address these changes to ensure their hiring processes are both effective and fair [6].
Conclusion
The regulatory measures introduced by California represent a significant shift in how AI and ADS are integrated into workplace practices. By emphasizing transparency, accountability, and fairness [6] [8], these regulations aim to mitigate the risks of discrimination and bias in employment decisions. Employers and AI vendors must adapt to these changes, ensuring compliance and fostering equitable hiring practices. As AI continues to evolve, maintaining a balance between technological advancement and ethical responsibility will be crucial for sustainable business operations.
References
[1] https://www.klgates.com/2025-Review-of-AI-and-Employment-Law-in-California-5-29-2025
[2] https://www.jdsupra.com/legalnews/california-employers-artificial-1498037/
[3] https://calmatters.digitaldemocracy.org/bills/ca_202520260ab1018
[4] https://natlawreview.com/article/2025-review-ai-and-employment-law-california
[5] https://natlawreview.com/article/exploring-californias-proposed-ai-bill
[6] https://www.hollandhart.com/new-ai-hiring-rules-and-lawsuits-put-employers-on-notice-what-hr-needs-to-know
[7] https://www.callaborlaw.com/entry/ai-in-hiring-litigation-and-regulation-update
[8] https://www.jdsupra.com/legalnews/2025-year-to-date-review-of-ai-and-5434307/