Introduction
The ongoing legal case of Mobley v. Workday, Inc. in the US District Court for the Northern District of California underscores growing concerns about algorithmic bias in employment practices, particularly regarding age discrimination [1] [2] [3]. This case highlights the potential legal liabilities for companies using AI-driven tools in their hiring processes.
Description
In a significant legal case currently underway in the US District Court for the Northern District of California, Mobley v. Workday, Inc., an older job applicant has alleged employment discrimination against Workday, claiming that the company’s AI-driven applicant screening tools systematically disadvantage candidates over 40, in violation of the Age Discrimination in Employment Act, Title VII, and the Americans with Disabilities Act [1] [2] [3]. The plaintiff, Mobley, who is over 40, applied for more than 100 positions through Workday’s system and was quickly rejected each time, suggesting a lack of human review [3]. The court has allowed Mobley’s claims to proceed as a nationwide collective action, focusing on whether the AI system disproportionately impacts older applicants [1] [2]. The litigation highlights the potential for algorithmic bias in employment decisions and underscores the need for employers to evaluate their use of AI tools to mitigate bias and reduce legal risk [2].
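The collective action will likely turn on statistical evidence of disparate impact. One standard screen, drawn from EEOC guidance rather than from the case filings themselves, is the "four-fifths rule": the selection rate for the protected group should be at least 80% of the rate for the most-favored group. A minimal Python sketch, using hypothetical applicant counts:

```python
# Hypothetical adverse-impact screen based on the EEOC "four-fifths rule".
# The applicant counts below are illustrative, not figures from the case.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return hired / applied

# Illustrative pools: applicants under 40 vs. applicants 40 and over.
under_40 = selection_rate(hired=120, applied=1000)  # 12.0%
over_40 = selection_rate(hired=45, applied=800)     # ~5.6%

# Adverse impact ratio: protected group's rate over the favored group's rate.
impact_ratio = over_40 / under_40

# Under the four-fifths rule, a ratio below 0.8 is typically treated as
# prima facie evidence of disparate impact warranting closer review.
print(f"Selection rate (<40): {under_40:.1%}")
print(f"Selection rate (40+): {over_40:.1%}")
print(f"Adverse impact ratio: {impact_ratio:.2f} "
      f"({'flag' if impact_ratio < 0.8 else 'pass'})")
```

The four-fifths rule is a screening heuristic, not a legal conclusion; employers typically pair it with significance testing before acting on the result.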
This case establishes a precedent for potential direct liability of AI vendors in employment discrimination claims [3]. The complexities of agentic AI arise when users authorize the AI to take autonomous actions [3]. If the AI operates within its instructions, the deployer may bear responsibility for its actions, for example a SaaS provider whose system disables a customer’s account based on a false-positive fraud detection [3].
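To make the deployer-responsibility scenario concrete, one common safeguard is to gate irreversible autonomous actions behind a confidence threshold and escalate borderline cases to a human reviewer. The sketch below uses assumed names and thresholds; it is illustrative, not any vendor's actual control:

```python
# Hypothetical gate for an agentic action: automatically disabling an
# account on suspected fraud. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class FraudSignal:
    account_id: str
    score: float  # model confidence that the account is fraudulent, 0..1

AUTO_DISABLE_THRESHOLD = 0.99  # act autonomously only when near-certain
REVIEW_THRESHOLD = 0.80        # otherwise escalate to a human

def handle_signal(signal: FraudSignal) -> str:
    """Decide whether the agent may act alone or must defer to a human."""
    if signal.score >= AUTO_DISABLE_THRESHOLD:
        # The irreversible action is still recorded for later audit.
        return f"disable {signal.account_id} (autonomous, score={signal.score:.2f})"
    if signal.score >= REVIEW_THRESHOLD:
        return f"queue {signal.account_id} for human review (score={signal.score:.2f})"
    return f"no action on {signal.account_id} (score={signal.score:.2f})"

print(handle_signal(FraudSignal("acct-123", 0.995)))
print(handle_signal(FraudSignal("acct-456", 0.85)))
print(handle_signal(FraudSignal("acct-789", 0.40)))
```

Where the thresholds sit is itself a risk-allocation decision: a lower autonomous-action threshold shifts more responsibility onto the deployer for false positives.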
Where a customer brings a claim against a SaaS provider, the provider’s ability to seek indemnity from the AI vendor will depend on the contractual language and applicable law [3]. Tort law and strict liability may apply if the AI is deemed inherently unsafe [3]. AI developers are advised to address risk and liability allocation in customer agreements and to consult legal counsel and insurance brokers regarding AI-related coverage [3]. Deployers of agentic AI systems should conduct risk assessments and ensure that internal policies address these risks [3].
Developers should also implement safeguards against foreseeable harm, as responsibility for safety increasingly falls on them [3]. This raises questions about liability for software defects or inadequate warnings [3]. As regulatory frameworks evolve, such as the EU AI Act’s human oversight requirements, there is a growing need to embed accountability mechanisms into these systems [3]. Balancing system autonomy with human supervision is crucial as agentic AI becomes more integrated into business operations, requiring proactive risk management from both developers and deployers [3]. Businesses must remain vigilant to ensure compliance and to protect against legal exposure and reputational damage [2].
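One way to embed the kind of accountability mechanism the EU AI Act’s human-oversight provisions contemplate is an append-only log of each automated decision, recording the inputs, the model version, and whether a human reviewed the outcome. The field names below are assumptions for illustration, not a compliance recipe:

```python
# Hypothetical append-only decision log supporting after-the-fact audit
# and human-oversight reporting. Field names are illustrative.

import json
import time
import uuid

def log_decision(path: str, *, model_version: str, inputs: dict,
                 outcome: str, human_reviewed: bool,
                 reviewer: str | None = None) -> None:
    """Append one immutable decision record as a JSON line."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
        "human_reviewed": human_reviewed,
        "reviewer": reviewer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record an automated rejection that no human reviewed.
log_decision("decisions.jsonl",
             model_version="screener-2.3",
             inputs={"requisition": "req-001", "applicant": "app-042"},
             outcome="rejected",
             human_reviewed=False)
```

A record like this is what makes the disparate-impact analysis sketched earlier possible after the fact, and it gives deployers evidence of whether human oversight actually occurred.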
Conclusion
The Mobley v. Workday case serves as a critical reminder of the potential legal and ethical challenges posed by AI in employment settings. It underscores the importance of addressing algorithmic bias and ensuring that AI systems are used responsibly. As AI technology continues to evolve, businesses must proactively manage risks, comply with emerging regulations, and safeguard against potential liabilities to protect their reputation and legal standing.
References
[1] https://www.mintz.com/insights-center/viewpoints/2226/2025-07-15-ai-driven-employment-litigation-post-trump-ai-eos
[2] https://natlawreview.com/article/ai-driven-employment-litigation-post-trump-ai-eos
[3] https://www.jdsupra.com/legalnews/liability-considerations-for-developers-5820092/