Introduction
In 2024, generative AI became a focal point for privacy-related litigation in California, with companies facing legal challenges under a variety of theories, particularly the California Invasion of Privacy Act (CIPA) [1] [2] [3]. These actions highlight growing concerns about privacy and consumer protection as generative AI technology evolves.
Description
Generative AI emerged as a significant focus for privacy-related litigation in California in 2024 [1] [3]. Companies deploying the technology face lawsuits under various legal theories, particularly alleged violations of the California Invasion of Privacy Act (CIPA) [1] [2] [3]. CIPA § 631(a) establishes liability for activities such as intentional wiretapping and unauthorized interception of communications [1]. Recent lawsuits allege that AI-powered chatbots, which are increasingly replacing human customer service agents, engage in “AI eavesdropping” by intercepting and recording customer communications without consent, thereby violating CIPA [1]. These claims are often accompanied by allegations under the California Unfair Competition Law and other torts, such as intrusion upon seclusion [1] [3].
In addition to private lawsuits, government entities, including the Federal Trade Commission (FTC) and state attorneys general, have initiated consumer protection actions against AI companies [1] [3]. These actions are based on allegations of false or misleading statements about the accuracy of generative AI products [1] [3]. The FTC’s “Operation AI Comply” has produced multiple cases against companies for deceptive practices, including claims that their AI services could replace human expertise or guarantee income from AI-driven business opportunities [1] [3]. State attorneys general have also pursued generative AI issues under state consumer protection laws, exemplified by the Texas Attorney General’s lawsuit against Pieces Technologies, Inc. for misleading claims about its generative AI products in healthcare, which resulted in a settlement mandating clearer disclosures and prohibiting false representations [1] [3].
Moreover, private parties have filed lawsuits alleging deceptive use of personal data for training AI models [3]. A notable case was a class action against Google and the University of Chicago, in which plaintiffs claimed the university unlawfully provided anonymized patient records to Google for AI training, invoking the Illinois Consumer Fraud and Deceptive Business Practices Act [1] [3]. However, the district court dismissed the fraud claim for lack of standing and dismissed the remaining claims for failure to state a claim, a decision later affirmed by the Seventh Circuit [1] [3].
Overall, these legal challenges reflect ongoing concerns about privacy and consumer protection in the context of generative AI, with various theories of liability being tested in the courts as the technology continues to evolve [3]. As the California Privacy Protection Agency (CPPA) finalizes updates to the California Consumer Privacy Act (CCPA) and regulations on Automated Decisionmaking Technology (ADMT) this year, proactive compliance measures will be crucial for businesses seeking to align with the forthcoming rules [4]. The proposed ADMT regulations emphasize transparency and consumer rights, requiring businesses to disclose their use of ADMT outputs and the factors influencing those outputs, so that consumers are informed and protected against discrimination, particularly in employment decisions [4]. Businesses using ADMT are advised to conduct thorough risk assessments and develop robust AI governance programs to safeguard their interests and build employee trust [4].
Conclusion
The legal scrutiny surrounding generative AI in California underscores the critical need for companies to prioritize privacy and consumer protection. As regulatory frameworks evolve, businesses must adopt proactive compliance strategies to navigate the complex legal landscape. This includes enhancing transparency, conducting risk assessments, and establishing robust governance programs to mitigate potential liabilities and foster trust among consumers and employees.
References
[1] https://www.lexology.com/library/detail.aspx?g=370eb9f1-9568-4a02-9779-6951c9a8b2a5
[2] https://www.jdsupra.com/legalnews/year-in-review-2024-generative-ai-1681092/
[3] https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250307-year-in-review-2024-generative-ai-litigation-trends
[4] https://natlawreview.com/article/californias-ai-revolution-proposed-cppa-regulations-target-automated-decision