Introduction

The rapid advancement of artificial intelligence (AI) across various sectors has garnered significant attention from state attorneys general (AGs) of both major political parties. They are increasingly concerned about AI’s implications for consumers, including potential discrimination, the provenance of data used to train AI systems, consumer harm from misleading AI-generated information, and the exploitation of children through child sexual abuse material (CSAM) [3]. As federal regulatory frameworks evolve, state AGs are poised to leverage existing consumer protection laws and Unfair or Deceptive Acts or Practices (UDAP) authority to enforce compliance and safeguard consumers [3].

Description

Recent legal advisories from California AG Rob Bonta and Massachusetts AG Andrea Campbell provide guidance on applying existing state laws to AI. Bonta emphasizes AI’s dual potential for economic growth and scientific advancement alongside risks such as bias, discrimination, and fraud [3]. His advisories note that AI systems are now embedded in everyday activities, including consumer credit evaluations and targeted advertising, and stress that AI developers and users must adhere to state laws protecting consumers from fraud, discrimination, and data misuse [3]. Notably, new California regulations taking effect on January 1, 2025 address competition, consumer protection, civil rights, data protection, and election misinformation [1] [2] [3] [4].

Campbell has asserted that AI practices must comply with the Massachusetts Consumer Protection Act, which prohibits misleading advertising regarding the quality, usability, and safety of AI systems [1] [2] [4]. She has also joined a coalition of 39 AGs and the Department of Justice in challenging Google’s mandatory AI functionality on Android devices as an unfair competition practice [1].

Under California law, enforcement authority to address deceptive practices involving AI rests with the AG, local prosecutors, and plaintiffs’ attorneys [3]. Deceptive practices identified in the advisories include false claims about AI capabilities, misrepresentation of AI’s role in a system, failure to disclose AI usage in media, and unlawful impersonation using AI [3]. California’s False Advertising Law and the Fair Employment and Housing Act (FEHA) provide additional protections against deceptive advertising and discrimination, respectively, while the California Consumer Privacy Act (CCPA) governs the collection and use of personal information by AI systems [3].

Separate guidance for healthcare entities outlines their obligations under California law when employing AI, including prohibitions against overriding medical decisions, making incorrect medical determinations, and discriminating against patients based on prior healthcare access [3]. In Texas, AG Ken Paxton reached a settlement with an AI healthcare technology company accused of making false claims about the accuracy and safety of its products used in hospitals [1] [2] [4]. These actions reflect a growing recognition of the need to address AI’s implications across sectors, particularly in consumer protection and healthcare [3].

As AI technology increasingly affects consumers, more AGs are expected to initiate enforcement actions based on current consumer protection laws and forthcoming AI legislation [1] [2] [4]. Recent plaintiff successes and the heightened focus on AI by state regulators suggest that businesses should exercise caution when investing in new technologies [1] [2] [4]. Companies are advised to review their consumer-facing disclosures to ensure clarity and transparency about their use of AI, to demand similar transparency from their technology providers, and to remain vigilant against “AI washing,” the practice of exaggerating AI capabilities while downplaying associated risks [1] [2] [4]. Risk mitigation strategies are essential for businesses adopting AI technologies such as chatbots and predictive analytics, and representations made to business partners, consumers, and investors warrant close scrutiny [1] [2] [3] [4].

Conclusion

The increasing scrutiny of AI by state attorneys general underscores the importance of complying with existing consumer protection laws while preparing for new AI-specific regulations. Businesses must remain vigilant in their use of AI technologies, ensuring transparency and accuracy in their representations to consumers and stakeholders. As AI continues to permeate various sectors, the emphasis on consumer protection and ethical AI practices will likely intensify, requiring companies to take proactive measures to mitigate risk and align with evolving legal standards.

References

[1] https://www.lexology.com/library/detail.aspx?g=8d1132ff-3858-42fc-8920-b3bc5d327cef
[2] https://natlawreview.com/article/state-regulators-eye-ai-marketing-claims-federal-priorities-shift
[3] https://www.jdsupra.com/legalnews/state-attorneys-general-on-applying-8784127/
[4] https://www.jdsupra.com/legalnews/state-regulators-eye-ai-marketing-2623010/