Introduction
California has enacted legislation targeting large developers of generative AI systems, known as “Covered Providers,” to enhance transparency and accountability in AI-generated content. This legislation [2] [3] [5], including the AI Transparency Act (SB 942) and related bills, mandates specific compliance measures for AI developers, particularly those with significant user bases in California. The laws aim to address concerns about misinformation and consumer privacy while balancing innovation and regulatory oversight.
Description
California legislation mandates that large developers of generative AI systems [2], defined as “Covered Providers” whose systems have more than 1,000,000 monthly visitors or users and are publicly accessible within California, provide AI detection tools and watermarking capabilities for audiovisual content [2]. These developers must assess their compliance with these laws and prepare accordingly [2], particularly in light of the AI Transparency Act (SB 942), which takes effect on January 1, 2026 [3] [5]. SB 942 establishes disclosure requirements for Covered Providers and third-party licensees regarding AI-generated content [3]. Separately, under AB 2013 (discussed below), developers must disclose specific information about the datasets used to train their systems, including data sources, copyright status, personal information, and any modifications made to the datasets [2].
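To ground the watermarking requirement in something concrete, the following is a minimal sketch of one way a provider might embed a latent disclosure in generated images: writing provenance fields into PNG text metadata with Pillow. SB 942 does not prescribe a technical mechanism (production systems might instead adopt a standard such as C2PA content credentials), and the key names and functions below are illustrative assumptions, not a compliant implementation.

```python
from datetime import datetime, timezone

from PIL import Image  # pip install Pillow
from PIL.PngImagePlugin import PngInfo


def embed_disclosure(src: str, dst: str, provider: str) -> None:
    """Write a latent provenance disclosure into PNG text metadata.

    One illustrative mechanism only; not a format mandated by SB 942.
    """
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical key names
    meta.add_text("provider", provider)
    meta.add_text("generated_at", datetime.now(timezone.utc).isoformat())
    Image.open(src).save(dst, pnginfo=meta)


def read_disclosure(path: str) -> dict:
    """Recover the disclosure from PNG text chunks, as a detection tool might."""
    return dict(Image.open(path).text)
```

Metadata of this kind is easily stripped by re-encoding, which may be one reason the Act pairs content disclosures with a separate detection-tool requirement.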
Concerns have been raised regarding the broad definition of Covered Providers, which may complicate compliance for open-source and fine-tuned AI systems. The law may also face legal challenges over its ambiguous requirements, especially the level of detail needed for compliance [2]. Exceptions exist for AI systems related to cybersecurity, physical safety, and aircraft operation, and for systems used exclusively by federal entities [2].
Governor Newsom has enacted several targeted AI bills [1], including AB 2013 [1] [4], which mandates transparency regarding the use of consumer data for training AI models [1]. AB 2013 is expected to be enforced under California’s Unfair Competition Law, which permits actions by the attorney general and by private plaintiffs who can demonstrate injury from a violation [2]. Meanwhile, SB 942 requires Covered Providers to enable users to identify AI-generated or altered content through watermarking, enhancing transparency and accountability in the use of AI systems [4]. Providers must also ensure that contracts with third-party licensees obligate the licensee to maintain the mandated disclosures in any generated content [3]. If a licensee modifies the system so that it no longer includes these disclosures, the provider must revoke the license within 96 hours, and the licensee must immediately cease using the system [3] [5]. Noncompliance carries civil penalties of $5,000 per violation, with each day a violation continues counted as a separate violation, plus potential liability for attorney’s fees [3].
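As a back-of-the-envelope illustration of how the revocation window and the per-day penalty accrual work together, the sketch below derives a 96-hour revocation deadline from a discovery timestamp and totals the exposure for a continuing violation. The 96-hour window and the $5,000-per-violation figure come from the sources above; the function names and the idea of scripting this are assumptions.

```python
from datetime import datetime, timedelta

PENALTY_PER_VIOLATION = 5_000        # dollars; each day is a separate violation
REVOCATION_WINDOW = timedelta(hours=96)


def revocation_deadline(discovered_at: datetime) -> datetime:
    """Latest time by which the provider must revoke the licensee's access."""
    return discovered_at + REVOCATION_WINDOW


def penalty_exposure(days_in_violation: int) -> int:
    """Civil penalty if a violation continues for the given number of days."""
    return days_in_violation * PENALTY_PER_VIOLATION


# Example: a bypassing modification discovered Jan 5, 2026; a separate
# violation runs uncorrected for 30 days.
discovered = datetime(2026, 1, 5, 9, 0)
print(revocation_deadline(discovered))  # 2026-01-09 09:00:00
print(penalty_exposure(30))             # 150000, i.e., $150,000
```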
Governor Newsom’s recent veto of SB 1047 [2], a proposed comprehensive law aimed at regulating generative AI [1], reflects a more measured approach to balancing innovation with consumer privacy rights [1]. The bill would have introduced significant safety restrictions and requirements [1], such as a mandatory “kill switch” for AI technology and safety testing by AI companies [1]. The veto underscores the challenges of navigating regulatory frameworks while fostering technological advancement.
As AI-generated content becomes more realistic, concerns about misinformation have prompted state legislators to implement transparency measures, including watermarking requirements [2]. This legislation aligns with international trends toward greater transparency in AI development, such as the requirements in the European Union’s Artificial Intelligence Act [2].
Developers should conduct internal audits of their training data sources and establish practices for tracking and approving datasets well before the effective date, as sketched below [2]. They should also monitor the effective dates of these regulations and prepare the necessary changes to their systems and contractual agreements [2]. Legal developments may still alter the requirements before the laws take effect [2]. These state measures add to an already fragmented regulatory environment for AI in the United States, where more than 120 AI-related bills sit in Congress, many with little legislative progress [5].
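A minimal sketch of such a tracking practice, under the assumption that a simple in-house registry suffices: each dataset gets a provenance record mirroring the disclosure categories discussed earlier (sources, copyright status, personal information, modifications), and an audit pass flags gaps to resolve before the effective date. The schema, field names, and example dataset are illustrative, not a statutory format.

```python
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    """Hypothetical provenance record; the fields mirror the disclosure
    categories discussed above and are not a statutory schema."""
    name: str
    sources: list[str]
    copyright_status: str = ""            # e.g., "licensed", "public domain"
    personal_info_reviewed: bool = False  # has a personal-info review been done?
    modifications: list[str] = field(default_factory=list)


def audit(records: list[DatasetRecord]) -> list[str]:
    """Return human-readable gaps to resolve before the effective date."""
    gaps: list[str] = []
    for r in records:
        if not r.sources:
            gaps.append(f"{r.name}: no documented sources")
        if not r.copyright_status:
            gaps.append(f"{r.name}: copyright status unknown")
        if not r.personal_info_reviewed:
            gaps.append(f"{r.name}: personal-information review pending")
    return gaps


# Example usage with a hypothetical dataset
records = [DatasetRecord(name="web-corpus-2024", sources=["https://example.com/crawl"])]
for gap in audit(records):
    print(gap)
```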
Conclusion
The legislative measures in California represent a significant step towards regulating AI development, focusing on transparency and accountability [4]. These laws have implications for AI developers, requiring them to adapt their practices to meet new compliance standards. While the regulations aim to protect consumer privacy and prevent misinformation, they also highlight the ongoing challenge of balancing innovation with regulatory oversight. As the legal landscape continues to evolve, developers must remain vigilant and proactive in ensuring compliance with these emerging standards.
References
[1] https://www.coblentzlaw.com/news/with-the-end-of-the-2024-legislative-term-governor-gavin-newsom-takes-a-measured-approach-to-data-privacy-legislation/
[2] https://www.jdsupra.com/legalnews/california-s-new-ai-laws-focus-on-5218852/
[3] https://www.lexology.com/library/detail.aspx?g=bbab1a78-a36a-416e-91f6-4b3f6dc22c59
[4] https://cset.georgetown.edu/article/governor-newsom-vetoes-sweeping-ai-regulation-sb-1047/
[5] https://ktslaw.com/en/insights/alert/2024/10/ai%20transparency%20and%20compliance%20key%20takeaways%20from%20californias%20ai%20transparency%20act