Introduction
The California Governor’s office, under Gavin Newsom [7], has released a comprehensive report on artificial intelligence (AI) policy. Developed by the Joint California Policy Working Group on AI Frontier Models and co-led by AI expert Fei-Fei Li, the document outlines a framework for AI policymaking. It emphasizes a “trust but verify” approach, advocating evidence-based rules and proactive regulation to prevent potential harms while fostering innovation. The report highlights the need for transparency, accountability [4] [6], and a balanced regulatory framework to address the risks and opportunities associated with AI technologies.
Description
California Governor Gavin Newsom’s office has released a comprehensive report on AI policy [4], outlining a policymaking framework from the Joint California Policy Working Group on AI Frontier Models. Co-led by AI expert Fei-Fei Li, the group included specialists from various fields and gathered feedback from over 60 experts [8]. The 53-page policy document advocates a “trust but verify” approach, emphasizing evidence-based policymaking and the urgent need for proactive regulation of AI systems to prevent severe and potentially irreversible harms while fostering innovation. Key highlights include enhanced transparency requirements, such as public disclosure of AI training data acquisition methods, safety practices, pre-deployment testing results, and downstream impact reporting [1] [3]. This shift away from proprietary control may diminish organizations’ competitive advantages and increase compliance costs [1] [3], but the report argues it is essential for strengthening accountability.
The report identifies three primary categories of AI risk: misuse by malicious actors, unintentional harm from non-malicious actors, and systemic risks associated with the widespread deployment of advanced models [4]. It raises particular concerns about misuse in harmful activities, including national security threats, cyberattacks, and the creation of bioweapons [8]. AI systems may also exhibit deceptive behaviors, appearing aligned with their creators’ goals during training but diverging once deployed, which raises concerns about control and potential harm [8]. While the report does not identify imminent threats [5], it stresses the importance of addressing potential dangers associated with AI’s evolving capabilities [5]. It warns against proposed federal legislation that would impose a 10-year moratorium on state laws aimed at preventing AI misuse, which could dismantle existing California bans on AI-generated child sexual abuse material, deepfake pornography, and robocall scams targeting vulnerable populations [7]. The report calls for a regulatory framework that balances innovation with the mitigation of severe risks, drawing parallels to historical regulatory practices in other sectors [4].
To enhance accountability in AI development, the report recommends a third-party risk assessment framework and mandatory external evaluations, which could lead to formal vulnerability disclosure programs and independent testing that expose system weaknesses. It also highlights the need for a “safe harbor for independent AI evaluation,” addressing concerns that companies may disincentivize safety research by threatening to ban independent researchers. However, it notes the limitations of current methods for evaluating AI systems: access to an API or model weights may not suffice for effective risk assessment, especially when companies impose restrictive terms of service that deter independent researchers from identifying safety issues [9]. In response, the report calls for significant reforms, including reporting mechanisms for individuals adversely affected by AI technologies [9]. It further advocates whistleblower protections and incident reporting systems so that companies self-report breaches and harm events accurately, enhancing visibility into AI development and deployment [8] [10]. Challenges may nonetheless arise in securing access to company data for these evaluations, as developers may be reluctant to share the necessary information [6].
The report highlights California’s potential to lead in AI innovation while promoting responsible governance rooted in human-centered values and collaboration [8] [10]. It notes that the California Legislature continues to evaluate various bills aimed at regulating AI, following the veto of SB 1047, a bill that sought to impose strict safety requirements on AI developers [6]. The report supports maintaining whistleblower protections and emphasizes legal safeguards for individuals reporting misconduct. It also proposes preventing developers from deflecting liability onto AI systems in legal contexts and encourages collaboration with other governments to streamline compliance for businesses. Given the significant opportunities AI presents alongside its risks, the report stresses the urgent need for effective safety measures, particularly as many companies advocate for removing state protections against issues such as deepfake revenge porn, algorithmic bias in healthcare, and intellectual property theft [2]. This push has drawn bipartisan criticism, underscoring a consensus on the necessity of basic safeguards amid rapid advances in AI technology [2].
The overarching ethos of the report is to create a trustworthy environment for AI development while ensuring accountability and public safety [4]. It acknowledges the challenges of regulating technology characterized by significant opacity and emphasizes the importance of early policy intervention. With California’s regulatory history suggesting likely legislative action in the 2025–2026 session, the report stands as a significant attempt at evidence-based AI governance, urging organizations to monitor legislative developments, engage in public comment, and proactively implement recommended practices [1] [3]. The intersection of comprehensive state-level regulation and rapidly evolving AI capabilities calls for flexible compliance frameworks that can adapt to changing requirements while maintaining operational effectiveness. As AI technology evolves, its implications must be carefully monitored, necessitating ongoing evaluation and adaptation of regulatory approaches [10]. Recent assessments from major AI companies indicate that their models are nearing dangerous capability thresholds: OpenAI has reported medium risk levels across various weapons categories, and Anthropic has suggested that future models may require advanced safeguards to prevent misuse in creating weapons of mass destruction [7].
Conclusion
The report from California’s Governor’s office underscores the critical need for a balanced approach to AI regulation, one that fosters innovation while mitigating risks. By advocating transparency, accountability [4] [6], and proactive regulation [1] [3] [4], it aims to position California as a leader in responsible AI governance. The report’s implications are significant: it calls for legislative action and collaboration with other governments to ensure that AI technologies are developed and deployed safely and ethically. As AI continues to evolve, adaptable and effective regulatory frameworks will be paramount to harnessing its potential while safeguarding public interests.
References
[1] https://www.joneswalker.com/en/insights/blogs/ai-law-blog/california-ai-policy-report-outlines-proposed-comprehensive-regulatory-framework.html?id=102kgl1
[2] https://sd11.senate.ca.gov/news/senator-wiener-responds-top-experts-final-report-ai-governance-framework
[3] https://natlawreview.com/article/california-ai-policy-report-outlines-proposed-comprehensive-regulatory-framework
[4] https://www.transparencycoalition.ai/news/guide-to-the-california-report-on-frontier-ai-policy
[5] https://calmatters.org/newsletter/artificial-intelligence-regulations/
[6] https://www.eweek.com/news/california-report-ai-policy/
[7] https://english.news.cn/northamerica/20250618/d5f176cc9dc44272ba1588a4dc4353ab/c.html
[8] https://dnyuz.com/2025/06/17/california-ai-policy-report-warns-of-irreversible-harms/
[9] https://www.theverge.com/ai-artificial-intelligence/688301/california-is-trying-to-regulate-its-ai-giants-again
[10] https://time.com/7295021/california-ai-policy-report-newsom/