California Governor Gavin Newsom vetoed SB-1047 [1] [2] [4] [5] [7] [8], a bill aimed at regulating generative AI and holding major AI companies accountable for safety protocols [6]. Despite broad bipartisan support and endorsements from leading AI researchers [2], Newsom cited the burden the bill would place on AI companies [4], the risk to California's lead in the field [4], and the bill's failure to account for whether a model is deployed in high-risk environments or handles sensitive data [6].
Description
California Governor Gavin Newsom vetoed SB-1047 [1] [2] [4] [5] [7] [8], known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act [4] [6], on September 29. The bill aimed to regulate generative AI and hold major AI companies accountable for safety protocols [6], including maintaining a kill switch and undergoing third-party testing [1]. Despite broad bipartisan support and endorsements from leading AI researchers and figures such as Elon Musk [2], Newsom cited the burden the bill would place on AI companies [4], the risk to California's lead in the field [4], and the bill's failure to account for whether a model is deployed in high-risk environments or handles sensitive data [6]. He described the bill as well-intentioned but said it would apply stringent standards even to basic functions [1]. Newsom plans to consult with experts to develop more adaptable AI regulations, arguing that heavy regulation could hinder AI development and drive AI companies out of the state.

State Senator Scott Wiener criticized the veto, stating that without regulations the industry is left to police itself, despite voluntary commitments from industry leaders [7]. The now-dead bill would have mandated safety tests on powerful AI models; without it, accountability and regulation in the AI industry remain uncertain [7].

Melissa Ruzzi, director of artificial intelligence at AppOmni, acknowledged the challenges of regulating AI but stressed the importance of starting somewhere to address the potential risks and uncertainties surrounding the technology [2]. Newsom plans to continue working on the issue next year to protect the public against rapidly evolving risks from AI models, including threats to democracy, misinformation [3], privacy [2] [3], critical infrastructure [3], and workforce disruptions [3].
Conclusion
Governor Newsom's veto of SB-1047 raises concerns about accountability and regulation in the AI industry. While heavy regulation could hinder AI development [5], the absence of rules leaves the industry to police itself, with attendant risks and uncertainties [2]. Moving forward, it will be crucial to develop adaptable regulations that ensure public safety as the risks posed by AI models continue to evolve.
References
[1] https://time.com/7026653/gavin-newsom-ai-safety-bill-sb-1047/
[2] https://www.darkreading.com/application-security/calif-gov-vetoes-ai-safety-bill
[3] https://arstechnica.com/ai/2024/09/california-governor-vetoes-controversial-ai-safety-bill/
[4] https://www.theverge.com/2024/9/29/24232172/california-ai-safety-bill-1047-vetoed-gavin-newsom
[5] https://variety.com/2024/biz/news/gavin-newsom-sag-aftra-ai-bill-1047-artificial-intelligence-1236158405/
[6] https://www.techrepublic.com/article/gavin-newsom-veto-california-ai-bill/
[7] https://www.npr.org/2024/09/20/nx-s1-5119792/newsom-ai-bill-california-sb1047-tech
[8] https://apnews.com/article/california-ai-safety-measures-veto-newsom-92a715a5765d1738851bb26b247bf493