Introduction

California has taken a significant step in regulating the use of artificial intelligence (AI) in healthcare by enacting legislation that takes effect on January 1, 2025. The legislation aims to ensure transparency, human oversight [2] [5], and the ethical use of AI technologies in medical settings, with a particular focus on generative AI (genAI) and its implications for patient care and decision-making.

Description

California has enacted significant legislation aimed at regulating the use of artificial intelligence (AI) in healthcare settings [4], with several key laws taking effect on January 1, 2025. Among these, Assembly Bill 3030 specifically addresses the use of generative artificial intelligence (genAI) by mandating that healthcare facilities, including clinics [4], doctor’s offices [1] [3] [4], and group practices [1] [3], disclose when they use genAI to communicate clinical information to patients [1] [3] [4]. These communications must include a disclaimer indicating that the information was generated by genAI [1] [3] [4], along with clear instructions for patients on how to contact a human healthcare provider for further assistance [1] [3]. Notably, communications produced by genAI that have been reviewed by a licensed or certified healthcare provider are exempt from these disclosure requirements, as are administrative functions such as appointment scheduling [1] [3] [4], even if AI-assisted [3] [4].
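To make the disclosure logic concrete, the sketch below shows how a facility might screen outbound patient messages. It is illustrative only; the class, field names, and helper function are assumptions for the purpose of the example, not anything defined in AB 3030, and the statute itself should be consulted for the controlling language.

from dataclasses import dataclass

@dataclass
class PatientCommunication:
    # Hypothetical data model; the statute does not prescribe one.
    generated_by_genai: bool             # produced by a generative AI system
    contains_clinical_info: bool         # conveys clinical information, not purely administrative content
    reviewed_by_licensed_provider: bool  # read and approved by a licensed or certified provider

def ab3030_disclaimer_required(msg: PatientCommunication) -> bool:
    # Returns True when, under the rules summarized above, the message would need
    # the genAI disclaimer plus instructions for reaching a human provider.
    if not msg.generated_by_genai:
        return False
    if not msg.contains_clinical_info:      # administrative functions (e.g., scheduling) are exempt
        return False
    if msg.reviewed_by_licensed_provider:   # provider review exempts the communication
        return False
    return True

# An unreviewed genAI message containing clinical information triggers the disclosure.
print(ab3030_disclaimer_required(PatientCommunication(True, True, False)))  # True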

In addition to AB 3030, Senate Bill 1120, known as the “Physicians Make Decisions Act,” restricts how healthcare service plans may use AI to deny coverage, emphasizing the necessity of human oversight in critical medical decisions [2]. This law mandates that licensed healthcare professionals supervise AI-assisted medical necessity determinations [5], requiring that decisions regarding the approval [5], modification [5], or denial of medical care be reviewed by qualified medical personnel who consider the patient’s medical history and records [5]. It explicitly prohibits exclusive reliance on AI for these decisions [5], addressing concerns about increased denial rates linked to AI analysis of claims and treatment authorizations [5]. AI-assisted decisions must account for individual patient circumstances and clinical histories rather than relying solely on group datasets [5], ensuring that the judgment of licensed healthcare professionals remains paramount.
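As a rough illustration only, not legal guidance, the oversight requirement can be pictured as a gate in a claims workflow: no approval, modification, or denial is finalized without a licensed reviewer who has weighed the individual patient’s records. The data model and function below are hypothetical assumptions, not anything prescribed by SB 1120.

from dataclasses import dataclass
from typing import Optional

@dataclass
class NecessityDetermination:
    # Hypothetical record of an AI-assisted medical necessity determination.
    decision: str                     # "approve", "modify", or "deny"
    reviewed_by: Optional[str]        # licensed professional who reviewed it, or None
    considered_patient_history: bool  # the individual's medical history and records were weighed

def finalize_determination(d: NecessityDetermination) -> str:
    # Refuse to finalize a determination that relies exclusively on AI.
    if d.reviewed_by is None:
        raise ValueError("a licensed healthcare professional must review the determination")
    if not d.considered_patient_history:
        raise ValueError("review must consider the patient's own history, not group data alone")
    return d.decision

# A denial reviewed by a licensed professional against the patient's records can proceed.
print(finalize_determination(NecessityDetermination("deny", "licensed reviewer on file", True)))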

Both pieces of legislation emphasize the importance of transparency and compliance in the use of AI technologies. Organizations using AI are required to undergo periodic audits and compliance reviews to ensure adherence to ethical and legal standards [5]. The performance of AI systems must be regularly evaluated for accuracy and reliability [5], and consumer data used by AI must be protected from misuse in accordance with privacy laws [5]. The California Office of the Attorney General (OAG) has issued a legal advisory outlining the implications of these laws for healthcare providers [5], insurers [5], and AI developers [2] [5], highlighting risks such as potential patient harm, systemic bias [5], and data misuse [5]. The advisory also addresses unlawful practices under laws governing health consumer protection [2], discrimination [2] [5], and patient privacy [2], noting that using AI to deny health insurance claims in ways that contradict doctors’ recommendations may violate applicable regulations [2].

As the adoption of AI technologies in healthcare continues to grow [1] [3] [4], healthcare providers must evaluate whether their systems qualify as genAI as defined by AB 3030. Those that do must adhere to the notice requirements or ensure that communications generated by genAI are reviewed by a licensed provider to qualify for the exemptions [1]. The legislation also prompts healthcare companies to monitor their use of AI technologies closely and to ensure compliance with the evolving legal landscape, particularly in anticipation of an expected increase in AI-related litigation beginning in 2025 [4]. Proponents of these laws argue that they balance technological advancement with consumer protection [5], keeping human oversight integral to medical decision-making and prioritizing patient safety and fairness as technology is integrated into medical practice. Additionally, there are growing concerns about the use of generative AI to draft patient-related documents that could contain misleading information [2], particularly if that information is based on stereotypes related to protected classifications [2]. AI’s role in determining patient access to healthcare is also under scrutiny [2], especially when it relies on historical claims data that may disadvantage certain groups [2], underscoring the need for careful oversight and regulation in this rapidly evolving field.

Conclusion

The enactment of these laws marks a pivotal moment in the integration of AI in healthcare, emphasizing the need for transparency, accountability, and human oversight [2] [5]. By mandating disclosure and supervision, California aims to protect patients from potential risks associated with AI, such as misinformation and biased decision-making. These regulations not only safeguard patient interests but also set a precedent for other states to follow, potentially influencing national standards in AI healthcare applications. As the legal landscape evolves, healthcare providers and AI developers must remain vigilant in ensuring compliance and prioritizing patient safety and ethical practices.

References

[1] https://www.jdsupra.com/legalnews/california-turns-to-the-use-of-ai-in-8404974/
[2] https://www.jdsupra.com/legalnews/california-ag-issues-two-ai-legal-5422113/
[3] https://www.lexology.com/library/detail.aspx?g=d8d2bafd-d95f-415f-bd31-d07dac2c0cca
[4] https://www.bclplaw.com/en-US/events-insights-news/california-turns-to-the-use-of-ai-in-healthcare.html
[5] https://www.thevbpblog.com/california-enacts-law-mandating-oversight-of-ai-in-medical-necessity-decisions/