Introduction

As artificial intelligence (AI) becomes increasingly integrated into healthcare, states are enacting regulations to govern its use, particularly concerning prior authorization for medical services [1]. This legislative focus reflects growing concerns about the implications of AI in healthcare decision-making and patient interactions.

Description

As AI becomes increasingly integrated into healthcare, numerous states are enacting regulations to govern its use, particularly concerning prior authorization for medical services. In 2025, over 250 health-related AI bills were introduced across thirty-four states, reflecting a growing legislative focus on this technology [1]. Notably, in June 2025, Nebraska enacted a law requiring that a physician or qualified healthcare professional (HCP) review any health insurance claim before it can be denied, bringing to six the number of states (Arizona, California, Indiana, Maryland, Nebraska, and North Dakota) that have implemented similar laws, with many more considering comparable measures [2].

These regulations typically encompass several key provisions: they prohibit AI from denying, delaying, or altering healthcare services without HCP review, and they establish criteria for AI use in these processes, ensuring that decisions are based on individual patient data and do not discriminate against specific patient groups [2]. Additionally, certain bills mandate that hospitals and laboratories inform patients when AI is utilized, requiring clear communication about AI’s role in their care [1]. The push for regulation is partly a response to public concerns about the high rate of prior authorization denials, which are often perceived as driven by profit motives rather than medical necessity [2]. Courts have recognized that prior authorization decisions are medical in nature, raising the concern that insurance companies may be practicing medicine without a license when they delegate these decisions to non-HCPs [2].

There are also concerns that AI may be used by insurers to sustain high denial rates, complicating the landscape of medical necessity determinations [2]. New regulations require insurers to provide explanations for their decisions and grant patients the right to appeal [1]. Some states have introduced laws requiring AI chatbots in healthcare to disclose their non-human status in communications with patients: California mandates disclosure for all patient interactions involving AI, while Colorado requires similar transparency for communications with residents [2]. Other states, such as Utah and New Jersey, have specific regulations addressing the use of AI in mental health contexts, ensuring that chatbots do not misrepresent themselves as licensed providers [2].

These state laws do not account for other potential regulatory frameworks, such as the federal Food, Drug, and Cosmetic Act, under which certain AI software could be classified as a medical device [2]. The trend of increasing state regulation in 2025 indicates a significant shift in the governance of AI in medicine, with the potential for future laws to further restrict AI’s role in prior authorization processes [2]. However, these regulations do not exclude AI from medical practice entirely; HCPs may still use AI as a supportive tool for expediting claims approval and enhancing diagnostic and treatment processes, provided that physicians validate AI recommendations before proceeding with patient care [1].

AI chatbots remain permissible, provided that companies comply with the necessary reporting and notification requirements [2]. Federal legislation could also override state-level restrictions on AI in medicine: although a recent bill aimed to impose a 10-year ban on state regulation of AI technologies, that provision was not included in the final version [2]. The possibility remains for Congress to establish a cohesive federal regulatory framework for AI, which could supersede the current patchwork of state laws [2], ultimately enhancing patient safety and fostering trust in AI technologies within the healthcare sector [1].

Conclusion

The increasing regulation of AI in healthcare by states highlights a significant shift in governance, driven by concerns over patient safety and the ethical use of technology. While these regulations aim to ensure that AI supports rather than replaces human decision-making, they also underscore the need for a cohesive federal framework. Such a framework could harmonize state laws, enhance patient safety, and foster trust in AI technologies within the healthcare sector [1].

References

[1] https://digitalchew.com/2025/08/07/states-lead-the-way-in-ai-regulation/
[2] https://www.jdsupra.com/legalnews/will-ai-be-your-new-doctor-probably-not-1903021/