Introduction
The legal challenges surrounding the use of artificial intelligence (AI) in healthcare insurance decisions have come to the forefront, as evidenced by a class action lawsuit against UnitedHealth Group (UHG). This case highlights the tension between state law claims and federal preemption under the Medicare Act, as well as the broader implications of AI-driven decision-making in the healthcare industry.
Description
A class action complaint has been filed against UnitedHealth Group (UHG) by the estates of patients whose post-acute care coverage was terminated, along with UHG Medicare Advantage Plan customers [1]. The complaint asserts various state law claims, including breach of contract [1] [3] [5], breach of the implied covenant of good faith and fair dealing [1] [3] [5], unjust enrichment [5], and bad faith [1]. UHG moved to dismiss the complaint [1], arguing that the Medicare Act preempted these state law claims [1]. However, a US District Court ruled that the Medicare Act did not preempt the claims for breach of contract and breach of the implied covenant of good faith and fair dealing [1], allowing those claims to proceed while dismissing the others [1].
The plaintiffs allege that UHG improperly relied on an AI tool, the nH Predict program developed by its subsidiary NaviHealth, to make coverage determinations for post-acute care [1]. This reliance on AI has reportedly led to unjust denials of medically necessary care based on unrealistically short predicted recovery windows, often contradicting physicians' treatment recommendations [1] [3]. The court noted a striking 90% reversal rate on appeal, raising significant questions about the reliability and transparency of the tool's determinations. Furthermore, the court found that analyzing the breach of contract claims would require only interpretation of contractual terms [1], which the Medicare Act does not regulate [1], and highlighted that UHG had not disclosed its use of AI in these decision-making processes [1].
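To make the alleged failure mode concrete, the sketch below contrasts two ways an algorithmic length-of-stay prediction could drive a termination decision. This is purely illustrative: nH Predict's actual logic has not been publicly disclosed, and every class, field, and threshold here is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class PostAcuteCase:
    """A single post-acute care coverage case (all fields hypothetical)."""
    patient_id: str
    physician_recommended_days: int  # treating physician's recommended stay
    model_predicted_days: int        # algorithm's predicted recovery window
    days_elapsed: int                # days of care already covered


def algorithmic_termination(case: PostAcuteCase) -> bool:
    """Coverage ends once covered days reach the model's prediction,
    regardless of the physician's recommendation: the failure mode
    the plaintiffs allege."""
    return case.days_elapsed >= case.model_predicted_days


def physician_aligned_termination(case: PostAcuteCase) -> bool:
    """Coverage follows the longer of the two estimates, so the model
    can only extend, never shorten, the physician-recommended stay."""
    covered = max(case.model_predicted_days, case.physician_recommended_days)
    return case.days_elapsed >= covered


case = PostAcuteCase("pt-001", physician_recommended_days=100,
                     model_predicted_days=14, days_elapsed=14)
print(algorithmic_termination(case))        # True: cut off at day 14
print(physician_aligned_termination(case))  # False: care continues
```

Under the first policy, a prediction that contradicts the treating physician silently becomes the coverage decision; under the second, the physician's recommendation acts as a floor, which is closer to what the complaint argues the plan terms required.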
The ruling indicates that certain state law claims may remain viable despite the Medicare Act's preemption clause [1], provided they do not interfere with the Act's standards [1]. This raises questions about whether other state law claims may be actionable and whether other US Circuit Courts will align with the Eighth Circuit's interpretation of the preemption clause [1].
As health insurance companies increasingly adopt AI and algorithms [1], ongoing litigation is bringing close scrutiny to the algorithmic processes used to determine coverage for mental health and substance use disorder claims [1]. In a related case, the Ninth Circuit reversed a district court's dismissal of a class action lawsuit against UHG [1], remanding for further proceedings on alleged violations of ERISA and the Mental Health Parity and Addiction Equity Act [1], as well as breaches of health plan terms tied to algorithmic decision-making [1]. Similar claims have emerged against other insurers [3], such as Cigna and Humana [3], which have faced accusations of using algorithms to deny payments based on preset criteria [3]. The potential for wrongful claim denials by health insurers utilizing AI tools raises significant concerns regarding access to necessary healthcare services [5].
In response to growing concerns about the ethical implementation of AI in insurance coverage decisions, the Biden administration has introduced voluntary agreements and an executive order aimed at establishing standards for AI use in healthcare [4]. The Centers for Medicare & Medicaid Services (CMS) has mandated that Medicare Advantage plans consider individual circumstances rather than relying solely on algorithms for coverage determinations [4]. Recent government guidelines likewise emphasize the importance of human intervention in healthcare decisions influenced by algorithms [2].
Ensuring that AI recommendations align with clinical best practices and regulatory standards is essential [3], with clinical oversight maintained so that AI augments rather than replaces human medical judgment [3]. Implementing safeguards [3], such as mandatory human reviews of AI-generated denials [3], could help mitigate potential harm [3]; a sketch of such a review gate follows below. Stronger regulatory compliance is necessary to prevent the misuse of AI in ways that compromise patient care [3], underscoring the need for clearer guidelines from governing bodies like CMS to ensure ethical and fair implementation of AI in coverage determinations. The ongoing legal challenges surrounding AI-driven insurance denials underscore the need for insurers to balance operational efficiency with ethical responsibility [3], ensuring that AI supports rather than hinders patient access to necessary care [3].
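As an illustration of the mandatory-human-review safeguard described above, the following Python sketch shows one way an AI recommendation could be gated so that a denial never becomes final without a documented clinician decision. Neither CMS nor the cited sources prescribe an implementation; every class, field, and status string here is a hypothetical assumption.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIRecommendation:
    """An algorithm's suggested disposition for a claim (hypothetical schema)."""
    claim_id: str
    suggestion: str   # "approve" or "deny"
    rationale: str


@dataclass
class Determination:
    """The final, binding coverage decision."""
    claim_id: str
    decision: str
    decided_by: str


def finalize(rec: AIRecommendation,
             clinician_decision: Optional[str] = None) -> Determination:
    """Gate that prevents an AI 'deny' from becoming final on its own.

    Approvals may issue directly, but a denial requires a documented
    human clinician decision; otherwise the claim is parked for review.
    """
    if rec.suggestion == "approve":
        return Determination(rec.claim_id, "approve", "ai_assisted")
    if clinician_decision is None:
        # No human has weighed in yet: queue for review, do not deny.
        return Determination(rec.claim_id, "pending_human_review", "system")
    # The human reviewer's call, which may overturn the AI, is what binds.
    return Determination(rec.claim_id, clinician_decision, "human_clinician")


rec = AIRecommendation("clm-42", "deny", "predicted recovery window exceeded")
print(finalize(rec).decision)             # pending_human_review
print(finalize(rec, "approve").decision)  # approve: human overrides the AI
```

The design choice is that the system's default path for an AI denial is a review queue rather than a final determination, which is one way to operationalize the CMS expectation that individual circumstances, not the algorithm alone, drive the outcome.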
Conclusion
The ongoing legal scrutiny of AI in healthcare insurance decisions underscores the critical need for a balance between technological advancement and ethical responsibility. As AI becomes more prevalent in determining healthcare coverage, it is imperative for insurers to ensure that these tools enhance rather than impede patient access to necessary care. The evolving legal landscape, coupled with government initiatives, highlights the importance of transparency, human oversight, and adherence to clinical best practices in the deployment of AI in healthcare.
References
[1] https://www.jdsupra.com/legalnews/unitedhealthcare-must-face-state-law-8442700/
[2] https://mtsoln.com/blog/ai-news-727/us-health-insurers-face-pressure-over-ai-role-in-claim-decisions-1633
[3] https://insights.wchsb.com/2025/03/05/ai-in-insurance-balancing-efficiency-and-ethical-responsibility/
[4] https://www.ft.com/content/600e53b6-963b-4c62-9548-b2b98788a950
[5] https://litigationtracker.law.georgetown.edu/litigation/estate-of-gene-b-lokken-the-et-al-v-unitedhealth-group-inc-et-al/