Court Allows Lawsuit Over AI Use in Benefit Denials to Proceed
April 22, 2025
On March 31, 2025, the U.S. District Court for the Eastern District of California allowed a class action to proceed over an insurer's use of an automated, AI-based claims review algorithm called PxDx.
In 2023, Cigna Corporation and Cigna Health and Life Insurance Company (Cigna) were sued in U.S. District Court following the release of a ProPublica report alleging that they relied on the PxDx algorithm to reject plan participants' claims without review by Cigna doctors. The complaint cited ProPublica's reporting that PxDx was used to deny more than 300,000 payment requests over a two-month period in 2022, with Cigna doctors spending an average of 1.2 seconds reviewing each claim. The participants argued that the insurer's use of the PxDx system constituted a fiduciary breach under ERISA because it contradicted the governing health plan terms, which required that medical necessity claims be reviewed by a medical director.
Cigna argued that it acted within its discretionary authority to interpret the plan terms when it used the algorithm and that it sends disclosures to the doctor and participant whenever PxDx is used. The court disagreed, however, finding that the participants had alleged sufficient facts to support their assertion that Cigna's interpretation was an abuse of discretion and that delegating medical necessity decisions to the automated algorithm violated the plan terms. Accordingly, the court allowed certain claims to proceed.
Additionally, the plaintiffs alleged that the defendants' use of PxDx violated California Health & Safety Code § 1367.01(e), which states that "No individual, other than a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues involved in the healthcare services requested by the provider, may deny or modify requests for authorization of healthcare services for an enrollee for reasons of medical necessity." The court held that the plaintiffs had adequately pleaded a violation of this provision under the California Unfair Competition Law and that the state law is not expressly preempted by ERISA.
Employer Takeaway
The court's decision was one of the first rulings in a series of new cases involving health insurers' use of AI-based tools to decide health plan preauthorization requests, benefit claims and appeals. While the defendants asserted that the technology used to evaluate claims is similar to software that other insurers and the Centers for Medicare & Medicaid Services (CMS) have used for years, the case nonetheless highlights the potential liability issues surrounding the use of AI-based tools for medical claims and preauthorization decisions.
A number of state legislatures are currently considering legislation to regulate the use of AI in health insurers' coverage decisions, and a recent U.S. Senate committee report found that the increased adoption of AI-based claims software contributed to a surge in coverage denial rates for post-acute care among the nation's largest insurers. Given the increased scrutiny and legislative activity surrounding AI-based technology in health insurance, employers should familiarize themselves with the issue and carefully evaluate whether certain AI-based services could pose liability risks.
Kisting-Leung v. CIGNA Corp.