Center for Practical AI
Education · AI and Healthcare

The algorithm that denied your claim.

Gene Lokken was 91 years old. After a fall, he was transferred to a rehabilitation facility. UnitedHealthcare's AI system — nH Predict — flagged him for discharge after 10 days. His physicians said he needed more time. The AI disagreed. He was sent home. Three months later he was dead. His family sued.

This is a guide to how AI makes healthcare decisions, what the evidence shows, and what patients can do.

0.4s

The number behind this guide

The algorithm reviewed Gene Lokken's case in 0.4 seconds.

His physicians said he needed more time. The algorithm sent him home. He was 91. He died three months later.

AI Insurance Denials

The algorithm that overrules your doctor.

AI prior authorization tools review millions of claims per day — faster than any human, and with denial rates that have risen as adoption has accelerated.

Gene Lokken

Outcome: The case is part of a wave of litigation against UnitedHealthcare over nH Predict. A 2023 STAT News investigation and a subsequent federal class-action lawsuit allege the algorithm had a roughly 90% error rate for post-acute care claims — about nine in ten of its denials that patients appealed were reversed — and that in 87.5% of reviewed cases, denials went out even though human reviewers had disagreed with the algorithm.

  • 90% — UHC nH Predict post-acute care denials reversed on appeal (2023 investigation and class-action filing)
  • 13% — Medicare Advantage prior authorization requests improperly denied (HHS OIG, 2022)
  • Fewer than 1 in 10 — denied claims that are appealed by patients
  • A majority — external appeals that overturn the insurer's denial

The structural problem

Insurance companies are legally required to make coverage decisions based on clinical evidence — not actuarial models optimized for cost. But enforcement is sparse, the appeals process is deliberately complex, and most patients do not appeal. The math works: deny broadly, pay some reversals, net savings are substantial.

AI has accelerated this dynamic. A human reviewer reading 50 charts a day can be audited, trained, and held accountable. An algorithm processing 300,000 claims a day is harder to examine and easier to defend.

How the Algorithms Work

What AI actually does with your claim.

The algorithms are not reading your medical records the way a physician does. They are scoring patterns.

Prior authorization AI (e.g., nH Predict, EviCore's automated review tools)

How it works

The algorithm receives structured data from your claim — diagnosis codes, procedure codes, length of stay, age, and in some systems, zip code and facility type. It compares your case to population-level statistical patterns and assigns a probability that continued care is 'medically necessary' by the insurer's definition of that term.

The documented problem

Your individual clinical presentation is not the input. Your physician's clinical judgment is not the input. Population statistics are the input. An algorithm trained on historical claims data from a population that was systematically undertreated will systematically undertreat future patients.
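The pipeline described above can be sketched in a few lines. Everything in this sketch — the feature names, weights, and threshold — is invented for illustration; no real vendor's model is public. The point is structural: the function's inputs are codes and counts, never the clinical narrative.

```python
# Hypothetical sketch of a prior-authorization scoring pipeline.
# All weights and thresholds below are invented for illustration.
from dataclasses import dataclass
import math

@dataclass
class Claim:
    diagnosis_code: str      # e.g., an ICD-10 code
    age: int
    days_in_facility: int
    facility_type: str

# Population-derived coefficients (illustrative, not from any real product)
WEIGHTS = {"age": -0.03, "days_in_facility": -0.15}
BIAS = 2.0
APPROVAL_THRESHOLD = 0.5     # below this, the claim is flagged for denial

def score(claim: Claim) -> float:
    """Return a pseudo-probability that continued care is 'medically necessary'.

    Note what is absent: the physician's notes, the patient's actual
    condition. Only structured codes and counts enter the model.
    """
    z = (BIAS
         + WEIGHTS["age"] * claim.age
         + WEIGHTS["days_in_facility"] * claim.days_in_facility)
    return 1 / (1 + math.exp(-z))  # logistic squash to [0, 1]

claim = Claim("S72.001A", age=91, days_in_facility=10, facility_type="SNF")
p = score(claim)
print(f"score={p:.2f}",
      "flag for denial" if p < APPROVAL_THRESHOLD else "continue coverage")
```

Under these invented weights, a 91-year-old ten days into a facility stay scores well below the threshold — not because of anything in the chart, but because age and length of stay alone drive the score down.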

Diagnostic AI (e.g., IDx-DR for diabetic retinopathy, Paige for pathology, Viz.ai for stroke)

How it works

These tools analyze imaging data — retinal scans, pathology slides, CT scans — and flag findings that match patterns in their training data. FDA-cleared diagnostic AI tools are among the most validated applications of AI in healthcare.

The documented problem

Training data demographics matter enormously. Most AI diagnostic tools were trained on datasets dominated by white patients at academic medical centers. A 2019 Google AI analysis of chest X-rays found the algorithm underperformed on Black patients — despite training on a diverse dataset — because the Black patients in the dataset were systematically sicker, creating different baseline patterns.

Clinical decision support AI (embedded in EHR systems)

How it works

Sepsis alerts, readmission risk scores, medication interaction flags — these tools are embedded in electronic health records and surface alerts to clinicians. They are often invisible to patients.

The documented problem

A 2019 Science paper analyzed a widely used commercial algorithm that determined which patients needed extra care management. It turned out to be heavily correlated with healthcare cost — not health need. Because Black patients historically receive less healthcare spending, the algorithm systematically underidentified Black patients as high-need, even when they were sicker.
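A toy example makes the proxy failure concrete. All numbers here are invented; only the mechanism mirrors the published finding: ranking patients by predicted cost, rather than by illness, excludes a group that is sicker but has historically received less spending.

```python
# Toy numbers (invented) illustrating the proxy failure documented in
# Obermeyer et al., Science 2019: ranking by predicted cost underselects
# a group that is sicker but, due to historically lower spending, cheaper.

patients = [
    {"group": "A", "severity": 5,  "annual_cost": 10_000},
    {"group": "A", "severity": 6,  "annual_cost": 11_000},
    {"group": "A", "severity": 7,  "annual_cost": 12_000},
    {"group": "B", "severity": 8,  "annual_cost": 8_000},   # sicker, lower spend
    {"group": "B", "severity": 9,  "annual_cost": 9_000},
    {"group": "B", "severity": 10, "annual_cost": 9_500},
]

def select_for_care_management(key, n=3):
    """Pick the n patients ranked highest on the given field."""
    return sorted(patients, key=lambda p: p[key], reverse=True)[:n]

by_cost = select_for_care_management("annual_cost")  # the proxy actually used
by_need = select_for_care_management("severity")     # what the program intended

print([p["group"] for p in by_cost])  # ['A', 'A', 'A'] — group B excluded
print([p["group"] for p in by_need])  # ['B', 'B', 'B'] — the sickest patients
```

The model is "accurate" by its own objective — it predicts cost well. The harm comes entirely from choosing cost as the stand-in for need.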

Diagnostic AI

Real promise. Real failure modes.

AI diagnostic tools are among the most legitimate applications of AI in healthcare — and among the most consequential when they fail.

Where AI shows genuine clinical promise

  • IDx-DR: FDA-cleared, autonomous detection of diabetic retinopathy in primary care settings — no ophthalmologist required for initial screening. 87.2% sensitivity, 90.7% specificity.
  • Paige Prostate: FDA-approved pathology AI reducing false negatives in prostate cancer diagnosis by detecting subtle findings pathologists miss.
  • Viz.ai: Stroke detection on CT angiography, reducing time-to-treatment by alerting neurology teams before radiologist reads are complete.
  • Sepsis prediction: Epic's sepsis prediction model, when implemented with strict clinical protocols, reduced sepsis mortality at some institutions — though results are highly implementation-dependent, and an external validation (JAMA Internal Medicine, 2021) found the model missed roughly two-thirds of sepsis cases at its default threshold.

Documented failure modes

  • Skin cancer AI: Tools trained on dermatology datasets that underrepresent darker skin tones consistently underperform for Black and brown patients — who already face diagnostic delays for skin conditions.
  • Pulse oximeters: FDA cleared, but decades of research show they overestimate oxygen levels in Black patients due to sensor design. EHR-integrated AI built on this data inherited the bias.
  • Mental health prediction: Crisis prediction algorithms trained on historical police contact data — which reflects racial disparities in policing — overpredict risk for Black patients.
  • Pain management AI: Studies show AI systems recommend lower pain medication doses for Black patients, replicating documented physician bias in the training data.

Who Bears the Risk

The burden is not distributed equally.

AI healthcare tools perform differently across racial, economic, and geographic populations — and the populations with least access to alternatives bear the most concentrated risk.

  • Lower-income adults are more likely to face prior authorization denials (KFF, 2023)
  • AI diagnostic tools underperform at higher rates on darker skin tones (JAMA, 2022)
  • Patients affected by the racially biased clinical algorithm analyzed in Obermeyer et al., Science 2019
  • Medicare Advantage enrollees — disproportionately elderly and chronically ill — who face AI prior authorization review

Annette Amick

Outcome: Her case was among dozens documented in a 2023 congressional investigation into Medicare Advantage AI denials. The investigation found that several major MA plans had denial rates 4–12x higher than traditional Medicare for the same conditions.

What You Can Do

Action for every level of influence.

1

For yourself

  • If a claim is denied, always appeal. Fewer than 1 in 10 denials are appealed — and a majority of appealed claims are reversed.
  • Request the specific clinical criteria used in any denial. You are entitled to them under ERISA or the ACA. Without them, you cannot effectively appeal.
  • Ask your physician if an AI or automated review system was used. In California, Colorado, and Texas, disclosure is legally required.
2

For a family member

  • If a family member is navigating a denial alone, offer to help. The appeal process is deliberately complex — a second person reviewing deadlines and requirements dramatically improves outcomes.
  • Contact a patient advocate. Patient Advocate Foundation provides free case management for complex insurance denials. NAMI HelpLine can help with mental health coverage.
  • If an AI-generated denial involves a life-threatening condition, request expedited external review — the insurer must respond within 72 hours by law.
3

For an organization

  • Healthcare providers: if you believe an insurer's AI denial system is producing systematically inaccurate outcomes, file a complaint with your state insurance commissioner and document the pattern.
  • Employer benefits administrators: review your plan's prior authorization vendor. Ask whether AI review systems are used, what their denial rates are by diagnosis, and whether demographic disparities have been audited.
  • Patient advocacy organizations: join the coalition supporting the PARAM Act (Prior Authorization Reform and Modernization) and state-level AI prior authorization legislation.
4

For policy

  • Support the PARAM Act (federal): requires insurers to report AI system denial rates, conduct demographic impact audits, and provide disclosure to patients.
  • Support state AI prior authorization laws: California SB 1120, Colorado SB 21-199, and Texas HB 2453 require disclosure and limit fully automated denials without clinical review.
  • Advocate for CMS enforcement of Medicare Advantage prior authorization violations — the OIG found that 13% of the denied prior authorization requests it reviewed met Medicare coverage rules and would have been approved under traditional Medicare.

For Educators

Teaching AI and healthcare to students or patients?

Facilitation guide for the denial navigator, discussion questions on algorithmic accountability in healthcare, and patient advocacy exercises.

Educator Guide →

Want CPAI to deliver patient rights training to your community?

We work with patient advocacy organizations, community health centers, and legal aid organizations.