The algorithm that denied your claim.
Gene Lokken was 91 years old. After a fall, he was transferred to a rehabilitation facility. UnitedHealthcare's AI system — nH Predict — flagged him for discharge after 10 days. His physicians said he needed more time. The AI disagreed. He was sent home. Three months later he was dead. His family sued.
This is a guide to how AI makes healthcare decisions, what the evidence shows, and what patients can do.
The number behind this guide
The algorithm reviewed Gene Lokken's case in 0.4 seconds.
The algorithm that overrules your doctor.
AI prior authorization tools review millions of claims per day — faster than any human, and with denial rates that have risen as adoption has accelerated.
- UHC nH Predict denial rate for post-acute care (ProPublica, 2023)
- 13% of Medicare Advantage prior authorization requests improperly denied (HHS OIG, 2022)
- Fewer than 1 in 10 denied claims are appealed by patients
- 40–60% of external appeals overturn the insurer's denial
The structural problem
Insurance companies are legally required to make coverage decisions based on clinical evidence — not actuarial models optimized for cost. But enforcement is sparse, the appeals process is deliberately complex, and most patients do not appeal. The math works: deny broadly, pay some reversals, net savings are substantial.
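To make that incentive concrete, here is a back-of-the-envelope sketch in Python. The appeal and overturn rates are the figures cited elsewhere in this guide; the batch size and average claim cost are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope economics of broad denial.
# Assumed: batch size and average claim cost. From this guide:
# fewer than 1 in 10 denials are appealed; 40-60% of external
# appeals overturn the denial (midpoint used here).
denied_claims = 1_000        # hypothetical batch of denials
avg_claim_cost = 10_000      # assumed average claim cost, USD
appeal_rate = 0.10           # fewer than 1 in 10 denials appealed
overturn_rate = 0.50         # midpoint of the 40-60% range

paid_back = denied_claims * appeal_rate * overturn_rate * avg_claim_cost
net_avoided = denied_claims * avg_claim_cost - paid_back

print(f"paid back after appeals: ${paid_back:,.0f}")    # $500,000
print(f"net avoided payouts:     ${net_avoided:,.0f}")  # $9,500,000
```

Even with every reversal paid in full, 95% of the denied dollars stay denied under these assumptions.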
AI has accelerated this dynamic. A human reviewer reading 50 charts a day can be audited, trained, and held accountable. An algorithm processing 300,000 claims a day is harder to examine and easier to defend.
What AI actually does with your claim.
The algorithms are not reading your medical records the way a physician does. They are scoring patterns.
Prior authorization AI (e.g., nH Predict, EviCore)
How it works
The algorithm receives structured data from your claim — diagnosis codes, procedure codes, length of stay, age, and in some systems, zip code and facility type. It compares your case to population-level statistical patterns and assigns a probability that continued care is 'medically necessary' by the insurer's definition of that term.
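No vendor publishes these models, so the following is only a minimal sketch of the pattern-scoring approach described above. The field names, baseline table, scoring formula, and threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    diagnosis_code: str   # ICD-10
    procedure_code: str   # CPT/HCPCS
    length_of_stay: int   # days of care so far
    age: int

# Hypothetical population baseline: median approved length of stay
# per diagnosis, learned from historical claims, not from this patient.
BASELINE_LOS = {"S72.001A": 10, "I63.9": 14}  # illustrative values only

def continued_care_score(claim: Claim) -> float:
    """Score 0-1: the model's probability-like estimate that continued
    care is 'medically necessary' by the insurer's definition. Note what
    is absent: the chart, the physician's notes, the patient themselves."""
    baseline = BASELINE_LOS.get(claim.diagnosis_code, 7)
    overage = max(0, claim.length_of_stay - baseline)  # days past the norm
    return max(0.0, 1.0 - 0.2 * overage)

claim = Claim("S72.001A", "27236", length_of_stay=13, age=91)
if continued_care_score(claim) < 0.5:
    print("flagged for discharge")  # fires here: 3 days past baseline
```

The point of the sketch is what the function signature admits: the only inputs are codes and counts.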
The documented problem
Your individual clinical presentation is not the input. Your physician's clinical judgment is not the input. Population statistics are the input. An algorithm trained on historical claims data from a population that was systematically undertreated will systematically undertreat future patients.
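A toy simulation, with fabricated numbers, shows how that feedback loop works: a model that learns "typical approved stay" from historically undertreated records reproduces the undertreatment at inference time.

```python
import statistics

# Fabricated historical approvals: one group was systematically
# undertreated. The "model" learns a per-group cutoff from medians.
historical_approved_days = {
    "group_a": [14, 15, 13, 16, 14],  # historically fully treated
    "group_b": [8, 9, 7, 8, 9],       # historically undertreated
}
cutoff = {g: statistics.median(d) for g, d in historical_approved_days.items()}

# Two clinically identical patients, each needing 14 days of care.
needed = 14
for group in historical_approved_days:
    decision = "approve" if needed <= cutoff[group] else f"deny past day {cutoff[group]}"
    print(group, "->", decision)
# group_a -> approve
# group_b -> deny past day 8
```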
Diagnostic AI (e.g., IDx-DR for diabetic retinopathy, Paige for pathology, Viz.ai for stroke)
How it works
These tools analyze imaging data — retinal scans, pathology slides, CT scans — and flag findings that match patterns in their training data. FDA-cleared diagnostic AI tools are among the most validated applications of AI in healthcare.
The documented problem
Training data demographics matter enormously. Most AI diagnostic tools were trained on datasets dominated by white patients at academic medical centers. A 2019 Google AI analysis of chest X-rays found the algorithm underperformed on Black patients — despite training on a diverse dataset — because the Black patients in the dataset were systematically sicker, creating different baseline patterns.
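The standard way to surface this failure mode is a per-subgroup performance audit rather than a single aggregate metric. A minimal sketch with fabricated predictions:

```python
from collections import defaultdict

# Fabricated (subgroup, true_label, model_prediction) records.
records = [
    ("white", 1, 1), ("white", 1, 1), ("white", 0, 0), ("white", 1, 1),
    ("black", 1, 0), ("black", 1, 1), ("black", 0, 0), ("black", 1, 0),
]

# Per-subgroup sensitivity: of the truly sick, how many were caught?
tp, positives = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        tp[group] += int(pred == 1)

for group in positives:
    print(f"{group}: sensitivity = {tp[group] / positives[group]:.2f}")
# white: 1.00, black: 0.33 -- the pooled figure of 0.67 hides the gap
```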
Clinical decision support AI (embedded in EHR systems)
How it works
Sepsis alerts, readmission risk scores, medication interaction flags — these tools are embedded in electronic health records and surface alerts to clinicians. They are often invisible to patients.
The documented problem
A 2019 Science paper analyzed a widely used commercial algorithm that determined which patients needed extra care management. It turned out to be heavily correlated with healthcare cost — not health need. Because Black patients historically receive less healthcare spending, the algorithm systematically underidentified Black patients as high-need, even when they were sicker.
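The core mechanism is the choice of proxy label. A toy illustration with fabricated patients shows how ranking by historical cost rather than clinical need demotes exactly the patients who were sick but underspent on:

```python
# Fabricated patients: (id, chronic_conditions, historical_spend_usd).
# Ranking by cost, the proxy label, demotes the sickest patient.
patients = [
    ("patient_1", 5, 4_000),  # very sick, low historical spending
    ("patient_2", 2, 9_000),  # less sick, high historical spending
    ("patient_3", 4, 5_000),
]

top_by_cost = max(patients, key=lambda p: p[2])
top_by_need = max(patients, key=lambda p: p[1])

print("flagged by cost proxy:", top_by_cost[0])   # patient_2
print("flagged by actual need:", top_by_need[0])  # patient_1
```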
Real promise. Real failure modes.
AI diagnostic tools are among the most legitimate applications of AI in healthcare — and among the most consequential when they fail.
Where AI shows genuine clinical promise
- ✓IDx-DR: FDA-cleared, autonomous detection of diabetic retinopathy in primary care settings; no ophthalmologist is required for initial screening. 87.2% sensitivity, 90.7% specificity (see the worked example after this list for what those figures mean in practice).
- ✓Paige Prostate: FDA-approved pathology AI reducing false negatives in prostate cancer diagnosis by detecting subtle findings pathologists miss.
- ✓Viz.ai: Stroke detection on CT angiography, reducing time-to-treatment by alerting neurology teams before radiologist reads are complete.
- ✓Sepsis prediction: Epic's sepsis prediction model, when implemented with strict clinical protocols, reduced sepsis mortality at some institutions — though results are highly implementation-dependent.
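To put the IDx-DR figures above in context: sensitivity and specificity only translate into predictive values once you fix a prevalence. The 20% prevalence below is an assumed value for a screening population, chosen for illustration.

```python
# IDx-DR pivotal-trial figures cited above; prevalence is assumed.
sens, spec, prev = 0.872, 0.907, 0.20

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV: {ppv:.1%}")  # ~70.1%: a positive screen needs confirmation
print(f"NPV: {npv:.1%}")  # ~96.6%: a negative screen is strong reassurance
```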
Documented failure modes
- ✕Skin cancer AI: Tools trained on dermatology datasets that underrepresent darker skin tones consistently underperform for Black and brown patients — who already face diagnostic delays for skin conditions.
- ✕Pulse oximeters: FDA cleared, but decades of research show they overestimate oxygen levels in Black patients due to sensor design. EHR-integrated AI built on this data inherited the bias.
- ✕Mental health prediction: Crisis prediction algorithms trained on historical police contact data — which reflects racial disparities in policing — overpredict risk for Black patients.
- ✕Pain management AI: Studies show AI systems recommend lower pain medication doses for Black patients, replicating documented physician bias in the training data.
The burden is not distributed equally.
AI healthcare tools perform differently across racial, economic, and geographic populations — and the populations with least access to alternatives bear the most concentrated risk.
- Lower-income adults are more likely to face prior authorization denials (KFF, 2023)
- AI diagnostic tools underperform at a higher rate on darker skin tones (JAMA, 2022)
- Patients affected by a racially biased clinical algorithm (Obermeyer et al., Science 2019)
- Medicare Advantage enrollees, disproportionately elderly and chronically ill, face AI prior authorization review
Your rights. The gaps.
Federal law gives patients significant appeal rights. State law is adding AI-specific protections. Enforcement is uneven.
Federal rights you have now
- ✓ACA §2719: Right to internal appeal and independent external review for all ACA-compliant plans. External reviews are free. External reviewers overturn 40–60% of denials.
- ✓ERISA §503: Employer-sponsored plans must provide specific denial reasons and access to documents used in the denial. You can sue in federal court for benefits wrongly denied.
- ✓Medicare Advantage: CMS regulations require MA plans to approve all requests that would be approved under traditional Medicare. OIG found they are frequently not doing so — but enforcement is weak.
- ✓No Surprises Act (effective 2022): Prohibits surprise bills for emergency care and for non-emergency care at in-network facilities when you did not affirmatively choose an out-of-network provider.
AI-specific state legislation
- →California SB 1120 (2024): Prohibits fully automated prior authorization denials for certain medical services — a licensed physician must review. Insurers must disclose whether AI was used.
- →Colorado SB 21-169 (2021): Requires health insurers to disclose use of external data sources and algorithms in coverage decisions.
- →Texas HB 2453 (2023): Requires disclosure of AI-assisted prior authorization decisions and creates physician peer-to-peer review rights.
- →Federal: PARAM Act (proposed): Would require insurers to report AI denial rates, audit for demographic disparities, and provide individualized disclosure to patients.
The enforcement gap
The gap between legal requirements and insurer behavior is significant. The HHS Office of Inspector General documented that Medicare Advantage plans denied 13% of requests that should have been approved, yet regulators imposed minimal penalties. State insurance commissioners vary widely in AI-specific enforcement capacity. The primary mechanism for patient vindication is individual appeal, not systemic enforcement. This is why the external review rate, meaning how often patients actually use the appeals process, is the most important number in healthcare AI accountability.
Action for every level of influence.
For yourself
- If a claim is denied, always appeal. Fewer than 1 in 10 denials are appealed — and a majority of appealed claims are reversed.
- Request the specific clinical criteria used in any denial. You are entitled to this under ERISA or ACA. Without it, you cannot effectively appeal.
- Ask your physician if an AI or automated review system was used. In California, Colorado, and Texas, disclosure is legally required.
For a family member
- If a family member is navigating a denial alone, offer to help. The appeal process is deliberately complex — a second person reviewing deadlines and requirements dramatically improves outcomes.
- Contact a patient advocate. Patient Advocate Foundation provides free case management for complex insurance denials. NAMI HelpLine can help with mental health coverage.
- If an AI-generated denial involves a life-threatening condition, request expedited external review — the insurer must respond within 72 hours by law.
For an organization
- Healthcare providers: if you believe an insurer's AI denial system is producing systematically inaccurate outcomes, file a complaint with your state insurance commissioner and document the pattern.
- Employer benefits administrators: review your plan's prior authorization vendor. Ask whether AI review systems are used, what their denial rates are by diagnosis, and whether demographic disparities have been audited.
- Patient advocacy organizations: join the coalition supporting the PARAM Act (Prior Authorization Reform and Modernization) and state-level AI prior authorization legislation.
For policy
- Support the PARAM Act (federal): requires insurers to report AI system denial rates, conduct demographic impact audits, and provide disclosure to patients.
- Support state AI prior authorization laws: California SB 1120, Colorado SB 21-169, and Texas HB 2453 require disclosure and limit fully automated denials without clinical review.
- Advocate for CMS enforcement of Medicare Advantage prior authorization violations — the OIG found MA plans denied 13% of requests that would have been approved under traditional Medicare.
For educators
Teaching AI and healthcare to students or patients?
We offer a facilitation guide for the denial navigator, discussion questions on algorithmic accountability in healthcare, and patient advocacy exercises.
Patient rights & further reading.
Want CPAI to deliver patient rights training to your community?
We work with patient advocacy organizations, community health centers, and legal aid organizations.