A computer made the call. A Black man went to jail.
Robert Williams was handcuffed in his own driveway, in front of his wife and daughters, who watched in tears. The crime he was accused of: shoplifting watches from a Detroit store in 2018. The evidence: a blurry surveillance still. A facial recognition algorithm matched it to Williams. A detective showed a store employee the photo. She said yes. Williams was held for 30 hours. He looked at the surveillance photo and told the detective: 'I hope y'all don't think all Black people look alike.' His ACLU lawsuit resulted in a $300,000 settlement in June 2024 and — by civil rights advocates' account — the strongest police facial recognition policy in the United States. Detroit police facial recognition use dropped 91% after the settlement.
His case was the first publicly reported wrongful arrest caused by a facial recognition false match. As of 2026, there have been at least 14 documented cases. Every one where race is known involved a Black individual.
The number behind this guide
Same crime. Same record. Different future.
False positive rate: 44.9% for Black defendants vs. 23.5% for white defendants. ProPublica, 2016. Still in use.
Interactive Tool
"The Score" — COMPAS Risk Score Simulator
Build a defendant profile using the same inputs as the COMPAS algorithm, generate a risk score, and see how the false positive rate differs by race for the same score — using ProPublica's dataset of 10,000+ real cases from Broward County, Florida.
Open the simulator →
What the simulator covers
- COMPAS inputs: age, arrests, employment, family history
- Visual risk score dial (1-10)
- Disparate false positive rates by race from real ProPublica data (a code sketch of this comparison follows the list)
- The Chouldechova impossibility theorem — visual proof
- Cross-domain: same logic in housing, credit, and healthcare
- Facilitator mode with discussion questions
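The false positive comparison at the heart of the simulator can also be reproduced directly from ProPublica's public dataset (linked in the sourcing note at the bottom of this page). A minimal sketch, assuming the column names used in the repository's compas-scores-two-years.csv file and ProPublica's convention that decile scores of 5 and above count as a "high risk" label:

```python
# Sketch: reproduce the false positive rate gap from ProPublica's public
# COMPAS dataset (github.com/propublica/compas-analysis).
# Assumes the column names in compas-scores-two-years.csv (race,
# decile_score, two_year_recid); verify against the repo before relying on it.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

# ProPublica's convention: decile scores 5-10 count as a "high risk" label.
df["labeled_high_risk"] = df["decile_score"] >= 5

for group in ["African-American", "Caucasian"]:
    subset = df[df["race"] == group]
    # False positive rate: among people who did NOT reoffend within two
    # years, the share the tool nonetheless labeled high risk.
    non_reoffenders = subset[subset["two_year_recid"] == 0]
    fpr = non_reoffenders["labeled_high_risk"].mean()
    print(f"{group}: false positive rate = {fpr:.1%}")
```

Run against the full file, the two printed rates should land close to the 44.9% and 23.5% figures cited above.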
When algorithms arrest the wrong person.
14 documented wrongful arrests in the United States caused by facial recognition false matches. In every case where race is known, the defendant was Black.
Factor by which false positive rates vary across demographic groups in NIST's official facial recognition evaluation.
Share of the Labeled Faces in the Wild dataset — a standard facial recognition benchmark dataset — that is white. Underrepresentation in the data translates into higher error rates on other faces.
Analysis of LFW dataset demographics
Misidentification rate admitted by the Detroit police chief in testimony — for a tool that was still being used to initiate arrests.
Detroit Police testimony, 2019
Kimberlee Williams
Oklahoma · June 2021
Arrested at a military base while accompanying her daughter on a DoorDash delivery — because a Maryland police department had obtained a warrant based on a facial recognition match. She had no ties to Maryland. She spent six months in multiple Maryland jails before all charges were dismissed.
Porcha Woodruff
Detroit · February 2023 · 8 months pregnant
Arrested at home — eight months pregnant — with six officers at her door at 8 a.m., accused of carjacking. The 2015 mugshot used in the lineup was eight years old. Charges were dropped for insufficient evidence.
Randal 'Quran' Reid
Georgia · November 2022
Arrested on the way to Thanksgiving dinner — for crimes committed in Louisiana, a state he says he has never visited. The algorithm used: Clearview AI, a tool that scraped billions of social media images without consent. He spent six days in jail before charges were dropped.
The accountability gap
Only one of 52 agencies studied by Georgetown Law obtained legislative approval before using facial recognition. Not one required a warrant to run a facial recognition search. The technology deployed nationwide with no regulatory framework and no requirement for corroborating evidence before a match could trigger an arrest.
Race was not in the formula. The outcome was racialized anyway.
COMPAS is a proprietary algorithm used in bail, sentencing, and parole decisions across multiple jurisdictions. ProPublica analyzed 10,000+ cases and found its false positive rates are nearly double for Black defendants.
Among Black defendants who did not reoffend, the share COMPAS labeled 'high risk.'
Among white defendants who did not reoffend, the share COMPAS labeled 'high risk.'
The same tool. The same score. Among defendants who never went on to reoffend, Black defendants were almost twice as likely as white defendants to have been labeled "high risk." Northpointe (COMPAS's maker) argued the tool was fair by a different mathematical definition: a given score predicts reoffense about equally well for Black and white defendants. Both claims are true at the same time, and the two definitions cannot be reconciled.
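The arithmetic behind that irreconcilability comes from Chouldechova's 2017 result (cited in the sourcing note below). A sketch of the relation, in standard confusion-matrix terms:

```latex
% p   = prevalence: the share of a group that actually reoffends
% PPV = positive predictive value of a "high risk" label
% FPR = false positive rate, FNR = false negative rate
\mathrm{FPR} = \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\left(1-\mathrm{FNR}\right)
```

If two groups have different base rates p, then holding PPV equal across groups (Northpointe's definition of fairness) forces the error rates on the right-hand side to differ (ProPublica's definition of unfairness). No scoring tool can satisfy both when base rates differ.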
State v. Loomis (2016)
The Wisconsin Supreme Court upheld COMPAS in sentencing even though the defendant could not examine the proprietary algorithm that contributed to his sentence. COMPAS remains in use in multiple jurisdictions.
What COMPAS actually asks (selected questions)
- How many times have you been arrested?
- How old were you when you were first arrested?
- Has a family member ever been imprisoned?
- How many times have you failed to appear in court?
- Are you currently employed?
- What is your highest education level?
"Race" is not on this list. But prior arrests encode policing patterns. Family incarceration encodes structural disadvantage. These are proxies — and their correlations with race in U.S. data are well-documented.
The feedback loop that feeds itself.
Predictive policing tools use historical arrest data to predict future crime. But the data carries the history of who has always been policed.
The Markup's 2023 investigation of PredPol (now Geolitica) found that, for equivalent predicted crime, Black-majority neighborhoods were sent police twice as often as comparable white-majority neighborhoods. Geolitica's prediction accuracy in Plainfield, NJ: less than 1%.
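The loop itself can be sketched in a few lines. The simulation below is stylized and assumes nothing about any vendor's internals (which are not public): two areas with identical true crime rates, patrols dispatched to whichever area has the larger arrest record, and only patrolled crime getting recorded.

```python
# Stylized sketch of the feedback loop described above: two areas with the
# SAME true crime rate, but patrols go wherever the arrest record is largest
# and only patrolled crime gets recorded. Not a model of PredPol/Geolitica.
import numpy as np

rng = np.random.default_rng(1)
true_rate = [10.0, 10.0]              # identical underlying weekly crime in both areas
recorded = np.array([12.0, 8.0])      # area 0 starts with a slightly larger arrest record

for week in range(52):
    target = int(np.argmax(recorded))                     # patrol the area with more records
    recorded[target] += rng.poisson(true_rate[target])    # only patrolled crime is recorded

print("recorded incidents after a year:", recorded)
# The small initial gap in the record grows into near-total concentration of
# patrols, even though the true crime rates never differed.
```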
ShotSpotter: accuracy by the numbers
ShotSpotter and the murder charge
Chicago · 2021
A ShotSpotter alert led Chicago police to a location where they found Safarain Herring. He was taken to the hospital and later died. Prosecutors initially built a murder case against the accused man partly on ShotSpotter evidence, then moved to drop it when that evidence came under scrutiny.
The bias isn't only in criminal justice.
The same proxy-variable pattern produces algorithmic discrimination in healthcare, housing, lending, and employment.
People per year processed by the class of healthcare algorithm that Obermeyer's team found reduced Black patients' access to care by more than half. After the fix: an 84% reduction in racial bias.
Arkansas and Idaho Medicaid
Both states deployed algorithms to calculate care hours for Medicaid recipients. Both algorithms had coding errors that failed to account for conditions like cerebral palsy and diabetes. Courts in both states found due process violations. The Arkansas algorithm was abandoned after litigation; the Idaho algorithm affected 4,000+ people.
Facebook/Meta housing ads
The delivery algorithm — not just advertisers' targeting choices — skewed who saw housing ads by race. A 2022 DOJ settlement was the first consent decree targeting a machine learning ad delivery algorithm's discriminatory outputs.
Rite Aid facial recognition
Deployed in approximately 200 stores, predominantly in Black, Latino, and Asian neighborhoods. The FTC banned Rite Aid from using facial recognition for five years — the first FTC enforcement action for 'algorithmic unfairness.'
Mortgage lending AI
One documented study of LLM-based mortgage AI found that, with identical financial profiles, white applicants were 8.5% more likely to be approved. Urban Institute: Black and Brown borrowers are more than twice as likely to be denied a loan as white applicants with equivalent credit characteristics.
For the intersection of AI-driven bias and AI writing detection in schools, see our guide on AI in Schools — Writing Detection. For AI-driven hiring bias specifically, see AI in Hiring and the Workplace.
Where to file complaints
- EEOC: Employment discrimination — publicportal.eeoc.gov · 180-day window
- HUD: Housing discrimination — hud.gov/program_offices/fair_housing_equal_opp/online-complaint
- CFPB: Credit / financial products — consumerfinance.gov/complaint
- FTC: Consumer protection / deceptive practices — reportfraud.ftc.gov
- DOJ Civil Rights: Federal civil rights violations — civilrights.justice.gov
Removing 'race' from the data doesn't remove racial outcomes.
Zip code, credit history, school name, employment history — these variables encode the history of segregation and discrimination. Algorithms trained on them learn that history.
Algorithmic redlining is the practice, documented across mortgage lending, housing ad delivery, and credit scoring, in which algorithms reach racially discriminatory outcomes despite never explicitly using race — because the variables they do use (neighborhood, credit history, education, employment history) are themselves products of decades of explicitly discriminatory policy.
Legal frameworks (as of 2026)
- NYC Local Law 144: annual independent bias audits of automated employment decision tools, public posting of results, candidate notice, and an alternative process option (the audit arithmetic is sketched after this list).
- Illinois AI Video Interview Act: notification, consent, and demographic data sharing requirements for AI-analyzed interviews.
- Colorado AI Act (SB 24-205, effective February 1, 2026): 'reasonable care' requirement across high-risk AI domains including housing, credit, employment, and healthcare. Consumers have the right to appeal algorithmic decisions and correct inaccurate data.
- EU AI Act: real-time facial recognition in public spaces banned (with narrow exceptions); criminal risk assessment, benefits eligibility, and credit scoring designated 'high risk' — documentation, accuracy standards, and human oversight required.
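For a sense of what the Local Law 144 audits mentioned above actually report, here is a minimal sketch of the selection-rate and impact-ratio arithmetic, using a hypothetical applicant log; real audits also break results out by sex and by intersectional categories.

```python
# Minimal sketch of the selection-rate / impact-ratio arithmetic that
# NYC Local Law 144 bias audits report for automated hiring tools.
# The applicant log below is hypothetical.
import pandas as pd

log = pd.DataFrame({
    "race":     ["White", "White", "Black", "Black", "Black", "Asian", "Asian", "White"],
    "selected": [1,        1,       0,       1,       0,       1,       0,       0],
})

selection_rates = log.groupby("race")["selected"].mean()
impact_ratios = selection_rates / selection_rates.max()   # ratio vs. the most-selected group

print(pd.DataFrame({"selection_rate": selection_rates, "impact_ratio": impact_ratios}))
```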
The black-box problem and what communities can do.
91% of federal AI systems flagged as affecting civil rights received compliance extensions because required safeguards were not in place.
Of 227 federal AI systems flagged as affecting civil rights or safety, the share that received compliance extensions because required safeguards were not in place.
NTIA, ~2023
The central problem with algorithmic accountability is trade secret protection. Northpointe refused to disclose COMPAS's formula to defendants whose sentences were influenced by it. The Wisconsin Supreme Court upheld that refusal. What cannot be examined cannot be challenged.
What communities can do: four tiers of action
Learn: what tools are being used in your community (1 hour).
- Run "The Score" simulator at cpai.org/education/algorithmic-bias/simulator — experience COMPAS from the inside.
- Look up whether your city or county uses predictive policing tools: the Brennan Center's database and The Markup's reporting are starting points.
- Look up your employer: does the company use automated hiring tools? NYC Local Law 144 requires employers to publicly post bias audit results for automated employment decision tools.
Act: file a FOIA request (1 day).
- Submit a public records request to your local police department asking: (1) Do you use facial recognition? Which vendor? (2) Do you use predictive policing tools? (3) Has the department conducted a bias audit?
- ACLU facial recognition FOIA template: aclu.org/resources/facial-recognition-foia-template
- If a government algorithm made a decision about your benefits, housing, or healthcare: you may have the right to appeal under Colorado SB 24-205 (effective February 1, 2026) or analogous state law.
Engage: bring it to the policy table (1 month).
- Attend a city council or county commission meeting to ask about algorithmic tool procurement. Ask: what bias audits are required? Who reviews results?
- In New York City: use the publicly posted AEDT audit results (required under Local Law 144) to evaluate employers who claim to use automated screening.
- File a discrimination complaint: EEOC for employment (publicportal.eeoc.gov), HUD for housing (hud.gov/fair_housing), CFPB for credit (consumerfinance.gov/complaint).
Organize: connect with the movement.
- Support the Lawyers' Committee for Civil Rights Under Law's AI Civil Rights Act campaign.
- Support the ACLU's facial recognition litigation, and connect with the Algorithmic Justice League (ajl.org), AI Now Institute, Brennan Center, and NAACP Legal Defense Fund.
- Participate in comparative audit testing: submit identical applications with different names or zip codes to document disparate treatment (a tallying sketch follows this list).
- Engage your state legislature: the Colorado AI Act is a model — 30 states have introduced similar legislation.
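Comparative audit testing, mentioned two items up, is easier to act on with a concrete tally. A minimal sketch, assuming a hypothetical log of paired applications that differ only in the applicant name used:

```python
# Minimal sketch for tallying comparative audit tests: paired applications
# identical except for the applicant name, logged with their outcomes.
# The data below is hypothetical; real tests need many pairs before a gap
# counts as evidence rather than noise.
import pandas as pd

pairs = pd.DataFrame({
    "name_variant": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "approved":     [1,   0,   1,   1,   1,   0,   0,   0],
})

rates = pairs.groupby("name_variant")["approved"].mean()
print(rates)
print("approval gap:", round(rates["A"] - rates["B"], 2))
```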
For Educators
Facilitating "The Score" and teaching algorithmic fairness
Running the COMPAS simulator with students who have criminal justice exposure requires deliberate framing. The educator guide covers that, plus how to teach the Chouldechova impossibility theorem to non-technical audiences and where to find Joy Buolamwini's curriculum resources.
Go to the Educator Guide →
Where to learn more.
How we sourced this page
Every statistic on this page comes from: ProPublica's COMPAS analysis (public dataset, github.com/propublica/compas-analysis), NIST's official facial recognition evaluation program (FRVT), Georgetown Law's "Perpetual Line-Up" report, peer-reviewed research (Obermeyer et al., Science 2019; Chouldechova, Big Data 2017), MacArthur Justice Center, The Markup, and official government sources (FTC, DOJ, NTIA).
Want CPAI education resources for your community?
We partner with districts, libraries, and nonprofits to bring this research into classrooms and community spaces in Durham and beyond.