Center for Practical AI

Algorithmic Bias: Educator & Facilitator Guide

For high school and college instructors, public policy educators, legal aid trainers, and community advocates. Covers facilitation of The Score simulator, discussion questions on fairness and the impossibility theorem, and an audit-based policy advocacy exercise.

Simulator Facilitation

How to run The Score in a group.

The simulator is designed to surface the impossibility theorem through experience rather than explanation.

Setup (5 minutes)

  • Open the simulator at cpai.org/education/algorithmic-bias/simulator. Enable Facilitator Mode using the toggle in the upper right.
  • Do NOT introduce the Chouldechova theorem before running the simulator. Let participants discover the disparity first — the theorem is easier to grasp after the experience.
  • Frame it: "You're going to build a risk score for two hypothetical defendants using the same inputs judges receive. Make the most accurate prediction you can."
  • Run two profiles: one with a college education and stable employment, one without. Hold all criminal history constant. Compare scores.

Discussion questions — Step 1 (inputs)

  • "Which of these questions surprised you? Which did you expect?" — Surfaces hidden assumptions about what 'relevant' information looks like in criminal justice.
  • "If you were designing this tool, which inputs would you remove? What would happen to predictive accuracy?" — Introduces the proxy problem.
  • "Who decides which factors go in? Who is accountable if the factors turn out to correlate with race?" — Opens the governance question.
  • "Does the defendant see this questionnaire? Can they dispute the answers?" — State v. Loomis: the answer in Wisconsin was no.

Discussion questions — Step 3 (fairness analysis)

  • "The tool correctly predicts recidivism at about the same rate for both groups. Is that fair?" — Predictive parity vs. error rate parity.
  • "Group A has a false positive rate of 44.9%. Group B has 23.5%. Who is more likely to sit in jail awaiting trial for a crime they won't commit?" — Makes the human cost concrete.
  • "Can we fix both problems at the same time?" — Now introduce Chouldechova: when base rates differ, you cannot. Something has to give. Who decides what gives?
  • "If we can't make the math fair, what should we do with the tool?" — Ban it? Reform it? Require human override? This is where policy advocacy begins.

Teaching the Math

The impossibility theorem in plain language.

Participants don't need advanced math to understand what the theorem means. They need to understand what it requires us to choose.

The three definitions of fairness (and why they conflict)

Calibration (predictive parity)

If the tool gives 100 people a score of 7, approximately 70% of them should reoffend — regardless of race.

COMPAS satisfies this. Northpointe used this to argue the tool is fair.

Error rate parity

Black and white defendants who do NOT reoffend should have an equal chance of being wrongly flagged as high-risk.

COMPAS fails this. ProPublica used this to argue the tool is discriminatory.

Individual fairness

Two defendants with similar risk profiles should receive similar scores.

When socioeconomic factors are used as proxies, defendants with similar individual risk but different economic backgrounds receive different scores.
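
For groups with some programming background, the collision between the first two definitions can be demonstrated rather than asserted. The sketch below is an invented toy, not COMPAS: scores are calibrated by construction (a defendant with score s reoffends with probability exactly s), and the two groups differ only in how their scores, and hence their base rates, are distributed.

```python
import random
from collections import defaultdict

random.seed(0)
SCORES = [0.1, 0.3, 0.5, 0.7, 0.9]

def make_group(n, weights):
    # A defendant with score s reoffends with probability exactly s,
    # so the score is perfectly calibrated in BOTH groups by construction.
    return [(s, random.random() < s)
            for s in random.choices(SCORES, weights=weights, k=n)]

def calibration(group):
    # Observed reoffense rate within each score bucket.
    buckets = defaultdict(list)
    for s, reoffended in group:
        buckets[s].append(reoffended)
    return {s: round(sum(v) / len(v), 2) for s, v in sorted(buckets.items())}

def error_rates(group, threshold=0.5):
    fp = sum(1 for s, y in group if s >= threshold and not y)
    tn = sum(1 for s, y in group if s < threshold and not y)
    fn = sum(1 for s, y in group if s < threshold and y)
    tp = sum(1 for s, y in group if s >= threshold and y)
    return fp / (fp + tn), fn / (fn + tp)  # (FPR, FNR)

# Same calibrated tool; Group A's scores simply lean high, a stand-in
# for a higher base rate produced by disparate policing upstream.
groups = {"A": make_group(200_000, [1, 2, 3, 4, 5]),
          "B": make_group(200_000, [5, 4, 3, 2, 1])}

for name, g in groups.items():
    fpr, fnr = error_rates(g)
    print(f"Group {name}  calibration by bucket: {calibration(g)}")
    print(f"         FPR {fpr:.2f}   FNR {fnr:.2f}")
```

In the output, every score bucket's observed reoffense rate matches the score in both groups, so calibration holds. Yet the higher-base-rate group's false positive rate comes out at more than double the other group's, so error rate parity fails. Nothing in the code treats the groups differently; the base rate gap alone produces the COMPAS pattern.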

Chouldechova's finding (2017)

When two groups have different base rates of the outcome being predicted (different measured recidivism rates, driven by disparate policing, prosecution, and incarceration), no algorithm can satisfy calibration and error rate parity at the same time, and therefore none can satisfy all three definitions. This is not a programming flaw. It is a mathematical constraint. Any risk score tool operating in a society with racial disparities in the criminal justice system will be "unfair" by at least one definition.
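
For facilitators who want the algebra behind "something has to give", Chouldechova's paper reduces the constraint to one identity linking the three quantities at a given risk threshold, where p is a group's base rate, PPV is the positive predictive value (calibration at the threshold), and FPR and FNR are the error rates:

```latex
\mathrm{FPR} \;=\; \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \left(1-\mathrm{FNR}\right)
```

Hold PPV and FNR equal across two groups and let the base rates p differ: the right-hand side differs, so the FPRs must differ. Equalize the error rates instead and calibration breaks. There is no third option.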

Teaching the upstream problem

The most important insight for participants: COMPAS is not biased because its designers were biased. It is biased because the data it is trained on reflects a justice system with documented racial disparities. Black Americans are arrested, charged, and convicted at higher rates for the same conduct. That disparity enters the training data. The algorithm learns it. The algorithm reproduces it.

This is called feedback loop bias: biased outcomes create biased data, which trains biased predictions, which drive biased decisions, which create more biased outcomes. Auditing the algorithm alone cannot break this loop. The only way to break it is to address the upstream disparities.
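
For groups that want to watch the loop run, the toy simulation below is a deliberately crude illustration, not a model of any real police department: two neighborhoods have identical true offense rates, the arrest record starts mildly skewed, and the "predictive model" is nothing more than sending most patrols wherever past arrests are highest.

```python
import random

random.seed(1)

TRUE_RATE = 0.10            # identical true offense rate in both neighborhoods
POPULATION = 1_000
data = {"A": 60, "B": 40}   # arrest record starts mildly skewed toward A

for year in range(1, 7):
    # "Model": send 70% of patrols to whichever neighborhood the past
    # arrest data ranks as higher risk.
    predicted_hotspot = max(data, key=data.get)
    patrol = {h: (0.7 if h == predicted_hotspot else 0.3) for h in data}

    # An offense only enters the data if a patrol is present to observe it,
    # so enforcement attention determines what the next model learns from.
    for hood in data:
        data[hood] += sum(random.random() < TRUE_RATE * patrol[hood]
                          for _ in range(POPULATION))

    share_a = data["A"] / sum(data.values())
    print(f"year {year}: A's share of the arrest data = {share_a:.2f}")
```

A's share of the data climbs year over year even though its residents' behavior is identical to B's. The model is not detecting risk; it is manufacturing the evidence that confirms its own prediction. That is why auditing the algorithm in isolation cannot break the loop.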

Policy Advocacy Exercise

The algorithmic audit exercise.

A structured advocacy simulation that moves participants from critique to action. Works for high school civics, college policy courses, and community organizing contexts.

Phase 1 · 15 min

Investigation

Using public records, identify whether your county, city, or state uses algorithmic decision-making in at least one of: pretrial detention, child welfare assessments, public benefits eligibility, or school discipline. Resources: the AI Now accountability toolkit and the Upturn public records request template.

Phase 2 · 20 min

Audit demand letter

Draft a formal audit request to the relevant agency. Required elements: (1) identify the specific algorithm and vendor; (2) request the validation study; (3) ask for demographic impact analysis; (4) request appeal procedures for affected individuals. The ACLU model letter is a usable template.

Phase 3 · 20 min

Policy response options

For each finding from Phase 1, identify which policy lever applies: federal (CFPB, HUD, EEOC enforcement), state (automated decision law — IL, CO, or equivalent), local (NYC Local Law 144 model), or vendor contract (procurement standards). Map what exists, what is being proposed, and what would need to be built.

Phase 4 · 15 min

Presentation and action commitment

Each group presents: what tool they found, what audit demand they drafted, and one concrete action they can take within the next 30 days (file a records request, attend a city council meeting, sign a coalition letter, or contact a state legislator during an active bill session).

Content Guidance

Framing and classroom considerations.

This material engages race, criminal justice, and structural inequality. Thoughtful framing is part of the pedagogy.

Lead with a wrongful arrest case, not statistics

Robert Williams, Nijeer Parks, or Randal Reid grounds the abstraction. 'This happened to a real person because of a tool like this one' is a more powerful opening than the NIST error rate numbers.

Name the math as political, not neutral

Choosing which fairness definition to optimize is a political choice that affects which community bears the cost of error. There is no 'objective' algorithm here.

Connect to systems participants already use

Credit scoring, hiring screens (applicant tracking systems), and benefits algorithms are in the room. Most participants have been scored by an algorithm in the last 12 months.

Give participants an action path

The audit exercise ensures the session ends with agency, not just critique. Despair is not a learning outcome.

Don't frame this as a tech industry problem

These tools are procured by government agencies, deployed by courts, and authorized by legislatures. The accountability pathway runs through public institutions, not vendor reform.

Don't use 'AI is racist' as shorthand

The tools are not racist in the way people are racist. They are precise encoders of existing structural inequality. The distinction matters for policy: the fix is not a better algorithm, it is addressing upstream disparities.

Want CPAI to deliver this workshop in your community?

We partner with universities, legal aid organizations, public defenders, and civil rights groups.