The algorithm that read your resume — and put it in the bin.
75% of resumes submitted to large employers are rejected by software before a human ever sees them. The tools doing the rejecting were often trained on historical hiring data — which encodes decades of documented racial, gender, and economic discrimination. In 2018, Amazon scrapped an AI hiring tool after discovering it systematically downgraded resumes from women. Most companies never discover this, or don't disclose it.
75% of resumes are rejected before a human reads them. That is the number behind this guide.
Before a human reads your resume.
AI hiring tools are embedded across the hiring funnel — from initial screening to video interview scoring to reference checking.
- →Resumes rejected by ATS before human review at large employers
- →Fortune 500 companies using ATS
- →Hiring managers who report ATS filtered out a qualified candidate
- →Callback gap between Black-perceived and white-perceived names (Kline et al., 2022)
Stage 1: ATS Screening
Workday, Greenhouse, Lever, iCIMS
What it does
Parses resume for keywords, credentials, experience dates, and employer names. Applies hard filters (degree requirement, years of experience). Ranks remaining candidates by match score.
Bias pathway
Trained on which resumes produced hires at that company — inheriting historical hiring preferences. Keyword filtering disadvantages candidates from different industry vocabulary backgrounds.
Stage 2: AI Resume Scoring
HireEz (formerly Hiretual), Eightfold.ai, SeekOut
What it does
AI ranks resumes against ideal profiles built from top performer data. Surfaces candidates from similar employer pedigrees, education institutions, and career trajectories.
Bias pathway
Top performer profiles encode the characteristics of whoever was historically considered a top performer — often filtered by who got opportunities, not who performed.
Stage 3: AI Video Interview
HireVue, Spark Hire, Outmatch
What it does
Candidates complete a video interview with no human present. AI analyzes facial expressions, vocal tone, word choice, vocabulary complexity, and eye contact patterns. Generates a 'fit score.'
Bias pathway
Validated on historical hire data. Documented performance gaps for darker skin tones and non-standard accents (MIT Media Lab). Not meaningfully audited by most employers before deployment.
Stage 4: Background Check AI
Checkr, Sterling, HireRight
What it does
Automated criminal background check flagging. Some systems apply predictive tools to assess risk from record type, date, and context.
Bias pathway
Black Americans are arrested and convicted at higher rates for the same conduct. Automated background check flagging without individualized review replicates this disparity. EEOC guidance requires individualized assessment before adverse action.
When the algorithm discriminates.
These are not hypothetical. They are documented cases with legal filings, internal investigations, or independent audits.
Amazon — Scrapped AI Hiring Tool (2018)
iTutorGroup — EEOC Settlement (2023)
Workday — Class Action (2023)
After you're hired. The algorithm watches.
AI surveillance and management tools monitor worker performance in real time — and can terminate employment without human review.
- →Amazon warehouse workers monitored by AI for productivity metrics (Time, 2021)
- →How often productivity rate targets for Amazon pickers are updated
- →Injury rate at Amazon vs. industry average — linked to productivity surveillance (AFL-CIO)
- →Share of AI-generated termination recommendations at Amazon reviewed by a human manager (Reuters, 2021)
Keystroke and screen monitoring
Hubstaff, ActivTrak, Teramind
Used in remote work — captures keystrokes, screenshots, mouse movement. Disproportionately deployed for lower-wage knowledge workers. Creates constant surveillance pressure that research links to burnout and productivity loss.
AI-generated productivity scores
Amazon UPT, TikTok/ByteDance systems, call center AI
Workers receive an algorithm-generated productivity score that affects scheduling, pay, and termination. The scoring criteria may not be disclosed, and workers often have no appeal path.
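The scoring loop above is simple to state in code, which is part of the problem: the whole decision can run with no human in it. A minimal sketch, with invented names, baselines, and thresholds rather than any vendor's real values:

```python
# Hypothetical sketch of algorithm-driven productivity scoring. The
# baseline and threshold are invented; real systems set them from peer
# data and typically do not disclose them to the worker.

BASELINE_UNITS_PER_HOUR = 120    # set algorithmically; often undisclosed
FLAG_THRESHOLD = 0.85            # below this, automated action triggers

def productivity_score(units: int, hours: float) -> float:
    """Score the worker's rate against the algorithmic baseline."""
    return (units / hours) / BASELINE_UNITS_PER_HOUR

def review(units: int, hours: float) -> str:
    score = productivity_score(units, hours)
    # No context (equipment failure, injury, disability) enters the
    # decision, and there may be no appeal path before the flag lands.
    return "flagged" if score < FLAG_THRESHOLD else "ok"

print(review(800, 8))    # 100 units/hr, score ~0.83: "flagged"
print(review(1000, 8))   # 125 units/hr, score ~1.04: "ok"
```

A worker who can't see `BASELINE_UNITS_PER_HOUR` or `FLAG_THRESHOLD` cannot contest the score, which is exactly the disclosure gap described above.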
Emotional and expression analysis
Call center AI (Cogito, NICE, Genesys), customer-facing workers
AI listens to customer calls and provides real-time coaching — 'smile more,' 'slow down,' 'sounds stressed.' Research shows these tools perform worse on workers of color and non-native English speakers.
Route and movement optimization
UPS ORION, Uber/Lyft routing, gig delivery platforms
Productivity baselines are set algorithmically, without accounting for traffic, weather, worker disability, or regional differences. Workers who deviate from the route — for safety or efficiency — are flagged.
Independent contractor controlled by algorithm.
Gig platforms claim workers are independent contractors — but the algorithmic control these platforms exercise is more granular than the control most traditional employers exercise.
Uber and Lyft — algorithmic control and deactivation
- →Uber and Lyft drivers can be 'deactivated' (fired) based on algorithmic assessment of their ratings, cancellation rates, and acceptance rates — without human review.
- →Drivers have documented cases of deactivation following false accusations, facial recognition failures in identity verification, and rating manipulation by passengers.
- →The EEOC has found that algorithmic deactivation without appeal violates worker rights in specific contexts — but gig workers classified as independent contractors have fewer protections than employees.
- →Uber's facial recognition identity verification has documented accuracy gaps for Black drivers — who have been deactivated after the system failed to verify their identity.
The misclassification problem
Platforms classify workers as independent contractors to avoid employment law protections — minimum wage, overtime, anti-discrimination law, workers' compensation. But the algorithmic control they exercise (setting wages, controlling routing, defining performance standards, deactivating for non-compliance) mirrors employer control. Courts and legislators are split on how to classify this. California AB5, the EU's Platform Work Directive, and the DOL's 2024 independent contractor rule are all attempts to address the gap.
What the law says right now.
Federal civil rights law applies to AI hiring tools. Enforcement is expanding.
Federal law
- ✓Title VII: Prohibits employment discrimination based on race, color, religion, sex, and national origin. The disparate impact theory applies — if an AI tool produces racially disparate outcomes without business justification, it may be unlawful regardless of intent.
- ✓ADEA: Age Discrimination in Employment Act prohibits discrimination against workers 40+. The iTutorGroup case established this applies to AI tools.
- ✓ADA: Americans with Disabilities Act prohibits using selection procedures that screen out individuals with disabilities unless the criteria are job-related and consistent with business necessity. AI tools using certain voice, facial, or biometric analysis may implicate this.
- ✓EEOC AI Guidance (2023): The EEOC issued specific guidance stating that employers remain responsible for the discriminatory impact of AI tools they use, even if a third-party vendor built the tool.
State and local law
- →NYC Local Law 144 (2023): Requires employers using AI hiring tools to conduct and publish annual bias audits by an independent third party. The first US law specifically regulating AI in employment. Penalty: $375 per violation per day.
- →Illinois AI Video Interview Act (2020): Requires employers to notify applicants before using AI to analyze video interviews, and to explain how the AI works.
- →Maryland HB 1202 (2020): Prohibits employers from recording video interviews without consent and using facial recognition in video interviews without explicit consent.
- →California CPRA: Workers have the right to know what personal data is collected about them and how it is used in employment decisions.
What does an ATS actually flag on a resume?
Walk through an annotated resume — section by section — to see what ATS systems flag, where bias enters, and what you can do. Includes facilitator mode for classroom use.
Open the ATS Scanner →
Also see: Algorithmic Bias for the broader context on AI discrimination in criminal justice, housing, and credit.
Action for every level of influence.
For yourself as a job seeker
- Use a single-column, plaintext-friendly resume format for any position with ATS submission — tables, text boxes, and multi-column layouts can cause parsing failures.
- Mirror the exact language of the job description for required skills. ATS keyword matching is lexical — synonyms are not recognized.
- If asked to complete an AI video interview, you can request a human interview instead. Ask HR explicitly: 'Is AI analysis used to score this interview? I'd like to request a human review.'
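A toy example of why mirroring the job description's exact wording matters. The matcher below is purely lexical (an assumption about how a given ATS tokenizes; real implementations vary), so synonyms and abbreviations simply don't count:

```python
# Toy lexical keyword matcher: an assumption about ATS behavior, not any
# specific product. It shows why "JS" never matches "javascript".

def keyword_hits(job_keywords: set[str], resume_text: str) -> set[str]:
    """Return the job keywords that appear verbatim in the resume."""
    words = set(resume_text.lower().split())
    return {kw for kw in job_keywords if kw in words}

job = {"javascript", "kubernetes"}
print(keyword_hits(job, "Shipped JavaScript services on Kubernetes"))
# both keywords hit
print(keyword_hits(job, "Shipped JS services on k8s"))
# neither hits — same skills, different vocabulary
```

The second resume describes identical experience and scores zero, which is why copying the posting's exact terms ("JavaScript", not "JS") is the safest move.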
For workers subject to algorithmic management
- Request access to the data used to evaluate your performance. Under GDPR (EU), CCPA (California), and VCDPA (Virginia), you have the right to know what data is being collected and how it is used in decisions about you.
- If you believe an AI tool has made a discriminatory employment decision, contact the EEOC or your state employment discrimination agency. The complaint process is free.
- Organize with coworkers. Algorithmic management affects everyone in a workplace — collective action on data practices is more effective than individual complaints.
For HR and employers
- Before deploying any AI hiring tool, require a bias audit by an independent third party. NYC Local Law 144 requires this and is a model for compliance.
- Remove degree requirements that are not genuinely necessary for job performance. These exclude 30–40% of otherwise-qualified workers and disproportionately affect Black and Hispanic candidates.
- Audit your ATS rejection rate by zip code, institution type, and employment gap. Disparate rejection rates without business justification are EEOC liability.
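One standard way to run the audit above is the EEOC's four-fifths rule from the Uniform Guidelines: a group whose selection rate falls below 80% of the highest group's rate shows evidence of adverse impact (NYC Local Law 144 audits report comparable impact ratios). A minimal sketch, with invented numbers:

```python
# Four-fifths-rule audit sketch. The applicant counts are invented;
# substitute your own ATS pass/apply counts per demographic group.

passed = {"group_a": 480, "group_b": 240}    # candidates who passed screening
applied = {"group_a": 1000, "group_b": 800}  # candidates who applied

rates = {g: passed[g] / applied[g] for g in applied}
best = max(rates.values())
impact_ratios = {g: rates[g] / best for g in rates}

for g, ratio in impact_ratios.items():
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rates[g]:.3f}, impact ratio {ratio:.3f} -> {flag}")
# group_a: rate 0.480, ratio 1.000 -> ok
# group_b: rate 0.300, ratio 0.625 -> ADVERSE IMPACT
```

An impact ratio under 0.8 is not automatically a violation, but it is the kind of disparity that shifts the burden to the employer to show business necessity.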
For policy
- Support the Algorithmic Accountability Act (federal), which would require impact assessments for automated decision systems used in employment.
- NYC Local Law 144 (2023) is the first US law requiring bias audits for AI hiring tools. Support equivalent legislation in your state or city.
- Advocate for EEOC enforcement of existing Title VII disparate impact theory as applied to AI — the legal authority exists, enforcement resources are the limiting factor.
For educators
Teaching AI and employment rights?
Facilitation guide for the ATS scanner, discussion questions on disparate impact, and an employment discrimination advocacy exercise.
Worker rights & further reading.
Want CPAI to deliver worker rights training to your community?
We partner with unions, workforce development programs, legal aid organizations, and community colleges.