She made a sarcastic joke in a school chat. She was arrested, strip-searched, and jailed overnight.
In 2022, a 13-year-old student in Williamson County, Tennessee made a sarcastic joke in a Google Workspace chat about a classroom game involving school-assigned countries. Gaggle, the school's AI surveillance platform, flagged it. A human reviewer who could not see the conversation's context escalated it. Police arrived at school without notifying her family. She was handcuffed, taken to the Fairview Police Department, strip-searched, held overnight, and suspended. No weapon. No credible threat. The arrest record followed her. Every step followed the correct procedure.
This is not a worst-case scenario. This is how the system worked as designed — and Gaggle monitors 4.8 million U.S. students.
4.8M
The number behind this guide
students in AI surveillance systems.
Gaggle monitors emails, documents, and messages for 4.8 million students — flagging content for administrative review, without a warrant.
Try It
EdTech Privacy Policy Audit
Read the actual privacy policies of Google Workspace for Education, Turnitin, Canvas, and Khan Academy — with red-flag clauses highlighted and explained in plain language. See what your child's school agreed to on their behalf. Includes Facilitator Mode for educator use.
Open the audit tool →
What the audit tool covers
- Red, yellow, and green flag ratings for each policy clause
- Plain-language explanations of what corporate language actually means
- Turnitin's perpetual license claim on student writing
- Google Workspace data retention after account deletion
- Khan Academy as a positive comparison baseline
- Printable school board questions for your next meeting
Your child's school is watching every word.
AI surveillance tools now monitor millions of students' email, assignments, chats, and school-issued devices — in real time, all day.
4.8 million students currently monitored by Gaggle in U.S. schools.
Gaggle company data, 2024
The EdTech market continues to grow, with K-12 as its largest and fastest-growing segment.
HolonIQ, 2024
The major platforms — Gaggle, Navigate360, Bark, and GoGuardian — use keyword filters, natural-language processing, and sentiment analysis to scan student communications in real time. What they monitor: Google Workspace, Microsoft 365, Canvas, school email, and school-issued Chromebooks.
Most systems monitor only school-issued accounts. But the scope of "school account" now includes nearly all school communication, assignments, and social interaction during the school day. When your child writes a poem for English class or DMs a friend on a school Chromebook, that content flows through a surveillance queue reviewed by employees of a private company.
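To make that concrete, here is a minimal sketch of context-free keyword matching. The keyword list and logic are hypothetical; vendors' actual lists, models, and thresholds are proprietary. What it shows is structural: a matcher that sees words without conversational context cannot tell an idiom from a threat.

```python
# Hypothetical sketch of context-free keyword flagging -- NOT any vendor's
# actual logic, which is proprietary. Illustrates why jokes trigger flags.
FLAG_TERMS = {"kill", "shoot", "bomb"}  # invented stand-in list

def flag_message(text: str) -> list[str]:
    """Return any flag terms present, with zero conversational context."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return sorted(words & FLAG_TERMS)

# A common idiom and a real threat look identical to the matcher:
print(flag_message("I'm going to kill it on the history test"))   # ['kill']
print(flag_message("we should kill the lights before the movie"))  # ['kill']
```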
What AI surveillance tools actually monitor
- Every email sent or received on a school account
- Google Docs and Drive content — including private drafts never submitted
- Canvas assignment submissions and messages
- School-issued Chromebook search history and browser activity
- Microsoft Teams and Outlook messages
- Any image uploaded to school accounts, analyzed for content
Parents are typically not notified when a flag occurs — only when police have already been called.
The vendors do not publish their keyword lists. Parents cannot see what terms will trigger a flag. Students cannot see their own flag history. There is no standard for how quickly a human must review a flag, or what training that reviewer must have, before police are called.
Two out of every three Gaggle flags are false alarms.
In Lawrence, KS, student journalists investigated Gaggle's false positive rate — and were flagged by Gaggle while doing it.
Roughly two-thirds of the 1,200 Gaggle alerts reviewed in Lawrence, KS (about 800) were classified as nonissues requiring no action.
Lawrence USD 497 data, Lawrence Journal-World / student journalists, 2024
Student journalists at Lawrence High School were investigating Gaggle's false positive problem when they were themselves flagged by Gaggle and subjected to the same process they were reporting on. They filed a First Amendment lawsuit in August 2024. A federal court allowed the lawsuit to proceed.
In Polk County, Florida, 72 students were involuntarily hospitalized under the state's Baker Act following Gaggle alerts — from a single county, in a single year, out of approximately 500 alerts reviewed.
The LGBTQ+ keyword problem
Until January 2023, Gaggle's keyword lists flagged LGBTQ+ affirming language — including the words "gay," "queer," and "lesbian" — as potential threats. Students using school accounts were effectively outed to school administrators, who may not have been safe adults, even when nothing resembling a threat had occurred.
Gaggle updated its keyword policy after public pressure in January 2023. Whether the revised lists fixed the problem cannot be verified: the current keyword lists are still not published for parent review.
What the research actually says
A 2023 RAND Corporation study found "scant evidence" that AI activity monitoring tools either prevent student suicide or cause harm — because rigorous research on the question barely exists. Schools are purchasing surveillance infrastructure with documented false-positive consequences in the absence of evidence that it works.
Lawrence Student Journalists
Lawrence, KS · Lawrence High School · 2024
Student journalists investigating Gaggle's false positive rate were flagged by Gaggle while conducting their reporting. Their records were reviewed under the same process their stories were about. They filed a First Amendment lawsuit against the district in August 2024.
Vancouver, WA School District
Vancouver, WA · 2020-21 school year
In a single school year, Gaggle flagged 10% of the district's entire student enrollment. The district did not publicize this figure; it was obtained through a public records request.
The AI writing detector flags real humans.
Non-native English speakers, neurodivergent students, and students who write in formal academic styles are disproportionately accused of cheating — by tools that cannot reliably detect AI.
61.22%: the false positive rate for AI writing detection on TOEFL essays by non-native English speakers, per a Stanford study of seven AI detectors that found ESL writers' essays falsely flagged as AI-generated at that rate.
AI writing detectors score text on "perplexity" and "burstiness" — rough proxies for unpredictability. Human writing tends to be more varied; AI writing tends to be more predictable. But non-native English speakers, students with certain neurodivergent profiles, and students from cultures that emphasize formal academic expression write with patterns that AI detectors misread as machine-generated.
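To see the signal these tools rely on, here is a minimal sketch of perplexity scoring. It assumes the Hugging Face transformers library and uses GPT-2 as a stand-in model; commercial detectors use their own proprietary models, thresholds, and burstiness formulas.

```python
# Minimal perplexity sketch using GPT-2 as a stand-in model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token surprise; low values mean 'predictable' text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean causal cross-entropy
    return math.exp(loss.item())

samples = [
    "The mitochondria is the powerhouse of the cell.",        # formulaic
    "Grandma's kitchen smelled like burnt cinnamon and rain.",  # idiosyncratic
]
for s in samples:
    print(f"{perplexity(s):8.1f}  {s}")
# "Burstiness" is roughly the variation of such scores across sentences.
# Uniform, formal prose (common in ESL academic writing) varies little,
# which is exactly the pattern detectors read as machine-generated.
```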
What independent tests found
- Turnitin claims a false positive rate under 1% in company marketing.
- A Washington Post test of Turnitin's AI detector found a false positive rate of approximately 50% on some sample sets.
- The gap between vendor claims and independent testing is consistent across products.
- Vanderbilt University disabled Turnitin's AI detector in 2023 after finding it unreliable.
- TRAILS/NSF researchers concluded that "detecting AI may be impossible" — the fundamental statistical problem cannot be engineered away.
Named cases (composite)
Johns Hopkins · Miami University · NYU · 2023-2024
Students at Johns Hopkins, Miami University, and NYU documented cases in which AI detection tools flagged their original work, triggering academic misconduct investigations. In each case, the student was a non-native English speaker or wrote in a style that diverged from the statistical patterns the tool associated with human writing. None of the investigations substantiated cheating.
The detection arms race compounds the problem: as AI writing tools improve at mimicking human variability, the detection tools must become more aggressive — increasing the false positive rate further. TRAILS/NSF researchers argue this ceiling cannot be overcome because the statistical signal AI detectors use is inherently unreliable.
Cross-reference
AI writing detectors are one instance of a broader pattern: algorithmic decision-making that produces disparate outcomes by race, language background, and disability status. For more on this pattern in hiring, credit, and criminal justice, see our guide on Algorithmic Bias and Civil Rights.
The camera flags darker skin.
Remote proctoring software doesn't just watch students — it discriminates by skin tone, disability, and living situation.
Darker-skinned students were flagged for 'missing from frame' at nearly five times the per-assessment rate of lighter-skinned students (0.83×).
In independent testing, Proctorio's facial recognition failed to detect Black faces more than half the time.
Media reports and academic testing, 2020-2022
Remote proctoring software — Respondus Monitor, Proctorio, ProctorU — uses facial recognition, keystroke logging, eye-tracking, audio monitoring, and lockdown browsers to surveil students during online exams. When the system detects "anomalies," it generates flags for human review.
The problem: "anomaly" is defined relative to what the system learned to expect from its training data. Students in shared living spaces who are interrupted, students with disabilities that affect movement or eye contact, and students with older or lower-quality webcams systematically generate more flags. Darker skin tones are harder for the facial recognition systems to accurately track, producing "missing from frame" flags at nearly five times the rate of lighter-skinned students.
Who the system systematically disadvantages
- Students of color: Facial recognition systems have documented failure rates on darker skin tones, producing more flags per test.
- Students with disabilities: Eye-tracking flags students whose disability affects gaze patterns. ADA/Section 504 claims have been filed.
- Students in shared housing: Family members, roommates, or environmental noise trigger audio and motion flags.
- Students with older equipment: Lower-quality webcams produce tracking errors that read as suspicious behavior.
ADA and Section 504 require that schools provide equitable assessment conditions for students with disabilities. Several students have filed complaints arguing that AI proctoring's disparate flag rates constitute a failure to accommodate. The U.S. Commission on Civil Rights flagged AI proctoring's equity implications in its 2024 K-12 AI report.
96% of school apps share student data with third parties.
FERPA was written in 1974. It was not designed for an ecosystem of 2,591 EdTech tools per district, behavioral analytics, and advertising ID tracking.
96% of school apps share student data with third parties, according to CDT analysis.
2,591 EdTech tools used per district per year on average — the scale makes meaningful oversight nearly impossible.
CDT, Off Task, 2024
Many of the apps tested collected advertising IDs from student devices, enabling cross-app behavioral profiling without parental knowledge.
CDT, Off Task, 2024
62.4M students (plus 9.5M teachers) had records exposed in the 2024-2025 PowerSchool breach.
PowerSchool breach reporting, 2024-2025
What FERPA actually covers — and what it doesn't
FERPA gives parents the right to inspect and request correction of their child's educational records. But the law's "school official exception" allows schools to share student records with third-party vendors without parental consent — as long as the vendor meets four conditions: it performs a service the school would otherwise use its own employees for, it stays under the school's direct control with respect to the records, it uses the records only for the authorized purpose, and it does not re-disclose them. Most EdTech vendor contracts do not meet all four conditions, but enforcement is rare.
What FERPA does not cover: data collected from student devices that is not part of the official educational record. Behavioral analytics, browsing history, and advertising IDs fall into gray areas that vendors exploit. The 2025 COPPA rule update requires separate consent for advertising data collected from children under 13 — but FERPA enforcement on school vendors remains weak.
File a FERPA request: what to ask for
- A complete list of all third-party vendors that receive your child's educational records
- The stated purpose of each disclosure
- Copies of all vendor data privacy agreements
- Documentation of how the school determined each vendor meets the school-official exception
- Any data breaches involving your child's records in the past 12 months
File a FERPA complaint
FERPA.Complaints@ed.gov · 180-day window from when you learned of the violation · studentprivacy.ed.gov/file-a-complaint
Illuminate Education Breach
National · January 2022
A breach of Illuminate Education's platform exposed records for more than 10 million students across multiple states, including New York City's 820,000 students. Illuminate had been used to manage grades, attendance, and student information. The company did not notify districts for weeks after the breach was discovered.
An AI wrote your child's IEP. That likely violates federal law.
57% of special education teachers used AI for IEP writing in 2024-25. The Center for Democracy and Technology says this likely violates IDEA's individualization requirements.
The Individuals with Disabilities Education Act (IDEA) requires that Individualized Education Programs be individually tailored to each child through a collaborative process involving parents, the student, teachers, and evaluators who have actual knowledge of that child. The word "individualized" is not incidental — it is the legal standard.
An AI language model has never met your child. It cannot observe how your child processes language, responds to different teaching styles, or behaves under stress. CDT's 2025 analysis concluded that AI-generated IEPs — even when reviewed and modified by a teacher — likely violate IDEA's individualization requirement when the AI output forms the substantive basis of the document.
What parents of students with disabilities can do
- Ask directly at the IEP meeting: was any part of this document drafted by an AI tool?
- Request that the IEP team explain how each goal was developed based on this specific child's assessments, not a template.
- If you suspect an AI-generated IEP, file a complaint with your state's Department of Education under IDEA's procedural safeguards.
- Contact your state's Parent Training and Information Center (PTI) — funded by IDEA to support parents in exactly these disputes.
The IEP issue sits at the intersection of AI automation and disability rights. For the broader pattern of algorithmic decision-making affecting marginalized communities, see our guide on Algorithmic Bias and Civil Rights.
A smarter school is possible — but only if the rules are right.
AI tutoring shows real promise in RCTs. AI surveillance has 'scant evidence.' The difference is evidence-first policy.
What the research supports
RAND's review of AI tutoring randomized controlled trials found positive learning gains when AI uses Socratic scaffolding — asking guiding questions rather than providing answers. One 2024 study found that unrestricted ChatGPT use during math practice improved in-session scores but lowered test performance. The distinction matters: AI as a cognitive scaffold works; AI as a substitute for thinking does not.
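The difference is visible at the configuration level. Below are two hypothetical system prompts, illustrative only and not any product's actual configuration, showing the scaffold-versus-substitute framing the research distinguishes.

```python
# Two hypothetical system prompts illustrating scaffold vs. substitute.
# Illustrative only; no tutoring product's actual configuration is public.
SOCRATIC_SCAFFOLD = (
    "You are a math tutor. Never state the final answer or perform the "
    "decisive step yourself. Ask one guiding question at a time, check the "
    "student's reasoning, and offer a hint only after two failed attempts."
)
ANSWER_SUBSTITUTE = (
    "You are a math helper. Solve the problem and present the final answer "
    "with complete worked steps."
)
# The RCT gains RAND reviewed came from tutoring shaped like the first
# prompt; the 2024 study's test-score drop came from unrestricted use
# resembling the second.
```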
The policy gap
As of May 2024, only 14% of U.S. school districts had AI-specific policies. CDT's 2025 legislative tracker counted 134 AI-in-education bills in 31 states. NYC banned ChatGPT in January 2023, then reversed the ban four months later without changing its underlying approach or issuing new guidance. Colorado's evidence-first model — pilot, collect data, then consider statewide adoption — is the standard other states should follow.
What families can do: four tiers of action
Look up your school's apps (1 hour).
- Search your child's school tools at privacy.commonsense.org — the Common Sense Privacy database rates 400+ EdTech products.
- Look for Pass, Warning, or Fail ratings. Warning or Fail means the tool shares student data in ways parents should know about.
- Make a list of any tools that earn Warning or Fail. That list is your agenda for the next step.
Send a FERPA request (1 day).
- Every parent has the right to request a list of third-party vendors that receive their child's educational records.
- Email or mail your school district's FERPA compliance officer and request: (1) a complete list of EdTech vendors that receive student data, (2) the stated purpose of each disclosure, and (3) copies of all vendor data privacy agreements.
- The district must respond within 45 days. Keep a copy of your request and the response.
- If the district cannot name all its EdTech vendors, that is itself a finding worth bringing to the school board.
Bring it to your school board (1 month).
- Bring your FERPA findings to a school board meeting during public comment.
- Ask three questions: Does this vendor sell student data? What are the data retention terms? Has this tool been independently audited for bias?
- Request that the district adopt the SDPC National Data Privacy Agreement template for all EdTech vendor contracts — this shifts the legal negotiation burden from individual districts to a pre-vetted standard.
- If the district uses AI proctoring, ask for bias audit results by demographic group.
Organize and escalate.
- Connect with parent organizations using CDT's digital equity toolkit or the Electronic Frontier Foundation's student privacy resources.
- Support state legislation requiring evidence-of-efficacy before AI safety tools can expand to new districts — Connecticut's pilot-first model is the standard.
- If you have evidence of a FERPA violation: file a complaint at studentprivacy.ed.gov/file-a-complaint. Time limit: 180 days from when you learned of the violation.
- FERPA.Complaints@ed.gov — Student Privacy Policy Office, 400 Maryland Ave SW, Washington, DC 20202-8520.
For Educators & Administrators
Vetting EdTech tools, model policy language, and IDEA compliance guidance
Guidance for teachers, administrators, and district technology coordinators: how to vet EdTech tools, what FERPA's school-official exception actually requires, model AI policy language from CDT's 2025 analysis, IDEA compliance for AI-assisted IEPs, and how to talk to students about surveillance.
Go to the Educator Guide →
Where to go next.
How we sourced this page
Every statistic on this page comes from peer-reviewed research or authoritative organizations: the Center for Democracy and Technology, RAND Corporation, Stanford Human-Centered AI Institute, the Future of Privacy Forum, the Electronic Frontier Foundation, the U.S. Commission on Civil Rights, and peer-reviewed journals including Frontiers in Education. The Fairview, TN arrest was first reported by BuzzFeed News (2022) and has been confirmed by multiple subsequent news organizations.
Privacy policy excerpts in the EdTech Audit tool are drawn from policies current as of May 2026. Privacy policies change; we re-review these excerpts quarterly.
Want CPAI education resources for your school or community?
We partner with school districts, libraries, and nonprofits to deliver research-based AI and digital-safety education.