Center for Practical AI
Public Education · Deepfakes & NCII

"She woke up to texts from a classmate. Have you seen what's been posted? She hadn't. But within hours, hundreds of students had."

On October 2, 2023, Elliston Berry — then 14 years old — woke to frantic messages from a friend. Classmates had used a free AI app to generate nude images of Berry and eight other girls at Aledo High School from their social media photos. The images were posted to anonymous Snapchat accounts. For more than eight months, her family could not get Snapchat to remove them — until a U.S. Senator intervened directly with the company. No criminal charges were filed. Berry and her mother Anna McAdams testified before Congress, met with the First Lady, and are widely credited with driving the passage of the TAKE IT DOWN Act, signed into federal law on May 19, 2025. The AI app that started this required no technical skill. It was free. It took seconds.

This is no longer rare. It is happening in schools across every state. And most parents don't know it until it has already spread.

The number behind this guide

8 months

to remove images of a 14-year-old from Snapchat.

Elliston Berry's family couldn't get them down. It took a U.S. Senator. The TAKE IT DOWN Act was signed into law on May 19, 2025.

Interactive Tool

Synthetic Media Lab — Spot the Tells

Most people think they can spot a deepfake. Research shows humans correctly identify high-quality AI-generated images only 24.5% of the time. Explore the 7 visual artifacts trained researchers use — with plain-language explanations and facilitator mode for classroom use.

Open the Synthetic Media Lab →

What the lab covers

  • 7 visual artifacts: eyes, hair, skin, teeth, lighting, accessories, background
  • Plain-language explanations of why AI fails at each
  • Real vs. AI-generated comparison panels for each artifact
  • Reliability caveats — where each heuristic breaks down
  • Facilitator mode with technical explanations for educators

What It Is

It doesn't require skill. It requires a phone and a free app.

Non-consensual intimate imagery (NCII) generated by AI is the use of artificial intelligence to create fake nude or sexual images of real people without their consent.

Three methods — all available today

"Nudify" or "undress" apps

Takes a normal photo of a clothed person and generates a realistic nude version. The user uploads a photo; the app does the rest. No technical knowledge needed. 55 such apps were identified in Google Play and 47 in the Apple App Store in January 2026, including 31 rated suitable for minors.

Face-swap tools

Takes a face from one image and places it onto a body from another. Free, open-source tools like DeepFaceLab have been publicly available for years and are documented repeatedly in school-based NCII incidents.

Custom AI model training

Uses 20 or more photos of a specific person to train a custom AI model, which can then generate that person in any scenario, including sexual ones, in approximately 15 minutes. Requires only a consumer gaming computer. No longer limited to technical experts.

Seconds

Time to create one deepfake NCII image using a nudify app.

Oxford Internet Institute, 2025

~15 min

Time to train a custom AI model targeting a specific person using 20 photos.

IWF/Oxford, 2025-26

$0

Cost of the most common tools used in documented school incidents.

Tech Transparency Project, 2026

8 months

Time Elliston Berry's family waited for Snapchat to remove the images — until a Senator intervened.

Texas Tribune, 2025

A single search for "nudify app" on major search engines returns working tools in the top results. A 2026 study by the Institute for Strategic Dialogue found 20.7 million monthly visits to 31 active NCII tool sites — all discoverable through standard searches.

Issaquah, WA (2024)

14-year-old student · Issaquah, Washington

School Incident

A 14-year-old student told police he found a nudify app on TikTok and used it to generate nude images of six female classmates. He shared the images at the lunch table that same day.

Outcome: The case illustrates the gap between creation time (seconds) and the harm timeline (ongoing, social, irreversible within the school community). No state law at the time prohibited the conduct.

How Fast It Spreads

47 million views. 17 hours.

Once posted, non-consensual intimate imagery spreads faster than any platform can respond — and dedicated online communities work to preserve and redistribute it.

47M

Views of one Taylor Swift deepfake post before removal from X — within approximately 17 hours.

The Star, January 2024

26,385%

Increase in AI-generated CSAM videos identified by the Internet Watch Foundation from 2024 to 2025 (13 videos to 3,443).

IWF, 2026

1,325%

Increase in AI-related reports to NCMEC CyberTipline from 2023 to 2024.

NCMEC, 2024

~440K

AI-related NCMEC CyberTipline reports in the first half of 2025 alone.

NCMEC, 2025

In school contexts, the Issaquah, Westfield, and Aledo cases each show the same pattern: images reach hundreds of students within a single school day of initial posting. Thorn found that 16% of minors who experienced deepfake NCII said it was shared on Snapchat first; Instagram, Messenger, Facebook, and TikTok followed.

Taylor Swift

January 2024

Public Figure Case

Sexually explicit AI-generated images were shared on X and 4chan. One post reached 47 million views before removal. X briefly blocked all searches for Swift's name. The platform's 'synthetic and manipulated media policy' proved insufficient to stop the spread.

Outcome: The case illustrated that without federal law, even the most resourced victims had no recourse — and accelerated congressional momentum for what became the TAKE IT DOWN Act.

The School Crisis

It is not an outlier. It is a pattern — in every state, at every grade level.

39%

of students — approximately 5.97 million — report hearing about NCII of someone at their school.

CDT 'In Deep Trouble,' 2024

1 in 10

minors ages 9-17 said they knew peers who used AI to generate nude images of other children.

Thorn Youth Perspectives, 2023

22%

of high school principals reported a deepfake bullying incident at their school — more than 1 in 5.

RAND, October 2024

<20%

of high school students received any information from their school about deepfakes.

CDT, 2024

5%

of teachers said their school provided resources to help victims remove images.

CDT, 2024

13%

of K-12 principals (all levels) reported a deepfake bullying incident at their school.

RAND, 2024

Lancaster Country Day School

Pennsylvania · October 2023 – May 2024

347 Images · 59 Victims

Two juvenile male students created 347 AI-generated pornographic images and videos of 59 minors and one adult — 48 of whom were students at their school. A Safe2Say Something tip triggered discovery in November 2023. However, school officials did not contact police or CPS, citing a loophole in Pennsylvania's mandated reporter statute that exempted child-on-child abuse.

Outcome: Both boys pleaded guilty to 59 felony counts of manufacturing child sexual abuse material. The head of school and board president were removed. Pennsylvania amended laws effective December 20, 2024, closing the mandated reporter loophole and defining AI-generated CSAM as CSAM.

Sixth Ward Middle School

Louisiana · August 2025

Victim Expelled

An 8th-grade girl reported that AI-generated nude images of her and other girls were circulating on Snapchat. On the school bus that afternoon, boys displayed the images in front of her. She struck one of the boys. The school district expelled the 13-year-old victim for more than 10 weeks and sent her to an alternative school, barring her from extracurricular activities. One of the boys who created the images was charged with 10 counts under Louisiana's 2024 AI NCII law.

Outcome: The family plans a federal lawsuit. The case captures a documented pattern: schools are often quicker to punish the child who reacted than to protect the child who was targeted.

'Voices of the Strong 44' — Cascade High School

Iowa · March 2025

44 Victims · Joint Statement

Deepfake nude images of 44 girls at Cascade High School were created and circulated. Three male students were charged as juveniles. The 44 affected girls issued a joint public statement: 'We are teenage girls who should have been enjoying our last few months of school. Instead, we've been forced to take matters into our own hands and put ourselves out there to fight for the most basic protections and support from our school. We decided to write this statement so our voices can be heard and changes can be made, not just for us, but for all the girls coming after us.'

Outcome: Their public statement became one of the most widely circulated firsthand accounts of the school deepfake crisis, quoted in congressional testimony and national media coverage.

Who It Targets

A gendered crime with specific victims.

Non-consensual intimate imagery is overwhelmingly targeted at women and girls — and LGBTQ+ teens face compounded risk.

97%

of AI-generated CSAM videos reviewed by the IWF in 2025 depicted girls.

IWF, 2026

96%

of deepfake pornography features female subjects without any indication of consent.

Oxford Internet Institute, 2025

LGBTQ+ teens are twice as likely as non-LGBTQ+ peers to experience sextortion involving deepfakes.

Thorn, June 2025

1.2M

children disclosed having had their images manipulated into explicit deepfakes in a single year.

UNICEF/ECPAT/INTERPOL, 2026

The school context amplifies the harm. Perpetrators in school-based NCII cases target classmates precisely because of the shared social environment: every person who sees the image is someone the victim also sees, every day. The psychosocial damage is not only about the image; it is about the permanent alteration of every peer relationship the victim has.

Victims report symptoms paralleling PTSD: intrusion, avoidance, hyperarousal, shame, and in some cases an inability to attend school. The harm is not abstract; it is clinical and documented. (AI & Society, Springer Nature, 2025; The Lancet Psychiatry, 2025.)

Who creates school-based deepfake NCII

  • In documented school cases: overwhelmingly male students targeting female classmates
  • Images sourced from public social media, school events, and class photos
  • UK Revenge Porn Helpline (2024): over 81% of perpetrators are male
  • In 67% of cases, the perpetrator is a current or former partner (adult cases)
  • In school contexts: predominantly peer-on-peer, not partner-based

For broader patterns of AI-enabled exploitation of minors, see also our guides on Doxxing and Disclosure and AI and Mental Health.

What Actually Works

Four tiers of real action.

Prevention, preparation, and response — built from every documented school case and the tools that have actually worked.

1

Create safety before something happens.

  • Say this, now, before anything happens: "If anyone sends you — or makes — an image of you without your clothes, come to me immediately. You will not be in trouble. I will help you."
  • Shame is the deepfake perpetrator's central tool. The most important thing you can do is remove that weapon in advance.
  • In every documented school case, victims' hesitation to disclose was the primary barrier to early intervention — and that hesitation is driven by fear of the parental reaction.

2

Reduce the attack surface.

  • Review your child's social media profile visibility. Public accounts provide unlimited source material for deepfake tools.
  • School events are particularly risky: group photos reveal identifiable uniforms, locations, and peers.
  • Talk about what photos mean in the AI era. A school portrait can become training data for someone who wants to cause harm. This was the mechanism in every documented school case.
  • A single search for 'nudify app' returns working tools in the top results. Awareness is not paranoia.

3

Know the tools — before you need them.

  • NCMEC Take It Down (takeitdown.ncmec.org): Free, anonymous. Creates a hash of the image so participating platforms can block it from being uploaded. For minors only.
  • StopNCII.org: Same mechanism for adults. Works across Facebook, Instagram, TikTok, Snapchat, Reddit, and others. 90.9% takedown success rate.
  • Cyber Civil Rights Initiative (cybercivilrights.org): 24/7 helpline, legal aid roster, state-by-state law guide.
  • FTC (reportfraud.ftc.gov): Report TAKE IT DOWN Act non-compliance by platforms.

4

If it has happened — the first 48 hours.

  • Do not delete anything. Screenshots, app names, usernames, and timestamps are evidence.
  • Call NCMEC CyberTipline: 1-800-843-5678. For images of minors, this is the first call.
  • Report to the school immediately. Request documentation in writing.
  • Report to the platform using the TAKE IT DOWN Act mechanism — platforms must remove reported images within 48 hours.
  • Connect your child with a counselor immediately — before any conversation about 'what happened' that could retraumatize.

If It Happens

The first 48 hours when you find out.

Speed matters — but avoiding retraumatization matters more. Here is how to respond.

1

Don't delete anything

Screenshots, usernames, app names, timestamps, and any messages are legal evidence. Document everything before reporting.

2

Call NCMEC first

For images of minors: 1-800-843-5678 or cybertipline.org. NCMEC can coordinate platform takedown through established relationships. This is the first call, not the last.

3

Use Take It Down / StopNCII

For minors: takeitdown.ncmec.org. For adults: stopncii.org. These services create hashes that participating platforms use to block the images from being re-uploaded, without you or your child ever submitting the image itself.
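How can these services block an image they never receive? The general principle is perceptual hashing: the image is converted into a short numeric fingerprint on the victim's side, and only that fingerprint is shared and compared. The sketch below is a minimal illustration of that principle in Python, using the open-source imagehash and Pillow libraries; the actual Take It Down and StopNCII systems use their own hashing schemes, and the file names and matching threshold here are purely illustrative.

```python
# Minimal sketch of hash-based re-upload blocking. Assumes the open-source
# `imagehash` and `Pillow` libraries (pip install imagehash pillow). This is
# NOT how Take It Down / StopNCII are implemented; it only illustrates the
# principle of matching fingerprints instead of images.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    # The hash is computed locally, so the image itself never leaves the device.
    return imagehash.phash(Image.open(path))

# Step 1: the victim's side submits ONLY this short fingerprint to the service.
submitted = fingerprint("private_photo.jpg")        # illustrative file name

# Step 2: a participating platform hashes each new upload and checks it
# against the service's fingerprint list.
upload = fingerprint("attempted_reupload.jpg")      # illustrative file name

# Perceptual hashes tolerate resizing and re-compression, so a small Hamming
# distance still means "almost certainly the same image."
if upload - submitted <= 8:  # threshold is illustrative
    print("Match: block the upload and flag it for review")
```

The design detail that matters for families: because only the fingerprint is transmitted, using these services never requires sharing the image with anyone or creating another copy of it.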

4

Report to school and platform simultaneously

Report to the school principal in writing. Report to the platform using the TAKE IT DOWN Act mechanism (removal required within 48 hours). Request confirmation of both reports in writing.

5

Connect your child with a counselor first

Before any conversation about 'what happened' — especially a parent-led one — connect your child with a counselor who has trauma-informed training. Blame-tinged questions such as 'why did you post those photos?' can retraumatize if asked before therapy. The emotional response comes before the fact-finding.

6

If crisis: call 988

If your child is in emotional distress or you are concerned about their safety: call or text 988. Free, confidential, 24/7. Parallel to all other steps, not after.

For School Staff

What to do in the first 24 hours when you learn an incident may have occurred

Who to notify and in what order, how to preserve evidence without investigative overreach, trauma-informed response for victims, what not to say to the school community, and the CDT Model Policy for K-12 schools.

Go to the Educator Guide →

Resources

Where to go right now.

How we sourced this page

Every statistic on this page comes from peer-reviewed research or authoritative organizations: the Internet Watch Foundation, NCMEC, Thorn, the Center for Democracy and Technology, RAND Corporation, Oxford Internet Institute, UNICEF/ECPAT/INTERPOL, Cyber Civil Rights Initiative, and peer-reviewed journals including The Lancet Psychiatry and AI & Society.

Named cases are drawn from credible news reporting, court filings, and official statements. The Lancaster Country Day case is drawn from court records. The Sixth Ward Louisiana case is drawn from local news reporting and family statements; the lawsuit was pending at time of publication.

The status of the DEFIANCE Act remains in flux: the Senate passed it unanimously in January 2026, but House action was still pending as of this publication.

Last reviewed: May 2026. We review this page quarterly; statistics in this category change rapidly.

Want CPAI resources in your school or community?

We partner with schools, parent groups, and youth-serving organizations to bring this research into classrooms and communities where students need it most.