This page discusses sextortion, non-consensual intimate imagery, and exploitation of minors. Case descriptions are handled with care and focus on systemic harms.
"She woke up to texts from a classmate. Have you seen what's been posted? She hadn't. But within hours, hundreds of students had."
On October 2, 2023, Elliston Berry — then 14 years old — woke to frantic messages from a friend. Classmates had used a free AI app to generate nude images of Berry and eight other girls at Aledo High School from their social media photos. The images were posted to anonymous Snapchat accounts. For more than eight months, her family could not get Snapchat to remove them — until a U.S. Senator intervened directly with the company. No criminal charges were filed. Berry and her mother Anna McAdams testified before Congress, met with the First Lady, and are widely credited with driving the passage of the TAKE IT DOWN Act, signed into federal law on May 19, 2025. The AI app that started this required no technical skill. It was free. It took seconds.
This is no longer rare. It is happening in schools across every state. And most parents don't know it until it has already spread.
8 months
The number behind this guide
to remove images of a 14-year-old from Snapchat.
Elliston Berry's family couldn't get them down. It took a U.S. Senator. The TAKE IT DOWN Act passed May 19, 2025.
Jump to a section
Interactive Tool
Synthetic Media Lab — Spot the Tells
Most people think they can spot a deepfake. Research shows humans correctly identify high-quality AI-generated images only 24.5% of the time. Explore the 7 visual artifacts trained researchers use — with plain-language explanations and facilitator mode for classroom use.
Open the Synthetic Media Lab →
What the lab covers
- →7 visual artifacts: eyes, hair, skin, teeth, lighting, accessories, background
- →Plain-language explanations of why AI fails at each
- →Real vs. AI-generated comparison panels for each artifact
- →Reliability caveats — where each heuristic breaks down
- →Facilitator mode with technical explanations for educators
It doesn't require skill. It requires a phone and a free app.
Non-consensual intimate imagery (NCII) generated by AI is the use of artificial intelligence to create fake nude or sexual images of real people without their consent.
Three methods — all available today
"Nudify" or "undress" apps
Takes a normal photo of a clothed person and generates a realistic nude version. The user uploads a photo; the app does the rest. No technical knowledge needed. In January 2026, 55 such apps were identified in Google Play and 47 in the Apple App Store, including 31 rated as suitable for minors.
Face-swap tools
Takes a face from one image and places it onto a body from another. Free, open-source tools like DeepFaceLab have been publicly available for years and are documented repeatedly in school-based NCII incidents.
Custom AI model training
Uses 20 or more photos of a specific person to train an AI — then generates that person in any scenario, including sexual ones, in approximately 15 minutes. Requires a consumer gaming computer. No longer limited to technical experts.
Time to create one deepfake NCII image using a nudify app.
Oxford Internet Institute, 2025
Time to train a custom AI model targeting a specific person using 20 photos.
IWF/Oxford, 2025-26
Cost of the most common tools used in documented school incidents.
Tech Transparency Project, 2026
Time Elliston Berry's family waited for Snapchat to remove the images — until a Senator intervened.
Texas Tribune, 2025
Issaquah, WA (2024)
14-year-old student · Issaquah, Washington
A 14-year-old student told police he found a nudify app on TikTok and used it to generate nude images of six female classmates. He shared the images at the lunch table that same day.
47 million views. 17 hours.
Once posted, non-consensual intimate imagery spreads faster than any platform can respond, and dedicated online communities work to preserve and redistribute it.
Views of one Taylor Swift deepfake post before removal from X — within approximately 17 hours.
The Star, January 2024
Increase in AI-generated CSAM videos identified by the Internet Watch Foundation from 2024 to 2025 (13 videos to 3,443).
Increase in AI-related reports to NCMEC CyberTipline from 2023 to 2024.
NCMEC, 2024
AI-related NCMEC CyberTipline reports in the first half of 2025 alone.
NCMEC, 2025
In school contexts, the Issaquah, Westfield, and Aledo cases each show the same pattern: images reach hundreds of students within a single school day of initial posting. Thorn found that 16% of minors who experienced deepfake NCII said it was shared on Snapchat first; Instagram, Messenger, Facebook, and TikTok followed.
Taylor Swift
January 2024
Sexually explicit AI-generated images were shared on X and 4chan. One post reached 47 million views before removal. X briefly blocked all searches for Swift's name. The platform's 'synthetic and manipulated media policy' proved insufficient to stop the spread.
It is not an outlier. It is a pattern — in every state, at every grade level.
of students — approximately 5.97 million — report hearing about NCII of someone at their school.
minors ages 9-17 said they knew peers who used AI to generate nude images of other children.
Thorn Youth Perspectives, 2023
of high school principals reported a deepfake bullying incident at their school — nearly 1 in 4.
RAND, October 2024
of high school students received any information from their school about deepfakes.
CDT, 2024
of teachers said their school provided resources to help victims remove images.
CDT, 2024
of K-12 principals (all levels) reported a deepfake bullying incident at their school.
RAND, 2024
Lancaster Country Day School
Pennsylvania · October 2023 – May 2024
Two juvenile male students created 347 AI-generated pornographic images and videos of 59 minors and one adult — 48 of whom were students at their school. A Safe2Say Something tip triggered discovery in November 2023. However, school officials did not contact police or CPS, citing a loophole in Pennsylvania's mandated reporter statute that exempted child-on-child abuse.
Sixth Ward Middle School
Louisiana · August 2025
An 8th-grade girl reported that AI-generated nude images of her and other girls were circulating on Snapchat. On the school bus that afternoon, boys displayed the images in front of her. She struck one of the boys. The school district expelled the 13-year-old victim for more than 10 weeks and sent her to an alternative school, barring her from extracurricular activities. One perpetrating boy was charged with 10 counts under Louisiana's 2024 AI NCII law.
'Voices of the Strong 44' — Cascade High School
Iowa · March 2025
Deepfake nude images of 44 girls at Cascade High School were created and circulated. Three male students were charged as juveniles. The 44 affected girls issued a joint public statement: 'We are teenage girls who should have been enjoying our last few months of school. Instead, we've been forced to take matters into our own hands and put ourselves out there to fight for the most basic protections and support from our school. We decided to write this statement so our voices can be heard and changes can be made, not just for us, but for all the girls coming after us.'
A gendered crime with specific victims.
Non-consensual intimate imagery is overwhelmingly targeted at women and girls — and LGBTQ+ teens face compounded risk.
of AI-generated CSAM videos reviewed by the IWF in 2025 depicted girls.
IWF, 2026
of deepfake pornography features female subjects without any indication of consent.
LGBTQ+ teens are twice as likely to experience sextortion involving deepfakes compared to non-LGBTQ+ peers.
Thorn, June 2025
children disclosed having had their images manipulated into explicit deepfakes in a single year.
UNICEF/ECPAT/INTERPOL, 2026
Victims report symptoms paralleling PTSD: intrusion, avoidance, hyperarousal, shame, and in some cases inability to attend school. The harm is not abstract — it is clinical and documented. (AI & Society, Springer Nature, 2025; The Lancet Psychiatry, 2025.)
Who creates school-based deepfake NCII
- →In documented school cases: overwhelmingly male students targeting female classmates
- →Images sourced from public social media, school events, and class photos
- →UK Revenge Porn Helpline (2024): over 81% of perpetrators are male
- →In 67% of cases, the perpetrator is a current or former partner (adult cases)
- →In school contexts: predominantly peer-on-peer, not partner-based
For broader patterns of AI-enabled exploitation of minors, see also our guides on Doxxing and Disclosure and AI and Mental Health.
The TAKE IT DOWN Act became federal law on May 19, 2025.
Here is what changed — and what still hasn't.
Federal: TAKE IT DOWN Act (signed May 19, 2025)
- →It is now a federal crime to publish NCII depicting adults (up to 2 years) or minors (up to 3 years).
- →Platforms must establish a notice-and-removal process and remove reported NCII within 48 hours. The platform compliance requirement took effect May 19, 2026.
- →Enforcement: Federal Trade Commission (FTC).
- →House vote: 409-2. Senate vote: unanimous.
- →First federal conviction: James Strahler II, Ohio, April 7, 2026 — guilty plea to cyberstalking, manufacturing AI CSAM, and publishing digital forgeries.
DEFIANCE Act — civil remedy (as of May 2026)
Would create a federal civil cause of action allowing victims to sue for $150,000 in liquidated damages, or $250,000 if linked to sexual assault, stalking, or harassment. The Senate passed this bill unanimously on January 13, 2026. House action was pending at time of publication.
U.S. states with enacted deepfake legislation as of early 2026.
MultiState, 2026
House vote on the TAKE IT DOWN Act — bipartisan near-unanimity.
U.S. House, May 2025
Maximum time platforms have to remove reported NCII under the TAKE IT DOWN Act.
TAKE IT DOWN Act, 2025
What the law cannot do
- →Laws are only as effective as enforcement. In most documented school cases, no criminal charges were filed against juvenile perpetrators — even after the TAKE IT DOWN Act.
- →A 2026 arXiv study found deepfake pornography ecosystems are 'resilient to regulatory and platform shocks' — communities relocate and reorganize rather than ceasing.
- →Platform compliance with the 48-hour removal requirement varies. Whether the 8-month Snapchat failure that trapped Elliston Berry would still occur under current law is an open question.
- →The law does not address the tools themselves — the nudify apps that started these cases remain available.
Immediate legal resources
- → Report to the platform under the TAKE IT DOWN Act process (platforms must respond within 48 hours)
- → Report platform non-compliance: FTC reportfraud.ftc.gov
- → Civil rights attorney referral: cybercivilrights.org (CCRI legal aid roster)
- → Images of minors: NCMEC CyberTipline 1-800-843-5678
Four tiers of real action.
Prevention, preparation, and response — built from every documented school case and the tools that have actually worked.
Create safety before something happens.
- Say this, now, before anything happens: "If anyone sends you — or makes — an image of you without your clothes, come to me immediately. You will not be in trouble. I will help you."
- Shame is the deepfake perpetrator's central tool. The most important thing you can do is remove that weapon in advance.
- In every documented school case, victims' hesitation to disclose was the primary barrier to early intervention, and that hesitation was rooted in fear of how parents would react.
Reduce the attack surface.
- Review your child's social media profile visibility. Public accounts provide unlimited source material for deepfake tools.
- School events are particularly risky: group photos capture identifiable uniforms, locations, and peers.
- Talk about what photos mean in the AI era. A school portrait is a training dataset for someone who wants to harm you. This is the documented mechanism of every school case.
- A single search for 'nudify app' returns working tools in the top results. Awareness is not paranoia.
Know the tools — before you need them.
- NCMEC Take It Down (takeitdown.ncmec.org): Free, anonymous. Creates a hash of the image so participating platforms can block it from being uploaded. For images of minors only.
- StopNCII.org: Same mechanism for adults. Works across Facebook, Instagram, TikTok, Snapchat, Reddit, and others. 90.9% takedown success rate.
- Cyber Civil Rights Initiative (cybercivilrights.org): 24/7 helpline, legal aid roster, state-by-state law guide.
- FTC reportfraud.ftc.gov: Report TAKE IT DOWN Act non-compliance by platforms.
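The hash-matching mechanism behind Take It Down and StopNCII can be sketched in a few lines. This is an illustrative simplification only: the real services use perceptual hashes, so visually similar copies of an image still match, whereas this sketch substitutes an exact cryptographic hash, and the function names here are hypothetical, not part of either service.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # A hash is a short, fixed-length fingerprint of the file's contents;
    # the original image cannot be reconstructed from it.
    return hashlib.sha256(image_bytes).hexdigest()

# The victim's device computes the fingerprint locally and submits only
# the fingerprint -- the image itself never leaves the device.
blocked = {fingerprint(b"<the reported image's bytes>")}

def should_block(upload_bytes: bytes) -> bool:
    # A participating platform checks each new upload's fingerprint
    # against the shared block list.
    return fingerprint(upload_bytes) in blocked

print(should_block(b"<the reported image's bytes>"))  # an exact copy is blocked
print(should_block(b"<some unrelated image>"))        # an unrelated image is not
```

The key design point, which the sketch preserves, is that platforms never receive or store the intimate image, only its fingerprint.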
If it has happened — the first 48 hours.
- Do not delete anything. Screenshots, app names, usernames, and timestamps are evidence.
- Call NCMEC CyberTipline: 1-800-843-5678. For images of minors, this is the first call.
- Report to the school immediately. Request documentation in writing.
- Report to the platform using the TAKE IT DOWN Act mechanism — platforms must respond within 48 hours.
- Connect your child with a counselor immediately — before any conversation about 'what happened' that could retraumatize.
The first 48 hours when you find out.
Speed matters — but retraumatization matters more. Here is how to respond.
Don't delete anything
Screenshots, usernames, app names, timestamps, and any messages are legal evidence. Document everything before reporting.
Call NCMEC first
For images of minors: 1-800-843-5678 or cybertipline.org. NCMEC can coordinate platform takedown through established relationships. This is the first call, not the last.
Use Take It Down / StopNCII
For minors: takeitdown.ncmec.org. For adults: stopncii.org. These services create hashes that block images from being re-uploaded across participating platforms without you or your child submitting the image itself.
Report to school and platform simultaneously
Report to the school principal in writing. Report to the platform using the TAKE IT DOWN Act mechanism (48-hour response required). Request confirmation of both reports in writing.
Connect your child with a counselor first
Before any conversation about 'what happened' — especially a parent-led one — connect your child with a counselor who has trauma-informed training. Asking 'why did you post that?' before therapy can retraumatize. The emotional response comes before the fact-finding.
If crisis: call 988
If your child is in emotional distress or you are concerned about their safety: call or text 988. Free, confidential, 24/7. Parallel to all other steps, not after.
For School Staff
What to do in the first 24 hours when you learn an incident may have occurred
Who to notify and in what order, how to preserve evidence without investigative overreach, trauma-informed response for victims, what not to say to the school community, and the CDT Model Policy for K-12 schools.
Go to the Educator Guide →
Where to go right now.
How we sourced this page
Every statistic on this page comes from peer-reviewed research or authoritative organizations: the Internet Watch Foundation, NCMEC, Thorn, the Center for Democracy and Technology, RAND Corporation, Oxford Internet Institute, UNICEF/ECPAT/INTERPOL, Cyber Civil Rights Initiative, and peer-reviewed journals including The Lancet Psychiatry and AI & Society.
Named cases are drawn from credible news reporting, court filings, and official statements. The Lancaster Country Day case is drawn from court records. The Sixth Ward Louisiana case is drawn from local news reporting and family statements; the lawsuit was pending at time of publication.
The DEFIANCE Act's status is reported as of early 2026: the Senate passed the bill unanimously in January 2026, and House action was pending at the time of publication.
Want CPAI resources in your school or community?
We partner with schools, parent groups, and youth-serving organizations to bring this research into classrooms and communities where students need it most.