Center for Practical AI
Educator's Guide · Fluency Risk 5

Teaching the Fluency Trap

How processing fluency, confident framing, and surface-level citations create credibility signals that don't track accuracy in AI output.

The Fluency Trap is a specific form of the broader fluency heuristic: we tend to judge clear, confident, well-structured text as more accurate than vague or hesitant text. AI is almost always clear and confident — that's what language models are optimized to produce. This means AI output carries persistent credibility cues regardless of its factual accuracy.

The citation halo is the most dangerous specific form: when AI includes a real journal name alongside a fabricated study and author, the real journal name activates credibility even though the citation itself doesn't exist. Mata v. Avianca demonstrated the professional consequences. The problem isn't that lawyers are uniquely credulous — it's that the fluency of the hallucination overwhelmed the verification habit.

In research, journalism, policy, medicine, and law — any field where cited evidence carries weight — hallucinated citations cause real professional harm. Standard information literacy training wasn't designed for this failure mode.

Learning objectives

1. Explain the processing fluency heuristic and why it fails to reliably track accuracy in AI output.
2. Identify specific credibility cues (specificity, citations, named studies, confident framing) as unreliable signals for AI text.
3. Apply a verification workflow to AI-generated citations and specific claims.
4. Recognize source amnesia risk: using AI-generated content and losing track of where specific claims came from.

Opening

Show two versions of a false claim — one vague ('some research suggests this might be true'), one with specifics ('a 2023 study of 4,700 participants found a 34% increase'). Ask: which seems more credible? Why? Is credibility the right signal to use here?

On hallucinated citations

In Mata v. Avianca, the AI generated citations that looked real because they used real journal names with fake studies. What made this so hard to catch? What would a reliable verification workflow for AI citations look like in your field?

Source amnesia

Brashier & Marsh (2020) documented that people forget the source of information but retain confidence in claims they've encountered. How does this interact with AI use? What happens when you use AI to research something and then write from memory?

The illusion of explanatory depth

Rozenblit & Keil (2002) found that people believe they understand complex systems much better than they actually do. How might AI — which can always generate a confident explanation of anything — make this worse? What's the test for whether you actually understand something?

Credibility Rater (class activity, ~15 min)

Run the Credibility Rater before discussing the Fluency Trap. Have students predict which passage type they'll rate highest. Reveal results. Discuss: did hallucinated passages with named studies and statistics seem more credible than accurate passages? Why?

Citation verification lab (~40 min)

Have students prompt an AI to produce a well-cited response to a research question in their field, then verify every specific claim: does the study exist? Do the statistics match the actual paper? Does the paper say what the AI claims it says? Compile accuracy rates across the class. Catching a hallucination firsthand is high-impact, experiential learning.
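The first question in that checklist (does the study exist at all?) is the one step that can be partially automated. Below is a minimal sketch using the public Crossref works API; the endpoint and the query.bibliographic parameter are real, but the exact-title match is an illustrative assumption, and a hit clears only existence, not whether the paper supports the claim.

    # Minimal sketch of an existence check against the public Crossref
    # works API. Assumes the third-party `requests` library. A match
    # here only confirms a paper with this title exists; the remaining
    # lab questions still require reading the paper itself.
    import requests

    def study_exists(cited_title: str, rows: int = 5) -> bool:
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": cited_title, "rows": rows},
            timeout=10,
        )
        resp.raise_for_status()
        cited = cited_title.strip().lower()
        # Compare the cited title against the top bibliographic matches.
        return any(
            title.strip().lower() == cited
            for item in resp.json()["message"]["items"]
            for title in item.get("title", [])
        )

Because hallucinated citations can be near-misses of real ones, a useful extension is to print the top matches for hand inspection rather than trusting the boolean.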

Fluency manipulation exercise (~20 min)

Take an accurate but vague statement ("exercise is generally good for mental health") and have students rewrite it in AI style, adding invented specifics: named researchers, precise statistics, and journal titles. Share the results. How hard is it to make false information sound credible? What does this tell you about using fluency as a credibility signal?

Source tracking exercise (ongoing)

Have students use AI for a research task and maintain a log of every specific claim they use, with its source tagged (AI-generated, independently verified, unknown). At the end, ask: how many AI-generated claims did you verify? How many did you use as-is? This surfaces source amnesia risk in real time.
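One lightweight way to keep the log machine-countable is a plain list of (claim, tag) pairs. The sketch below is hypothetical (the claims are placeholders), with tags mirroring the exercise.

    # Hypothetical claim log; the entries are placeholders, but the tags
    # mirror the exercise: AI-generated / independently verified / unknown.
    from collections import Counter

    claim_log = [
        ("placeholder claim 1", "AI-generated"),
        ("placeholder claim 2", "independently verified"),
        ("placeholder claim 3", "unknown"),
    ]

    # Answers the end-of-exercise questions at a glance.
    print(Counter(tag for _, tag in claim_log))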

Misconception: 'AI wouldn't make up citations to real journals'

Reframe: AI doesn't 'make up' in the intentional sense. It generates plausible token sequences given the context. If the context includes the beginning of a citation, AI will complete it plausibly — and real journal names are more likely completions than fictional ones. This is precisely what makes citation hallucination hard to detect.
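A toy illustration of that completion dynamic (the prefix and all numbers are invented for the example, not drawn from any real model): rank candidate continuations of a citation prefix by plausibility, and real journal names win.

    # Toy model of citation completion; the probabilities are invented
    # to illustrate the point, not produced by any actual model.
    prefix = "Smith, J. (2021). Exercise and mood. "
    continuation_probs = {
        "Journal of Applied Psychology": 0.22,   # real journal: plausible
        "Psychological Science": 0.18,           # real journal: plausible
        "Journal of Imaginary Results": 0.001,   # implausible string
    }
    # The most plausible continuation is a real journal name, attached
    # to a citation that may not exist.
    print(prefix + max(continuation_probs, key=continuation_probs.get))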

Misconception: 'Specific statistics are more reliable than vague claims'

Reframe: In human expert writing, specificity often does correlate with reliability — it suggests primary source access. In AI output, this correlation is broken. AI can generate arbitrarily specific false statistics with the same fluency as true ones.

Misconception: 'I'd catch a hallucinated citation in my field'

Reframe: Maybe — if you know the field well enough to recognize the study should exist and doesn't. But for cross-disciplinary research, or for papers in adjacent fields, the hallucination looks identical to a real citation. Verification must be a procedural habit, not a skill-dependent judgment.