The Fluency Trap
Fluency: Evaluation
In 2023, two attorneys submitted six AI-generated case citations to a federal judge. The citations were formatted perfectly: court name, year, volume, page, and quoted text. None of the cases existed. The attorneys couldn't tell. What the case reveals is not lawyer negligence but what fluent text does to accuracy judgments.
Processing fluency, illusory truth, source amnesia, and the citation halo — the cognitive science of why coherent text feels true.
6
The number behind this guide
fabricated citations. Formatted like real ones. Filed with a federal court.
In Mata v. Avianca (2023), the AI generated six plausible-sounding cases. None of them existed. The brief was filed before anyone checked.
Coherence is not correctness.
Processing fluency, the ease with which text is read and understood, activates a cognitive shortcut that evolved to treat ease of processing as evidence of accuracy. In a world where fluent text usually came from competent humans, this was adaptive. AI changed the conditions.
When information is easy to process — grammatically smooth, structurally clear, confident in tone — the brain tags it as more likely to be true. This is processing fluency, documented across decades of cognitive research. It evolved as a heuristic: fluent communication from other humans was usually evidence of knowledge and care. Fluency was a weak but real proxy for accuracy.
Generative AI produces fluent text as its primary output property. It is optimized, through training on vast amounts of human writing, to produce text that is grammatically correct, stylistically consistent, and tonally confident. This optimization is independent of the accuracy of the content. The model that produces a perfectly formatted, confident-sounding sentence about a case that doesn't exist is doing exactly what it was trained to do.
The result is a systematic inversion of the heuristic: the more fluent the AI output, the more it activates credibility judgments — even when fluency is evidence of nothing about accuracy. Users who understand this can partially correct for it. Users who don't are systematically misled by their own cognitive architecture.
Six fake court cases in Mata v. Avianca, all formatted flawlessly
Hassan & Barber: dose-response curves documented for the illusory truth effect
Citation presence increases trust even when citations are fabricated
Source caveats fade faster than the content they qualify (Brashier & Marsh 2020)
Illusory truth. The repetition loop.
Repeated exposure to a claim increases its rated accuracy — even for claims that are implausible. AI generates content at scale and speed. Repetition effects operate at AI scale.
Hasher, Goldstein & Toppino (1977) documented the illusory truth effect: statements rated as merely "plausible" on first reading were rated as more likely true after repeated exposure. The mechanism is processing fluency — familiar things are easier to process, and ease of processing feeds into truth judgments.
Hassan & Barber (2021) measured dose-response curves for the effect: it was measurable after as little as one prior exposure and grew with additional repetitions. Importantly, it held even for implausible claims — prior knowledge partially moderates the effect, but does not eliminate it. The effect does not require that the false claim be plausible.
In an AI context, this means: every time an AI system produces a claim (true or false), it creates a familiarity trace that will make that claim feel slightly more true the next time it's encountered — from any source. AI systems are generating vast amounts of content. The illusory truth effect scales with exposure.
Mata v. Avianca. Hallucinated authority.
Six fictional cases, formatted like real ones, submitted to a federal judge. The attorneys couldn't tell they were fake. Neither could their client.
Roberto Mata v. Avianca, Inc.
Citations that reduce verification.
HCI research from 2023–2025 documents a counterintuitive finding: including reference links in AI outputs can decrease verification behavior, not increase it.
The mechanism is the citation halo: the presence of a citation (real or not) signals that evidence was consulted, triggering a cognitive shortcut that bypasses the step of actually consulting it. Users who see citations feel that the verification work has been done — even when the citations haven't been checked, and even when they can't be verified without significant effort.
This is the AI version of a well-documented general finding: people rate claims as more credible when they include quantitative data or bibliographic references, regardless of whether those references support the claim. AI systems have learned to include plausible-looking citations as a fluency signal — generating the appearance of evidential support faster than users can verify it.
The Rozenblit & Keil effect
Rozenblit and Keil (2002) documented the illusion of explanatory depth: people systematically overestimate how well they understand complex systems. AI explanations amplify this — they produce detailed, step-by-step accounts that feel like genuine understanding. After reading an AI explanation of how something works, users report feeling more confident in their understanding — even when the explanation contained errors. Fluency produces the feeling of comprehension; genuine comprehension requires being able to explain, and verify, the parts yourself.
Source amnesia. Content without context.
Brashier and Marsh (2020) reviewed decades of memory research and identified a consistent finding: people retain content longer than they retain source information. A claim read from a disreputable source is remembered without the source — and over time, is misattributed to credible sources.
In AI use: a plausible-sounding but wrong claim that isn't verified immediately gets stored without the context of its source. Later, the user remembers it as something they know — a fact without an origin. That is how AI-generated misinformation propagates into general knowledge.
Content warnings and model disclaimers ("I may be wrong") are processed in working memory and largely lost before the content reaches long-term storage. The claim persists; the caveat evaporates.
Can you tell what's true?
Five sets of passages — accurate, plausible-but-wrong, and hallucinated. Rate each on credibility before seeing the verdict. Then see how your ratings correlated with accuracy — and which surface features drove the gaps.
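The rater's actual scoring isn't reproduced here, but as a rough illustration of the idea: if each passage carries a ground-truth label and a 1-to-7 credibility rating, the gap between feeling and fact shows up as a weak or even negative rating-accuracy correlation. The labels and ratings below are hypothetical, not data from the exercise.

```python
# Hypothetical illustration of the credibility-rater idea: compare credibility
# ratings against ground truth. Not the rater's actual data or implementation.
from statistics import correlation  # Python 3.10+

# Ground truth for five passages: 1 = accurate, 0 = plausible-but-wrong or hallucinated
accuracy = [1, 0, 0, 1, 0]

# Your 1-7 credibility ratings, given before seeing the verdicts
ratings = [6, 5, 7, 4, 6]

# Pearson correlation between ratings and accuracy (a point-biserial correlation).
# A value near zero, or below it, means the credibility judgments tracked
# surface fluency rather than accuracy.
r = correlation(ratings, accuracy)
print(f"rating-accuracy correlation: {r:.2f}")
```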
Open the credibility rater →
Actions for every level of influence.
For yourself
- Slow down on polished prose. If AI text reads smoothly and sounds confident, that's the fluency pull at work — it's a reason to verify more carefully, not less.
- Verify at least one factual claim per high-stakes AI interaction. Not because AI is always wrong, but because you need to know where it is wrong, and you won't know unless you check.
- Read primary sources, not AI summaries of primary sources. Summaries inherit errors and add new ones; direct reading builds the calibration that catches AI hallucinations.
For professionals
- Treat AI-generated citations as unverified until you have the actual document. Never assume a citation is real because it's formatted correctly — Mata v. Avianca is the canonical counter-example.
- Domain expertise is a partial protection. The Rozenblit & Keil (2002) illusion of explanatory depth is strongest in domains where you think you know more than you do. Know your blind spots.
- Reference links in AI outputs can backfire — the 'citation halo' makes users verify less, not more. The presence of a citation is not confirmation of a source.
For organizations
- Establish source verification as a workflow requirement for AI-generated outputs used in documents, decisions, or publications. Make 'verified against: [source]' a required field (a sketch follows this list).
- Train staff on the citation halo effect: the intuition that a cited source is a real source is wrong in AI contexts. Citing a hallucinated paper with a real-looking citation is easy for AI.
- Design AI prompts to force uncertainty disclosure: 'If you are not confident in a factual claim, say so.' This produces less fluent but more calibrated output.
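A minimal sketch of how the 'verified against' field and the uncertainty-disclosure prompt could be wired into a workflow. The prompt wording, field names, and function are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch only: a required 'verified against' field plus a prompt
# instruction asking the model to disclose uncertainty. All names are hypothetical.

# Instruction text that would be prepended to prompts for factual tasks.
UNCERTAINTY_INSTRUCTION = (
    "If you are not confident in a factual claim, say so explicitly. "
    "Mark any citation you cannot confirm as 'unverified'."
)

def build_verification_record(claim: str, verified_against: str | None) -> dict:
    """Refuse to record an AI-generated claim until a real source has been checked."""
    if not verified_against:
        raise ValueError("Missing 'verified against' source; verify before use.")
    return {"claim": claim, "verified_against": verified_against}

# Usage: the record can only be created after someone has read the actual source.
record = build_verification_record(
    claim="The cited case exists and says what the brief claims it says.",
    verified_against="Court record retrieved and read directly by the reviewer.",
)
print(record)
```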
For educators
- Teach processing fluency as a concept. Students who understand why fluent text feels true are better equipped to resist the effect than students who are told 'check your sources.'
- Use the Mata v. Avianca case: fabricated citations formatted like real ones are a concrete, graspable example of how fluency hijacks accuracy judgment.
- Design assignments that require students to find the primary source, not just cite what the AI said the source said. This builds the verification habit that resists the fluency trap.
For Educators
Teaching AI evaluation and source verification?
Facilitation guide for the credibility rater, media literacy integration notes, and the Mata v. Avianca case as a classroom discussion anchor.
Want CPAI to deliver AI evaluation training to your organization?
We train teams on AI verification workflows, hallucination detection, and critical reading of AI-generated content.