Center for Practical AI
Educators' Guide · Literacy Risk 2

Teaching Moral Offloading

Moral crumple zones, accountability gaps, and how AI changes the distribution of responsibility in consequential decisions.

Moral offloading is distinct from simply making a mistake. It's the structural change in how responsibility is perceived and assigned when an AI system is involved in a decision. Elish (2019) named the pattern: when an AI-assisted system fails, the human nearest the outcome becomes the "moral crumple zone" — absorbing accountability for a decision they may have had limited authority to override.

This is not primarily an individual psychology problem. It's a structural one. Courts have found it difficult to assign liability to AI developers when professionals misuse their tools. Institutions have used AI recommendations as shields against scrutiny while still holding individual practitioners accountable. Learners need to understand both the psychological pull (AI input diffuses perceived responsibility) and the legal/institutional reality (diffusion is often not recognized).

For applied fields like medicine, law, social work, and hiring — where AI is actively being deployed in consequential decisions — this risk has immediate professional implications.

Learning objectives

1. Explain the 'moral crumple zone' concept and identify examples in professional contexts.
2. Distinguish between formal legal liability and perceived moral responsibility in AI-assisted decisions.
3. Recognize when AI input is appropriately informing a decision vs. inappropriately substituting for one.
4. Articulate a personal accountability position for AI-assisted decisions in their own professional context.

Opening

Describe a decision you made partly based on AI output. If that decision turned out to be wrong, how much responsibility would you feel? Now answer the same question for a decision you made entirely on your own. Is there a difference? Should there be?

On the Mata v. Avianca case

The attorney in this case faced sanctions; the AI developer did not. Is this the right allocation? What would need to be different about the relationship between the attorney and the AI tool for the developer to share liability?

Applied to medicine

A physician under institutional pressure follows an AI discharge recommendation. The patient dies. The hospital administration tracked compliance with AI recommendations but did not track adverse outcomes associated with them. Where does responsibility lie, and who has the standing to investigate it?

Systemic

When many professionals use the same AI system and all follow its recommendations in cases where those recommendations are wrong, how should accountability be understood? Is this different from cases where a single professional makes an individual error?

Responsibility Mapper (class activity, ~30 min)

Project the Responsibility Mapper for the whole class. For each scenario, have students commit to an allocation before the aggregate is revealed, anonymously via a poll tool where possible (a show of hands works, but sacrifices anonymity). Discuss gaps between the students' allocations and how real cases were adjudicated.

Field-specific mapping (~40 min)

Have students identify one consequential decision in their own field that is increasingly being made with AI assistance. Map the four parties (developer, deploying organization, end professional, person affected) for that context. Who currently absorbs risk? Who should? What would change that?

Caspar et al. (2016) replication discussion (~20 min)

Describe the setup and the finding: participants who delivered shocks on another person's instructions reported a reduced sense of agency and felt less responsible than those who acted autonomously. Have students predict: would an AI recommendation produce a similar effect? What's different between following a human's instruction and following an AI recommendation?

Misconception: 'I'm just using the tool — the company that built it is responsible'

Reframe: In most current legal frameworks, professionals who use AI tools in their practice absorb the liability for outcomes. This may eventually change, but it's not the current state. 'The AI recommended it' has not been accepted as a liability shield in most professional malpractice contexts.

Misconception: 'If I override the AI and I'm wrong, that's my fault; if I follow the AI and it's wrong, that's its fault'

Reframe: Courts have not accepted this framing. Professionals are generally held to the standard of an ordinarily prudent practitioner in their field, regardless of whether AI was involved in the decision.

Misconception: 'This is a future problem — AI isn't actually making consequential decisions yet'

Reframe: AI is actively being used in medical discharge decisions, loan approvals, hiring screening, criminal risk scoring, and child welfare investigations — right now. The Mata v. Avianca case happened in 2023.