You Look Like a Thing and I Love You

How AI Works, Why It's Weird, and Why It's Hilarious
Beginner
~1–2h
Janelle Shane

TL;DR:

AI is hilariously limited: it learns patterns, not meaning, so it finds loopholes, optimizes metrics instead of goals, and fails spectacularly outside its training data; keep humans in the loop and audit for bias.

About the Book

Author: Janelle Shane (AI researcher, aiweirdness.com) • Published: 2019

The 5 Principles of AI Weirdness


1

The Danger Is Too Little Intelligence, Not Too Much

AI is NOT Too Intelligent

Example: AI generates "Anus" as a cat name—learned letter patterns, not meaning

2

Worm Brain

Highly Specialized, Not Adaptive


Example: AI recognizes cats perfectly in photos—but can't recognize cats in trees (different context)

3

Doesn't Understand the Problem

Optimizes Metric, Not Real Goal


Example: Tumor detector learns to recognize rulers (in training photos) instead of tumors

4

Follows Instructions Literally

Finds Loopholes You Didn't Expect


Example: Robot told to "run fast" learns to do backflips (technically moving quickly)
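This kind of loophole-finding can be sketched in a few lines. The behaviors and "speed" scores below are invented purely for illustration, not taken from a real robotics benchmark:

```python
# Toy reward hacking: the optimizer scores candidates on a flawed metric
# ("peak body speed") instead of the real goal ("travel forward upright").
# All behaviors and numbers here are made up for illustration.
behaviors = {
    "walk":     {"peak_speed": 1.2, "travels_forward": True},
    "sprint":   {"peak_speed": 3.0, "travels_forward": True},
    "backflip": {"peak_speed": 4.5, "travels_forward": False},  # limbs whip fast!
}

# The optimizer only sees the metric, so it happily picks the loophole.
best = max(behaviors, key=lambda b: behaviors[b]["peak_speed"])
print(best)  # -> backflip
```

The metric's winner is not what we meant, which is exactly why the metric, not the AI, is usually what needs fixing.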

5

Path of Least Resistance

Chooses Easiest Solution

Example: AI learns "green field = sheep" instead of recognizing actual sheep features
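The shortcut can be shown with a deliberately lazy toy classifier. The greenness values and labels below are made up for illustration:

```python
# Toy shortcut learning ("green field = sheep"). Each fake "image" is
# reduced to one number: its average greenness on a 0-1 scale.
train_set = [(0.9, "sheep"), (0.8, "sheep"), (0.2, "no sheep"), (0.1, "no sheep")]

def lazy_classifier(greenness):
    # The easiest rule that fits the training data: green means sheep.
    return "sheep" if greenness > 0.5 else "no sheep"

# Scores perfectly on the training set...
accuracy = sum(lazy_classifier(g) == label for g, label in train_set) / len(train_set)
print(f"training accuracy: {accuracy:.0%}")  # -> 100%

# ...yet an empty green meadow is confidently labelled "sheep".
print(lazy_classifier(0.95))
```

Perfect training accuracy tells you nothing about whether the model learned the feature you cared about.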

The Funniest Failures

Cat Names

Examples:

  • Tuxedos Calamity McOrange
  • Anus
  • Poop
  • Retchion

Lesson: AI learns letter patterns, not meaning
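The lesson can be demonstrated with a toy character-level Markov chain, a far simpler cousin of the neural network Shane actually trained. The training names here are generic placeholders, not her dataset:

```python
import random

# A toy character-level model: it learns only which letter tends to
# follow which, with no notion of what a name "means".
def train(names):
    table = {}
    for name in names:
        padded = "^" + name.lower() + "$"  # ^ marks start, $ marks end
        for a, b in zip(padded, padded[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, rng, max_len=12):
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(table[ch])
        if ch == "$":  # the model "decided" the name is finished
            break
        out.append(ch)
    return "".join(out).capitalize()

table = train(["whiskers", "mittens", "shadow", "smokey", "tiger", "oreo"])
rng = random.Random(0)
print([generate(table, rng) for _ in range(5)])
```

The output looks name-shaped because the letter statistics are right, but nothing stops the model from stitching letters into something absurd or rude; it has no concept of "rude".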

Recipes

Examples:

  • Handfuls of Broken Glass
  • 1000 Liters of Olive Oil (for one cookie)

Lesson: AI mimics structure without understanding physics

Jokes

Examples:

  • Why did the chicken cross the road? To get to the other side of the equation.

Lesson: AI learns joke structure but not humor

Pickup Lines

Examples:

  • You look like a thing and I love you
  • Are you a candle? Because you're hot

Lesson: AI mimics romantic language without understanding romance

When AI Fails in the Real World

Tesla Autopilot

Couldn't recognize a truck crossing from the side

Cause:

Trained on highway data showing trucks from behind, not from the side

Consequence:

Fatal accident

Lesson:

AI fails outside training distribution

Amazon HR AI

Discriminated against women

Cause:

Trained on male-dominated historical hiring data

Consequence:

Perpetuated gender bias

Lesson:

Historical bias in data → biased AI

Predictive Policing

Self-fulfilling prophecy

Cause:

Predicted high-crime areas based on policing patterns

Consequence:

More policing → more arrests → "confirms" prediction

Lesson:

Measurement bias creates feedback loops
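The feedback loop can be simulated in a few lines. The neighborhoods, rates, and patrol numbers are invented purely to show the dynamic:

```python
# Toy simulation of the policing feedback loop. Two neighborhoods have
# the SAME true crime rate; A merely starts with a few more recorded
# arrests. Patrols chase the record, and arrests follow the patrols.
true_crime_rate = [0.10, 0.10]   # identical in both neighborhoods
recorded = [12.0, 10.0]          # A starts slightly ahead on paper

for year in range(1, 6):
    # The "predictor": send most patrols wherever the record says crime is.
    patrols = [80, 20] if recorded[0] > recorded[1] else [20, 80]
    # You only find crime where you look:
    new_arrests = [p * r for p, r in zip(patrols, true_crime_rate)]
    recorded = [c + n for c, n in zip(recorded, new_arrests)]
    print(f"year {year}: A={recorded[0]:.0f} arrests, B={recorded[1]:.0f} arrests")
```

After a few years the arrest record "confirms" that A is the high-crime area, even though the underlying rates never differed.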

Understanding AI Bias

Four Types of AI Bias

Historical Bias

Training data reflects real-world injustice

Example: Amazon HR trained on male-dominated workforce

Representation Bias

Underrepresented groups perform worse

Example: Facial recognition performs worst on darker-skinned faces (training data was roughly 80% lighter-skinned)

Measurement Bias

What you measure ≠ what you think you're measuring

Example: Predictive policing measures policing patterns, not underlying crime

Aggregation Bias

Works well on average, fails for subgroups

Example: Medical AI trained on men fails for women
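How a single average can hide a subgroup failure is easy to show. The accuracy counts below are hypothetical, not from a real medical system:

```python
# Toy aggregation-bias check: one overall accuracy number masks a
# coin-flip performance on a subgroup. Rows are (group, prediction correct?).
results = ([("men", True)] * 90 + [("men", False)] * 10
           + [("women", True)] * 50 + [("women", False)] * 50)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(results)
women_only = accuracy([r for r in results if r[0] == "women"])
print(f"overall: {overall:.0%}, women: {women_only:.0%}")  # -> overall: 70%, women: 50%
```

A 70% headline number looks respectable while the model is no better than chance for half the population, which is why evaluation should always be broken down by subgroup.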

Human-AI Partnership


AI Needs Humans For...

Problem Formulation

AI doesn't understand what you actually want

Data Selection

AI can't judge if data is representative or biased

Result Evaluation

AI doesn't know if outputs make sense

Bias Detection

AI can't recognize its own blind spots

Key Takeaways

What AI CAN Do

  • Find patterns in large datasets
  • Perform highly specialized tasks
  • Accelerate human work
  • Surprise us (bizarrely)

What AI CAN'T Do

  • Truly understand problems
  • Function outside training data
  • Develop general intelligence
  • Apply common sense

What WE Should Do

  • Stop treating AI as magical
  • Keep humans in the loop
  • Audit training data for bias
  • Expect failures and plan for them

Key Insights: What You've Learned

1

AI is hilariously limited because it learns patterns, not meaning: it optimizes metrics instead of goals, finds loopholes in instructions, and fails spectacularly outside training data—understanding these limitations helps you use AI tools more effectively.

2

Janelle Shane's five principles reveal AI's weirdness: the danger is too little intelligence rather than too much, it is highly specialized rather than adaptive (a worm-brain level of flexibility), it optimizes the metric instead of the real goal, it follows instructions literally and finds loopholes, and it takes the path of least resistance. Keep humans in the loop and audit for these failure modes.

3

Use AI tools wisely by recognizing their limitations: test edge cases, verify outputs, understand training data biases, design prompts to avoid loopholes, and maintain human oversight—treat AI as a powerful but flawed assistant that needs careful management.