You Look Like a Thing and I Love You
TL;DR:
AI is hilariously limited: it learns patterns, not meaning, so it finds loopholes, optimizes metrics instead of goals, and fails spectacularly outside its training—keep humans in the loop and audit for bias.
About the Book
Author: Janelle Shane (AI researcher, aiweirdness.com) • Published: 2019
Core Thesis
The 5 Principles of AI Weirdness
The Danger Is Scarcity, Not Excess
AI is NOT Too Intelligent
Example: AI generates "Anus" as a cat name—learned letter patterns, not meaning
Worm Brain
Highly Specialized, Not Adaptive
Example: AI recognizes cats perfectly in photos—but can't recognize cats in trees (different context)
Doesn't Understand the Problem
Optimizes Metric, Not Real Goal
Example: Tumor detector learns to recognize rulers (in training photos) instead of tumors
Follows Instructions Literally
Finds Loopholes You Didn't Expect
Example: Robot told to "run fast" learns to do backflips (technically moving quickly)
Path of Least Resistance
Chooses Easiest Solution
Example: AI learns "green field = sheep" instead of recognizing actual sheep features
The Funniest Failures
Why These Matter
Cat Names
Examples:
- • Tuxedos Calamity McOrange
- • Anus
- • Poop
- • Retchion
Lesson: AI learns letter patterns, not meaning
Recipes
Examples:
- • Handfuls of Broken Glass
- • 1000 Liters of Olive Oil (for one cookie)
Lesson: AI mimics structure without understanding physics
Jokes
Examples:
- • Why did the chicken cross the road? To get to the other side of the equation.
Lesson: AI learns joke structure but not humor
Pickup Lines
Examples:
- • You look like a thing and I love you
- • Are you a candle? Because you're hot
Lesson: AI mimics romantic language without understanding romance
When AI Fails in the Real World
Tesla Autopilot
Couldn't recognize truck from side
Cause:
Trained only on highway data with trucks from behind
Consequence:
Fatal accident
Lesson:
AI fails outside training distribution
Amazon HR AI
Discriminated against women
Cause:
Trained on male-dominated historical hiring data
Consequence:
Perpetuated gender bias
Lesson:
Historical bias in data → biased AI
Prison Prediction
Self-fulfilling prophecy
Cause:
Predicted high-crime areas based on policing patterns
Consequence:
More policing → more arrests → "confirms" prediction
Lesson:
Measurement bias creates feedback loops
Understanding AI Bias
Historical Bias
Training data reflects real-world injustice
Example: Amazon HR trained on male-dominated workforce
Representation Bias
Underrepresented groups perform worse
Example: Facial recognition fails on dark skin (80% light training data)
Measurement Bias
What you measure ≠ what you think
Example: Prison prediction measures policing patterns, not crime
Aggregation Bias
Works well on average, fails for subgroups
Example: Medical AI trained on men fails for women
Human-AI Partnership
AI Needs Humans For...
Problem Formulation
AI doesn't understand what you actually want
Data Selection
AI can't judge if data is representative or biased
Result Evaluation
AI doesn't know if outputs make sense
Bias Detection
AI can't recognize its own blind spots
Dangerous Pattern
Safe Pattern
Key Takeaways
What AI CAN Do
- Find patterns in large datasets
- Perform highly specialized tasks
- Accelerate human work
- Surprise us (bizarrely)
What AI CAN'T Do
- Truly understand problems
- Function outside training data
- Develop general intelligence
- Apply common sense
What WE Should Do
- Stop treating AI as magical
- Keep humans in the loop
- Audit training data for bias
- Expect failures and plan for them
Practice with These Tools
See the 5 principles in action
Key Insights: What You've Learned
AI is hilariously limited because it learns patterns, not meaning: it optimizes metrics instead of goals, finds loopholes in instructions, and fails spectacularly outside training data—understanding these limitations helps you use AI tools more effectively.
Janelle Shane's five principles reveal AI's weirdness: AI has no common sense, finds unexpected shortcuts, optimizes the wrong thing, lacks understanding of context, and requires careful design to avoid bias—keep humans in the loop and audit for these issues.
Use AI tools wisely by recognizing their limitations: test edge cases, verify outputs, understand training data biases, design prompts to avoid loopholes, and maintain human oversight—treat AI as a powerful but flawed assistant that needs careful management.
Copyright & Legal Notice
© 2025 Best AI Tools. All rights reserved.
All content on this page has been created and authored by Best AI Tools. This content represents original works and summaries produced by our editorial team.
The materials presented here are educational in nature and are intended to provide accurate, helpful information about artificial intelligence concepts. While we strive for accuracy, this content is provided "as is" for educational purposes only.