How to Think About AI

Master Richard Susskind's critical frameworks for AI evaluation. Essential thinking tools for the AI era.

By Richard Susskind • Published 2024 • Critical Thinking Foundation

TL;DR:

Think critically about AI using Susskind's frameworks: shift from "is AI intelligent?" to "what can AI do?", balance process vs. outcome thinking, evaluate across seven risk categories, and plan for multiple futures—move from hype to structured analysis.

Intelligence-to-Capability Shift

Judge AI by what it does (capability), not whether it "thinks" like humans (intelligence).

❌ Traditional Focus:

Focus on consciousness, understanding, human-like reasoning

✅ New Thinking:

Focus on outcomes, performance, practical utility

Process vs. Outcome Thinking

The most powerful framework: distinguish between trusting the process and trusting the outcome.

🔄 Process-Thinking:

  • Trust doctors because they went to medical school
  • Trust judges because they follow proper procedures
  • Doubt AI can replace professionals because its "process is wrong"

🎯 Outcome-Thinking:

  • Want the diagnosis that's correct
  • Want legal research that finds relevant cases
  • AI can deliver outcomes even without a human-like process

Automation, Innovation, Elimination

Three ways AI transforms work—each requires a different strategy (a small code sketch follows the examples).

Automation

AI does existing tasks faster/cheaper

Example: Automated email responses, data entry

Innovation

AI enables new capabilities previously impossible

Example: Personalized learning at scale, drug discovery

Elimination

AI makes entire processes obsolete

Example: Traditional homework, manual translation
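
As a minimal sketch, the three-way classification can be written down as a Python enum; the `Transformation` class and the example mapping are illustrative names invented here, not anything from Susskind's text.

```python
from enum import Enum, auto

class Transformation(Enum):
    """Three ways AI transforms work; each calls for a different strategy."""
    AUTOMATION = auto()   # AI does existing tasks faster/cheaper
    INNOVATION = auto()   # AI enables capabilities previously impossible
    ELIMINATION = auto()  # AI makes the whole process obsolete

# Examples from the text, mapped to their type (illustrative, not exhaustive).
EXAMPLES = {
    "Automated email responses": Transformation.AUTOMATION,
    "Personalized learning at scale": Transformation.INNOVATION,
    "Manual translation": Transformation.ELIMINATION,
}
```

Naming the type explicitly is the point: automation calls for optimizing, innovation for exploring, elimination for transforming.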

Five AI Futures Matrix

Five possible scenarios for AI development—plan for multiple futures, not just one (a data sketch follows the table).

| Future | Likelihood | Timeline | Strategy |
| --- | --- | --- | --- |
| Hype | Low | 3-5 years | Don't over-invest; question assumptions |
| GenAI+ | High | Ongoing | Continuous adaptation; invest strategically |
| AGI | Medium-High | 2030-35 | Plan long-term; develop resilience |
| Superintelligence | Medium | 2040+ | Existential questions matter |
| Singularity | Low-Medium | ??? | Emphasize adaptability |
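
To make "plan for multiple futures" concrete, here is a minimal Python sketch that treats the matrix as data; the `Scenario` class and its field names are assumptions made for illustration, not an official schema.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One row of the Five AI Futures Matrix."""
    name: str
    likelihood: str
    timeline: str
    strategy: str

# The table above as data (illustrative field names).
FUTURES = [
    Scenario("Hype", "Low", "3-5 years", "Don't over-invest; question assumptions"),
    Scenario("GenAI+", "High", "Ongoing", "Continuous adaptation; invest strategically"),
    Scenario("AGI", "Medium-High", "2030-35", "Plan long-term; develop resilience"),
    Scenario("Superintelligence", "Medium", "2040+", "Existential questions matter"),
    Scenario("Singularity", "Low-Medium", "???", "Emphasize adaptability"),
]

# Walk every scenario rather than betting on the one you predict.
for f in FUTURES:
    print(f"{f.name} ({f.likelihood}, {f.timeline}): {f.strategy}")
```

Iterating over every row, rather than picking a favorite, is the matrix's whole argument.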

Seven Risk Categories

Systematic framework for AI risks—move from vague anxiety to structured thinking (a checklist sketch follows the list).

Technical Risks: Errors, hallucinations, brittleness
Security Risks: Adversarial attacks, data breaches
Bias & Fairness: Discriminatory outcomes, representation gaps
Privacy: Data collection, surveillance, consent
Economic: Job displacement, inequality, market concentration
Social: Misinformation, manipulation, trust erosion
Existential: Misalignment, loss of control, catastrophic outcomes
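
A minimal sketch of the taxonomy as code, assuming only the category names and examples above; `RiskCategory` and `risk_checklist` are hypothetical names chosen for illustration. Seeding one note per category forces a review to touch all seven instead of fixating on one.

```python
from enum import Enum

class RiskCategory(Enum):
    """Susskind's seven AI risk categories, with the examples from the text."""
    TECHNICAL = "Errors, hallucinations, brittleness"
    SECURITY = "Adversarial attacks, data breaches"
    BIAS_FAIRNESS = "Discriminatory outcomes, representation gaps"
    PRIVACY = "Data collection, surveillance, consent"
    ECONOMIC = "Job displacement, inequality, market concentration"
    SOCIAL = "Misinformation, manipulation, trust erosion"
    EXISTENTIAL = "Misalignment, loss of control, catastrophic outcomes"

def risk_checklist(tool_name: str) -> dict[str, str]:
    """Seed one entry per category so a review covers all seven, not just one."""
    return {c.name: f"TODO: assess {tool_name!r} for {c.value.lower()}"
            for c in RiskCategory}
```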

AI Tool Evaluation Framework

Apply Susskind's frameworks when evaluating any AI tool; the sketch after the questions list shows one way to record your answers:

✅ Questions to Ask:

  • Capability: What can this tool actually do?
  • Outcome: Does it solve my problem effectively?
  • Process: Can I understand how it works?
  • Type: Is this automation, innovation, or elimination?
  • Risks: Which of the 7 categories apply?
  • Future: How does this fit multiple AI scenarios?
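
One way to make the checklist repeatable is to record an answer per question. This sketch is hypothetical: the `ToolEvaluation` fields and sample values are invented for illustration, not part of Susskind's framework.

```python
from dataclasses import dataclass, field

@dataclass
class ToolEvaluation:
    """One structured pass through the six questions above."""
    tool: str
    capability: str       # What can this tool actually do?
    outcome: str          # Does it solve my problem effectively?
    process: str          # Can I understand how it works?
    transformation: str   # automation, innovation, or elimination
    risks: list[str] = field(default_factory=list)    # which of the 7 categories apply
    futures: list[str] = field(default_factory=list)  # which AI scenarios it fits

# Hypothetical worked example.
example = ToolEvaluation(
    tool="Hypothetical document summarizer",
    capability="Condenses long reports into bullet points",
    outcome="Saves review time, but summaries need spot-checking",
    process="Model is opaque; prompts and outputs are inspectable",
    transformation="automation",
    risks=["TECHNICAL", "PRIVACY"],
    futures=["GenAI+", "AGI"],
)
```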

❌ Avoid These Traps:

  • Asking "Is it intelligent?" instead of "What can it do?"
  • Relying only on process-thinking or only on outcome-thinking
  • Treating all AI as just "automation"
  • Settling for vague anxiety instead of structured risk analysis
  • Predicting one future instead of planning for multiple
  • Evaluating based on hype instead of frameworks

Frequently Asked Questions

What is the intelligence-to-capability shift?
Stop asking "Is AI intelligent?" and start asking "What can AI do?" Judge AI by outcomes and capabilities, not by whether it thinks like humans. This shift is essential for practical AI evaluation.
Why is process vs. outcome thinking so important?
Process-thinking trusts the method (e.g., "doctors went to medical school"). Outcome-thinking trusts the result (e.g., "diagnosis is correct"). AI can deliver outcomes without human-like processes. Evaluate tools from BOTH perspectives.
How do I identify automation vs. innovation vs. elimination?
Automation: AI does existing tasks faster. Innovation: AI enables new capabilities. Elimination: AI makes processes obsolete. Each requires a different strategy—automation optimizes, innovation explores, elimination transforms.
Which AI future should I plan for?
Susskind recommends planning as if AGI is likely (most challenging scenario) while staying responsive to GenAI+ reality (current state). Don't predict one future—prepare for multiple scenarios with adaptive strategies.
How do I systematically evaluate AI risks?
Use the seven categories: Technical, Security, Bias, Privacy, Economic, Social, Existential. Evaluate each AI tool across ALL categories, not just one. This moves from vague anxiety to structured thinking.

Key Insights: What You've Learned

1. Think critically about AI using Susskind's frameworks: shift from "is AI intelligent?" to "what can AI do?", balance process vs. outcome thinking, evaluate across seven risk categories, and plan for multiple futures—move from hype to structured analysis.

2. Apply systematic evaluation by considering capability (what AI can do), risk (what could go wrong), governance (how to manage it), and futures (multiple possible outcomes)—structured thinking prevents both over-optimism and excessive fear.

3. Master AI evaluation by using these frameworks consistently: assess capabilities realistically, identify risks systematically, design governance thoughtfully, and plan for uncertainty—critical thinking about AI enables better decisions, better tools, and better outcomes.