The Coming Wave

Technology, Power, and the 21st Century's Greatest Dilemma
Intermediate
~5–6h
Mustafa Suleyman

TL;DR:

The coming wave of AI and synthetic biology is uniquely dangerous due to four features: asymmetry, hyper-evolution, omni-use, and autonomy. Containment requires four pillars (infrastructure, knowledge, coordination, and institutions) and means walking a narrow path between chaos and authoritarianism.

About the Book

Author: Mustafa Suleyman (co-founder of DeepMind and Inflection AI; CEO of Microsoft AI) • Published: 2023 • Length: ~320 pages

Four Features of the Coming Wave

1. Asymmetry: Small Actors, Powerful Technologies

Example: A lone individual could engineer a pandemic. Training a powerful deep-learning model costs $100K instead of $100M, and CRISPR kits cost thousands of dollars instead of millions.

2. Hyper-Evolution: Exponential Improvement

Example: ChatGPT reached 1M users in five days and 100M in two months. Capabilities expand monthly, while regulation takes years.

3. Omni-Use: The Dual-Use Problem

Example: Facial recognition helps find lost children AND enables totalitarian surveillance. Same technology, opposite outcomes.

4. Autonomy: Loss of Human Control

Example: Autonomous weapons select targets in milliseconds; no human could decide in time. Who is responsible if one targets civilians?

The Central Dilemma

The Narrow Path

We must navigate between two unacceptable extremes:

Uncontrolled Proliferation

  • Chaos and catastrophe
  • Engineered pandemics
  • Loss of human autonomy
  • Catastrophic accidents

Authoritarian Control

  • Surveillance dystopia
  • Loss of freedom
  • Stifled innovation
  • Totalitarian control

Four-Pillar Containment Framework

1. Auditing & Red-Teaming

Independent testing before deployment

How It Works:

Mandatory audits for large models. "Red teams" attempt to break systems. Results disclosed to regulators.

Goal:

Make risks visible before deployment. Build "trust but verify" culture.

Challenge:

Keeping up with rapid innovation. Defining what constitutes "safe".

2. Licensing & Compliance

Regulate AI like aviation or finance

How It Works:

Licenses required for models above a compute threshold. Mandatory documentation. Regular compliance audits.

Goal:

Prevent irresponsible actors. Create accountability. Ensure safety standards.

Challenge:

Could stifle innovation if too restrictive. International coordination needed.

3. Institutional Oversight

National and international bodies with authority

How It Works:

AI safety boards (like the FDA). Authority to review, pause, or reject deployments. International coordination.

Goal:

Independent oversight beyond corporate interests. Quick response to risks.

Challenge:

Avoiding regulatory capture. Maintaining expertise and independence.

4. International Treaties

Binding agreements on highest-risk AI

How It Works:

Ban autonomous weapons. Restrict biological engineering AI. Verification mechanisms. Enforcement for violations.

Goal:

Break arms-race logic. Establish red lines. Create enforcement mechanisms.

Challenge:

Getting all nations to agree. Verification. Ensuring compliance.

Critical Risks

  • AI-Accelerated Bioengineering (Catastrophic): AI designs novel pathogens; the barrier to entry drops dramatically.
  • Surveillance AI (High): Enables totalitarian control through perfect tracking and prediction.
  • Autonomous Weapons (High): AI making kill decisions; loss of human oversight.
  • Systemic Instability (High): Cascading failures from interconnected AI systems.

Key Takeaways

The Problem

  • Powerful dual-use technologies proliferating
  • Proliferation cannot be stopped—only managed
  • Current governance inadequate
  • Without action, catastrophic outcomes likely

The Solution

  • Four-pillar containment framework
  • International cooperation breaking arms races
  • Preserve innovation while managing risk
  • Find narrow path between chaos and control

Key Insights: What You've Learned

1. The coming wave of AI and synthetic biology is uniquely dangerous due to four features: asymmetry (small groups can cause massive harm), hyper-evolution (rapid capability improvement), omni-use (dual-use technologies), and autonomy (systems operating independently). Understanding these features is crucial for responsible AI development and use.

2. Containment requires four pillars: infrastructure (technical controls and monitoring), knowledge (understanding capabilities and risks), coordination (international cooperation), and institutions (governance frameworks). Successful containment steers between chaos (uncontrolled proliferation) and authoritarianism (over-restrictive control).

3. Navigate the coming wave by applying Suleyman's frameworks: recognize the unique risks of AI and synthetic biology, support containment efforts through responsible development and use, and actively participate in shaping positive outcomes rather than passively accepting whatever comes. The future depends on choices made today.