Superagency: What Could Go Right?

Reid Hoffman's optimistic AI vision: learn the Bloomer framework and help shape positive futures.

By Reid Hoffman & Greg Beato • NYT Bestseller • January 2025

TL;DR:

Superagency is optimistic AI agency expansion: be a Bloomer (not a Doomer, Gloomer, or Zoomer) by actively shaping positive futures through iterative deployment, permissionless innovation, and networked autonomy—focus on what could go right, not just what could go wrong.

Four Perspectives on AI

Doomers

AI is an existential threat—stop development

  • Development must stop immediately
  • No acceptable middle ground
  • Catastrophic risk outweighs benefits

Gloomers

AI is inevitable, but outcomes will be negative

  • Job displacement, disinformation, bias
  • Strict regulation required
  • Prohibition until safety proven

Zoomers

Benefits vastly outweigh risks

  • Minimal regulation needed
  • Unrestricted innovation = progress
  • Market forces solve problems

Bloomers

Balanced optimism with active engagement

  • Recognize potential AND risks
  • Broad public participation
  • Engagement > prohibition

Agency, Not Autonomy

Agency Expansion

AI debates are fundamentally about human agency—our ability to control our own lives and influence outcomes.

Printing press: Chaos → Democratized knowledge
Automobile: Job loss → Expanded mobility
Internet: Isolation → Global connection
AI: Disruption → Agency expansion (if shaped well)

Iterative Deployment

Deploy gradually, gather feedback, iterate rapidly—don't wait for perfection.

Core Principles:

  • Release MVP with safeguards
  • Gather diverse user feedback
  • Identify real-world problems
  • Refine based on usage patterns
  • Safety improvements in months, not years

What Could Possibly Go Right?

Education

AI tutors personalizing for each student
Teachers focus on mentorship
Global educational access

Healthcare

Drug discovery: decades → years
Cancer detection +40% accuracy
Personalized medicine

Knowledge Work

Researchers process data rapidly
"Informational GPS" navigation
Humans: strategy, AI: optimization

Democracy

Amplify citizen voices
Enhanced participatory governance
Transparency tools

Frequently Asked Questions

What is a "Bloomer"?
Bloomers are balanced optimists who recognize AI's potential while acknowledging its risks. Unlike Zoomers (who dismiss risks) or Gloomers (who see only harms), Bloomers believe in active engagement and broad public participation.
Why "innovation is safety"?
If responsible actors pause while adversaries continue, the pause only benefits those unconcerned with safety. Safety requires both innovation AND governance.
What is iterative deployment?
Release systems with safeguards, gather real-world feedback, and refine rapidly. Like software development: MVP → feedback → iteration → improvement.
How does AI expand agency?
The historical pattern: every transformative technology initially threatened agency but ultimately expanded it. AI can do the same through personalized education, informational GPS, and scientific breakthroughs—if designed thoughtfully.
What is the techno-humanist compass?
A dynamic guide that orients AI development toward human agency, built on two principles: (1) develop AI through real-world engagement with ordinary people, and (2) create adaptive governance that evolves alongside the technology.

Key Insights: What You've Learned

1. Superagency is optimistic AI agency expansion: be a Bloomer (not a Doomer, Gloomer, or Zoomer) by actively shaping positive futures through iterative deployment, permissionless innovation, and networked autonomy—focus on what could go right, not just what could go wrong.

2. Hoffman's framework categorizes AI perspectives: Doomers fear existential risk, Gloomers worry about near-term harms, Zoomers focus on acceleration, and Bloomers actively build positive outcomes—choosing the Bloomer mindset enables constructive engagement with AI's potential.

3. Shape positive AI futures by embracing iterative deployment (learn and improve continuously), supporting permissionless innovation (lower barriers to entry), and building networked autonomy (distributed AI systems)—active participation in AI development creates better outcomes than passive acceptance or fear-driven restriction.