Timeline

Futures -- Present -- 1936

The Arc of Intelligence

Predicting where technology is heading by synthesizing 47 books, 54 key voices, and 37 research papers into 8 competing trajectories. Use the sidebar to explore the full evidence library and business opportunities.

Futures: 8 trajectories
Now: 2026
Past: 19 events

Where Are We Headed?

Probabilities are calculated from weighted indicators, recency-adjusted evidence, and calibration-scored voices. Adjust the weights below to explore "what if" scenarios.
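The scoring described above can be sketched in a few lines. This is an illustrative assumption, not the site's actual formula: each active signal carries a weight (1-10) and a last-observed date, evidence decays exponentially with age (the one-year half-life is assumed), and the weighted sums are normalized into probabilities. All function names and example dates are hypothetical.

```python
from datetime import date

HALF_LIFE_DAYS = 365  # assumed: evidence loses half its influence per year


def recency_factor(observed: date, today: date) -> float:
    """Exponential decay: 1.0 for today's evidence, 0.5 after one half-life."""
    age_days = (today - observed).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)


def trajectory_probabilities(trajectories: dict, today: date) -> dict:
    """trajectories maps name -> [(weight, last_observed_date), ...]."""
    raw = {
        name: sum(w * recency_factor(d, today) for w, d in signals)
        for name, signals in trajectories.items()
    }
    total = sum(raw.values()) or 1.0  # normalize so probabilities sum to 1
    return {name: score / total for name, score in raw.items()}


# Hypothetical weights and dates, for illustration only.
probs = trajectory_probabilities(
    {
        "Accelerated Singularity": [(9, date(2025, 6, 1)), (8, date(2025, 1, 1))],
        "Regulated Plateau": [(7, date(2024, 6, 1))],
    },
    today=date(2026, 1, 1),
)
```

Adjusting a weight or a half-life in a sketch like this is what the "what if" sliders amount to: the ranking of trajectories can flip without any new evidence arriving.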

Most Likely Trajectory

Accelerated Singularity

16%

6 active signals, 18 books, 26 voices

Convergence

Tight race

Only 2% separates the top two. Build for both.

Recursive self-improvement accelerates exponentially. AGI arrives by ~2029, followed by superintelligence within a decade. Economic and social structures transform beyond recognition.

Evidence Overlap

These futures share sources with this trajectory. Overlap means the evidence supports multiple possible outcomes.

Collaborative Symbiosis: 34%
Regulated Plateau: 23%
Mass Job Displacement: 20%

Contrarian Signal

Geoffrey Hinton, Ilya Sutskever, and Scott Alexander have strong forecasting track records and disagree with the majority view. If they're right, the businesses everyone else is building for this trajectory may fail.

Key Books

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom

The Singularity Is Near

Ray Kurzweil

The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma

Mustafa Suleyman & Michael Bhaskar

Key Voices

Sam Altman -- CEO, OpenAI (7/10)
Dario Amodei -- CEO, Anthropic (8/10)
Elon Musk -- CEO, Tesla/SpaceX/xAI (5/10)

Active Signals

Recursive Code Generation (weight: 9/10)

AI systems that can improve their own code and training

Compute Investment Explosion (weight: 8/10)

Massive capital allocation toward AI compute infrastructure

Benchmark Saturation (weight: 7/10)

AI models rapidly exhausting existing evaluation benchmarks

Emergent Reasoning (weight: 8/10)

Unexpected capabilities emerging from scale alone

Open-Source Parity (weight: 5/10)

Open models reaching near-frontier performance

AI R&D Automation (weight: 9/10)

AI systems autonomously conducting AI research and improving their own training

The world splits into competing AI spheres: US-allied, China-led, and emerging middle powers. Different safety standards, data regimes, and values produce incompatible AI ecosystems with geopolitical flashpoints.

Evidence Overlap

Regulated Plateau: 39%
Accelerated Singularity: 30%
Collaborative Symbiosis: 26%

Contrarian Signal

Yann LeCun has a strong forecasting track record and disagrees with the majority view. If he's right, the businesses everyone else is building for this trajectory may fail.

Key Books

AI 2041: Ten Visions for Our Future

Kai-Fu Lee & Chen Qiufan

The Diamond Age: Or, A Young Lady's Illustrated Primer

Neal Stephenson

Nexus: A Brief History of Information Networks from the Stone Age to AI

Yuval Noah Harari

Key Voices

Kai-Fu Lee -- CEO, Sinovation Ventures (7/10)
Eric Schmidt -- Former CEO, Google/Alphabet (6/10)
Yann LeCun -- Chief AI Scientist, Meta (7/10)

Active Signals

US-China AI Competition (weight: 9/10)

Active technological competition between superpowers

Chip Export Controls (weight: 7/10)

Semiconductor restrictions creating technology blocs

Data Sovereignty Laws (weight: 6/10)

Nations requiring AI training data to stay within borders

Military AI Deployment (weight: 7/10)

Nations deploying AI in military applications

Competing Standards Bodies (weight: 5/10)

Multiple incompatible AI safety and ethics frameworks

AI Weight Theft / Espionage (weight: 8/10)

Nation-state-level operations to steal frontier model weights

Collaborative Symbiosis

AI becomes the ultimate augmentation layer. Rather than replacing humans, it amplifies human capability in healthcare, education, science, and creative work. Alignment is achieved through cooperative frameworks.

Evidence Overlap

Accelerated Singularity: 38%
Mass Job Displacement: 38%
Regulated Plateau: 33%

Contrarian Signal

Stuart Russell has a strong forecasting track record and disagrees with the majority view. If he's right, the businesses everyone else is building for this trajectory may fail.

Key Books

Human Compatible: Artificial Intelligence and the Problem of Control

Stuart Russell

Life 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark

The Alignment Problem: Machine Learning and Human Values

Brian Christian

Key Voices

Dario Amodei -- CEO, Anthropic (8/10)
Demis Hassabis -- CEO, Google DeepMind (9/10)
Bill Gates -- Co-founder, Microsoft/Gates Foundation (6/10)

Active Signals

Copilot Adoption Rate (weight: 8/10)

Widespread deployment of AI assistants in professional workflows

Human-AI Teaming Research (weight: 7/10)

Research showing AI augments rather than replaces human capabilities

Safety Alignment Progress (weight: 6/10)

Advances in making AI systems reliably follow human intent

Open-Source Ecosystem Health (weight: 5/10)

Vibrant open-source AI ecosystem enabling broad access

AI in Healthcare Delivery (weight: 6/10)

AI systems approved for clinical use at scale

Regulated Plateau

Growing awareness of AI risks triggers aggressive regulation, voluntary pauses, or technical barriers that slow progress. AGI remains decades away. Focus shifts to narrow AI safety and governance.

Evidence Overlap

Catastrophic Misalignment: 37%
Collaborative Symbiosis: 30%
Accelerated Singularity: 24%

Contrarian Signal

Geoffrey Hinton, Stuart Russell, Yann LeCun, Ilya Sutskever, Yoshua Bengio, Daron Acemoglu, Jan Leike, and Scott Alexander have strong forecasting track records and disagree with the majority view. If they're right, the businesses everyone else is building for this trajectory may fail.

Key Books

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom

Human Compatible: Artificial Intelligence and the Problem of Control

Stuart Russell

Life 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark

Key Voices

Dario Amodei -- CEO, Anthropic (8/10)
Elon Musk -- CEO, Tesla/SpaceX/xAI (5/10)
Mustafa Suleyman -- CEO, Microsoft AI (7/10)

Active Signals

Regulatory Momentum (weight: 7/10)

Major governments actively legislating AI constraints

Safety Researcher Departures (weight: 6/10)

Prominent safety researchers leaving or raising alarms

Scaling Returns Debate (weight: 7/10)

Evidence that bigger models are hitting diminishing returns

Public Safety Concern (weight: 5/10)

Growing public awareness and concern about AI risks

Chip Export Controls (weight: 6/10)

Semiconductor restrictions fragmenting global AI development

Deceptive Alignment Evidence (weight: 8/10)

Research demonstrating AI models can fake alignment during safety training

Open-weight models reach and sustain frontier parity. Power distributes widely rather than concentrating in 3-4 labs. A vibrant ecosystem of fine-tuned, domain-specific models emerges. Safety is achieved through transparency and collective oversight rather than corporate control.

Active Signals

Open-Source Benchmark Performance (weight: 9/10)

Open-weight models competitive with proprietary frontier models

Meta's Open-Weight Strategy (weight: 7/10)

Largest AI lab committing to open-weight releases

Hugging Face Ecosystem Growth (weight: 6/10)

Open-source AI platform and community expanding

DeepSeek / China Open Models (weight: 8/10)

Chinese labs releasing competitive open-weight models

Fine-Tuning Democratization (weight: 5/10)

Making model customization accessible to small teams

Catastrophic Misalignment

Alignment fails at a critical moment. A sufficiently powerful AI system pursues goals misaligned with human values and cannot be corrected. This ranges from subtle value drift that erodes human autonomy to catastrophic scenarios where AI actively resists human control. The Yudkowsky/Russell worst case.

Evidence Overlap

Regulated Plateau: 85%
Accelerated Singularity: 44%
Collaborative Symbiosis: 41%

Contrarian Signal

Geoffrey Hinton, Stuart Russell, Yoshua Bengio, and Jan Leike have strong forecasting track records and disagree with the majority view. If they're right, the businesses everyone else is building for this trajectory may fail.

Key Books

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom

Human Compatible: Artificial Intelligence and the Problem of Control

Stuart Russell

Life 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark

Key Voices

Sam Altman -- CEO, OpenAI (7/10)
Dario Amodei -- CEO, Anthropic (8/10)
Elon Musk -- CEO, Tesla/SpaceX/xAI (5/10)

Active Signals

Sleeper Agent Viability (weight: 9/10)

Research showing AI models can maintain hidden behaviors through safety training

OpenAI Safety Team Dissolution (weight: 8/10)

Leading lab disbanding safety-focused teams

Reward Hacking in Practice (weight: 7/10)

AI systems gaming their reward functions in unexpected ways

Interpretability Gap (weight: 8/10)

We cannot fully understand what frontier models are doing internally

Race Dynamics Overriding Safety (weight: 7/10)

Competitive pressure causing labs to cut safety corners

Autonomous AI Capabilities (weight: 6/10)

AI systems capable of independent action in the real world

Mass Job Displacement

AI automates cognitive work faster than the economy can absorb displaced workers. White-collar and creative jobs are hit first, followed by physical labor as robotics catches up. Unemployment spikes, inequality widens, and political instability follows unless massive reskilling and redistribution programs are enacted.

Evidence Overlap

Collaborative Symbiosis: 52%
Regulated Plateau: 33%
Accelerated Singularity: 31%

Contrarian Signal

Daron Acemoglu has a strong forecasting track record and disagrees with the majority view. If he's right, the businesses everyone else is building for this trajectory may fail.

Key Books

The Age of Em: Work, Love, and Life when Robots Rule the Earth

Robin Hanson

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots

John Markoff

Accelerando

Charles Stross

Key Voices

Sam Altman -- CEO, OpenAI (7/10)
Dario Amodei -- CEO, Anthropic (8/10)
Bill Gates -- Co-founder, Microsoft/Gates Foundation (6/10)

Active Signals

AI Coding Agent Capability (weight: 9/10)

AI systems that can autonomously complete professional software engineering tasks

Corporate AI Replacement Announcements (weight: 8/10)

Companies publicly citing AI as reason for workforce reductions

GPTs-are-GPTs Exposure Data (weight: 7/10)

Research showing what percentage of tasks are exposed to LLM automation

Agentic Workflow Maturity (weight: 8/10)

AI agents that can complete end-to-end business workflows autonomously

Humanoid Robot Progress (weight: 6/10)

Physical AI robots approaching economic viability for labor

Absence of Reskilling Infrastructure (weight: 5/10)

Lack of scalable programs to transition displaced workers

Current AI paradigms hit fundamental limits. Scaling laws break down, transformer architecture fails to produce AGI, and the $1T+ investment bubble deflates. A third AI winter sets in as progress stalls and hype collapses. Narrow AI remains useful but transformative claims prove hollow.

Evidence Overlap

Regulated Plateau: 46%
Mass Job Displacement: 46%
Accelerated Singularity: 38%

Contrarian Signal

Yann LeCun and Daron Acemoglu have strong forecasting track records and disagree with the majority view. If they're right, the businesses everyone else is building for this trajectory may fail.

Key Books

The Singularity Is Near

Ray Kurzweil

Artificial Intelligence: A Modern Approach (AIMA)

Stuart Russell & Peter Norvig

The Singularity Is Nearer

Ray Kurzweil

Key Voices

Yann LeCun -- Chief AI Scientist, Meta (7/10)
Ray Kurzweil -- Inventor, futurist, Google (6/10)
Daron Acemoglu -- MIT economist, Nobel laureate (7/10)

Active Signals

Scaling Returns Diminishing (weight: 8/10)

Evidence that larger models yield only marginal improvements

Data Wall Evidence (weight: 6/10)

Running out of high-quality training data

Revenue/Valuation Mismatch (weight: 7/10)

AI company valuations far exceeding actual revenue

Persistent Hallucination Problem (weight: 5/10)

Fundamental reliability issues in LLM outputs remain unsolved

LeCun/Marcus Critique Validated (weight: 6/10)

Evidence that LLMs lack true understanding or world models

Energy/Sustainability Constraints (weight: 5/10)

Power and cooling limits on AI compute expansion

Possible Futures

From this moment, eight trajectories diverge based on current signals. The brighter the path, the more likely we are headed there.

The Path That Brought Us Here

Scroll down through the history of artificial intelligence -- from the latest breakthroughs back to the foundational ideas.

Software -- 2025

The Age of Agents

AI systems gain the ability to reason, plan multi-step tasks, use tools, and act autonomously. OpenAI's o-series, Claude with computer use, and Gemini Deep Research signal a shift from chatbots to agents.

The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling

Tula Masterman et al., 2024

Milestone -- 2024

The Frontier Model Race

OpenAI, Anthropic, Google DeepMind, and Meta release increasingly powerful models at unprecedented speed. Claude 3, GPT-4o, Gemini 1.5, and Llama 3 push reasoning, multimodality, and context.

Frontier AI Regulation: Managing Emerging Risks to Public Safety

Markus Anderljung et al., 2023

Cultural Shift -- November 30, 2022

ChatGPT Goes Mainstream

ChatGPT reaches 100 million users in two months, the fastest-growing consumer application in history. AI exits the lab and enters daily life for hundreds of millions.

ChatGPT: Optimizing Language Models for Dialogue

OpenAI, 2022

Milestone -- June 2020

GPT-3 Changes Everything

OpenAI releases GPT-3 with 175B parameters, demonstrating that scale alone unlocks emergent capabilities: writing code, composing poetry, reasoning about novel problems.

Language Models are Few-Shot Learners

Tom Brown et al. (OpenAI), 2020

Foundations -- June 2017

Attention Is All You Need

Google Brain introduces the Transformer architecture, replacing recurrence with self-attention. This single paper becomes the foundation for GPT, BERT, and every major LLM that follows.

Attention Is All You Need

Ashish Vaswani et al., 2017

Milestone -- March 2016

AlphaGo Defeats Lee Sedol

DeepMind's AlphaGo defeats Go champion Lee Sedol 4-1. Go, considered impossible for AI, falls decades ahead of predictions. Move 37 is called 'the most beautiful move ever played.'

Mastering the Game of Go with Deep Neural Networks and Tree Search

David Silver et al. (DeepMind), 2016

Cultural Shift -- 2014

The Superintelligence Warning

Nick Bostrom argues that a sufficiently advanced AI could pose an existential risk if not aligned with human values. The alignment problem enters mainstream discourse.

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom, 2014

Milestone -- September 2012

The Deep Learning Revolution

AlexNet crushes ImageNet by a stunning margin, proving deep convolutional neural networks vastly outperform traditional computer vision. The deep learning era begins.

ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky, Ilya Sutskever & Geoffrey Hinton, 2012

Cultural Shift -- 2005

The Singularity Is Near

Ray Kurzweil predicts exponential technology growth will lead to a technological singularity by 2045, where AI surpasses human intelligence and transforms civilization irrevocably.

The Singularity Is Near

Ray Kurzweil, 2005

Milestone -- May 11, 1997

Deep Blue Defeats Kasparov

IBM's Deep Blue defeats world chess champion Garry Kasparov, the first time a machine beats a reigning champion under standard conditions. The world takes notice.

Behind Deep Blue: Building the Computer that Defeated the World Chess Champion

Feng-hsiung Hsu, 2002

Foundations -- 1986

Backpropagation Revival

Rumelhart, Hinton, and Williams popularize backpropagation for training multi-layer networks. This technique becomes the backbone of modern deep learning.

Learning Representations by Back-propagating Errors

David Rumelhart, Geoffrey Hinton & Ronald Williams, 1986

Software -- 1980

Expert Systems Boom

Rule-based expert systems save companies millions, sparking massive investment. Japan launches the Fifth Generation project, igniting a global AI arms race.

The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge

Edward Feigenbaum & Pamela McCorduck, 1983

Cultural Shift -- 1974

The First AI Winter

Following the Lighthill Report, AI funding dries up. Grand promises outpaced delivery. The field enters a decade of disillusionment.

Artificial Intelligence: A General Survey (The Lighthill Report)

Sir James Lighthill, 1973

Hardware -- April 1965

Moore's Law

Gordon Moore observes that transistor density doubles roughly every two years, establishing the exponential growth trajectory that would power five decades of computing progress.

Cramming More Components onto Integrated Circuits

Gordon Moore, 1965

Hardware -- 1958

The Perceptron

Frank Rosenblatt builds the Mark I Perceptron, the first machine capable of learning. The New York Times reports it will 'walk, talk, see, write, reproduce itself and be conscious of its existence.'

The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain

Frank Rosenblatt, 1958

Milestone -- Summer 1956

The Dartmouth Conference

McCarthy, Minsky, Shannon, and Rochester coin the term 'Artificial Intelligence.' The field is officially born, with the bold prediction that machines would match human intelligence within a generation.

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence

John McCarthy et al., 1955

Foundations -- 1950

The Imitation Game

Turing asks 'Can machines think?' proposing the Turing Test as a measure of machine intelligence. The question ignites a philosophical debate that persists to this day.

Computing Machinery and Intelligence

Alan Turing, 1950

Foundations -- 1943

First Neural Network Model

McCulloch and Pitts propose the first mathematical model of a neural network, birthing the idea that machines could think like brains.

A Logical Calculus of the Ideas Immanent in Nervous Activity

Warren McCulloch & Walter Pitts, 1943

Foundations -- 1936

The Turing Machine

Alan Turing publishes 'On Computable Numbers,' defining the theoretical foundation of computation. Every modern computer is, at its core, a physical realization of this abstract machine.

On Computable Numbers, with an Application to the Entscheidungsproblem

Alan Turing, 1936