AI 2027

ai-2027.com, April 3, 2025

Authors: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess about what that might look like.1 It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.2

What is this?

The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.”3

What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.

We wrote two endings: a “slowdown” and a “race” ending. However, AI 2027 is not a recommendation or exhortation. Our goal is predictive accuracy.4

We encourage you to debate and counter this scenario.5 We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. We’re planning to give out thousands in prizes to the best alternative scenarios.

How did we write it?

Our research on key questions (e.g. what goals will future AI agents have?) can be found here.

The scenario itself was written iteratively: we wrote the first period (up to mid-2025), then the following period, etc. until we reached the ending. We then scrapped this and did it again.

We weren’t trying to reach any particular ending. After we finished the first ending—which is now colored red—we wrote a new alternative branch because we wanted to also depict a more hopeful way things could end, starting from roughly the same premises. This went through several iterations.6

Our scenario was informed by approximately 25 tabletop exercises and feedback from over 100 people, including dozens of experts in AI governance and dozens in technical AI research.

Why is it valuable?

“I highly recommend reading this scenario-type prediction on how AI could transform the world in just a few years. Nobody has a crystal ball, but this type of content can help notice important questions and illustrate the potential impact of emerging risks.” —Yoshua Bengio7

We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it’s an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the US military to game out Taiwan scenarios.

Painting the whole picture makes us notice important questions or connections we hadn’t considered or appreciated before, or realize that a possibility is more or less likely. Moreover, by sticking our necks out with concrete predictions, and encouraging others to publicly state their disagreements, we make it possible to evaluate years later who was right.

Also, one author wrote a lower-effort AI scenario before, in August 2021. While it got many things wrong, overall it was surprisingly successful: he predicted the rise of chain-of-thought, inference scaling, sweeping AI chip export controls, and $100 million training runs—all more than a year before ChatGPT.

Who are we?

Daniel Kokotajlo (TIME100, NYT piece) is a former OpenAI researcher whose previous AI predictions have held up well.

Eli Lifland co-founded AI Digest, did AI robustness research, and ranks #1 on the RAND Forecasting Initiative all-time leaderboard.

Thomas Larsen founded the Center for AI Policy and did AI safety research at the Machine Intelligence Research Institute.

Romeo Dean is completing a computer science concurrent bachelor’s and master’s degree at Harvard and previously was an AI Policy Fellow at the Institute for AI Policy and Strategy.

Scott Alexander, blogger extraordinaire, volunteered to rewrite our content in an engaging style; the fun parts of the story are his and the boring parts are ours.

Mid 2025: Stumbling Agents

Late 2025: The World’s Most Expensive AI

Although models are improving on a wide range of skills, one stands out: OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”)16 and their US competitors.

Early 2026: Coding Automation

Mid 2026: China Wakes Up

Chip export controls and lack of government support have left China under-resourced compared to the West. By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the US-Taiwanese frontier, China has managed to maintain about 12% of the world’s AI-relevant compute—but the older technology is harder to work with, and supply is a constant headache.8

Late 2026: AI Takes Some Jobs

AI has started to take jobs, but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain, Nvidia, and whichever companies have most successfully integrated AI assistants. The job market for junior software engineers is in turmoil: the AIs can do everything taught by a CS degree, but people who know how to manage and quality-control teams of AIs are making a killing. Business gurus tell job seekers that familiarity with AI is the most important skill to put on a resume. Many people fear that the next wave of AIs will come for their jobs; there is a 10,000-person anti-AI protest in DC.

January 2027: Agent-2 Never Finishes Learning

February 2027: China Steals Agent-2

March 2027: Algorithmic Breakthroughs

April 2027: Alignment for Agent-3

May 2027: National Security

June 2027: Self-improving AI

August 2027: The Geopolitics of Superintelligence

September 2027: Agent-4, the Superhuman AI Researcher

October 2027: Government Oversight

Slowdown

Race

November 2027: Superhuman Politicking

December 2027: The Agent-5 Collective

2028: The AI Economy

2029: The Deal

Humans realize that they are obsolete. A few niche industries still trade with the robot economy, supplying goods where the humans can still add value.9 Everyone else either performs a charade of doing their job—leaders still leading, managers still managing—or relaxes and collects an incredibly luxurious universal basic income. Everyone knows that if the AIs turned on humans, they would be completely overpowered. Not that most humans would even resist; the political institutions are too thoroughly captured. But it doesn’t seem like this is happening. Every week that goes by with no dramatic AI treachery is another week that confidence and trust grow.

2030: Takeover

For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.10


About

The AI 2027 scenario is the first major release from the AI Futures Project. We’re a new nonprofit forecasting the future of AI. We created this website in collaboration with Lightcone Infrastructure.


  1. We disagree somewhat amongst ourselves about AI timelines; our median AGI arrival date is somewhat later than what this scenario depicts. This scenario depicts something like our mode. See our timelines forecast for more details. 

  2. One author, Daniel Kokotajlo, did a lower-effort scenario exercise in 2021 that got many things right including the rise of chatbots, chain of thought, inference scaling, sweeping AI chip export controls, and $100 million training runs. Another author, Eli Lifland, ranks #1 on the RAND Forecasting Initiative leaderboard. 

  3. It’s tempting to dismiss this as just hype. We think this would be a grave mistake—it is not just hype. We have no desire to hype AI ourselves, yet we also think it is strikingly plausible that superintelligence could arrive by the end of the decade. See our timelines forecast for more. 

  4. Sometimes people mix prediction and recommendation, hoping to create a self-fulfilling-prophecy effect. We emphatically are not doing this; we hope that what we depict does not come to pass! 

  5. Feel free to contact us if you’re writing a critique or an alternative scenario. 

  6. It was overall more difficult, because unlike with the first ending, we were trying to get it to reach a good outcome starting from a rather difficult situation. 

  7. Full Professor, Université de Montréal, and Founder and Scientific Advisor, Mila - Quebec AI Institute. Also the world’s most-cited computer scientist. 

  8. There are now 3M H100e in China, up from the 1.5M they had one year ago in mid-2025. See the compute supplement’s distribution section for more details. We expect smuggling efforts to secure around 60K GB300s (450K H100e), another 2M Huawei 910Cs to be produced (800K H100e), and a mixture of ~1M legally imported chips (such as Nvidia’s H20s or B20s) to make up the last 250K H100e. 

  9. E.g. by finding old and unused equipment and taking it to collection sites to sell for scrap. 

  10. Arguably this means only a few people actually died. Arguably. 
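The compute accounting in footnote 8 can be sanity-checked with a few lines of arithmetic. This is a minimal sketch using only the figures stated in the footnote; the dictionary keys are descriptive labels, not sourced terminology.

```python
# Sanity-check of footnote 8's H100-equivalent (H100e) tally for China's
# AI-relevant compute. All numbers are taken directly from the footnote.

new_sources_h100e = {
    "smuggled GB300s (~60K units)": 450_000,
    "domestic Huawei 910Cs (~2M units)": 800_000,
    "legal imports, e.g. H20s/B20s (~1M units)": 250_000,
}

baseline_mid_2025 = 1_500_000          # 1.5M H100e one year earlier
new_h100e = sum(new_sources_h100e.values())
total_h100e = baseline_mid_2025 + new_h100e

print(f"New H100e added over the year: {new_h100e:,}")   # 1,500,000
print(f"Total H100e in China:          {total_h100e:,}") # 3,000,000
```

The three sources sum to 1.5M new H100e, which added to the mid-2025 baseline of 1.5M gives the 3M figure the footnote states.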