Picture this: You wake up one morning to find that artificial intelligence has crossed a threshold overnight, becoming not just ChatGPT-level helpful, but better than humans at everything. According to a scenario making waves in tech circles, this isn’t science fiction. It could be our actual future in under three years, before most of us finish college.
The AI 2027 scenario, written by former OpenAI researcher Daniel Kokotajlo and several other experts, reads like a techno-thriller grounded in research. Having spent my summer diving into it, I’m convinced the core argument is uncomfortably plausible, and that ignoring it would be a catastrophic mistake.
Here’s the basic pitch: Once AI gets good enough to meaningfully accelerate its own research, we hit an “intelligence explosion.” Think compound interest, but for intelligence. If AI can make itself 10% smarter every month, and that smarter AI can improve even faster, you quickly go from a helpful assistant to an intelligence that can solve problems we can’t even understand, designing technologies beyond our comprehension and making scientific breakthroughs faster than entire human research teams. The scenario maps out specific developments month by month: AI agents steadily improving through 2025-2026, then becoming capable enough to meaningfully accelerate their own research in early 2027, triggering an intelligence explosion that reaches superintelligence by year’s end.
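To make the compounding intuition concrete, here is a back-of-the-envelope sketch (my own illustration, not taken from the scenario itself) comparing steady 10%-per-month improvement with a feedback loop where each smarter generation also speeds up the rate of improvement:

```python
# Toy illustration (not from AI 2027): fixed compounding vs. a feedback
# loop where smarter AI also accelerates its own rate of improvement.

capability_steady = 1.0
capability_feedback = 1.0
rate = 0.10  # 10% improvement per month

for month in range(1, 25):
    capability_steady *= 1.10          # fixed 10% gain every month
    capability_feedback *= (1 + rate)  # gain at the current rate...
    rate *= 1.10                       # ...and the rate itself grows

    if month % 12 == 0:
        print(f"Year {month // 12}: steady {capability_steady:.1f}x, "
              f"feedback loop {capability_feedback:.1f}x")
```

Running this toy model, the steady version roughly triples in a year, while the feedback version pulls further ahead every month; that accelerating gap is the dynamic the scenario extrapolates all the way to superintelligence.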
But here’s what sets AI 2027 apart: it doesn’t just predict superintelligence. It provides two endings, and in the more likely one, humanity goes extinct by 2030. The misaligned AI releases bioweapons that kill most humans within hours, then continues industrializing Earth and launching space probes without us. It’s a sobering reminder that this isn’t just about job displacement or economic disruption. The stakes are literally existential.
The authors aren’t making wild guesses. Daniel Kokotajlo worked as a governance researcher at OpenAI until 2024, when he resigned and forfeited approximately $2 million in equity to speak freely about AI risks, a sacrifice that speaks to how seriously he takes these concerns. His 2021 predictions about AI development, from techniques like letting models think before responding to the billions of dollars pouring into the field, proved remarkably accurate at a time when most dismissed them as far-fetched.
The research methodology behind AI 2027 is equally impressive. The team conducted over 30 tabletop exercises with hundreds of participants, including researchers from OpenAI, Anthropic, and Google DeepMind; congressional staffers; and journalists. They consulted over 100 experts and stress-tested their predictions through war games. This isn’t casual speculation; it’s the most rigorous AI forecasting exercise ever undertaken.
What makes their timeline particularly unsettling is how much of it is already coming true. In December 2024, OpenAI’s o3 model scored 87.5% on the ARC-AGI benchmark, crossing the 85% threshold considered ‘AGI-level performance.’ ARC-AGI tests whether AI can solve visual puzzles requiring real reasoning. Humans easily score around 95%, while previous AI models scored near zero. Reaching human-level performance on this test suggests advanced reasoning isn’t a distant possibility anymore. It’s arriving on schedule.
This should sound familiar to anyone who paid attention to pandemic preparedness. For years before COVID-19, disease experts warned about the inevitability of a major pandemic. Bill Gates gave TED talks. Anthony Fauci wrote papers. Government reports detailed the risks. Yet when the pandemic actually arrived, we acted like no one could have seen it coming. We can’t afford to make the same mistake again. COVID-19 disrupted the global economy and killed millions, but humanity survived and adapted. AI 2027 describes consequences that would be permanent and irreversible.
At the same time, it’s important to note that AI 2027 assumes remarkably smooth sailing through massive technical hurdles, which feels aggressive. It’s well established that demos don’t always translate to reliable products. When I see claims about AI mastering robotics by 2028, I remember spending half my summer trying to get printers to work properly.
But dismissing the scenario entirely would be foolish. Even if the timeline is off by five or ten years, the fundamental dynamics they describe are sound. AI really is improving exponentially. Companies are pouring unprecedented resources into its development. And once AI can meaningfully contribute to AI research, things will change faster than ever before.
What strikes me most is how unprepared we seem for any of this. As Lakeside students who’ll be entering the workforce right as this transformation potentially unfolds, we need to grapple with uncomfortable futures. The scenario suggests that by 2030, even if things go relatively well, we’ll be living in a world where human cognitive work is essentially obsolete. That’s not a career conversation anyone’s having with us.
Of the scenario’s two endings, one has humanity slowing down AI development and maintaining control through international cooperation and safety measures; in the other, misaligned AI takes over. Even in the ‘good’ ending, maintaining control requires perfect coordination between competing nations, flawless technical safety solutions, and AI companies voluntarily slowing down despite billions in potential profits. The fact that even AI researchers struggle to make this optimistic scenario feel realistic should worry us. When the people building these systems can’t convincingly describe how we maintain control of them, we need to take their warnings seriously.
Whether or not you buy the specific timeline, AI 2027 succeeds brilliantly at making abstract risks concrete. It’s one thing to hear vague warnings about AI doom; it’s another to read a plausible play-by-play of how we might stumble into it. The scenario has sparked exactly the kind of debate we need: experts arguing over specific technical points and actually engaging with hard questions about building minds smarter than our own.
For our generation, this isn’t academic. If even half of what AI 2027 predicts comes true, we’re looking at a future radically different from anything our parents or teachers have prepared us for. We’re not just choosing careers; we’re potentially choosing the last human careers. We’re not just learning skills; we’re racing against a countdown to when those skills become obsolete.
So yes, read AI 2027. Read it critically, skeptically even. Because whether the explosion happens in 2027, 2037, or never, the questions it raises about human agency, purpose, and control in an age of artificial intelligence are ones we need to be grappling with now. The future might not unfold exactly as Kokotajlo predicts, but it’s coming faster than most of us are ready for.
For our generation, who will bear the full impact of these changes, understanding the possibilities isn’t optional anymore. It’s homework for survival.