Monday, January 19, 2026

AI Expert Predicted AI Would End Humanity in 2027—Now He’s Changing His Timeline

Daniel Kokotajlo predicted the end of the world would arrive in April 2027. In “AI 2027,” a document outlining the impending impacts of AI published in April 2025, the former OpenAI employee and several peers argued that by April 2027, unchecked AI development would lead to superintelligence and, ultimately, the destruction of humanity. The authors, however, are walking back their predictions. Kokotajlo now forecasts that superintelligence will land in 2034, and he doesn’t know if or when AI will destroy humanity.

In “AI 2027,” Kokotajlo argued that superintelligence would emerge through “fully autonomous coding,” enabling AI systems to drive their own development. The release of ChatGPT in 2022 accelerated predictions around artificial general intelligence, with some forecasting its arrival within years rather than decades.

These predictions attracted widespread attention. Notably, U.S. Vice President JD Vance reportedly read “AI 2027” and later urged Pope Leo XIV, who has called AI a main challenge facing humanity, to provide international leadership to avoid the outcomes the document describes. Others pushed back: Gary Marcus, professor emeritus of psychology and neural science at New York University, dismissed “AI 2027” as a “work of fiction,” calling several of its predictions “pure science fiction mumbo jumbo.”

As researchers and the public alike begin to reckon with “how jagged AI performance is,” AGI timelines are starting to stretch again, according to Malcolm Murray, an AI risk management expert and one of the authors of the “International AI Safety Report.” “For a scenario like ‘AI 2027’ to happen, [AI] would need a lot of more practical skills that are useful in real-world complexities,” Murray said.

Still, developing AI models that can train themselves remains a steady goal for leading AI companies. OpenAI CEO Sam Altman has set internal goals for “a true automated AI researcher by March of 2028.” He is not entirely confident, however, in the company’s ability to develop superintelligence. “We may totally fail at this goal,” he admitted on X, “but given the extraordinary potential impacts we think it is in the public interest to be transparent about this.”

And so, superintelligence may still be possible, but when it arrives and what it will be capable of remains far murkier than “AI 2027” once suggested.

BY LEILA SHERIDAN
