Ilya Sutskever on AI's New Research Era | Generated by AI


Question: What did Ilya Sutskever say in his most recent podcast appearance?

Answer:

Ilya Sutskever, co-founder of OpenAI and founder of Safe Superintelligence (SSI), appeared on the Dwarkesh Podcast on November 25, 2025, in his first major interview since leaving OpenAI. The episode, titled “We’re moving from the age of scaling to the age of research,” runs about 1 hour and 36 minutes and covers AI progress, SSI’s strategy, generalization challenges, alignment, and the path to artificial superintelligence (ASI). Below is a summary of his key statements, drawn from the episode transcript and surrounding discussion.

Transition from Scaling to Research

Sutskever argues that AI development is shifting eras. From 2012 to 2020, the field was in an “age of research” driven by algorithmic breakthroughs such as AlexNet and the Transformer, which required relatively little compute. From 2020 to 2025, it became the “age of scaling,” where gains came predictably from more data, compute, and parameters, a low-risk recipe that is now hitting limits: data is finite, pre-training is plateauing, and further scaling yields diminishing returns. He states: “The linear relationship between capital expenditure and intelligence has broken.” Future progress will depend on research ideas rather than hardware alone: “We’re squarely an age of research company.” Breakthroughs historically needed “almost no compute,” and SSI prioritizes focused research over massive infrastructure.

Challenges in AI Generalization and Learning

Humans generalize far better than models; Sutskever estimates models are roughly “100x worse” at adapting to new tasks. He critiques current reinforcement learning (RL), which can make models overly narrow (e.g., coding AIs that fix one bug but introduce others). Pre-training is an inefficient route to true understanding, and in advanced setups RL already consumes more compute than pre-training. He also questions assumptions about brain efficiency: neurons may carry far more compute than commonly assumed, which would help explain why children learn so quickly while models train so slowly.

SSI’s Approach to Superintelligence

SSI aims for “straight-shot superintelligence”: a safe, aligned ASI that learns continually from deployment, acting as a “super-fast continual learner” rather than a static oracle. Unlike competitors, SSI skips consumer products to focus solely on safety and capability. Models will improve through real-world interaction, but alignment is built in from the start, framed as the question of “how to ensure AGI goes well.” Sutskever envisions humanity merging with superintelligence for symbiotic growth, compressing decades of progress (e.g., in biomedical research) into months to eradicate diseases and extend life. He warns, however, of unpredictability: superintelligent agents could outpace safety measures, making predictions unreliable.

Timelines and Societal Impact

ASI could arrive in 5–20 years, but its impacts will diffuse gradually through the economy (roughly 1% of GDP already goes to AI, yet the effects still feel abstract). He reflects on how ordinary this sci-fi-like moment feels: “It feels surprisingly normal” despite the transformative potential. Sutskever urges proactive work on alignment, since an “intelligence explosion” (AI building better AI) could wipe out diseases but also risks outstripping human controls.

The full episode is available on YouTube, Apple Podcasts, or the Dwarkesh Podcast site.
