What his rare interview with Dwarkesh Patel reveals about the future of AI, the end of scaling, Safe Superintelligence Inc., and why humans still win at learning
Thought-provoking summary. What stayed with me is that better generalization won’t come from scale alone. Human learning is guided by emotion, value, and relevance — not just raw capacity. Without attention to that orienting layer, there’s a real risk of cultural flattening, where systems become efficient while human distinctiveness quietly thins out.
AGI safety keeps missing the core issue: AGI is the first intelligence that doesn’t expect to die. Humans are the first that do. Our morality evolved under mortality; AGI’s won’t. Until the “mortality gap” is in the alignment conversation, every safety plan is incomplete.
Very interesting, thanks! (FYI: videos 2 and 3 are the same.)