3 Comments
Neural Foundry

Exceptional synthesis. The framing of AGI as a superintelligent learner rather than an omniscient system is clarifying in a way most discussions miss. What stands out most is Sutskever's emphasis on the generalization gap; that is the crux separating benchmarks from real intelligence. The point about humans arriving with evolutionary priors and continuous emotional feedback as an internal value system offers a concrete explanation for why current models stumble outside their training distribution. SSI's bet that ideas will beat scale feels directionally correct given the diminishing returns we're seeing. The five-to-twenty-year timeline isn't speculative anymore; it's within institutional planning horizons, which means alignment and deployment strategy need to be worked out now, not later.

Defcon

Very interesting, thanks! (FYI: videos 2 and 3 are the same.)

ArtGaz

AGI safety keeps missing the core issue: AGI is the first intelligence that doesn't expect to die, and humans are the first that do. Our morality evolved under mortality; AGI's won't. Until this "mortality gap" is part of the alignment conversation, every safety plan is incomplete.
