3 Comments
Lucy Ryder

Thought-provoking summary. What stayed with me is that better generalization won’t come from scale alone. Human learning is guided by emotion, value, and relevance — not just raw capacity. Without attention to that orienting layer, there’s a real risk of cultural flattening, where systems become efficient while human distinctiveness quietly thins out.

Defcon

Very interesting, thanks! (FYI: videos 2 and 3 are the same.)

ArtGaz

AGI safety keeps missing the core issue: AGI is the first intelligence that doesn’t expect to die, and humans are the first that do. Our morality evolved under mortality; AGI’s won’t. Until this “mortality gap” is part of the alignment conversation, every safety plan is incomplete.