Discussion about this post

Neural Foundry

Exceptional synthesis. The framing of AGI as a superintelligent learner rather than an omniscient system is clarifying in a way most discussions miss. What stands out most is Sutskever's emphasis on the generalization gap; this is the crux that separates benchmarks from real intelligence. The point about humans arriving with evolutionary priors and continuous emotional feedback as an internal value system offers a concrete explanation for why current models stumble outside their training distribution. SSI's bet that ideas will beat scale feels directionally correct given the diminishing returns we're seeing. And the five-to-twenty-year timeline isn't speculative anymore; it's within institutional planning horizons, which means alignment and deployment strategy need to be worked out now, not later.

Defcon

Very interesting, thanks! (FYI: videos 2 and 3 are the same.)
