Demis Hassabis named his AGI year. Here are the 10 things every founder needs to do before 2030.
The Nobel Prize winner who built AlphaGo and AlphaFold sat down with Y Combinator. He gave a specific date, a specific checklist, and a specific warning for anyone building deep tech today.
Chess prodigy at 13. Hit game designer at 17. Nobel Prize in Chemistry in 2024.
Demis Hassabis is not predicting what AI will do. He is reporting from inside the lab where it is being built.
He sat down with Y Combinator for 41 minutes. Named his AGI year. Described the exact gaps between today’s models and true reasoning. Told founders precisely what kind of company survives what is coming.
Here are the 10 that matter 👇
📢 Quick note before we get into it.
Hassabis says AGI arrives in 2030. That gives you 4 years to build the skills the people who survive are going to need.
The AI Skills 2026 Virtual Conf is May 14. 30 speakers from Meta, Google, AWS, DeepMind, Spotify, Databricks, Scale, OpenAI Forum, and Bolt. 100% free. 100% online. 8am SF / 11am NYC / 4pm London.
Topics include:
▫️ The AI skills every professional needs in 2026
▫️ How corporations actually decide which AI tools to adopt
▫️ The 2026 AI stack for founders and small-business owners
4,000+ professionals are already registered.
That is one Zoom link. Hassabis’s clock keeps ticking either way.
1. The AGI Architecture Is 90% Solved. The Last 10% Is Where Careers Get Made.
The current stack is not getting replaced. That is the most important thing Hassabis said in this interview.
“I can’t see a world in which we will realize in a couple of years this was a dead end. That does not make sense to me. But there still might be one or two things missing on top of what we already know works.”
Pre-training works. RLHF works. Chain-of-thought works.
What remains is a short list:
▫️ Continual learning
▫️ Long-term reasoning
▫️ Consistent memory
▫️ Stable performance across all domains
His honest estimate on whether those gaps require genuinely new ideas: 50/50. At Google DeepMind, he runs both bets simultaneously.
He does not think it is more than 1 or 2 missing pieces.
The floor is solid. The ceiling is unknown.
Anyone building on today’s stack is building on the right foundation. The gaps that remain are precisely where the largest near-term product opportunities surface the moment they close.
For the strategic context on the trillion-dollar race this implies, see Anthropic just passed OpenAI in revenue, spending 4x less and Dario Amodei and the long game of safe AI.
2. A Million-Token Context Window Is Duct Tape. The Brain Solved This During Sleep. DeepMind Borrowed the Fix in 2013.
This is not a storage problem. It is a relevance problem.
“We are kind of using duct tape right now. Shove it all in the context window. This seems a bit unsatisfying, right?”
Hassabis studied the hippocampus for his PhD. He knows how biological memory actually works: selective, prioritized, consolidated during sleep, replayed for relevance. DeepMind’s first Atari program in 2013 borrowed that exact mechanism: experience replay.
Current AI memory does none of those things.
Processing live video naively fills a million-token window in 20 minutes. Understanding someone’s life over a month requires far more. The problem is not size. Everything enters the context with equal weight, and retrieving the right piece for a specific decision is non-trivial even with perfect storage.
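The arithmetic behind that 20-minute figure is worth making concrete. A back-of-envelope sketch, taking the interview's numbers at face value (the implied tokens-per-second rate is derived from them, not an official spec):

```python
# Back-of-envelope: how fast live video exhausts a 1M-token window,
# using the "20 minutes" figure cited in the interview.
CONTEXT_WINDOW = 1_000_000  # tokens
MINUTES_TO_FILL = 20        # figure cited in the interview

tokens_per_second = CONTEXT_WINDOW / (MINUTES_TO_FILL * 60)
print(f"implied ingest rate: ~{tokens_per_second:.0f} tokens/s")

# "Understanding someone's life over a month" at the same rate:
minutes_per_month = 30 * 24 * 60
windows_per_month = minutes_per_month / MINUTES_TO_FILL
print(f"one month of video: {windows_per_month:.0f} full context windows")
```

One month of naive ingestion needs thousands of full windows, which is why the problem is relevance and consolidation, not raw size.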
3 things remain genuinely unsolved:
1️⃣ Selective consolidation
2️⃣ Relevance-weighted retrieval
3️⃣ Graceful integration of new knowledge without degrading what the model already holds
Until continual learning is cracked, agents complete parts of tasks. Not whole tasks.
Every agent product built today hits this wall. Memory architecture is the unsolved infrastructure problem of this cycle. The company that solves it becomes the layer everything else runs on.
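For reference, the experience-replay idea the section credits to DeepMind's 2013 Atari work fits in a few lines. This is an illustrative toy, not the original implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay in the style of DeepMind's 2013 Atari
    agent: store transitions, sample uniformly at random to break
    temporal correlation. (Sketch for illustration only.)"""

    def __init__(self, capacity=10_000):
        # Bounded store: the oldest experiences are evicted automatically,
        # a crude stand-in for the selective consolidation brains do.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling; prioritized variants weight by relevance
        # (e.g. TD error), which is item 2 on the unsolved list above.
        return random.sample(self.buffer, batch_size)
```

Note how little of the biological mechanism survives here: eviction is FIFO, not selective, and sampling is uniform, not relevance-weighted. That gap is exactly the section's point.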
For the practical engineering implications, see stop blaming the model, fix the architecture, prompt engineering is dead, context engineering is what matters now, and everyone is talking about AI agents, most people have no idea how to build one.
3. The AlphaFold Recipe Has 3 Ingredients. Most AI-for-Science Pitches Are Missing at Least One.
This is the only explicit breakthrough checklist in the interview. Run every new project through it before committing.
“The problems I like to look for are great if the situation can be described as massive combinatorial search space. No brute force or special case algorithm will solve it. And then you have a clear objective function. And then enough data and or simulator that can generate lots of in-distribution synthetic data.”
Hassabis has produced 2 of the most consequential AI-driven scientific breakthroughs in history. He knows the pattern precisely.
The 3 conditions:
1️⃣ Massive combinatorial search space. More configurations than atoms in the universe. Go moves and protein configurations both clear this bar.
2️⃣ Clear objective function. Minimize free energy. Win the game. Define it precisely enough to hill climb toward it.
3️⃣ Sufficient data or a simulator that can generate in-distribution synthetic data at scale.
Go passed all 3. Protein folding passed all 3. Drug discovery passes all 3.
The test also rules things out:
▫️ Small search space → brute force works → no AI moat
▫️ Undefined objective → cannot hill climb → no platform
▫️ One failed condition → product at best, not a platform
Before committing to any AI-for-science project, run all 3 conditions. One failure is a product. All 3 passing is a potential AlphaFold. The checklist is fast. The mistake it prevents is not.
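The checklist is mechanical enough to run as code. A minimal sketch; the class, labels, and verdict strings are my framing, not Hassabis's:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    # The three AlphaFold conditions, answered honestly
    massive_search_space: bool  # too big for brute force or special-case algorithms
    clear_objective: bool       # something you can hill-climb (win the game, minimize free energy)
    data_or_simulator: bool     # enough data, or a simulator for in-distribution synthetic data

def screen(p: Problem) -> str:
    passed = sum([p.massive_search_space, p.clear_objective, p.data_or_simulator])
    return {3: "potential platform", 2: "product, not platform"}.get(passed, "demo at best")

print(screen(Problem(True, True, True)))   # Go, protein folding, drug discovery
print(screen(Problem(True, False, True)))  # nothing to hill-climb toward
```

Running a pitch through three booleans takes a minute. The false start it prevents takes years.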
For the broader VC thesis on where deep tech capital is moving, see where VC money is going in AI, $80 billion in 3 months: Q1 2026’s record-breaking fundraising, and what top-tier VCs actually look for in 2026.
4. Agents Win IMO Gold Medals and Then Blunder Chess Moves They Already Know Are Wrong.
Brilliant on certain dimensions. Brittle on the ones immediately adjacent.
“Sometimes it will consider a move, it will realize it’s a blunder, but it can’t find anything better. So it kind of goes back to that move and does it anyway. You just should not be seeing that happening in a very precise reasoning system.”
Foundation models win IMO gold medals. They also make arithmetic errors a 10-year-old would not make.
That is not a knowledge gap. It is a metacognition gap.
Chess is the perfect debugging environment for Hassabis. The state space is understood. Errors are provable. The thinking trace is fully inspectable.
What he sees: a system that evaluates correctly and cannot act on that evaluation.
He calls it jagged intelligence.
The system identifies the blunder. It plays it anyway. Because search failed to surface an alternative in time.
This is not a quirk being patched in the next update. It is a structural gap in how these systems reason about their own reasoning.
Map every step in your product where consistent precise reasoning is critical: medical diagnosis, legal analysis, financial modeling. Those are where jagged intelligence surfaces before the benchmarks show it. Human review at critical breakpoints is architecture, not caution.
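The blunder pattern, and the guard rail this paragraph argues for, can both be sketched in a few lines. Hypothetical code: the function names and threshold are illustrative, not from any real system:

```python
def choose_action(candidates, evaluate, blunder_threshold=-1.0):
    """Sketch of the jagged-intelligence failure mode and its guard rail."""
    # Score every candidate with the system's own evaluation.
    scored = [(evaluate(c), c) for c in candidates]
    best_score, best = max(scored, key=lambda t: t[0])
    # The pattern Hassabis describes: the model knows best_score marks a
    # blunder, but a fire-and-forget system plays it anyway because search
    # surfaced nothing better. A reviewed pipeline escalates at exactly
    # this breakpoint instead of acting.
    if best_score < blunder_threshold:
        return ("escalate_to_human", best)
    return ("act", best)
```

The point of the sketch is where the `if` sits: human review is wired into the decision path, not bolted on after deployment.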
For the operational version of this discipline applied to engineering, see the AI code review checklist that prevents the next $1M production incident and why ChatGPT and Claude keep disappointing you.
5. Agents Cannot Do Full Tasks Until Continual Learning Is Solved. “Fire and Forget” Is Still Fiction.
“Not having continual learning is one of the things holding back agents from doing full tasks. That is the missing piece for them being really fire and forget.”
Right now agents are useful for parts of tasks. Every session starts cold.
Hassabis has seen people running dozens of agents for 40 hours. He has not seen the output justify the input yet.
Agents need to:
▫️ Accumulate context across sessions
▫️ Adapt to their deployment environment
▫️ Learn the specific preferences of the user they work for
None of that is solved at the infrastructure level today.
Products built on agents today have a ceiling rooted in the absence of persistent, adaptive learning. Whoever cracks that interface builds the infrastructure layer for the next decade.
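What "accumulate context across sessions" would even mean at the infrastructure level can be sketched minimally. Everything here, the schema, the API, the very idea that key-value storage suffices, is a hypothetical stand-in for the unsolved real thing:

```python
import sqlite3

class SessionMemory:
    """Toy persistence layer: carry preferences and context across agent
    sessions instead of starting cold. Illustrative sketch only."""

    def __init__(self, path=":memory:"):
        # A file path here would survive process restarts; ":memory:" does not.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def remember(self, key, value):
        self.db.execute("INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, value))
        self.db.commit()

    def recall(self, key, default=None):
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else default
```

Storage is the trivial part, which is the section's point: what no one has solved is deciding what deserves remembering and folding it back into the model without degrading it.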
For production-grade agent architectures, see the 5-agent sales team you can build this weekend and the Claude Code system that replaces a 5-person team.
6. The First Hit Game Built With AI Tools Does Not Exist Yet. Hassabis Says It Arrives in 6 to 12 Months.
“I can prototype Theme Park in half an hour now, which took me 6 months back when I was 17. But it still needs craft and human soul and taste. Something is still somehow missing.”
No vibe-coded game has topped the charts. No AI-assisted app has produced a hit at scale.
The gap is not generation quality. The gap is craft and taste.
AI generates the components. A human still supplies the judgment about what is worth playing.
The sequence Hassabis expects:
1️⃣ A person with deep taste and genuine tool mastery produces something excellent
2️⃣ That first hit proves the model works
3️⃣ More of the production gets automated
That first step arrives in 6 to 12 months.
Generation quality is not the bottleneck. Keeping craft and taste in the loop is. The products that solve that interface problem win the cycle.
See 25 Claude Skills that give your startup a marketing team it cannot afford yet and the SaaS defense playbook for the AI era.
7. AGI Arrives in 2030. A Deep Tech Company Started Today Sees It Arrive Mid-Journey. Here Is What to Build for That.
“If you start off on a deep tech journey today, usually you are talking about a 10-year journey. So now you have to consider AGI appearing in the middle of that journey.”
Deep tech journeys take 10 years. A 2025 start means a roughly 2035 exit.
AGI, on Hassabis’s timeline, arrives in 2030. Inside your lifecycle, not at the end.
His architecture prediction:
Gemini, Claude, or a general system will call specialized tools like AlphaFold as external APIs. The right design is a general orchestrator calling specialized expert systems.
What to build:
▫️ Specialized tools a general AI would want to call, not general tools replicating what general AI already does
▫️ Physical infrastructure, specialized sensors, domain-specific hardware
▫️ Products that grow more valuable as the orchestrator improves
Design for the version of your product that a 2030 AGI calls as a tool. That is the company worth building today.
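The "general orchestrator calling specialized expert systems" architecture reduces to a tool registry plus dispatch. A toy sketch; the tool name and the fake AlphaFold-style function are placeholders, not real APIs:

```python
TOOLS = {}

def tool(name):
    """Register a specialized system under a name the orchestrator can call."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("fold_protein")
def fold_protein(sequence: str) -> str:
    # Placeholder standing in for an AlphaFold-style external API.
    return f"predicted structure for {sequence}"

def orchestrate(task: str, **kwargs):
    # In the predicted architecture, a general model chooses which tool
    # to call; here a plain dictionary lookup stands in for that decision.
    if task not in TOOLS:
        raise ValueError(f"no specialized tool registered for {task!r}")
    return TOOLS[task](**kwargs)

print(orchestrate("fold_protein", sequence="MKTAYIAK"))
```

The business takeaway maps directly onto the code: you want to own an entry in `TOOLS`, not to compete with the orchestrator.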
For the founder mindset, see the AI agent that thinks like Jensen Huang, Elon Musk, and Dario Amodei and 70 startup ideas YC wants you to build.
8. Google DeepMind Invented Distillation. Frontier to Edge in Under a Year. No Theoretical Limit Visible Yet.
“Distilling and packing that power into smaller and smaller models very quickly. We have got to serve more than a dozen billion-user products.”
The numbers:
▫️ Gemini Flash runs at approximately 95% of frontier capability
▫️ Gemma hit 40 million downloads in 2.5 weeks
▫️ No visible theoretical limit to how far distillation can compress capability
The edge deployment thesis is specific. Android, glasses, robotics. Surfaces that process personal data. Open models on device beat closed cloud models there.
One year after a frontier model ships, its capability lives in an edge model. That gap closes every cycle.
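At its core, distillation is one loss function: the student matches the teacher's temperature-softened output distribution. A Hinton-style toy in plain Python; production pipelines differ in every practical detail:

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about near-miss classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # Cross-entropy between softened teacher and student distributions
    # (equals KL divergence up to a constant). Scaled by T^2, following
    # Hinton et al., so gradient magnitudes stay comparable across T.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T
```

The loss is minimized when the student reproduces the teacher's distribution exactly, which is why a much smaller model can inherit most of a frontier model's behavior.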
3 implications:
1️⃣ Privacy and cost advantages of local deployment are real right now, not eventual
2️⃣ Cloud-only architectures will require rebuilding when edge becomes default
3️⃣ The window to get ahead of this is open. It will not stay open.
The infrastructure decision you make today determines your cost structure and data exposure for 5 years. Edge-first is a competitive position.
For the picture on AI infrastructure economics, see the complete guide to AI coding in 2026 and Coatue’s 18-chart AI report.
9. The Virtual Cell Is 10 Years Away. The Virtual Nucleus Ships First. One Hardware Problem Blocks Everything.
“Eventually you want a whole virtual cell. A full working simulation that you can perturb. I think we are about 10 years away from something like that.”
AlphaFold solved one layer of a very deep stack.
The sequencing is deliberate. Virtual nucleus first. It is relatively self-contained.
The technical blocker is live-cell imaging. You cannot image a live cell at nanometer resolution without killing it. Static images exist at that resolution. They are not enough.
Solve the imaging problem and it becomes a vision problem. DeepMind knows how to solve vision problems.
2 paths run in parallel:
1️⃣ Hardware path. Imaging technology for live-cell dynamics
2️⃣ Modeling path. Learned simulators of cellular dynamical systems
Isomorphic Labs is already working through the adjacent biochemistry. Big announcements are coming, Hassabis says.
Any startup generating training data for live-cell dynamics sits on a decade-long tailwind that has barely started.
10. He Would Have Worked on AI From a Garage for 50 More Years If It Had Never Worked.
“No one believes in it, which is why you have got to work in things you are genuinely passionate about. I would have worked on AI no matter what happened. I would still be working on AI today, even if we were still in a little garage somewhere and it still was not quite working.”
In 2010, investors told Hassabis AI was a dead end. Academia treated it as a niche subject the field had already tried.
He started DeepMind anyway. Same way Dario started Anthropic. Same way every unicorn deck was first dismissed.
The conviction was specific. He had a precise reason why this time was different. That specificity separates conviction from delusion.
He also chose the hardest problem he could find. Not because hard problems are noble, but because they are not meaningfully harder to work on than shallow ones.
If you only have one career, the calculation is obvious.
Every important company gets built in a period when the consensus says it will not work. The founders who build it anyway have a specific reason the consensus is wrong, and they are willing to be early by years.
See Dario Amodei and the long game of safe AI and what Sam Altman and Greg Brockman finally said out loud.
What this means for you
Hassabis spent 30 years on a single thesis:
Solve intelligence. Use it to solve everything else.
He is reporting from inside the lab where it is happening. The playbook he described in 41 minutes is more specific than most founders get from a year of reading.
Founders
Atoms create moats. Bits do not. Build specialized tools a general AI would want to call as APIs by 2030.
▫️ The AI GTM playbook for 2026
▫️ 70 startup ideas YC wants you to build
▫️ 50 AI agent startup ideas for 2026
Investors
The 3-condition test is a real screening tool. One failed condition is a product. All 3 passing is a platform.
▫️ What top VCs check in due diligence
▫️ How do VCs really make decisions
▫️ The ultimate investor list of lists
Builders
Jagged intelligence is real, documented, and not resolved in the next release. Memory and continual learning are the unsolved infrastructure problems.
▫️ The AI code review checklist
▫️ The Claude Code system that replaces a 5-person team
▫️ Build your own stock analyst with Claude
Everyone else
Every major field of science is approaching its AlphaFold moment. Hassabis says the results are coming in the next 2 years.
▫️ The single best productivity decision you can make with Claude
▫️ Anthropic just showed us which jobs AI is replacing
The 2030 deadline is specific.
The gaps Hassabis names are the opportunities.
Someone reading this builds the thing that fills one.
Full interview: How to Build the Future: Demis Hassabis on Y Combinator.
If this breakdown saved you an hour, share it with one founder or investor who needs to see it.