Sam Altman and Greg Brockman finally talked. 10 things matter
First joint podcast in 10 years. The question that kills bad products. 3 economic futures. The breaking point with Elon that nobody had put on the record until now.
90 minutes. Sam Altman and Greg Brockman on Core Memory with Ashlee Vance and Kylie Robison. Their first joint media interview ever.
10 years of OpenAI history. The Elon trial. The Erdős problem. The night someone came to Sam’s home. A product strategy most analysts are still misreading.
Most of what got reported afterward was the gossip layer.
The substance underneath is what actually matters for founders, investors, and operators building right now.
A lot of it lines up with what the best GTM operators have been quietly figuring out for the last 18 months.
A quick note before we get into the 10 things:
This is how the top 1% of startups are scaling right now.
GTM Atlas is a new resource from Attio, the CRM of the modern GTM stack. Real frameworks from operators at Framer, Lovable, and Vercel. Not opinions. Not playbooks from people who have not shipped. Actual systems from teams scaling right now.
What is inside Entry One:
▫️ ICP, outbound, and retention frameworks from operators who built them in production
▫️ The qualification signals that actually predict conversion at the deal stage
▫️ Conversion plays that work without a pitch deck
Mapped by operators. Curated by Attio.
Free. No signup wall.
1. Greg's one question that killed a hundred bad products
The person who stops things needs more authority than the person who starts them.
“One of the things that Greg has done the best, which is not my instinct, is really just push to focus on the most important thing. There have been times where I have wanted to do more things and Greg has just said, ‘Is this the most important thing?’” — Sam Altman
Sora got cut. The social network got killed. The robotics team got redirected. In each case, one person with real authority asked one question early enough to stop momentum before stakeholders accumulated to defend it.
Sam raises the target. Greg applies the filter. 5 short calls a day, for 10 years.
Most organizations reward the people who start things. The person who stops things needs genuine authority or the question never lands. If that person does not exist in your organization, your strategy document is decoration.
Try this: name the last 3 projects your team killed. If you cannot, your filter does not exist.
For more on the founder mental models that separate operators from idea-generators, see Jensen Huang: 10 lessons from the CEO building the most important company in history.
2. GPT-5.4 Pro just solved an Erdős problem. Terence Tao noticed.
The bottleneck in research has never been compute. It has been hypothesis quality. That just shifted.
“Just in the past couple of days, our AI solved this longstanding Erdős problem... Terence Tao also is saying that this looks like there’s maybe a connection between different fields of mathematics that this AI has discovered. And we’re starting to see real beauty come out of these machines.” — Greg Brockman
Greg compared it to AlphaGo’s Move 37, which revealed territory in Go that no human player had imagined existed.
The bottleneck in drug discovery, materials science, and theoretical physics has always been the same: which fields to connect, which conjecture to pursue, which analogy is worth testing. The Erdős result is the most concrete evidence yet that AI is starting to drive discovery rather than assist it.
If you are building in any research-intensive vertical where pipelines slow at question formation, that bottleneck is now in motion.
For the broader picture on where AI is replacing knowledge work first, see Anthropic just showed us which jobs AI is actually replacing and Dario Amodei: AI is closer than you think (and society is not ready).
3. The personal AGI is the platform. ChatGPT is just the on-ramp.
Every session with current AI requires re-establishing context a competent colleague would already hold. That is the gap OpenAI is closing.
“We are no longer that far away from a model that just knows all of your context. It knows about you. It knows about your life. It knows what you are doing. It knows what you care about. It knows about the people in your life. It has access to your computer and your browser. And that is going to be a complete change to what it feels like to use a computer and what it feels like to use AI.” — Sam Altman
Greg framed the shift more precisely: the skill stops being knowing how to prompt and becomes knowing how to delegate.
Every product OpenAI is building (Codex, memory, the operator layer) points toward that end state. So does every product Anthropic is building.
Track this: count how many times a user in your product re-establishes context the system should already hold. Each instance is the gap the personal AGI fills. That number is your exposure to displacement.
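That count can be instrumented. A minimal sketch, assuming your session logs reduce user-supplied context to normalized fact strings (in practice you would need fuzzy matching rather than exact string equality; all names here are illustrative):

```python
def context_reestablishment_count(sessions):
    """Count user statements that restate a fact already given in an
    earlier session -- context the system should have carried over."""
    seen = set()
    repeats = 0
    for session in sessions:
        for fact in session:
            if fact in seen:
                repeats += 1  # the user did work the system should have done
            else:
                seen.add(fact)
    return repeats

# Example: the timezone is restated in session 2 -- that is one unit
# of exposure to a product that remembers it for the user.
sessions = [
    ["timezone: PST", "role: founder"],
    ["timezone: PST", "new topic: pricing"],
]
gap = context_reestablishment_count(sessions)
```

Trend that number over time; a personal-AGI layer drives it toward zero.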
The earliest production version of this infrastructure already exists. See Anthropic just solved the hardest part of building AI agents and Claude Cowork: the tool that triggered a $285 billion software selloff.
For the practical version of “knowing how to delegate,” see how to use Claude like the top 1% of users.
4. AI writing still feels hollow. Here is the actual reason.
The problem is not the model. It is the absence of a reward signal precise enough to train on.
“I was promised before that I would have good writing by now. The writing feels... there is no soul, you know? There is something missing.” — Ashlee Vance
“The hard part is, how do you judge? How do you decide if something was thumbs up or thumbs down? In math and science, much easier than in some of these more open-ended fields.” — Greg Brockman
A proof either checks or it does not. Reinforcement learning needs a reward signal, and for open-ended writing that signal does not yet exist in precise enough form. The model averages across a vast and contradictory distribution of human taste because that is the only signal available.
Greg added a second constraint: the writing you want is profoundly different from the writing most users want, and OpenAI trains one model serving both populations.
Try this: give the model what it cannot infer. 3 examples of writing you are proud of. 3 examples of what you reject. Your audience, your constraints, your voice. Open prompts produce averaged output by design. That is a choice you are making every time you skip the setup.
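The setup step above can be sketched as a reusable context builder. This is illustrative scaffolding, not any particular vendor's API; prepend the result to whatever model call you make:

```python
def build_style_context(good_examples, bad_examples, audience, voice, constraints):
    """Assemble the context an open prompt omits, so the model stops
    averaging across the whole distribution of human taste."""
    parts = [
        f"Audience: {audience}",
        f"Voice: {voice}",
        f"Constraints: {constraints}",
        "Writing I am proud of (match this):",
    ]
    parts += [f"- {ex}" for ex in good_examples]
    parts.append("Writing I reject (avoid this):")
    parts += [f"- {ex}" for ex in bad_examples]
    return "\n".join(parts)

context = build_style_context(
    good_examples=["Short declarative openers.", "Concrete numbers over adjectives.", "One idea per paragraph."],
    bad_examples=["Buzzword-dense intros.", "Hedged non-claims.", "Summaries that restate the headline."],
    audience="founders and operators",
    voice="direct, plainspoken",
    constraints="under 600 words, no bullet lists",
)
prompt = context + "\n\nTask: draft the announcement email."
```

Writing the examples down once and reusing them is the whole trick: the skill shifts from prompting to delegation, exactly as Greg framed it.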
For the practical guide to making AI writing not sound like AI, see how to make AI writing sound human.
5. Sam sees 3 futures. One has 10 trillionaires. He is worried about it.
Who controls compute at scale is the variable that determines which world materializes.
“I can see one where the floor comes way up. Everybody gets subjectively like 10 times richer. But also in that world, these people who really learn how to use agents and get a lot of compute together, we have some trillionaires. Maybe 10 trillionaires. So the floor comes way up, but inequality gets worse. That is one world I can see.” — Sam Altman
The second world has less total prosperity and less inequality. The third is Greg’s vision: compute access becomes universal and AI becomes the most extreme meritocracy ever constructed, through access rather than redistribution.
Altman laid out all three without ruling any of them out.
That is the real policy question underneath every AI regulation debate. Every infrastructure bet being made right now is a vote on the answer.
For where serious capital is actually betting on the answer, see where VC money is going in AI and the most valuable VC-backed startups in the world.
6. OpenAI's business model is simpler than anyone says.
“Our business is extremely simple. We rent or buy compute, and then we resell it at a margin. And as long as we have some positive margin on it, then it is scalable, because the demand is just unlimited.” — Sam Altman
Demand for inference is not bounded the way demand for cloud storage is bounded. Every unit of compute becomes directly monetizable as models improve and agents scale.
The buildout is purchasing inventory for a product that already sells.
Sam raises the number. Greg operationalizes it.
The contrast with Anthropic’s choices on the same compute race is worth understanding. See Anthropic just passed OpenAI in revenue, spending 4x less and everything Claude has shipped in 2026.
7. The model is no longer the product. The layer around it is.
The companies that own the orchestration layer win regardless of which foundation model wins underneath them.
“The models have shifted from being the product to being a part of the product. We used to have these very thin layers of software on top of them. But now it is a very fat layer... It is kind of like we have this in the form of the model and now we are building the body. Both are hard, they have to be co-designed together.” — Greg Brockman
Codex is positioned as an agent management platform, not a coding model. The orchestration layer is where the customer relationship lives. Memory, integrations, workflow logic, trust and verification: none of these change when the underlying model does.
Try this: ask which parts of your product survive a model swap. What remains is your actual moat. What disappears is your exposure.
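One way to make the model-swap test concrete: keep memory, workflow logic, and verification behind an interface, with the model as the only replaceable part. A minimal sketch under that assumption (class and method names are hypothetical, not any real framework):

```python
from typing import Protocol

class Model(Protocol):
    """The only swappable part: any provider client with this shape."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in model for the sketch; swap in a real client here."""
    def complete(self, prompt: str) -> str:
        return f"[draft based on: {prompt}]"

class OrchestrationLayer:
    """Everything in this class survives a model swap: memory,
    workflow logic, and the verification step."""
    def __init__(self, model: Model):
        self.model = model
        self.memory: list[str] = []  # user context the layer owns

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def run(self, task: str) -> str:
        # Workflow logic: inject owned context, then call the model.
        prompt = "\n".join(self.memory + [task])
        draft = self.model.complete(prompt)
        # Trust/verification also lives in the layer, not the model.
        return draft if draft.strip() else "ESCALATE: empty model output"

layer = OrchestrationLayer(EchoModel())
layer.remember("Customer prefers weekly summaries.")
result = layer.run("Draft this week's summary.")
```

Swap `EchoModel` for a different provider and nothing else changes: the memory, the workflow, and the verification step are the moat; the constructor argument is the exposure.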
For the practical version of building the body Greg is describing, see the complete guide to AI coding in 2026, everyone is talking about AI agents but most people have no idea how to build one, and the Claude Certified Architect curriculum.
8. “We built a bomb”: Sam’s case against fear-based AI marketing
The framing a company chooses shapes its market, its customer relationships, and its regulatory posture.
“It is clearly incredible marketing to say, ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for a hundred million dollars. You need it to run across all your stuff, but only if we pick you as a customer.’” — Sam Altman
Sam named the argument Anthropic makes and did not misrepresent it. His counter is iterative deployment with mitigations built in stages, released carefully to broader audiences over time rather than held back until a smaller group decides the world is ready.
The positioning decision is unavoidable for anyone building in security or dual-use AI. How you frame your technology determines who buys it, who regulates it, and who trusts it.
For the technical picture underneath this debate, see the AI model that can hack anything, and why you cannot use it and everyone is talking about AI but the people actually winning are using Claude.
9. Sam's worst week: what he actually said
Someone showed up at his home.
“The first day I was sort of in this kind of adrenaline shock about it... And then the day after I was just like, there is going to be more stuff like this and it is incredibly disheartening. And I went through a real depressive cycle about it. I think the way Anthropic talks about OpenAI does not help. And I hope that cooler times will prevail.” — Sam Altman
Altman is making a specific argument: the language AI companies use in public about existential risk produces real-world consequences beyond boardrooms and earnings calls. The night someone came to his home is the context he is speaking from. It gives that argument weight it would not otherwise carry.
How founders talk about their technology in public has become a safety consideration in a way it was not 3 years ago. The AI debate has moved from academic to physical.
That changed the scope of what every founder and operator in this space is responsible for.
10. Sam wants the Elon trial to happen. Greg said something on the record nobody had heard before.
Of the original 26 claims Musk asserted, only 2 remain: unjust enrichment and breach of charitable trust.
“My fear at this point is he decides to drop the case right before the trial and we do not get to do all this.” — Sam Altman
“Elon was like, you need majority equity. To be CEO, you need full control. Absolute control over OpenAI... That was the breaking point. That was the thing that caused us to say no.” — Greg Brockman
Greg named the breaking point for the first time on the record. Absolute control was the ask. The mission was the answer. The diary entries Musk’s team weaponized in court are, according to Greg, the best material the opposition found after years of searching.
The verdict will be cited in term sheets. Every mission-driven AI company that restructured from nonprofit to for-profit is watching it.
For the structure underneath this story, see OpenAI’s cap table just leaked, here is what is actually inside.
What you do with this
The personal AGI end state is not a prediction. It is a product roadmap.
Every company building in AI is either accelerating toward it or building something the next layer will absorb.
For founders. The orchestration layer is the defensible position. Build memory, integrations, and workflow logic that survive a model swap. The companies selling agents will be commoditized. The companies operationalizing context will not. For where the next fundable companies sit, see the 70 startup ideas YC wants you to build.
For investors. Who controls compute at scale is the variable that determines which of Altman’s 3 worlds materializes. For where serious capital is actually moving, see the a16z AI playbook for 2026 and Q1 2026 fundraising hit $80 billion.
For builders. The productivity multipliers are real, the context setup is what separates averaged output from useful output, and the architecture matters more than the model. See the complete guide to AI coding in 2026, how to use Claude like the top 1% of users, and the 5-agent sales team you can build this weekend.
The full interview is one of the most honest things either of them has said publicly in 10 years.
Full podcast: The Great Reset at OpenAI, Core Memory Episode 67.
If this breakdown saved you 90 minutes, share it with one founder or investor who needs to see it.