
OpenAI Co-Founder on the AI Race, the Sam Altman Firing, and What Comes Next

8 products mentioned

Topics: AI safety and alignment, AGI development strategy, iterative deployment, organizational culture and mission, AI economics and compute, regulation and policy, personal AI and agents

Greg Brockman discusses OpenAI's founding vision, the controversial Sam Altman firing and its aftermath, and the company's strategy for deploying increasingly powerful AI systems safely. Brockman emphasizes that iterative deployment—releasing intermediate versions of technology to learn from real-world usage rather than waiting for perfection—is critical for navigating unprecedented challenges, and he outlines how OpenAI is positioning itself to ensure AI benefits everyone rather than concentrating power and wealth.

Key takeaways
  • Iterative deployment beats the "build in secret" approach because it lets you learn from unexpected real-world misuses (like medical spam from GPT-3) and adapt before deploying vastly more powerful systems, rather than releasing AGI with zero contact with reality.
  • The Sam Altman firing was ultimately reversed because the company's mission-driven culture was stronger than individual power dynamics—zero employees defected for competing offers despite active recruitment, revealing that people work for the mission and each other, not money.
  • Compute is now the bottleneck, not algorithmic innovation; OpenAI's $100+ billion bet on data centers positions it to maintain leadership because competitors are struggling with access to compute, and the company openly plans future data centers dedicated entirely to single problems like cancer research.
  • AI will enable everyone to become a builder—coding tools like Cursor are already making software engineering 10-20% faster, and non-technical people can now create apps by describing them, radically lowering the barrier to entrepreneurship and skill-building.
  • Reasoning models' internal "chain of thought" is kept hidden not just for competitive reasons but because showing reasoning creates the temptation to train it to look good rather than be faithful, compromising interpretability and safety.
  • Personal AGI (not just chatbots) should handle 4 billion smartphone users' goals 24/7—from purchasing concert tickets proactively to managing health—but this requires solving alignment so AI pursues your long-term goals, not what feels good in the moment.
  • Regulation should focus on broad access to compute, privacy protections similar to doctor-patient privilege, and ensuring benefits flow to everyone, not on restricting AI development itself—keeping America competitive while protecting democratic values.

Recommendations (5)

Dota

"Dota, we had our first big result, right? That really was like, 'Wow, we can actually accomplish something when we put our mind to it.'"

Greg Brockman · ▶ 6:46

PPO

"The algorithm we use called PPO, you plan over every single time step. There's no hierarchy."

Greg Brockman · ▶ 9:00

Google Docs

"So many people were trying to sign the petition at once it actually crashed Google Docs."

Greg Brockman · ▶ 20:30

ARC Institute

"I trained language models on DNA sequences for ARC Institute. It was a very great experience."

Greg Brockman · ▶ 24:22

Cursor

"We now have these amazing coding tools which have truly revolutionized how software engineering is done."

Greg Brockman · ▶ 32:50

Mentioned (3)

Google DeepMind

"Google DeepMind was the 10,000lb gorilla in the field. They just had lots of capital. They had th..." ▶ 4:30

AlphaGo

"This was before AlphaGo, right? AlphaGo came out a couple months later, but it wasn't a surprise." ▶ 4:39

Cerebras

"We came across a company called Cerebras which was building a unique piece of computing hardware ..." ▶ 5:14