
OpenAI's Codex: This Model Is So Fast It Changes How You Code

7 products mentioned
Host: Every · Guest: Tibo Andrew (OpenAI Codex team)
Tags: ai-assisted coding · developer tools · ai agents · model performance · automation · software development · real-time collaboration

Tibo and Andrew of OpenAI's Codex team discuss how the newly released Codex 5.3 model is dramatically faster and more capable than previous versions, fundamentally changing how developers interact with AI-assisted coding. The episode explores the strategic shift from Codex as a tool for professional engineers to a broader platform, the surprising decision to build a GUI instead of doubling down on terminal interfaces, and how extreme speed opens new possibilities for real-time collaboration between humans and agents. The conversation also dives into practical use cases such as automations and skills, and how this speed advantage could reshape software development workflows entirely.

Key takeaways
  • Codex 5.3 is significantly faster than previous versions and is already being optimized further, potentially enabling 2-3x additional speed improvements and fundamentally changing how developers code in real time.
  • The Codex app team intentionally chose to build a dedicated GUI over a terminal interface because it offers better visibility and control over multi-agent systems, supports multimodal outputs (voice, images, diagrams), and creates a superior experience compared to forcing everything into a TUI.
  • Automations and skills are emerging as powerful use cases—developers are using them for everything from keeping PRs mergeable and fixing bugs automatically, to generating custom children's books by chaining image generation and PDF creation together.
  • The next major bottleneck after speed is verification and code review: models can now generate features so fast that humans struggle to keep up with testing and validating the output, requiring new approaches to quality assurance.
  • Personality tuning is now a feature—Codex offers both pragmatic and friendly personalities, with plans to let users customize how the model behaves, acknowledging that different developers prefer different interaction styles.
  • The team is exploring mid-turn steering, allowing developers to interrupt and redirect agent work in real time via voice or text while it's executing, creating a more conversational and fluid development experience.

Recommendations (3)

Linear

"we integrate with Linear, Slack and you know of course you know also need to be able to read the code"

Tibo Andrew · ▶ 16:50

Slack

"we integrate with Linear, Slack and you know of course you know also need to be able to read the code"

Tibo Andrew · ▶ 16:50

Cerebras

"the model is powered by Cerebras and we've talked about the partnership there"

Tibo Andrew · ▶ 30:15

Mentioned (4)

ChatGPT
"We will bring a similar experience to ChatGPT at some point which will have different properties." ▶ 8:08
VS Code
"Hey, should we have done a fork of VS Code as well?" ▶ 14:07
Cursor
"this is so much better than being in Cursor or Windsurf or whatever" ▶ 16:09
Emacs
"Greg lives in Emacs." ▶ 16:00