Nathan Lambert
Everything Nathan personally uses, recommends, or has created — plus things he doesn't recommend — sourced from his own show and appearances on other podcasts.
Created by Nathan
Top picks
"This is why I like the ChatGPT app, because it gives the AI a home on your computer where you can focus on it, rather than just being another tab in my mess of internet options."
"And then for code and any sort of philosophical discussion, I use Claude Opus 4.5. Also always with extended thinking."
"And then sometimes use Grok for real-time information or finding something on AI Twitter that I knew I saw and I need to dig up."
"I will regularly have like five Pro queries going simultaneously, each looking for one specific paper or feedback on an equation or something."
"I don't know, Exa is my preferred search provider, but somebody else might care for a different search startup."
"Gemini 3 is a fantastic model, and I still use it. It's just that its differentiation is lower."
All products
"Largely because the margin on NVIDIA chips is insane, and Google can develop everything from top to bottom to fit their stack and not have to pay this margin."
"Like Deep Research, Sora, o1 thinking models—all these definitional things have come from OpenAI."
"The hype over Anthropic's Claude Opus 4.5 model has been absolutely insane, which is just... I mean, I've used it and built stuff in the last few weeks, and it's... it's almost gotten to the point where it feels like a bit of a meme in terms of the hype."
"And then DeepSeek are the people that did the training breakthrough, which is, they scaled the reinforcement learning."
"The likes of Z.ai with their GLM models, Minimax's models, and Kimi Moonshot have, especially in the last few months, shone more brightly."
"Personally, I have very mixed reviews of GPT-5, but it must have saved them so much money, with the headline feature being a router, so most users no longer incur their GPU costs as much."
"Although when Grok 4 came out, the Grok 4 SuperGrok Heavy, which was like their pro variant, was actually very good, and I was pretty impressed with it."
"On my blog, we scrape Hugging Face so we keep download numbers for every dataset and model over time, so we have them."
"Qwen might be the one— Oh, yeah. Qwen was the obvious name I was gonna say."
"When I was writing about OpenAI's open model release, they were like, 'Don't forget about GPT-2,' which I thought was really funny 'cause it's just such a different time."
"Hugging Face has SmolLM, which is very popular."
"With OpenRouter, it's easy to look at multi-model things. You can run DeepSeek on Perplexity."
"That's the older term for it coined in Anthropic's Constitutional AI paper."
"I think you can kind of take this in order. I think you could view it as what made o1, which is this first reasoning model, possible, or what will the latest model be?"
"If we look at the GRPO equation, it's famous because essentially the reward given to the agent is based on how good a given action (an action is a completion) is relative to the other answers to that same problem."
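The group-relative idea in the quote above can be sketched briefly. This is a minimal illustration, not the full GRPO objective (which also includes a PPO-style clipped ratio and a KL penalty); the function name and the example rewards are hypothetical.

```python
import statistics

def group_relative_advantages(rewards):
    """Score each completion against the other answers to the
    same prompt: subtract the group mean reward and normalize
    by the group standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + 1e-8) for r in rewards]

# Four completions sampled for one prompt, graded by a verifier
# (e.g. 1.0 = correct answer, 0.0 = wrong answer).
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))
```

Completions that beat their group's average get a positive advantage and are reinforced; the rest get a negative one, with no separate value network needed to estimate a baseline.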
"I think there's a seminal paper from a Meta internship. It's called something like 'The Art of Scaling Reinforcement Learning with Language Models.' What they describe as a framework is Scale-RL."