
The AI Model Built for What LLMs Can't Do

Host: Every
Guest: Eve Bodnia
Watch on YouTube

Tags: AI architecture, energy-based models, LLM limitations, formal verification, mission-critical systems, deterministic AI, spatial reasoning

Eve Bodnia, CEO of Logical Intelligence, explains why energy-based models (EBMs) represent a fundamental alternative to large language models for mission-critical applications like code generation, chip design, and data analysis. Unlike autoregressive LLMs that guess one token at a time and can hallucinate, EBMs construct energy landscapes that allow AI systems to see all possible solutions upfront, verify their own reasoning in real-time, and operate with deterministic, constrained behavior—solving the correctness and cost problems that plague LLM-based systems in high-stakes domains.

Key takeaways
  • LLMs are fundamentally limited for tasks requiring correctness because they predict the next token sequentially (like navigating a map with tunnel vision), making them prone to hallucinations and unable to course-correct once they've committed to a wrong path; EBMs solve this by providing a bird's-eye view of all possible solutions, allowing the system to choose different routes when it detects errors.
  • Energy-based models are non-autoregressive and token-free, which eliminates the expensive guessing game at the heart of LLMs and makes them dramatically cheaper and faster for spatial reasoning, data analysis, and engineering tasks that have nothing to do with language.
  • EBMs are inherently interpretable and controllable—you can open them up during training to see what's happening inside, and you can constrain them to follow explicit rules, making them suitable for safety-critical applications like autonomous vehicles or medical systems where you need formal verification, not hope.
  • Latent variables in EBMs function as knowledge storage, capturing the underlying rules and relationships in your data (not just surface-level patterns), which allows the system to generalize to new scenarios and infer correct behavior with less training data than LLMs require.
  • The industry's massive capital commitment to LLMs creates structural inertia against adopting superior alternatives—even as LLM progress plateaus on tasks requiring reasoning and verification, investors continue funding incremental LLM improvements rather than fundamentally different architectures that could capture B2B enterprise markets currently untapped by consumer-focused models.
  • EBMs can coexist with LLMs rather than replace them, acting as a specialized layer for tasks involving spatial reasoning, numerical analysis, and verification while LLMs handle language tasks, allowing enterprises to reduce costs and extend AI into mission-critical workflows where LLMs alone are insufficient.
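The "energy landscape" idea in the takeaways can be sketched in a few lines: define an energy function that scores an entire candidate solution at once, then pick the global minimum, rather than committing to one token at a time. The scheduling task, constraints, and energy function below are hypothetical illustrations, not Logical Intelligence's actual model:

```python
from itertools import permutations

def energy(schedule):
    """Energy = number of constraint violations in a full candidate
    solution. Lower is better; 0 means every rule is satisfied."""
    violations = 0
    # Hypothetical constraint 1: task 'a' must run before task 'b'.
    if schedule.index("a") > schedule.index("b"):
        violations += 1
    # Hypothetical constraint 2: task 'c' must run last.
    if schedule[-1] != "c":
        violations += 1
    return violations

# Bird's-eye view: score the whole landscape of candidates up front,
# then keep the minimum-energy solution.
candidates = [list(p) for p in permutations(["a", "b", "c"])]
best = min(candidates, key=energy)
print(best, energy(best))  # → ['a', 'b', 'c'] 0
```

Because the energy of any candidate is checkable against explicit rules, the system can verify its own answer (energy 0) instead of hoping a token-by-token guess happened to be right.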

Recommendations (2)

Codex

"Dan Shipper is currently hard at work testing the latest Codex and Opus models."

Every · ▶ 35:10

Opus

"Dan Shipper is currently hard at work testing the latest Codex and Opus models."

Every · ▶ 35:12

Mentioned (1)

Lean
"They attach the external verifiers to it such as languages like Lean 4, which is a proof machine-..." ▶ 5:08
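Lean 4, mentioned here as an external verifier, mechanically checks proofs: the checker either accepts a proof or rejects it, with no "probably correct" middle ground. A minimal machine-checked statement looks like this:

```lean
-- A trivially machine-checked theorem in Lean 4: addition of
-- natural numbers is commutative. If the proof term did not
-- type-check, Lean would reject the file outright.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```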