
How Intercom 2X'd engineering velocity with Claude Code | Brian Scanlan

10 products mentioned
How I AI
Topics: AI-assisted engineering, developer velocity, Claude Code, internal tools and infrastructure, skill development and automation, engineering culture, SaaS product strategy

Brian Scanlan from Intercom explains how the company doubled engineering throughput in nine months by systematically adopting Claude Code across its entire R&D organization, treating it not just as a coding tool but as a platform that demands intentional skill design, telemetry instrumentation, and cultural permission-granting from leadership. The episode lays out a replicable playbook for ambitious engineering teams: treat your internal AI setup like a product, measure everything, build guardrails that enforce quality without slowing velocity, and bet on the premise that imagination, not typing speed, is the real bottleneck in software delivery.

Key takeaways
  • Measure AI adoption rigorously by tracking pull request throughput per R&D head and correlating it with quality metrics (code review time, incident rates, customer-facing feature velocity) rather than assuming velocity gains come with quality losses.
  • Build a centralized skills repository distributed via IT infrastructure (not just plugin managers) to reliably deliver organization-wide guardrails; Intercom uses hooks to enforce high-quality PR descriptions and to prevent agents from bypassing established systems like feature flags.
  • Instrument all internal Claude Code sessions with telemetry (session data to S3, skill invocations to Honeycomb) and build internal dashboards that give engineers actionable feedback on their AI usage patterns, adoption gaps, and areas for improvement.
  • Create high-impact agent skills using the "and then" workflow: define a clear goal (e.g., fix all flaky tests), let the agent execute, capture the learnings, apply them to similar problems, then expand to adjacent codebases. This turns intern-level effort into distinguished-engineer-level output at scale.
  • Make your product agent-accessible by building CLIs, ephemeral APIs, and multi-step workflows that hint to agents how to traverse signup, verification, and configuration flows; the conversion rate drop-off point is now the escape key when an agent gives up, not a human clicking back.
  • Grant explicit permission and accountability from senior leaders (tell your team "you can do this, and if it breaks, blame me") rather than requiring endless risk assessments; this cultural shift unlocked Intercom's velocity gains as much as the tooling itself did.

Recommendations (8)

Ruby on Rails

"I'm going to do a fairly trivial change in our majestic Ruby on Rails monolith"

Brian Scanlan · ▶ 21:46

Claude Code

"We've been largely going all in on Claude Code and really pushing out enablement and giving people freedom to explore and start to build skills"

Brian Scanlan · ▶ 6:00

Opus

"I feel like Opus 4.6, something just really inflected in what was possible when that particular model came out"

How I AI · ▶ 5:10

Cursor

"Up to that point there was a bit of Cursor here and there and Augment and different tools"

Brian Scanlan · ▶ 6:04

Honeycomb

"We've spent a lot of time hooking up Claude Code with telemetry both into things like Honeycomb"

Brian Scanlan · ▶ 7:39

Snowflake

"Data also going into Snowflake where we have our data warehouse"

Brian Scanlan · ▶ 7:45

S3

"We also store session data in S3"

Brian Scanlan · ▶ 7:48

GOG

"I use GOG, it's part of the OpenClaw universe and I think it's a lot better than the official Google GWS one"

Brian Scanlan · ▶ 1:00:07

Mentioned (2)

GPT
"I think the GPT 5.4 models are also exceptional" ▶ 5:19
Stanford
"We've been working with a research group in Stanford. We've been giving them our data" ▶ 18:18