Paul Ford. Every illustration/'AI & I.'

Anthropic’s Newest Model Blew This Founder’s Mind—And Made Him Uncomfortable

Entrepreneur Paul Ford on why Claude Opus 4.5 is a turning point and why we need more disclosure from AI labs


TL;DR: Today we’re releasing a new episode of our podcast AI & I. Dan Shipper sits down with Paul Ford, the cofounder of Aboard, a platform that helps enterprises build software with AI. They discuss Anthropic’s newest model, Claude Opus 4.5, which Dan has deemed a “paradigm-shifting model on the coding end.” Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

If this episode gets you excited about learning more about Opus 4.5, Every is hosting a Code Camp on Opus 4.5 exclusively for paid subscribers this Friday at 12 p.m. ET. Learn more and sign up.


A week and a half ago, in the midst of the Thanksgiving rush, Anthropic released Claude Opus 4.5—and the world changed.

Dan Shipper and the Every team recognized the magnitude of the release immediately, and so did Paul Ford, this week’s guest on AI & I. Ford is the cofounder of Aboard, where he helps enterprises build custom software by pairing them with an AI platform and a team of experts who help put it into production. He’s also a prolific writer, having spent decades writing about technology for Wired, Bloomberg, and his own blog.

Ford has been on the front lines of AI coding—using it inside Aboard, watching what breaks and what holds. The speed and ease with which Opus 4.5 writes code left his jaw on the floor.

Like Dan, Ford spent much of his Thanksgiving weekend playing with Anthropic’s latest model to vibe code little apps like a news tracker and a musical synthesizer. The experience of coding with Opus 4.5 has left him feeling wonderstruck, excited—and, somewhat unexpectedly, a little unsettled. Dan and Ford spend some time lingering in this discomfort on the show. As two lifelong tech optimists, they talk about the valid concerns an enthusiast might harbor as the ground shifts under our feet.

Here is a link to the episode transcript.

You can check out their full conversation here:

Here are some of the themes they touch on:

Know when AI is reflecting your own thinking back at you

Ford prompted Claude Code to produce a “mild-bearish” forecast for the management consulting industry. The model responded with bleak graphs and revenue projections that all landed on the same conclusion: The industry was doomed because its core value proposition—“structured thinking, applied to ambiguous problems”—happened to be exactly what AI is good at.

This leads Dan to point out that there are two categories of problems in the world: those with a single right answer—four is the only answer to two plus two—and those that don’t have a single solution, but a spectrum of plausible answers. Forecasting the future of consulting belongs to the second kind—and therefore you need to be mindful when you ask AI a question like that. If you tweak Claude’s prompt (say, drop the “bearish” framing), it would likely generate a persuasive case for the opposite outcome.

Ford says that Claude’s consulting forecast was basically a “mirror of [his] anxiety at the moment.” The back-and-forth reminds him of a difference in the way traditional software and LLMs work: We’re used to typing a search query into Google and then scrolling through results that answer our question. When we prompt a chatbot with a question, it translates our idea into another form—more specifically, into something that tends to reflect what was already assumed in our prompt. It might feel “suspiciously like an answer,” but it’s not quite the same.

Ford is wary of how chat interfaces encourage us to anthropomorphize the model, and wonders if users would be better served by another format that is more transparent about how an LLM reaches an answer.

(To really understand this point, I recommend listening to Ford narrate Claude’s savage vignette about a fictional ex-McKinsey striver, Alexandra Torres—who “did everything right” and still ends up in a mid-cap industrial strategy role.)

Don’t forget to ask what’s inside your LLM

Ford wants AI labs to disclose more about the data they use to train LLMs. “I want some ‘nutritional guidelines’ as to what’s in my Anthropic LLM…and where that data came from,” he says. He points to Google as a model for how this can work in practice: Google’s search crawler generally follows the instructions in a site’s robots.txt file—a simple, public file a website can publish to tell automated bots which pages they may access and which to leave alone. LLM makers, he suggests, should offer similarly legible information about what data they use, so users aren’t blindsided later by copyright fights and similar scandals.
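As a sketch of how the robots.txt convention works in practice (the file contents and bot names here are made up for illustration), Python's standard-library `urllib.robotparser` can check whether a given crawler is allowed to fetch a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt of the kind any site can publish to tell
# automated bots which paths they may fetch and which to leave alone.
robots_txt = """
User-agent: *
Disallow: /private/

User-agent: ExampleBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A general-purpose crawler may fetch public pages, but not /private/.
print(parser.can_fetch("SomeCrawler", "https://example.com/articles/post"))  # True
print(parser.can_fetch("SomeCrawler", "https://example.com/private/data"))   # False

# A bot named ExampleBot is told to stay out entirely.
print(parser.can_fetch("ExampleBot", "https://example.com/articles/post"))   # False
```

Ford's point is that this kind of machine-readable, publicly auditable convention exists for web crawling, but nothing comparable exists for disclosing what went into a model's training set.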

Make room for the emotional load that AI brings to the fore

Ford points out that the software industry has always been a social system, with a hierarchy and status markers—“I’m a frontend engineer,” “I’m a designer,” “I’m a product manager”—that people use as a way to signal their identity, and find a sense of comfort in where they fit. LLMs blur those categories, and when the old rules don’t hold, the destabilization can feel, in his words, “overwhelming.”

What he’s pushing back on is the impulse—common among early adopters—to treat that reaction as irrational, or as mere lagging skepticism to be corrected. Even though he’s been a “software person” for decades, he admits there are moments when the changes land as “a fricking smack across the face.” Instead of rushing past that feeling, he tries to stay with it, and sit in the discomfort long enough to understand what it’s pointing at.

What do you use AI for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you.


Timestamps
  1. Introduction: 00:01:57
  2. How Claude Opus 4.5 made the future feel abruptly close: 00:03:28
  3. The design principles that make Claude Code a powerful coding tool: 00:08:12
  4. How Ford uses Claude Code to build real software: 00:10:57
  5. Why collapsing job titles and roles can feel overwhelming: 00:20:12
  6. Ford’s take on using LLMs to write: 00:22:56
  7. A metaphor for weathering existential moments of change: 00:24:09
  8. What GLP-1s taught Ford about how people adapt to big shifts: 00:25:45
  9. Why you should care what your LLM was trained on: 00:49:36
  10. Ford prompts Claude Code to forecast the future of the consulting industry: 00:52:15
  11. Recognize when an LLM is reflecting your assumptions back to you: 00:59:18
  12. How large enterprises might adopt AI: 01:12:39

You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links are below:

  1. Watch on X
  2. Watch on YouTube
  3. Listen on Spotify (make sure to follow to help us rank!)
  4. Listen on Apple Podcasts

Miss an episode? Catch up on Dan’s recent conversations with founding executive editor of Wired Kevin Kelly, star podcaster Dwarkesh Patel, LinkedIn cofounder Reid Hoffman, ChatPRD founder Claire Vo, economist Tyler Cowen, writer and entrepreneur David Perell, founder and newsletter operator Ben Tossell, and others, and learn how they use AI to think, create, and relate.

If you’re enjoying the podcast, here are a few things I recommend:

  1. Subscribe to Every
  2. Follow Dan on X
  3. Subscribe to Every’s YouTube channel


Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.

We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.

We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.

Get paid for sharing Every with your friends. Join our referral program.

For sponsorship opportunities, reach out to sponsorships@every.to.
