Transcript: ‘Anthropic’s Newest Model Blew This Founder’s Mind—And Made Him Uncomfortable’

‘AI & I’ with Aboard’s Paul Ford

The transcript of AI & I with Paul Ford is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

Timestamps

  1. Introduction: 00:01:57
  2. How Claude Opus 4.5 made the future feel abruptly close: 00:03:28
  3. The design principles that make Claude Code a powerful coding tool: 00:08:12
  4. How Ford uses Claude Code to build real software: 00:10:57
  5. Why collapsing job titles and roles can feel overwhelming: 00:20:12
  6. Ford’s take on using LLMs to write: 00:22:56
  7. A metaphor for weathering existential moments of change: 00:24:09
  8. What GLP-1s taught Ford about how people adapt to big shifts: 00:25:45
  9. Why you should care what your LLM was trained on: 00:49:36
  10. Ford prompts Claude Code to forecast the future of the consulting industry: 00:52:15
  11. Recognize when an LLM is reflecting your assumptions back to you: 00:59:18
  12. How large enterprises might adopt AI: 01:12:39

Transcript

(00:00:00)

Dan Shipper

Paul, welcome to the show.

Paul Ford

It’s great to be here. Thank you.

Dan Shipper

I am so excited to get to interview you. For people who don’t know you, you’re the co-founder of Aboard, which is an AI-powered software delivery platform for businesses. But closer to my heart, you are a fantastic writer. You wrote a piece when I was in college that is, for me, the piece of that era: “What Is Code,” for Bloomberg.

Paul Ford

Thank you.

Dan Shipper

I would love to revisit that piece in a second, but—

Paul Ford

No, when you were in college a mere 10 years ago. Oh, Dan, that’s fine. You drink some milk and talk to me here then that’s great.

Dan Shipper

I have to do stretches now. I didn’t have to do stretches before.

Paul Ford

Oh yeah, yeah. At least you can do the stretches, dude. Enjoy it.

Dan Shipper

So, super excited to talk to you. But I think the thing that we are both super excited about is Claude Code, and in particular—

Paul Ford

Well, wait, wait. I’m super excited about my own product. But yeah, Claude Code. Let’s talk about it. Dude. What the hell just happened?

Dan Shipper

Yeah. The world changed last week and I think people—

Paul Ford

People don’t know yet. It’s like they just don’t know it changed. Can you articulate it? I have my own thesis, but what do you think it is?

Dan Shipper

It’s Opus 4.5 and Sonnet 4.5 inside of Claude Code. It was a step change. How would you describe it?

Paul Ford

Very similar. So I’ve been, we have a tool that we built, if you go to Aboard.com you can use it on the web, like you build software for businesses at the prompt. And we’ve been trying to wrap guardrails around the chaos of vibe coding, because it doesn’t finish things. The last mile’s really long. It tends to leave a lot of loose ends. And so we’ve been very, very involved in the space and stayed really connected to it. And then about two weeks ago, right, like something changed and they sort of released their models. And I think what I would say is that Claude Code is, I would go so far as to say it’s the first true product built on top of an LLM. There are a lot of products and, you know, I want to believe that we’re in there too and so on. But what we’re all trying to do is build constraints and systems and kind of recursive methods of understanding what the output is and making it better, and making the LLM actually work the way people expect it to without all the strange endings. And Claude Code feels like they took that seriously. And in a funny way, I think it doesn’t represent some giant step change in the capability of an LLM. It feels like Sonnet and Opus are better, but they’re not like 9,000 times better. But they added in a layer of kind of agent-style thoughtfulness to the product. So it’s constantly evaluating its own outputs and then improving them, which leads to these really, really complex outcomes when it comes to writing code.

And so I’m in the same boat. I have a set of benchmark projects. There’s one built on this database with a terrible name. It’s called IPEDS. A friend of mine asked if I could work with it like a year ago using AI, and it’s a government-produced database that every college has to fill out: what are their majors, what’s the gender and race breakdown at the school, what is tuition, and so on and so forth. And it’s grisly. It’s Microsoft Access databases and huge data dictionaries. And it’s the sort of thing that literally I wouldn’t have touched at an agency without hundreds of thousands of dollars to staff a team of engineers. It was a horrible, horrible programming problem to take this, transform it, and put it on the web in a modern way. And man, I just knocked it out. It wasn’t easy. I still had to know a lot of stuff, but it did a really great job and it built me a nice visualization with smart search. And I had to create an AI-enhanced search tool. I’ve been using it to set up a pipeline to build little musical synthesizers just to see how that could work. And today I was like, hey, clone a TR-808 drum machine, and it did it in 20 minutes, right? Now, I spent whole days creating that pipeline. But that used to be the work of a company. And so I think what’s tricky, I don’t know if you have this experience, what’s really tricky is you go, wow, I’m powerful. And then you realize, no, this is everybody now. You feel like you’ve captured something, like you got the ultimate Pokémon, but everyone’s getting the same Pokémon shoved into the mailbox. This is me trying to come up with an analogy that connects with you as someone who’s a lot younger.

Dan Shipper

Thank you for being so relatable. Yeah, no, this is part of my job. I totally agree. I actually did a whole presentation for our team this morning on what I think has changed about programming. And I would be curious, I think you’re the perfect person, actually, to talk about this with.

The thing that is really interesting, the design principle that makes Claude Code so powerful, is that anything you can do on your computer, Claude Code can do. And it has a set of tools that are below the level of features. They’re low-level tools: files, command-line tools. It’s bash, it’s grep, all this stuff. And what that allows you to do is it creates a system that is very composable and very flexible, that you can build on top of and use in ways that they couldn’t necessarily predict. And what’s also really important is what that means: the programs, the features of Claude Code, are actually just prompts. They’re slash commands and subagents. So you can write features in English, which lets you iterate faster as a company and also lets your users make their own features. And I think that is a general principle that you can start to apply to any AI-based application as a product principle: anything a user can do in our application, AI can do. And generally we’re trying to move what used to be product functionality written in code into prompts, where the agent uses low-level tools to accomplish the feature outcome. And that opens up all of these interesting, cool new doors for software development.
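[Editor’s note: a concrete illustration of “features are just prompts.” In Claude Code, a custom slash command is simply a markdown file of instructions placed in a project’s `.claude/commands/` directory; the filename becomes the command name. The example below is hypothetical, not something discussed in the episode. Saved as `.claude/commands/review.md`, it would make a `/review` command available:]

```markdown
Review the staged changes in this repository.

1. Run `git diff --staged` to see what changed.
2. Flag likely bugs, missing tests, and unclear naming.
3. Summarize your findings as a short bulleted list, most severe first.
```

The “feature” is pure English; the agent carries it out with the same low-level tools (bash, file reads) described above.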

Paul Ford

I agree. Look, I think the patterns in this kind of programming, this kind of thinking, are really, really different. So I’ll give you some Claude examples. But frankly, as a company that’s building a tool along these lines, I think the patterns are emerging for everybody, in sort of all the LLMs. It’s just that Claude Code really bundled them up very, very efficiently, and it kind of hit its core audience of engineers like a slap across the face. Because they literally were like, here it is. Here’s the future. It’s going to look like this. And we all went, yeah. All right, man. Okay. You got it? Yes, yes, yes, yes. Mr. Claude.

There are a few patterns, right? So yeah, everything you’re saying, like you’re bundling stuff up as sentences. There’s another aspect: it integrates with the existing system, so it’s not this world apart. It’s an and. And actually, what I found over the last week is that where I normally would go to a command line and start typing, I start typing in English, forgetting I haven’t gone into Claude, right? It’s so immediate because it’s so much better at building and orchestrating. And you know, it’s funny. I’ll give you an example. I wanted to deploy something I built, that weird database I was talking about earlier. And so I went to Fly.io, which is a very fast deployment environment, because I was like, I bet it’ll be able to coordinate well here. And then I just was like, wait a minute, I have this random-ass server just sitting somewhere that I use for scratch projects. Can you just SSH into that and just deploy this thing for me? And it was like, yeah, no problem. And it just jumped onto the box and looked around. It was like, oh, it’s an Ubuntu server. Yeah, let me update your Nginx. Ooh, you need to get the certificate installed here. Let’s go ahead and do that. And 10 minutes later it was done. And then the killer was, I was at Thanksgiving, and my friend’s dad was like, boy, I really need to make a searchable index of this one politician’s newsletter for oppo research. And I was like, man, that’s something. He is like, yeah, I’ve been cutting and pasting into Google Sheets. And I’m like, is it all available on the web?

(00:10:00)

And he is like, it sure is. I opened up Claude Code on my phone and literally, between turkey and dessert, I built and shipped it. It was SQLite on the backend. It works just fine. He’s going to do his oppo research, don’t worry. He’s on the right side. So I shipped a pretty complicated full-text search. I know that whole architecture really well, so it was really easy to instruct it, but off we went. And it’s also good at dealing with what’s already there. I didn’t have to use all the new custom fancy stuff. I could just use an old server that was sitting around, because it knows. And so there’s all of that going on, and I think that as I’ve been working with it, what I’m finding is you gotta think not just in terms of solving the problem, but one level of abstraction up. Like, I had it build a little musical synthesizer for me that emulated a Moog synth, something I know a reasonable amount about. And it did an okay job, and it had a lot of caveats, and the remaining work on it would be hard. And I didn’t do it. But then I was like, okay, one level up: you need some more information about digital signal processing. So I’m going to go spider some books that are available for you online, and I’m going to put them into a database. Whenever you have a question, search this little tiny SQLite database and refer to it. So then I gave it a reference source. And then I was like, wait a minute, you keep writing code, Claude. You have to calm down, because your code’s okay, but it’s not that great. I want you to go find all the open-source libraries that are really good at digital signal processing, which is really edge-case-y, and I want you to make a list of them, and I want you to only build based on those things. You should adapt and create a library, and then you should implement it based on that library.
And as five or six things at that level of abstraction unfolded, I’m now able to say, hey, make me a synth like this, and come back 20 minutes later. And that is a lot, actually. It’s a little emotional and confusing to process after 200 years as a software person. But you have to work at that level, and I think that’s the skill that’s going to be emerging.
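[Editor’s note: the reference-database pattern Ford describes, loading spidered documents into a small SQLite database with full-text search that the agent can query before writing code, can be sketched in a few lines of Python using SQLite’s built-in FTS5 module. The table name, sample documents, and query below are hypothetical, purely for illustration:]

```python
import sqlite3

# Build a tiny searchable reference database in memory.
# (On disk, this would be the "little tiny SQLite database" the agent queries.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")

# Hypothetical spidered reference material on digital signal processing.
documents = [
    ("Filters", "A low-pass filter attenuates frequencies above its cutoff."),
    ("Envelopes", "An ADSR envelope shapes attack, decay, sustain, and release."),
]
conn.executemany("INSERT INTO docs (title, body) VALUES (?, ?)", documents)

# Full-text query: rank matching documents for a question about filters.
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank", ("filter",)
).fetchall()
print(rows)  # [('Filters',)]
```

A full-text index like this is also one plausible way to build the searchable newsletter he describes shipping over Thanksgiving: insert the scraped pages, then MATCH against them.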

Dan Shipper

Yeah, I agree. I wanna stop you there, at the emotional level of 200 years as a software engineer. I think there are just a lot of people who are professional software engineers, who love the craft of code, and who maybe are pretty skeptical of AI, because they’re like, well, it can’t write the well-crafted code that I can write. It does all these things where the code is not efficient, and it’s maybe not as DRY as it needs to be. There’s all this stuff, right? But also, if someone like you uses it, you can move to this level of abstraction where, to some degree, that code doesn’t matter, or it doesn’t matter as much as it used to. How do you square that craftsman mindset about code with what is now possible?

Paul Ford

Damn, man, I don’t know. I don’t know this week. I think two weeks ago I would’ve been able to answer that. But I gotta tell you, I’ve been watching all this stuff real closely. I know how LLMs work, I did the homework and so on and so forth, and I kind of knew we were headed in this direction. But again, it’s a step change in product. It’s not a step change in technology. The technology is still roughly the same. But there’s also this element, one of the things we haven’t talked about yet, which is that you can instruct it to get better. You can be like, hey, if you were a really good engineer at Anthropic, take a look at this code base and tell me how to make it more efficient. And it’s like, well, I would do these things, and get this stuff out of this file and put it over here, and make this more searchable, and let’s make a command over here, and let me write you some code. And so it’s self-referential, which means it can accelerate. And so what I’m getting at is I no longer feel I can in good faith say, hey, calm down, take it as it comes, human skills are going to be relevant. I don’t know if this is going to be a really good time for everybody, because you’ve got 600,000 jobs at Accenture alone, and there’s like 50 million devs in the world. There’s a glass-half-full case to be made, which is, hey, everybody can clean up their roadmap. It’s a really great time for engineering to capture the value here and bring that acceleration to the organizations that they service. And everybody can have their thing. And that is really exciting and motivating. And I think that would be one way to look at this: hey, you can take your craft, you can evaluate the outputs of this, and you can make sure that the people in your world are getting good stuff faster, but also make sure it’s safe and on rails. But that just isn’t how humans work, man.
Humans just want to type in the box and get a thing, and if it kind of works, they’ll be like, I did it. Just like you with your app, or me with my apps. They might be crap. You might be looking at this and it might have AI glaze all over it, just like we see with images and text, but you can’t see it yet because it’s so shocking, and because it’s software. There’s no visible glaze; either it pulls from the database or it doesn’t. So it’s just this very confusing moment where it’s doing really practical, really difficult things that used to be really expensive. All I can tell people to do is, like somebody on Bluesky said, I don’t know who it was, and you know, Bluesky doesn’t love this stuff: I don’t think anyone should have any opinions on AI until they spend two hours in the Opus room. And I think that’s right. You gotta just give it two hours and see where you get, and then you can be as grumpy as you want. But you gotta give it a go.

Dan Shipper

I agree. I would love to get to some of the social implications, but I’m mostly interested at first, because I think the best way to understand the larger implications is to understand the implications for yourself: how it is changing how you process the world and how you think about yourself. And so I’m curious about that for you.

Paul Ford

You know, it’s funny, I’m building an AI company with a wonderful business partner I’ve worked with forever. I’m looking out, we have a nice office and we have a great team and we have clients and we work with them and we’re doing what I just described. We are moving their roadmap along and we’re bringing them tools much more cheaply and much more quickly than we used to be able to. And I think it’ll get faster, right? Like we want to drive that value out. And so in some ways things are pretty normal in that I come to work on the train every day. And in some ways they’re not in that there was so much friction built in for good reasons into the software development process. And the software development process is social. You know, like engineers say no a lot and they say no for good reason. And I used to train them to say no because clients would ask for things and it would blow up the scope, and then the whole project wouldn’t ship. And then they’d call me on a Saturday and yell at me. And I didn’t want that to happen. And so I’d be like, we gotta say no upfront. And my co-founder has a wonderful maxim, which is there’s no bad news 90 days out. If you see something failing and you tell somebody, hey, like, I think we’re going to have a problem. I’m not going to be able to build your thing. But it’s three months ahead and you say, let’s work together to find a solution. And people tend to be very accommodating and understanding. It’s only like three days before when you’re like, we’re going to miss the deadline that they freak out. And so my whole life has been architected around the fact that everything I do is exhausting, takes time and involves some of the most difficult people who’ve ever existed on the face of the earth, who usually hate me and each other. Okay. And like that is my day to day, and I’m pretty good at it. 
And everybody thinks lots of thoughts about me and themselves and their disciplines, and people are very, very anchored to their disciplines, right? Like, I’m a front-end engineer, I’m a full-stack engineer, I’m a designer, I’m a product manager. And to see all of those categories blur and all of those rules change and all of the things that allow people to say where their value is, is frankly really overwhelming. And I don’t want to devalue that emotional response because I’ve been kind of coming in and being like, hey, let’s all do this together and let’s move forward. But boy, I don’t know about you, but there are elements of this that are just a fricking smack across the face.

Dan Shipper

It’s interesting. I’ve definitely, I’ve had moments of that both on the writing side and on the coding side, but I think that we’re so in the center of just figuring out, okay, what do we do now that it has quickly shifted to like, there’s so much to do. So it’s, I think I’m familiar with the emotional experience.

Paul Ford

Well, you chose to jump in, right? You’re like, I’m going to build infrastructure and community in order to address this change. We built a lovely office. You should come visit, literally. Because we know that New York City is not ready for AI, and we’re like, okay, let’s at least have a place where people can gather, and we’ve been having not-for-profits in, and lots of folks who are going to get ignored, so that we can talk about this. So I think that part feels really good. I think it’s just a lot of change. We got the pandemic, we got GLP-1s, and now this. Writing is funny for me too, because it doesn’t write for me. I kind of don’t get it to write for me. It just can’t be me. I just am what I am as a writer. But I see a lot of people who aren’t writers, and my God, it’s good for them. It gives them access to a world, and lets them enter into a more formal style of communication that they didn’t have before. And so to me, writing is supposed to empower, and if the robot helps you, that’s good. If the robot thinks for you, that’s bad.

(00:20:00)

Dan Shipper

Yeah, I’ve been trying to process: okay, what are those moments where I have that existential freakout? What is that like? Because I had that a few times during this process, and each time, once I got over it, I felt like, okay, there was something there that I missed, and I’m trying to update my intuition, or my analogies, so I can understand those experiences better. There’s that moment where the present sort of collapses into the past, and everything that you used to know looks really old, and you’re like, what’s next? And the intuitive experience that I think matches this most closely is: before we had really good sea travel, we used to think that if you went into the ocean, there would be an edge that everything would fall off of. There’s an edge of the world, and that’s our intuitive notion, in a lot of ways, of what happens when you get to the horizon. And what we found when we got to the horizon is that there are more horizons. And I think that maps pretty well onto my experience with AI. Each time I encounter this new thing, I’m like, oh my God, I’m at the edge of the world, and it’s a cliff, and it’s just going to drop off. And then each time, I step over the horizon and I’m like, whoa, there’s this whole new territory. Which is not to say that there are no bad effects, or that there aren’t complicated social issues to work out. But it is to say that I’ve learned to catch that edge-of-the-world intuition and update it with: there’s probably not an edge. There’s just a new horizon.

Paul Ford

That’s a good way to look at it. I agree with that. I think for me, I don’t think human beings are going to change. I don’t know if society will completely reorder itself, although in a way it seems to be trying to, so that part’s tricky. But I think what’s wild to me is learning how hard it is for humans to metabolize change. For me, the moment that blew my mind, the last time I felt exactly like this, was when my doctor put me on Mounjaro very early. I needed it. And what’s Mounjaro? It’s like Ozempic. It’s a GLP-1. Okay. So suddenly, after a lifetime of not being able to lose weight, I lost like 70 pounds in a hurry. And I was very dangerously big. I’m still pretty big, but my health has changed. And it was really, after a lifetime of being told, this is how this works, this is the only way it works, you can only do surgery, there is willpower, and so on. All these rules and this whole social system and things that I heard from doctors, and one day they went, eh, and it was really confusing.

It was real. I’m an adult man and it was really confusing to go from, this is the system of the world. This is what weight is, this is what obesity is, and these are the only ways that things can change, and then to hear the next day that actually it kind of was a medical condition, whoops. And then knowing that this would push through the world and this would change the way that we talk about our bodies completely. And it did. Like I just, like I knew in that moment like, oh, we’re not going to put this back in the box. This is going to be very different. People are going to have very strong opinions about it. Oprah’s going to do a special, and here we go.

And I feel that way about this. Not that we can’t process the change, but a year or two is nowhere near enough time to process the idea that you can just have code by typing in a box, and that it’s pretty advanced and does things like ship apps. It’s just nowhere near, and it’s actually going to look like that horizon. It might take a couple of years for people to figure out that they can have any software they want, anytime. I use a concept a lot that I call latent software: PDFs that describe procurement forms, or Google spreadsheets that are floating around. My company Aboard is all about taking latent software and making it real and getting it into people’s hands. And so we’ve been trying to coach people along, and they’re very confused. And now you’re about to see that OpenAI is going to build their own, and you know that Copilot’s going to get smarter, and you know that there are going to be Super Bowl ads, if not this year then next year, about how you can have anything you ever wanted. And we just rebuilt the whole society over the last 30 years around software, right? Software is eating the world was this whole idea, and now it’s eating itself. And so, look, you’re right.

Like are we going to be okay as a species? Eh, about as okay as we ever are. Will there still be jobs? Yes, right? Like I’m not actually a pessimist, but I am after the pandemic and GLP-1s and Trump and everything, I’m just very nervous about the human ability to tolerate change. And we’ve created the ultimate change engine that sits in the middle of our global economy and spews out change at an unbelievable rate. And we just created the number one change accelerator possible, which is to move software much, much faster. And so I don’t think we’re going to see, it is not going to be familiar. Parts of it will be very familiar, but I think parts will be very, very weird and it’s going to be really, really strange to watch.

Dan Shipper

I love the GLP-1 example. And it’s interesting that you listed GLP-1s with Trump and the pandemic, which in my world are two pretty negative things, but GLP-1s, I assume, you have a positive experience with. So it’s sort of interesting.

Paul Ford

Change is hard, man. I was in client services for 20 years. It is hard. I am still in it. I have a really good product that can really help people. I have an organization that can really help people. I see Claude Code showing up and I am showing it to people in my world because similar to you, I’m like, whoa. And they’re like, well, hold on a minute. And I’m like, no, and it’s not me saying, I want you to use this, I literally just want to say, and it was like this when I was writing, I just want to show you so that you can figure out what to do next. And what I have found over and over in the course of my life is that merely by showing people, they tend to panic. They don’t want this change.

And they say they do. They want the output, they want the value. Everybody wants to be an app developer, but what they want is for it to run the way it used to. I don’t know if you’ve noticed this, but every product manager you know is now building their own app. And every engineer is building their own app without product managers. And the product managers are building without engineers, and the designers are trying to figure out how to ship. And they’re all really happy to get everybody out of their world, right? And they’re pretty sure they’re going to be able to capture the value of the revolution while following the rules that used to be there. But it won’t work that way. No, it won’t. And so, I don’t know what we are. Are we all pipeline builders? Are we all coders now, or are we all app builders?

And everybody who is deep in this is having the experience you and I are having. But we’re about to find, two weeks from now, that everything we created is probably more disposable and less exciting than we thought it was. And so I am puzzled by that. I think this is going to be a rough one. Deep down, an exciting one, with an enormous amount of good things. And I’m so excited for everybody to have all the software they ever wanted, because that’s always been my dream. But now that it’s here, I’m a little scared.

Dan Shipper

Isn’t that interesting? I’ve been thinking about that too, a little bit. If I took a step back and rewound like seven or 10 years, and I said, there’s just going to be a thing where you type into it and it just makes whatever you want. Yeah. I would’ve been like, that’s great. That’s definitely not scary.

Paul Ford

Well, it finally happened. Yeah. They’ve been promising this, they have been promising this for 70 years.

Dan Shipper

And then it just happened and then you’re like, like, it makes me question if anything could happen that would be an unalloyed good.

Paul Ford

Hmm. No. That’s been the lesson for the last like 15 years. No is the answer. And that’s, I don’t know, that’s also the lesson of adulthood, right? And it’s also the lesson of working with people. When you work with people, their best qualities are always their worst qualities. You know, I’m good at thinking big thoughts, but often terrible at delivery. So you have to pair me with somebody who’s good at delivery. Because I get distracted. You know what’s funny, though? Tangential to that: the promise of software, if you go back to the Xerox PARC days, even before, to the Lisp programming language and so on, is that we would have sets of composable objects that could interact, and that an average human being would be able to learn the system and build whatever they wanted. That was the whole point of Alan Kay and the Dynabook in the seventies. If you don’t know what that is, it’s essentially a laptop that kids can use to build any software they want, proposed in the seventies at Xerox PARC. Go look at the Wikipedia page. It’s kind of what we thought, and then we thought that was going to be the iPhone, right?

(00:30:00)

Paul Ford

We thought that was going to be it, and particularly the iPad, to the point that Steve Jobs and Alan Kay were kind of talking about that when the iPhone was being rolled out. Hey, I think we’re getting closer. You know? And the idea was you’d manipulate code in ever more abstract ways. And what happened instead is that computers continued to suck and suck and suck and be horrible and never work. And our solution, LLMs, was actually to simulate humans so that they could do it for you, rather than make the computer really, really usable or figure out how to make really, really robust code. And there’s good reasons for that, but I don’t want to go into them right now.

But people have been trying for decades, and so suddenly we have it. We have the fantasy of the seventies. I think at this point I can train anybody to think algorithmically and structurally enough about applications, and there’s going to be a lot of retooling around how we educate people about what software does. But I think in about two weeks you could start to build really, really meaningful stuff. And I think in about two years you can probably build just about anything. And that used to be the work of 20 years. That is great. And it is great. I don’t want to freak out too much. It’s Thanksgiving weekend as we’re recording, it just ended, and I just spent too much time on the computer.

Dan Shipper

I think so, but I wanna stick there, because I love it: that’s the story of adulthood. Because you’re absolutely right. And that is my problem with a section of the AI discourse, I would say the more mainstream section, which has this hidden underlying assumption that anything that could have negative effects is bad, and so is looking only for those, more or less. As opposed to, like in adulthood, you’re like, there’s some really good stuff here and there’s some problems here. And it’s sort of a wonderful and terrifying mix of things. And our job is to acknowledge the good stuff and deal with the bad stuff as best as we can. And I think that’s what’s difficult to access when you’re at the edge of the world is like—

Paul Ford

Oh, okay. I know exactly what you’re talking about here. I see it differently. So you’ve got a variety of discourses, right? So let’s take one, which is, and the one you’re talking about is like very left-adjacent, very much shows up on Bluesky, right. In some ways, that’s kind of my home base. Like that’s my family, the way I was raised. You’ve got one group that is like AGI is coming, get ready. The computer is God. Okay? And so like we’ve all kind of learned to make our peace with them. They don’t live here in New York City. We’re just going to, like, they seem good. It’s a lot of guys, a lot of polyamory and good for them. I wish—AcroYoga, you know? Yeah. And they also really, like, they’ve also kind of all shut up about AGI because there’s so much money to be made. Like Sam Altman cracks me up, right? Because he wants to be Steve Jobs, but he is Steve Ballmer. He just kind of got the wrong Steve. And it’s just like, here we go. Okay. Commerce, capitalism.

Dan Shipper

That is a hot take. I mean, am I wrong though? Tell me if I’m wrong. I would love for you to unpack that. I think that’s a great line.

Paul Ford

Do I even need to? He’s a really, really good salesman. He is a really good deal guy. He told us we were headed towards AI Jesus, and now we’re getting shopping, right? Like he’s a commerce guy. I don’t actually, I think he’s good at that. You know, I think Anthropic is funny if you compare the two companies, like OpenAI is very much Microsoft. Whatever you want, whatever you want, we’re going to sell this to you and you’re going to have it. God, yeah. Let me give you more. And Anthropic is Google. And it’s actually funny because look where they’re buying their chips. Anthropic is literally buying Google TPUs. Like they’re—

Dan Shipper

I thought you were going to say Anthropic is—

Paul Ford

Apple. No, nobody’s Apple because nobody’s really, Claude Code is great, but it has nothing to do with human beings. It has to do with, it’s still for engineers. You can’t put anyone, you can’t put a civilian in front of that interface. It makes no sense. That’s true. You just can’t. Now, could they get there? Maybe. I just don’t think they even want to. I think they want to just accelerate, accelerate, accelerate engineering and let everybody go run off, and then they’ll figure out how to productize along the way. Whereas like, I think OpenAI wants to make a play for the whole shebang. They want to be the operating system. And the Apple in the middle, the people like, what’s it going to look like? The thing about Apple is it made the computer disappear. So who’s going to make the LLM disappear, right? And just sort of align it with what people want to do today. And I don’t know if we’re even there yet with this technology.

Dan Shipper

I don’t think so either.

Paul Ford

Oh, so wait, so that’s group one. Okay. We got group one, and then here is my, I’ll actually give some advice, which is Silicon Valley in particular dropped this absolutely bizarre thing, told everybody it would solve every possible social ill, and didn’t really come with a plan. And there were real harms that emerged and people panicked. And the harm frameworks weren’t clear. And I think what we gotta do, because I’m in there too, man, I love this stuff. I use it every day and then I go on Bluesky where like 80% of my feed is people saying how much they hate everything that I’m touching all day long. And I get it. I get it. Because they also hated the tech industry. I think you gotta just let them burn it out. There will be people who just hate this shit for the rest of their life. And what you’ll find, because I’ll tell you, here’s what’s wild, and this is actually as someone who very much feels like he’s on their side. I got my kind of progressive literary types from, I used to be an editor at Harper’s Magazine, right? So like there’s a whole world there for me where those people want nothing to do with this. They want their prose untouched by a robot and they want a certain world and a certain vision of the world to persevere, and this is all noise and distraction from that. Just like everything is, just like the tech industry is, just like the web is, like blogging was, and they’re just like, please let me get back to my purity and please get out of my hair. And okay. Like that’s what they want. But then there’s this very tricky thing going on. There’s a lot of people who are like, this is just an outright evil and we have to reject it. And at the same time, I’m sitting here in my nice office in New York City, but I’m hearing from and working with children’s health charities and scientists and real do-gooders and climate types who are like, this can accelerate our roadmap and we want to do it. 
We want to use these tools to achieve our mission. And their mission is doing what I believe to be positive in the world. They see the value. They’re often coming to it like scientists, they see the risks and they’re like, let’s please use it in order to get that done. Software is not the star of the show for them. Their work is their community, their donors. And they’re like, what can we do to aggregate the data or deploy the platform or manage the content or do this stuff in such a way that we can do more of the other things we want to do, which is what we believe in, good for the world. And they’re super excited and motivated. And so what I see when you’re talking about that stuff, there’s actually a strange fork. There’s a group of people who are like, I believe that I have a really good ethical model for what humans need, and I believe we have to reject this outright. And then there’s another group that is like, I believe that and it’s my day-to-day job.

And group A is like, keep this out of everything. Group B is like, I can’t wait to use more of this. And it’s very, very confusing. And I think that tension is going to just keep rising. And at the same time, there are people who are like, I’m a professor, I teach research methods. I don’t want this near my students. I need their brains to work. And I get that. I actually think that’s right. Like good, okay, draw that line. Make them figure it out. They’re going to go use it anyway. They know that. But if you want to put them in a box for a minute so that they actually learn the history of how to think and what to do, and you feel that that’s important as an educator, I’m not going to second-guess you. I respect that. So I think it’s trying to find a balance in all this, but ultimately the balance is like you’re there with that prompt and it does something for you that’s really useful. And kind of knowing what’s good and what’s bad about it, and then going on with your life. Because if you even try to engage with any of the discourse around this technology, you’re just in hell. I mean, I’m glad I didn’t start a business totally focused on that problem.

Dan Shipper

This is why I stay off of Bluesky. I can’t imagine, I just can’t imagine being you on Bluesky. It sounds like it sucks.

Paul Ford

I get a funny hall pass with this stuff because I’m an old, you know, and I still get yelled at on a regular basis. But like, yes, yesterday, Simon Willison, who I’m guessing many of your listeners know, sort of stirred the hornet’s nest by talking about how AI was changing coding. And I just said, like, he’s right. You should read the post. And you know, what happens is half the people come out and they’re like, yep, yep. And then the other half are like, no, there’s this one time and it’s this and it’s that. And let them fight, man. Let them fight.

Dan Shipper

In your mentions. I think this is actually a very typical basic reaction to a paradigm shift. Yeah. And to some degree, people who have, who are like, know how they do things and want to keep doing it that way, are just going to keep doing it.

(00:40:00)

Paul Ford

And it’s the same thing. It’s so sad. You also got people coming in from the West Coast telling you how it must be done forevermore. Yeah. And that feels really bad. And they just dismiss your concerns, right? We’re used to it. We’re tech nerds. And we’re used to nerds just kind of like stumbling in. Nerds never actually fully acknowledge how much power they have in a room. And so they’re like, whoa, why is everybody so obsessed? It’s just really cool technology. And then it’s like, whoa. Because I was going to make my living as an illustrator and I was going to send my children to a, like we were going to go on vacation once. And they’re like, well, whatever, UBI. And like that whole thing, that’s how that comes across. It’s just this tin ear on the West Coast. And it is pretty hard for people, I think, to be told over and over how they’re, it’s okay that they’re being devalued without being celebrated in any way. And so you end up with stuff like Anthropic having to pay $1.5 billion to publishers, right? Because like all that stuff, it’s just like these, they feel vulnerable and then they feel attacked and then they’re going to use what power they have. And one of the powers they have is to just complain. And I don’t know, I think you gotta, we have to own that because we got to keep all the money.

Dan Shipper

Well, let’s, let’s unpack that a little bit.

Paul Ford

Okay? First of all, let’s bring in some employees to unpack it with us as the leaders of our companies, right?

Dan Shipper

Hey guys, come on in. Let’s talk about how we, what I want to understand, what I want to understand, like, I love them, they feel vulnerable. And if you feel vulnerable and something new comes along, it’s like, it’s an obvious immediate reaction to be like, this is bad. I don’t, I don’t like this, I don’t want this, right? But it doesn’t help that—

Paul Ford

They all went to the White House and Kumbaya with Donald Trump, including Jensen Huang. I mean, it doesn’t help the vulnerable people feel less vulnerable. Yeah. Let’s just, just putting that out there. Anyway, go on.

Dan Shipper

That lost my mom, which is for real. Yeah. Yeah. But—

Paul Ford

I think Dan, whatcha doing in league with Satan?

Dan Shipper

You’re replaying my Thanksgiving conversations. No, my mom is much, she’s very proud. She shouldn’t be.

Paul Ford

She should be very proud, but she wants—

Dan Shipper

Me to be careful. Oh, you better be careful associating with the league. I think, but let’s anoint ourselves in-between-worlds type people. Okay. Where we like the tech stuff. And then we also care a lot about writing and the humanities. And so ideally, because we’re amazing New York tech people, we can kind of be the bridge that’s missing between these two camps. And what I want to understand, let’s say we’re trying to explain, unless—

Paul Ford

Literally we, you and I can do an event, have a nice space. We can bring them all here. It’ll be good.

Dan Shipper

I would love that. That would be amazing. All right, we’re going to do that. We’re going to do an event where humanities people can come yell at us.

Paul Ford

I didn’t sign up for that. You’re the one on the Bluesky. You get the—no, no, no. We’re doing it. We’re going to bring in the angriest overpaid professors from the most expensive schools in America to tell us how bad we are.

Dan Shipper

Now here’s what I want to understand. Let, like, let’s just, let’s take the balance perspective for a second. Okay. And say we want to examine the arguments of the people on the left who are loudest about this being bad. And like what are, what do you think are the actual real bad things that have happened or are happening or will happen that a reasonable person who loves this technology should care about?

Paul Ford

That’s a very good question. Let me think for a second about running my mouth because I think, look, there are a lot of stories and narratives about specific harms. You see them in the paper. And you know, it’ll be ChatGPT encouraging suicide in teens. And I think there’s an element, I have a tricky reaction to that because, as a technologist I’ve watched and I’m 51, right? So I’ve watched like two or three generations of internet technology and these harms just spill out at scale. And it’s really not, stopping the harm is not always possible. You have a new technology. You see ways that, and I think what happens is you see these orgs, they get a narrative of their own importance in the world because they’re getting constant positive feedback. The money’s pouring in.

People are saying, my God, this really helped my daughter, this really helped my son. This is, we’re using this in all sorts of exciting scientific ways. And then they’re shocked when something bad happens, right? Because there’s so much good pouring in and it’s coming with so much money and they’re shocked. And then they do like a full-court press. And then you end up in this bizarre cycle where it always ends up with somebody getting really into MMA as like a CEO, right? It’s just sort of like, that’s, no, but I really think that that’s like they assert, they feel so attacked and they feel so vulnerable because people keep telling them that they’re kind of evil.

That they’re like, I’m going to become a fricking cage fighter and that’s going to show them. And, and you know, it’s like, kite surfing is like the gateway drug to that. And like, there’s just like a whole thing that happens. So you’ve got this whole cultural dynamic playing out inside of giant tech orgs as the money pours in and it’s like a whole thing. And then you’ve got the press desperately seeking for very specific harms to get a story that can turn into a narrative that can be a little bit broader. And you smash those two things together and it’s pretty hideous. And the only way that you resolve that is through regulation and oversight. But our society is at least a little bit collapsing and it just doesn’t seem interested in that. And so now what, what would be a way to do this?

First of all, what would be a thing to do here? I don’t want to put LLMs back in the box. I would say that when we’re talking about harms, not specific harms, the lack of provenance is bad. I would like to know what goes into my meat. Okay. Like I want some nutritional guidelines as to what’s in my Anthropic LLM and what it’s using and where that data came from. I don’t want to be surprised by huge copyright cases. I should know what I’m using. I know that Google is the web, roughly, and Google doesn’t go into secret parts of the web and it honors robots.txt. That is a contract that Google made with the web, and when it doesn’t honor it, it’s really bad. And in fact, there have been technologies where Google kind of tried to sidestep the open web and people got really upset. Like AMP pages and things like that. Oh, you and I, you’re drinking a Spindrift, tropical lemonade.

Dan Shipper

I love it. Looks like I am too. Great minds.

Paul Ford

Spindrift, the brand of New York liberal tech nerds. God, it’s so bad. So bad. It’s a terrible place to be in technology in New York City. So anyway, coming back to it, right? Like what is the harm that’s been done? We won’t know the real harm, not the specific harm, but the broad. I don’t see it as a harm. I just see it as changing. What kind of society do we want to have to deal with the kind of change that is coming?

A 50 million-person underpinning of the entire global economy, the tech industry. You’ve got giant consulting firms, you’ve got tech integration firms and software companies. Their core product has been radically devalued. What do we think about that? Who gets to talk about that? Like who is going to, the AI folks are going to be like, it’s great. It’s the best thing ever. Everybody gets their software. I’m going to say that because I’m building a product along those lines, but like, if we’re going to have this level of change, it almost feels like you’re not even, what I think is going to shock people is how people see it coming, but then don’t really plan for it. Like everybody.

And that’s what actually panics me a little bit, Dan. Because people are like, well, you’re still going to need engineers for this, and you’re still going to, everybody is like that when they see this new technology. And I think we have to start internalizing, actually, horizons aside, that this will change a lot of the ways that people do things. And it might change the way they make money and it might change what their lives are like. So what’s that going to look like? And ironically, I had Claude make me a prediction model for the future of the consulting industry and write me little stories.

Dan Shipper

What’d you get? What did it say?

Paul Ford

Oh, dude, they were really sad. I was like, no, because I literally was like, okay, you know what, Paul? You get a little cynical. Just say mild bearish. Mild bearish. Okay. And it was like, Rahul thought that he had made a good choice by going into computer science. Like it was just one after the other. And it knew how to draw a Sankey chart. I can share it with you, I published it as an artifact. Here, let me just give it to you. Let me show you this thing. Please hold on. Because I want you to see it.

(00:50:00)

Paul Ford

One sec. There we go. You see that?

Dan Shipper

Yep, I do.

Paul Ford

Okay, so I didn’t give it this title, and in fact I tried to really hedge. I was like, “Hey, it looks like AI might really change the consulting industry,” and I want you to make a Sankey chart. You know what a Sankey chart is? It’s one of these—

Dan Shipper

The AI chart?

Paul Ford

Yeah, yeah. Stuff comes in on the left, and it gets turned into work on the right. Financial services clients feed in and then they make this much money off of consulting. So right now we’re looking at Deloitte, a giant consulting firm, and it does audit and assurance and consulting and tax and legal. Let me zoom in a little bit. Oh wait, I just zoomed in on you. Here we go.

Okay, so I said “mild bearish case.” It does seem like this could really affect these industries. Just showing me kind of what might happen if AI was going—and it’s kind of ironic to ask Claude. So I was like, let’s look at McKinsey. Everybody loves McKinsey, everybody’s favorite company.

So $16 billion in revenue, 45,000 employees, headquarters New York City. In 2024 their revenue’s about $16 billion. Now I didn’t have to do deep research. It was just very hand-wavy. So I’m guessing all this is kind of wrong. Let’s be clear—it’s not precise. But it says that by 2035, McKinsey’s revenues, if it loses digital services, are gonna get down to $4 billion. And so you can see that here. If we switch to $4 billion, the whole chart shrinks.
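The chart Ford describes, client money flowing in on the left and turning into revenue by service line on the right, reduces to a small data structure: nodes plus weighted links, where flow should balance at any pass-through node. A minimal sketch in Python, using invented placeholder figures rather than Deloitte’s or McKinsey’s real numbers:

```python
# A Sankey chart is just nodes and weighted links. Flow conservation at a
# pass-through node (money in equals money out) is the sanity check that a
# hand-wavy, LLM-generated chart can easily fail.

def sankey_data(links):
    """Collect node names and tally inflow/outflow at each node."""
    nodes = sorted({n for src, dst, _ in links for n in (src, dst)})
    inflow = {n: 0.0 for n in nodes}
    outflow = {n: 0.0 for n in nodes}
    for src, dst, value in links:
        outflow[src] += value
        inflow[dst] += value
    return nodes, inflow, outflow

# Illustrative placeholder figures in $B, not real firm financials.
links = [
    ("Financial services", "Firm", 7.0),
    ("Public sector",      "Firm", 9.0),
    ("Firm", "Consulting",          6.0),
    ("Firm", "Audit and assurance", 5.0),
    ("Firm", "Tax and legal",       5.0),
]

nodes, inflow, outflow = sankey_data(links)
assert inflow["Firm"] == outflow["Firm"] == 16.0  # flows balance in the middle
```

A shrinking firm, the $16 billion-to-$4 billion case, is just the same structure with smaller link values, which is why the whole chart visibly contracts.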

Right now we’re making our money through corporate strategy, operations, and so on. So I had to write employee stories for each company. And so there’s Alexandra—Stanford undergrad, Harvard MBA, McKinsey Associate, 27. She was on the partner track, billing $800 an hour to tell Fortune 500 CEOs what they already suspected but needed external validation to act on.

I gotta say, Claude just decided to burn the shit out of McKinsey. I’m not grinding an axe here. I was just like, “Just write little stories about what’s up.”

The dirty secret of strategy consulting was that the frameworks weren’t magic. They were structured thinking applied to ambiguous problems, and structured thinking turned out to be exactly what AI was good at. By 2027, a CEO could upload their company’s data, describe their strategic question, and get a McKinsey-quality analysis in an hour, complete with market size and competitive dynamics and three options with trade-offs. It wasn’t as polished, didn’t come with the McKinsey name, but it was 90-95% as good at 1% of the price.

McKinsey tried to go upmarket. “We don’t sell analysis. We sell judgment,” the partner said. “We sell access, relationships, and implementation support.” But implementation was getting automated too, and relationships only mattered if you had something valuable to offer.

Alex became a partner in 2029, just as the firm started its long contraction. She was one of the last. By 2032, McKinsey was a quarter of its former size, serving only the largest clients who needed the brand for board cover. Again, damn Claude. She left for a client—chief strategy officer at a mid-cap industrial company. Less prestigious, more stable. She actually got to see her decisions play out, which was novel. Sometimes she missed the intellectual intensity, the feeling of being the smartest people in the room. Then she remembered that the smartest thing in every room now was the computer.

Dan Shipper

Incredible.

Paul Ford

Incredibly great. Scary. I’ll share this with you so you can share it with your listeners, but it’s a Claude Code artifact that I built yesterday for fun. And I shared it with somebody who works at one of the firms and they’re like, “Eh, they got the numbers a little bit wrong.” And then they were just really quiet for a minute and they went, “Interesting.”

So, but yeah, Accenture—Rahul had spent 15 years... armies got smaller, they disappeared, and then at the end he took a buyout at 45. Started a small consultancy helping mid-market companies with the human side of AI adoption, change management, the squishy stuff the AI couldn’t do. “Well, it’s a living. Some weeks he almost believes he is adding value.” Mild bearish.

Anyway, is it a little ridiculous that I’m using AI to explore this particular part of the world? Sure. Do I buy this? I do. Your horizon thing is real. Nobody knows what’s on the other side. The mild bearish case is that an economic contraction won’t have a sudden flowering of new opportunity and that people won’t figure out what to do next. And they’ll just be captured in this kind of shrinking world while robots do more for the rest of their lives. And that’s not actually how humans and societies work, but I do think it is a change at that level of magnitude that we’re gonna have to react to.

Dan Shipper

I agree. I loved that. I think that’s so interesting and I think it’s actually a good example of why language models are so powerful and what makes them sort of special. And that is an interesting example of why I think consulting firms, oddly, are going to still be valuable and important.

Paul Ford

Let’s bring that in. Let’s hear what you got.

Dan Shipper

Okay, great. So the thing that it seems to have picked up on in its mild bear case is that you can get the analysis and the judgment for 1% of the cost. And obviously the thing is like, “Oh, it’s not buying analysis and judgment or whatever,” but I wanna just stick with the “analysis and judgment for 1% of the cost” because I have done this too. I have put all of our company financials into Claude and had it write our investor update, and it did a fucking phenomenal job.

Paul Ford

Yeah. I mean, that’s so good. And anything kind of bureaucratic, it’s just magical.

Dan Shipper

Yeah. And I’ve also done a lot of strategy stuff with it, and I think you can—one way to break up human thought or just ways of solving problems into two broad categories:

In one category, there’s a right answer and it’s extremely rare, but it’s a needle in a haystack, which is what traditional programming is actually quite good at. It’s math, it’s logic, it’s all that kind of stuff.

Paul Ford

Give me an example. Like Excel spreadsheets?

Dan Shipper

Exactly. Yeah, yeah. You know, “How profitable were we this quarter?” There’s a set of rules that you can apply and there’s one right answer because you have very precise definitions of what “right” is. “Make me a pie chart of what we’re selling.” Exactly.

And on the other end—and the literary analogy is the Borges story, “The Library of Babel,” where every book is there, but there’s infinitely many books between where you are and where you wanna get to, and they’re all nonsense. So you’re just always in nonsense, basically, unless you’ve artificially constrained the search space.

The other branch of human thought or way to think about the way the world is: instead of this infinite library where you’re sitting in this sea of nonsense, but you know that if you get through enough nonsense, you’ll get to the right answer—and there is a right answer because there’s only so many pieces of hay between you and the answer, you’re gonna get to the needle in the haystack—on the other end is a library where every single book is meaningful and has a story, but there are infinitely many books between where you are and where you want to be.

It’s not countably infinite, it’s just—you’re just sort of in this enchanted forest of stories that you can go read. And each of them has a plausible-sounding answer, and you have to use your own human intuition or judgment and feedback from the world to move your way to generally the right area, but there’s no right answer.

And when we’re thinking about a question like “What’s gonna happen to consulting businesses?” or “What strategy could consulting businesses take?” or “What’s the mild bear case?”—I think we’re much more likely to be, especially if we’re looking at a Claude answer, in the regime of “there’s infinitely many meaningful stories and we’re looking at one of them,” but sort of treating it like it’s the other one where there is a right answer and Claude just found the right answer. Because if you change your prompt slightly, Claude could write you a great story about why consulting businesses are gonna do really, really well.
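The two categories above can be sketched in a few lines of Python: a rule-based computation that lands on the one right answer every time, versus a sampler that returns one of many plausible stories depending on how the prompt steers it. The story texts, function names, and seeding scheme are invented for illustration; this is a toy contrast, not how an LLM actually works:

```python
import random

# Category one: a right answer exists and fixed rules recover it every time.
def profit(revenue, costs):
    return revenue - costs

# Category two: every output is a fluent, meaningful story; which one you
# get depends on the prompt. These stories are invented placeholders.
STORIES = [
    "Consulting contracts as structured analysis gets automated.",
    "Consulting thrives by selling judgment, trust, and board cover.",
    "Consulting splinters into thousands of AI-assisted boutiques.",
]

def forecast(prompt, seed=0):
    # Deterministic per (prompt, seed), but a slightly different prompt
    # can land on a different, equally plausible narrative.
    rng = random.Random(seed + sum(map(ord, prompt)))
    return rng.choice(STORIES)

assert profit(16, 12) == profit(16, 12) == 4  # same answer every run
story = forecast("mild bearish case for consulting")
assert story in STORIES  # always *a* story, never *the* answer
```

The point of the sketch: `profit` is the needle-in-a-haystack regime, while `forecast` is the enchanted forest, where rerunning with "mild bullish case" would hand you a different, equally confident book.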

Paul Ford

That’s right. It was a mirror of my anxiety at the moment, but you’re absolutely right. I was literally five minutes away—and if it hadn’t told me that I was running out of Opus credits, I probably would’ve done it. Which I wasn’t, by the way—a little product problem there in case any people from Anthropic are listening. I had 20% left and it’s like, “Hey, we’re almost done here.” And I’m like, “Really? Because I have a problem, but it’s not that profound.”

So anyway, but yeah, you’re right about the mirror of it, and that’s the tool of it. And that’s a really hard thing to convey because what people are used to is putting words in a box, like with Google, and getting a response and being able to trust and evaluate that response. Instead, you’re putting words in a box and it’s translating your idea into another form.

(01:00:00)

And that is simply gonna mirror what was inherent in the idea, according to the rules of the LLM, as opposed to actually being an answer to your question. But it’s suspiciously like an answer. And so this is such a subtle thing. And again, if you ask me, going back to harms, the greatest harm that the LLM companies do, and I actually think that Anthropic does a better job here, is to anthropomorphize the bots. That has caused so much confusion, the fact that it looks like it’s answering rather than statistically translating a question into an answer and then that answer into code and then that code into other code.

If they had emphasized translation as opposed to chat, I think we’d be in a much better place with this technology and I think we’d have a better understanding of it.

Dan Shipper

What would that look like? And from a UI that in a way that would make sense.

Paul Ford

You know, I think what would be useful is instead of a—it’s a good question. I don’t have an immediate answer, but my instinct is you would keep, you know, I mean, this will be really nerdy, but more like a GitHub commit log. Like you put this in and then I—and actually this is what Claude Code and other things end up looking like, which is here was our state, and then I evaluated it and I did a bunch of queries in my internal database and I transformed it into this new state.

I’ve saved the old state in case we wanna go back to it, but here we are now. So we have a whole new kind of context and we’ve actually changed the way that we’re working. Where do you want to go from here? Well, I wanna do this and I wanna do that. Great. I’m gonna update the state again, and I’m gonna keep a really clear log and I’m gonna keep the relationships between where I was when we started doing this and where I am now.

I’ll keep that explicit so that you can learn how this works and how to do this and how to do it repeatedly and how to do it with guardrails and how to do it in such a way that you have confidence that it will be the same today as it was yesterday. And if you gave me that, is that what an average human being really wants? I don’t know, but I do. Right. And is that gonna work better than ChatGPT? No, probably not. It probably won’t get you 700 million users. But I think that LLMs are complicated. It’s really hard to learn how they work. I actually had ChatGPT write me a medieval quest in which a magic spell was cast, tokenized, and sent through the different layers of the LLM. I highly recommend it. Like, find an analogy that works for you and then make it explain LLMs in the context of a quest or a journey. Because otherwise there’s a lot of things that just go missing. Like the fact that there’s zillions of layers happening and each layer is kind of talking back and forth to the other layers, and it’s not like your question is being answered. Your question is being broken up and spread across sort of like a zillion meta databases that are then coming back and forming something that looks like an answer, but without consciousness. And I don’t know how to explain that to people just yet.
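The commit-log interaction Ford sketches, where every model turn is recorded as an explicit, reversible state transition, can be mocked up in a few lines. The class and method names are invented for illustration, not any real Claude Code API:

```python
# A toy version of the "commit log" UI idea: each transformation records
# what was done and the state it produced, and old states are kept so you
# can always go back. Names here are hypothetical, not a real product API.

class StateLog:
    def __init__(self, initial_state):
        self.history = [("init", initial_state)]

    @property
    def state(self):
        return self.history[-1][1]

    def transform(self, description, new_state):
        """Record a turn, like a commit: what happened, and the result."""
        self.history.append((description, new_state))

    def revert(self):
        """Step back one transformation; nothing was thrown away."""
        if len(self.history) > 1:
            self.history.pop()
        return self.state

log = StateLog({"doc": "draft v1"})
log.transform("summarized the draft", {"doc": "summary of v1"})
log.transform("rewrote in plainer language", {"doc": "plain summary of v1"})
assert len(log.history) == 3
assert log.revert() == {"doc": "summary of v1"}  # back to the prior state
```

The design choice is the point: keeping the relationship between states explicit, rather than hiding each turn inside a chat bubble, is what would give an org confidence that the same input produces the same path.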

Dan Shipper

I gotta stop you there. I, well, there’s, I have someone, we could do a whole other podcast on this, but I just want to, let me respond and then I’m curious what you think and then I think we should definitely do a part two of this conversation.

Paul Ford

Alright. Anytime.

Dan Shipper

But what I hear, what I’m hearing is—we do a live event too. That’d be fun. We could record—

Paul Ford

Let’s do it. Yeah.

Dan Shipper

Let’s, we can, we can invite all the liberals and they can yell at us as you said. Yeah. So what I hear you saying or almost yearning for sounds like traditional code. You know, you know what you’re gonna get. You know, if you do it today, it’s gonna be the same as it was yesterday or tomorrow. It’s gonna be the same as it was today. It’s very traceable. And I also hear a little bit of like, it’s not actually giving you an answer, it’s more of like a stochastic parrot type thing. A little bit, but I hear a little of that.

Paul Ford

Keep going. I’ll—fine. Keep going.

Dan Shipper

Yeah. And my feeling about this is actually we are extremely well equipped to work with the way language models work. And we’re much better equipped than we are to work with code and for people who are non-experts. And that’s because the—and I think it’s actually a good thing that they’re anthropomorphized because we have models, very advanced models for how to deal with human beings. And human beings are like this. They are squishy. They do not necessarily give you the same answer today as they did yesterday. And there are specific kinds of people that are particularly like language models. So people pleasers—as a people pleaser, I’m very much like a language model.

Paul Ford

I have a lot of empathy for my language models. That’s true.

Dan Shipper

Yeah. And you, you get that sense from a people pleaser where like other people pleasers in my life. Like I can just see when they’re kind of like doing that thing where they’re just telling me what I want to hear and I’m like, stop. I just wanna know what you think, you know? Mm-hmm. And so I think we have a lot of basically innate biological machinery for dealing with this kind of interaction. And that yes, there’s an adjustment period. And yes, like for example, if you, we should be detecting if you’re in a delusional state and ChatGPT should not talk to you, or it should at least not like you know, go along with your delusions. Right? But I think people will very naturally learn because there’s a really close analog. They’ll very naturally learn to use it and then very naturally learn to separate it from other types of things and put it in its own sort of category. And I think that’s why I think it is actually kind of genius that it is a chat and it is a little bit anthropomorphized and it is interacting in that kind of way.

Paul Ford

I don’t know. I see it. I get it. I just don’t know if we can handle this, man. When I’m talking about making it reproducible, that’s me as a kind of programmer, outliner type. I get that. But what’s tricky and thorny is when you talk to businesses and orgs, the ones that really wanna use it, not the ones just trying to figure out what generative AI means. That lack of reproducibility is really scary, because they need to know that something—

Dan Shipper

You know what I think? Here’s what it sounds like you’re saying to me, and push back on this, Paul: it sounds like you want it to work like a computer. But it doesn’t work like a computer. It works like a new thing, and you should get used to new things instead of expecting it to work like a computer.

Dan Shipper

It’s not quite right, but it’s close. The slight change I would make: it works like a new thing that is very close to something older and more innate for you to interact with than a computer. And that gives you a lot of innate biological and cultural machinery for dealing with it productively, in a way you actually did not have with computers. And it comes with costs. It’s not cost-free. You may confuse these new things with people. But that is also part of its power and beauty, and part of the reason it has been adopted so heavily, and it makes me optimistic that we will also start to naturally separate it into a clearly new category that we know how to deal with. Because we know how to do that with people. We know how to deal with the people in our lives who like to act a certain way. And that’s why I actually think some of the outrage, some of the news articles or whatever, is productive. It’s maybe not the only way, but it is one good way to get people to pay attention and think: okay, I gotta be a little bit suspicious of ChatGPT, but I’m still gonna use it. I would write the articles differently, I would write the headlines differently, but what we’re trying for is some way to differentiate between a person and this new thing. And I think that’s a productive process that’s gonna happen.

Paul Ford

I mean, interesting. Okay, I’m puzzling it out, because what is my actual criticism here? Here’s what I want: if I am a business or a not-for-profit, or I manage a lot of electronic health records, and you want me to use something new, I need to know that it works like a computer. Because I trust the computer; the computer is encrypted and saves the data, and it’s good.

(01:10:00)

And you’re telling me the new thing will let me have more of this, but I need to know that it’s gonna be the same today as it was yesterday, as an interface, as a way to get to that stuff. And maybe the way to get there is that you have the new robot write the code, and as a result you get this very reproducible environment.

Maybe it can stand up to things that repeat. But that ambiguity—and it’s not really just my ambiguity; I think it’s the ambiguity a lot of organizational thinkers are dealing with, right? How do I trust this? I know I can do stuff with it, but how do I trust it? And what you’re giving back to me, at some level, feels like you’re saying you can’t, because it’s like people. And companies run on people.

So this is a fantasy, right? Take a second, if we have a second, and tease this out, because I think this is really important. The fantasy of this technology—which, I agree with you, is not actually what it’s for—is that it will give me the interface of human beings but the discipline and predictability of the computer. And that isn’t working yet. Absolutely not. And I do think OpenAI is saying: just give us a minute, we’re gonna get you that. We’re gonna get you the people you don’t have to pay, who do exactly what you tell them. We just need a little more time. And at some level, I feel like that’s where AGI has landed as a concept: a cohort of disciplined bots. Where do you think we’re going? I’m saying all this while watching your face do funny things. What are you thinking?

Dan Shipper

Well, I have a whole AGI take. But the really important part is “do exactly what you tell them.” Exactly what you tell them: that’s the whole ball game. What are you gonna tell them? And I think the way our intuition fails us is: well, if it does exactly what I say, it’s gonna be the perfect thing. And that’s actually just not true, because you often don’t know what to say. Figuring out what to say is a creative process, with experience, with other people, with the machine.

Paul Ford

There are organizations where that’s real, right? That is the actual value of this thing: it generates constructive confusion, and you have to address it together, but then you can iterate through the confusion and get to goals. Yeah. And that is very, very real. And it is not saleable. That’s not what anybody wants to buy.

Dan Shipper

So, we do a lot of consulting with big companies too, and I think there is room for AI inside of big companies. However—and this should actually be a positive thing if you’re afraid of AI adoption being too quick—I don’t think you can become totally AI native by retrofitting it into a big company. It’s very hard. I just don’t think that really happens.

Paul Ford

Explain that, because, I mean, that’s literally the bridge we’re trying to build. And it’s hard, and I think I know what you’re talking about.

Dan Shipper

The exact thing I’m talking about is exactly what you’re saying: they want it to be predictable and to do the same thing today as it did yesterday. And that’s just not how this technology works.

Paul Ford

At its best? Like, I think there is a way to make things very predictable. But you’re saying, at its best—

Dan Shipper

Yeah, at its best, that’s not what it is. Okay. So big companies can use this and can start to adopt it, but because they have all these forces and constraints that make it difficult to use things that can’t be totally trusted and are totally new, it’s difficult to use it to its maximal extent. And I think that will lead to less change than might be intuitive to those of us sitting around at Thanksgiving going: holy shit, o1 just changed the entire world.

Paul Ford

I have this debate constantly with my business partner. Because I’m like, man, that’s it. Death is coming. Just shut up. Right? And he’s like, yeah? Have you seen bureaucracy? And he’s—

Dan Shipper

Right, right.

Paul Ford

Yeah. Like, I’ve worked with some of the largest bureaucracies in the world. It takes a long time. Once, years and years ago, in the Obama era, we were up for a project with America. And they’re like, eh, God, if you guys could do it, we could give you 20 grand, if you could just take an Amex. And we’re like, we’ll do it. We’ll help America. And then like a week later they called back and they’re like, nah, we’re just gonna give the Navy $2 million. And it was for, essentially, a glorified RSS feed reader. It would’ve been like a $50,000 project; we were gonna take a hit. But it’s madness, right? The largest bureaucracies have never felt value and money and actual delivery to be all that connected, the way an individual developer might. So I think you’re not gonna change that. I think that is right. The only thing I think, though, Dan, is like—

Dan Shipper

I wanna finish, though, because there’s this other component to it. The pace of change is slower, but companies like ours—right now about 20 people, sub-20 people—are growing up in a world where every single person is using Claude Code across the organization for every single thing. You’re creating all these new primitives for how to work with this squishy technology, and it’s not about making it so predictable that it doesn’t take risks. It’s: how do we do the most we possibly can with it, because we’re small enough and young enough to take those kinds of risks? There’s only a small number of those companies right now, but there are going to be a lot of them over the next five or 10 years, and they’re going to become big companies or be acquired by big companies. So that’s the other side of it: instead of trying to make the technology legible to someone running a multi-billion-dollar company, you’re actually gonna get the best out of it by making it the most useful thing for this small group of early adopters who are figuring out how to use the squishiness to their advantage.

Paul Ford

Yeah. I mean, it’s like anything: this space is so big. It was already so big, and then we’re dropping such a big change into it that it’s gonna express itself in multiple different ways. I completely buy that there will be lots of AI-native orgs, especially now that, with Claude Code, I’m seeing the actual promised future of accelerated delivery arrive. It’s here. I mean, with our thing too, you can build a business app in like five minutes, and it used to be five months.

And that’s true of 3D rendering, and that’s true of all these categories that were really, really complicated before. So I think there’ll be this huge layer of acceleration from relatively small organizations that can deal with that, take it in, learn it, and apply it, and that have a desire to share the value. They want to do more, get paid less, but move faster. I think there are huge opportunities there. Where I think people are screwed is if they’re like, cool, now I can engineer 10 times faster, I’m gonna go on vacation and get all my work done in five minutes and nobody will know. That is gonna come bite you. But I also think it’s too big of a change, and people are gonna want some of that for themselves. I’m thinking about really big orgs I’ve worked with, where the engineers just say no all the time and the CEO is really frustrated. But that’s just life. That’s how it goes. That’s what it’s always been like. And then somebody shows up and they’re like, it doesn’t have to be that way. You can have everything. That’s gonna feel so good, and they’re gonna throw the old way by the wayside. It’s gonna be like a live-laugh-love kind of trip to Italy for them.

They’re gonna abandon their family, because suddenly the supply-chain SAP integration that was scheduled for 36 months now takes three. Oh my God. And the other thing too, and I’m sorry to get corporate with it: SMBs can’t afford big enterprise software, but they also don’t have CTOs. They’re stuck in the middle, and now they can have really good tools. Which means that, for them, instead of implementing Salesforce, they can buy a summer home. That’s sort of where that equation plays out. So I don’t think—because what you’re saying here is all true, up until the point that you realize a vast amount of technology spend goes to like five companies, and everybody kind of hates those five companies. Unless they make money from them, they hate them. They come to us and say: I hate this company, and I will do anything to never work with their software again. So, given that being out there, I think there’s a lot of drama ahead as people decide whether they wanna spend millions of dollars on SaaS and heavy enterprise builds, or not.

(01:20:00)

Paul Ford

So I think it’s kind of a yes to everything, as well as the status quo. Because it’s such a big space, it’s not all gonna change, but I think we gotta watch the margins. Stuff is gonna shift really weirdly, in ways we weren’t expecting.

Dan Shipper

I agree. And I think that’s a great place to leave it. Paul, fantastic conversation. It’s really great to get to chat with you.

Paul Ford

Yeah, let’s hang out, Dan. I would love to do that.

Dan Shipper

If people are looking for you, where can they find you on the internet?

Paul Ford

They should check out our website, aboard.com. We have a really, really nice platform; think of it as a super-pro Webflow vibe-coding platform that lets you build stuff, but we build it with you. We don’t just give you a tool. We have good product managers—we call ’em solution engineers—who listen and will help you out. So that’s enough shilling. You can send me an email at paul.ford@aboard.com. You can find me on LinkedIn, you can find me on Bluesky; I’m off Twitter. All the regular places. I’m pretty easy to find.

Dan Shipper

Awesome. Thanks Paul.

Paul Ford

Yep. Anything you need, let me know.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

