Transcript: ‘How to Prepare for AGI According to Reid Hoffman’

‘AI & I’ with the LinkedIn cofounder and early OpenAI board member


The transcript of AI & I with Reid Hoffman is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

Timestamps

  1. Introduction: 00:01:29
  2. Patterns in how we’ve historically adopted technology: 00:02:50
  3. Why humans have typically been fearful of new technologies: 00:07:02
  4. How Reid developed his own sense of agency: 00:13:25
  5. The way Reid thinks about making investment decisions: 00:20:08
  6. AI as a “techno-humanist” compass: 00:29:40
  7. How to prepare yourself for the way AI will change knowledge work: 00:35:30
  8. Why equitable access to AI is important: 00:41:39
  9. Reid’s take on why private commons will be beneficial for society: 00:45:15
  10. How AI is making Silicon Valley’s conception of the “quantified self” a reality: 00:47:23
  11. The shift from symbolic to sub-symbolic AI mirrors how we understand intelligence: 00:52:14
  12. Reid’s new book, Superagency: 01:03:29

Transcript

Dan Shipper (00:01:30)

Reid, welcome to the show.

Reid Hoffman (00:01:31)

It’s great to be here and great to be doing this in person.

Dan Shipper (00:01:35)

Yeah, I love that. It makes it a much different experience to be face-to-face. We’re actually very close to each other.

Reid Hoffman (00:01:40)

Yeah, exactly—and actually even philosophically.

Dan Shipper (00:01:42)

Yeah, definitely. So, I'm excited to have you for a number of reasons, but one of the big ones is you have a new book coming out. By the time this is out, it may be out. It's called Superagency. I read it. I actually really liked it. Professionally, I have to say that, but—

Reid Hoffman (00:01:59)

It happens to be true!

Dan Shipper (00:02:01)

It happens to be true. And the reason I like it is I think you're writing it in response to the— There’s a prevalent fear of AI and maybe AGI socially. And I think you're examining this moment and saying, if we want to understand what to do in this moment and how things might play out, a good way to do that is to look at history and look at other historical technology changes to see how we reacted and whether we were wrong or not. And you go much further than that in the book, but I want to start there because I love that history angle because I think we forget how many of the things that we're familiar with now used to be really, really scary. I wonder if you could go into some of those so we can understand them a little more.

Reid Hoffman (00:02:49)

Well, it goes all the way back to the written word with Socrates, although there are some different scholarly interpretations of the Socratic remarks or the challenges. So why don't we start with the printing press? 

When the printing press was introduced, a lot of the public dialogue was very similar to the dialogue we have around artificial intelligence. This will lead to the collapse of our trust in human cognition. It’ll lead to widespread misinformation. It will lead to the collapse of the solidity of our knowledge and society and what we're doing. And who knows what people are going to do with this technology that could really erode things? 

And on one hand, here, many centuries later, we are obviously deeply indebted to the printing press. We can't have science without the printing press, because you can't get to that spread of information. You can't have widespread literacy and education. You can't have the progress of knowledge in middle-class institutions, universities, etc. All of which— Well, universities existed before the printing press, but getting to the many universities of vigorous strength required the printing press. Now, all of that shows that it's great despite all the fear. That being said, the other thing to track with the printing press is there was nearly a century of religious war. So as human beings, when we get these new technologies, the transition periods can be very challenging. But to some degree, that's an opportunity, not just an occasion for fear, because the world is what we make it. So let's make this transition much better than the earlier transitions.

Dan Shipper (00:04:40)

Do you think that— Because one of the overriding points in your book is that in these kinds of transitions, we have some amount of control over them, but not total control. And we can get more into all the nuances of what that means, but I'm curious: Do you think we can prevent— Let’s say we went back to the printing press days, is there something we could have said to Martin Luther, like, hey, this schism thing and the Reformation— It’s great to have a personal relationship with God, but maybe cool it a little bit on some of this stuff, because it's going to lead to a lot of war? Or is that sort of inevitable?

Reid Hoffman (00:05:20)

Well, I think it depends on where your society and culture is. At the time, it's unclear that even if Martin Luther had said, let's do this gradually and have a set of discussions, the reigning Catholic Church would have tolerated and allowed it.

Dan Shipper (00:05:40)

I think he did. I think he was trying to go into the system for a long time before he did the door-pounding thing.

Reid Hoffman (00:05:45)

Exactly. So it's unclear. But we have built great global institutions post-World War II, and we have learned what a massive amount of tragedy comes when you get into that kind of conflict. And by the way, of course, we have various technologies, not the least of which is printed books, to remind us of our histories and to learn from them. So one of the things we hope is that maybe we can do this one much better. And that's part of the reason why, and you know this from having read Superagency, it's part of the reason for doing the book, for doing podcasts and other things: to say, hey, if we actually steer the right way, we can minimize the transition difficulties. I don't think there'll be zero transition. I think that's a pipe dream and no one should have that expectation. But we can actually get through the transition much more cleanly and much more compassionately and humanely than we have done in earlier transitions and get to the amazing benefits that are always on the other side of these massive new technology leaps.

Dan Shipper (00:07:00)

That makes sense. I think one of the places to go from there is, the book to me is not just a history book. It's also a book about psychology in a certain way, because you have a theory of why we are typically very afraid of new technology.

Reid Hoffman (00:07:20)

So, part of our fundamental concept of our place in the world and our dignity, our meaning, etc., comes down to a notion of agency, and it's agency as individuals and agency as groups and agency as societies. And most often, and back to the printing press referred to earlier, what people experience the new technology as is reducing the agency of key people who are leaders in society: heads of institutions, participants in institutions, the institutions themselves. And so they react because they go, oh, this is gonna be a change in agency, therefore a reduction, therefore bad, therefore destructive to society.

And the actual path that happens is, yes, agency changes. So some things that you had agency in before, you no longer have agency in. But when you begin to look at agency, it's not just a set of external factors, it's internal factors. It's how you approach it. And so, for example, let's use a very modern example that people can think about. Is it a loss of agency to be driven in an Uber or a gain in agency to be driven in an Uber? And obviously if you're like, oh my god, my hands aren't on the steering wheel, and who knows what this random human being is doing? Then you're like, oh my god, it's an enormous loss of agency. And yet, of course, hundreds of millions of people are doing it because they realize it's a gain of agency. I can not own a car and still get somewhere. I can go, oh, I've drunk too much, I'm going to get home this way much more safely, etc.

And so it's a question of how we approach it, and how we experience it, and how we choose our own agency. That doesn't mean it's completely separate from external effects, but it's a combination of that internal and external, which is really appropriate to what happens when we first encounter a technology. I remember arguing at Davos with people about: Are smartphones these humanity-reducing cybernetic controls of human beings—

Dan Shipper (00:09:40)

Never argue with people at Davos. That’s mistake number one.

Reid Hoffman (00:09:44)

You're making me think about The Princess Bride: “Your first mistake!” 

And then, obviously, part of the reason why we have billions of people with them is because actually, in fact, it's a massive increase in agency. And as a matter of fact, that quickly gets to the thesis of the book, Superagency: What happens when millions of people all get elevated in their agency with a new technology? Then all of a sudden we collectively get Superagency, both as individuals and as societies.

Dan Shipper (00:10:20)

Yeah, one of the core parts that I want to pull out is: You talk about agency, but you talk about it as a sense, which I read as an internal, kind of aesthetic experience of agency. I think when most people think about agency, it's all external. But it's quite clear to me that the aesthetic component is a huge part of it. And I think a lot of the questions about human agency reflect almost a lack of faith in our ability to change and adapt, and also a lack of understanding of how that aesthetic sense arises and how you can bring it to any experience to some degree. The history of literature is about people in pretty dire circumstances who still have some agency, which is not to say that external conditions don't matter, but a lot of it is internally driven. I'm curious how you got there. What influenced that? How did you start doing that?

Reid Hoffman (00:11:20)

Well, some of it may be my philosophical background and highlighting the aesthetic is also very interesting. I think maybe when I was writing, I was also thinking of Dan Dennett's intentional stance. It's a stance that we have towards the world, like a mental stance, a worldview. And obviously there's an aesthetic stance too. So I think that's a great highlight. 

And I think part of it is that, using the Uber example and using the smartphone example, there are many times where, when you're encountering an external circumstance, including of course technologies, if you approach it as this is taking my agency, then you're essentially throwing yourself under the wheels, right? Whereas if you go, oh, here's how I can use this to transform my agency, to extend and enhance my agency in various ways, then it becomes much better. So for example, think about just driving down the highway. You go, hey, I'm in this car. I have an ability to slow down, speed up, drive, etc. Well, there's a lot of other cars on the road, too. If you say, oh my god, my agency is taken away because these other cars are slowing down or are potential hazards, etc., well, then you're never going to get on the road, right?

Dan Shipper (00:12:45)

And I think that points to: Agency is to some degree a way of looking at the world and where you put your attention. So if you put your attention on, oh my god, all these other cars, of course that sense of agency is going to go down. If you put your attention somewhere else, that sense of agency is going to go up. And again, it’s not only your attention. There are external things, but to some degree there’s some sense of control. And I'm curious for you, how do you notice your personal sense of agency fluctuating day-to-day and in your life? Have you always felt this connection with a sense of agency? Have there been periods where you haven’t? And how has that played out for you?

Reid Hoffman (00:13:25)

Great question. It's interesting because I think one of the things I picked up fairly early in my childhood was that old catechism, which is pretty good, something like the strength to change the things I can, the tolerance to live with the things I can't, and the wisdom to know the difference.

Dan Shipper (00:13:45)

Yeah, it’s like the Alcoholics Anonymous prayer thing?

Reid Hoffman (00:13:50)

Yes, yes, exactly. And I think it comes from a Christian Catholic catechism. And I paraphrased it.

Dan Shipper (00:14:00)

Did you grow up Catholic?

Reid Hoffman (00:14:02)

No, no. It wasn't so much that, as I'd adopted that— I came to love that catechism later because I had gotten to that sense of how you should navigate the world early on.

Dan Shipper (00:14:10)

Wow. How’d you do that?

Reid Hoffman (00:14:12)

I think it’s just maybe playing a lot of board games. I mean, it's just kind of the sense of, hey, these things are under your control and you can affect them, and then you can affect that outcome. And these things are not under your control, and overly tearing your hair out about the things that are out of your control is not helpful to you or to anyone else. So take the fact that there's suffering in the world. If you go, oh my god, there's suffering in the world, then of course you’re gonna get crushed—there’s going to be suffering in the world. You should, of course, always feel for people's suffering. But the fact that there are children dying around the world today, it's really, really sad. We should try to do things about it, but we're not going to stop all of it today. It's an ongoing process. And so there's things out of your control, and then there's things in your control. And of course, that's where your ability to navigate it comes in. And so maybe it's something simple, like who you're friends with when you're in school. You go, oh, do you have friends that you treasure? Then that's really great. And maybe there's other people that you want to be friends with who aren't as interested in you. That's fine too. But navigating that within those kinds of childhood circumstances is, I think, where I started, and from there, I think that became how I approached each new challenge that I was getting.

And part of how I think I got a sense of good strategy in life, strategy for what I did in school, strategy for what I do with companies, strategy for what I do with investing, all came from this: figure out what the nature of the game is and what things are within your ability to change, and then accept the things that you can't while changing some really interesting things.

Dan Shipper (00:16:17)

It sounds like that came from board games. What board games are we talking about?

Reid Hoffman (00:16:24)

Well, a whole set. Some of it was less board games than Dungeons and Dragons, so I was doing that, which seems to be having a resurgence, which is cool. But also, what probably most people don't track these days, I did a bunch of these Avalon Hill board games. And then a variety of others. And one of the things about it being multiple games is that you're learning all of them. And actually, one thing people frequently ask about: I did play chess and I did play Go. I wasn't as attracted to those games, because part of the thing with playing the Avalon Hill board games, or Starfleet Battles, which was another one I did, is that by having some randomness with dice rolls, they more closely approximated the kinds of circumstances we encounter in life. Because life is not like chess. Life is not like Go. It is not deterministic that way. There are uncertainty variables that you have to play into, and sometimes epistemic uncertainty that you have to play into, and both Go and chess have no epistemic uncertainty. And so adapting your strategies to those was really important. I think those were the initial mindsets.

Dan Shipper (00:17:35)

That makes sense. There’s a meme in tech right now about being high-agency, which I think is great. It’s good to be high-agency, but I think we tend to think of agency as always good. And generally I think it is good to have agency, but in the example you gave of suffering, if you think of yourself as high-agency and have a high internal locus of control for things that are totally out of your control, it’s actually a pretty miserable way to live. And it definitely doesn't make you more effective. And I think what you're saying is there's a certain range of things within which you want to be high-agency and have an internal locus of control. And then there's also a whole set of other experiences in life that are important that are about completely giving up control and recognizing your lack of control, and those are some of the most meaningful experiences that people have, transcendent experiences. I’m curious what that sparks in you.

Reid Hoffman (00:18:40)

Well, I mean, I think the obvious ones are in friendship, romantic relationships, and other things, where part of what you're doing is essentially giving yourself over to not being in complete control of how a relationship's playing out. And those are obviously some of the places where we learn and become wiser, more compassionate, more evolved people. That's actually, in fact, I think one of the really central things. It's also, by the way, sometimes people encounter that playing team sports of various sorts. There's team sports themselves. There's also, of course, team sports within companies, in terms of how a company's operating. The shared controls, the shared agency, becomes, I think, really key. And obviously, some people find that within a religious experience, too.
