Transcript: Reid Hoffman on How AI Might Answer Our Biggest Questions

‘How Do You Use ChatGPT?’ with the LinkedIn cofounder, author, and venture capitalist


The transcript of How Do You Use ChatGPT? with Reid Hoffman is below for paying subscribers.

Timestamps

  1.  Introduction: 00:01:58
  2.  Why philosophy will make you a better founder: 00:04:35
  3.  The fundamental problem with “trolley problems”: 00:08:22
  4.  How AI is changing the essentialism v. nominalism debate: 00:14:27
  5.  Why embeddings align with nominalism: 00:29:33
  6.  How LLMs are being trained to reason better: 00:34:26
  7.  How technology changes the way we see ourselves and the world around us: 00:44:52
  8.  Why most psychology literature is wrong: 00:46:24
  9.  Why philosophers didn’t come up with AI: 00:52:46
  10.  How to use ChatGPT to be more philosophically inclined: 00:56:30

Transcript

Dan Shipper (00:01:13)

Reid, welcome to the show.

Reid Hoffman (00:01:15)

It’s great to be here.

Dan Shipper (00:01:16)

It’s great to have you. So I'm sure that everyone listening or watching knows this, but you are a renowned entrepreneur. You're a venture capitalist. You are an author. You're best known as the co-founder of LinkedIn, and you're a partner at Greylock. You were a board member and an early backer of OpenAI. And you also have an incredible podcast, Masters of Scale. But perhaps most relevant to this conversation, you also studied philosophy at Stanford and Oxford, and you almost became a philosophy professor, which I didn't know before researching this interview. It's really cool.

Reid Hoffman (00:01:53)

Yeah, no, part of it was that I've always been interested in human thought and language. I started at Stanford with a major called Symbolic Systems. I was the eighth person to declare that as a major at Stanford, and then I kind of thought, hmm, we don't really know what thought and language fully are; maybe philosophers do. And so I took some classes at Stanford, but also trundled off to Oxford to see if philosophers had a better understanding of it.

Dan Shipper (00:02:24)

I love it. It's funny. I feel like since then, Symbolic Systems has become the go-to Stanford major for curious, analytical people who end up doing startups. So it's pretty funny to know that you were one of the first. Usually on this show, we talk about actionable ways that people use ChatGPT. And that's the big question; that's, I think, what people come here for. But underneath that, I think a more interesting question is how AI in general, and ChatGPT in particular, might change what it means to be human. How might it change how we see ourselves and how we see the world? How might it enhance our creativity, our intelligence, all that kind of stuff? These are really deep, big philosophical questions. And as someone who rigorously studied philosophy and probably still thinks about those questions, I thought you might have a unique perspective on this intersection. 'Cause I think people tend to be either in the philosophy camp or in the language models camp, and people who sit in the middle are kind of interesting. And what I wanted to start with, because I think there are probably people listening or watching who are like, I just want Reid’s actionable tips, is to ask: Tell me more about why you care about philosophy. You got into that a little bit in talking about how you got into it, but why do you care about philosophy? Why is answering these big questions important?

Reid Hoffman (00:03:50)

So, one of the things that I sometimes tell MBA schools when I give talks is that a background in philosophy is more important for entrepreneurship than an MBA, which of course is startling and contrarian. And part of that is to get people to think crisply about this stuff. 'Cause part of what you're doing as an entrepreneur is thinking about what the world could be. What could it possibly be? If you wanted to use analytic philosophical language, logical possibility or something like that, but it's basically: what is possible? And then, partially because these are human activities, what are your underlying theories of human nature: how human beings are now, how they are quasi-eternally, and how they are as circumstances change, as the environments and ecosystems we live in change, which is technology and political power and institutions and a bunch of other things. And philosophy is very important to this stuff because it's understanding how to think very crisply about what the possibilities are, and what the theories of human nature are, as they are manifest today and as they may be modified by new products and services, new technologies, et cetera.

Obviously, people tend to say, oh, that's a philosophical question, because it's an unanswerable question. The nature of truth. Or: while we all speak and understand languages, we don't really know how that works. And that's part of the reason why there was the linguistic turn in philosophy that Wittgenstein and others were so known for, which is: well, maybe these problems in philosophy are problems in language, and if we understand language, we'll understand philosophy. There's this charge about unanswerable questions, but actually, in fact, science itself is full of a lot of unanswerable questions. It's the working theory that we dynamically improve, and that's part of what the human condition is, and part of what philosophy in depth is. Some of the same questions in philosophy today are the same questions that Plato and Aristotle, and even the pre-Socratics and other folks, were grappling with: truth, knowledge, et cetera. But some of the questions are also new questions, and the questions evolve. Part of how science evolved out of philosophy was this process of getting to more specific theories and developing new questions as outgrowths. And the same thing is true in building technology, in building products and services, in entrepreneurship. That's why philosophy is actually, in fact, robust and important as applied to serious questions.

One of the things I wrote my thesis on at Oxford was the uses and abuses of thought experiments. The most classic one is trolley problems, and there are both uses and abuses within the methodology of trolley problems. The most entertaining example, if people haven't watched it, is a TV series called The Good Place, which embodied the trolley problem in an episode in an absolutely hilarious way.

Dan Shipper (00:07:28)

That's really interesting. What is the way that people tend to misuse them? Because I feel like trolley problems are so common in EA discourse, and people run into them a lot online.

Reid Hoffman (00:07:37)

The fundamental problem is how they frame it. To get an intuition, to derive an intuition, a principle, et cetera, they frame an artificially constrained environment. So it's like: no, no, it's a trolley, and the trolley will either hit the five criminals or the one human baby, and it's set by default to hit the human baby. Do you throw the switch or not? And then when you start attacking the problem, you say, well, how do I know that I can't break the trolley? I could just stop it from continuing to run. And they say, well, you know that you can't. Oh, so you're positing in your thought experiment that I have perfect knowledge that breaking the trolley is impossible. So to make your thought experiment work, you're positing something we never encounter, or that when we do encounter it, we generally think the person is crazy: perfect knowledge. Why, in fact, do I know that I have perfect knowledge that I can't break the trolley? Maybe the right human response to this trolley problem is: I'm going to try to break the trolley, so it doesn't hit either of them.

And you might even push back on the claim that you have perfect knowledge that you can't break it. You're like, well, okay, a) I don't have perfect knowledge, and b) even if I did, maybe trying is still the right response. You're trying to get me to say: do I do nothing and run over the baby, or do I do something and run over the five criminals, and those are my only two options? And you're like, well, no. I could say, even if I think I can't break the trolley, that's what I'm going to try to do, because that's the moral thing to do.

Dan Shipper (00:09:22)

I've heard a lot of trolley problems and I've never heard anyone posit that third option. I love that. That's great. And also there's something about that where, yeah, certain thought experiments sort of hijack your instincts, and you don't quite get to reason through all their hidden assumptions, which honestly reminds me of certain doomer arguments. I don't want to go into the full thing, but I think it's a really interesting way to think about it. If I had to summarize what you just said: the value to you of philosophy is thinking crisply about possibilities, about human nature, and about reality. All of those things are really, really important for business people. I want to take it another step. There are some big perennial questions that philosophers, philosophy students, and philosophy nerds sharpen our skills on: What is truth? What is reality? What can we know? All that kind of stuff. As we start to get into talking about AI, I'm curious whether you have a sense of which of those questions AI and large language models are going to give us a bit of a new lens on. Or where we'll find new questions to ask that are better than the previous ones, even if the models maybe don't answer them. Do you have a sense for that?

Reid Hoffman (00:10:52)

Well, I mean, historically, for example, philosophical questions have led to a bunch of various science disciplines, right? Everything from things in the physical world to things in the biological world, like germ theory and all the rest. It's one of the reasons why philosophy is the root discipline for many other disciplines, when you get to questions around how you think about economics and game theory, or how you think about political science and realpolitik and the conflict of nations and interests. It's also one of the reasons why probably one of my deepest critiques of the non-reinvention of the university is the intensity of disciplinarianism: just the discipline of political science, or just the discipline of even philosophy, as opposed to being multidisciplinary. Part of what I tend to think is interesting is how much the academic disciplines tend to be more and more disciplinary, versus, hey, maybe every 25 years we should think about blowing them all up and reconstituting them in various ways. That would actually be a better way of thinking, and it's why some of the most interesting people are the ones who are actually blending across disciplines within academia. And I think that part of it is extremely important.

Part of the question in philosophy is: how do we evolve the question of what we know? And obviously you evolve the question through, for example, instrumentation. A lot of the history of science is instrumentation, new measurement devices that help with the testing of theories. That's one of the reasons why people frequently don't think enough about how technology helps change what the definition of a human is. Because we have this kind of imagination, the Cartesian imagination, that we are this pure thinking creature. And you're like, oh, if you've learned anything, that's not really the way it works. That doesn't mean that we don't think that way, that we don't have abstractions to generate logic and theories of the world and all the rest. But put your philosopher on some LSD and you'll get some different outputs.

Dan Shipper (00:13:37)

That makes sense. So I guess along those lines, if I step back and squint, I can kind of divide the history of philosophy, or at least a certain part of it, into essentialism and nominalism, right? Essentialists believe that there's a fundamental, objective reality out there that's knowable, and that there's a way to carve nature at its joints. And nominalists, which would include Wittgenstein, who I know you studied pretty deeply, and the pragmatists, think that truth is more or less relative, or about social convention, or about what works; there are a lot of different formulations of it. And there's this sort of ongoing debate between people who think one thing or the other. Do you think language models change, or add any weight to, either side of that debate?

Reid Hoffman (00:14:31)

I think they add perspective and color. I don't think they resolve the debate. There's certainly a question: since they function more like later Wittgenstein, more like the nominalists, does that weigh in on the side of the nominalists because of the way they actually function? But then if you look at how we're trying to develop the large language models, we're actually trying to get them to embody more essentialist characteristics as we do it: how do you ground them in truth, have less hallucination, et cetera? And to gesture at a different, earlier German philosopher, Hegel, I think part of the human condition is that thesis-antithesis-synthesis move. You could say, hey, we have an essentialist thesis, we have a nominalist antithesis, and the synthesis is how we're putting them together in various ways.

I don't even think later Wittgenstein would have said that the world is only language, which is kind of where the deconstructionists and Derrida went: there is only the veil of language, and you have no contact with the world, so you're not grounded in the world at all. I think he would think that's kind of absurd, right? But his point was that, in how we live as forms of life, the way language operates is not simple denotation. He understood it wasn't just denoting the cat on the mat, the cat is on the mat and the possibility that the cat is on the mat, possible configurations of the universe. That kind of essentialist notion of a language of logical possibility is actually incorrect as to how we discover truth and how we operationalize truth. You still have a robust theory of truth, which is essentially not what the deconstructionists have. But the robust theory of truth is partially grounded in this notion of language games and a biological form of life. And then obviously you go into this deeply by asking, well, okay, how is mathematics, the classic language of truth, a language game? That's a way of trying to understand it, and it's part of where you get what philosophers refer to as Kripkenstein, Saul Kripke’s excellent lens on reading part of what Wittgenstein was about.

And then you apply all of that—everyone's going, where's this going?—to large language models. You say, well, actually, in fact, language is the playing out of this language game, and large language models are playing out this language game in various ways. But part of what is revealed is that we don't just say truth is whatever is expressed in language. Truth is a dynamic process, and human discourse, whether it's thesis-antithesis-synthesis or other things, is coming out of this dialogic process: this truth discovery, this logical reasoning, whether it's induction, abduction, or deduction. These reasoning processes get us to what we think of as theories of truth, which are always to some degree works in progress.

Dan Shipper (00:18:17)

That's really fascinating. I want to try to summarize that in case it was a little bit difficult to follow— to be honest, there's a point in there where I think I missed something, so you tell me what I missed. One of the things I heard there that I thought was really interesting is that when you think about how we built AI, which is predicting the next token, that's a very late-Wittgenstein-compatible idea, or a pragmatism-compatible idea, where it's really about the relationship between different words in a sentence, and we're not finding anything out about the world. There were other AI approaches, I don't know, in the seventies or eighties, where it was literally: let's list out every single object in the world. Those didn't really work, and they would be something along the lines of a more essentialist approach to AI. The one that works is the more pragmatic, more late-Wittgensteinian one. But what's quite interesting is that now that we have that pragmatic base that we've bootstrapped, we're in this process of trying to make it more grounded in reality, more able to talk about essential ground truth. And I think what's really interesting about Wittgenstein is that he's sort of famous for saying the limits of my language are the limits of my world. I don't remember if that's late or early. But more or less, I think what you're saying is that Wittgenstein doesn't think there's nothing outside of language. He does think that the way we talk about the world, the way we use language, is part of this sort of social discourse where we're all going back and forth to co-invent language and structures and language games together. And you kind of see that happening with language models: when you do something like RLHF, that's sort of us playing with a language model, playing a language game, to be like, no, no, you don't talk like that. Is that generally what you're getting at?

Reid Hoffman (00:20:33)

Yes. So everything you said. But then there's an additional thing, which later Wittgenstein was really trying to explore in various ways, because he wasn't trying to do a completely social construction of truth. You have to be a Wittgenstein scholar to actually understand how both early and late Wittgenstein are part of the same project; late Wittgenstein wasn't saying, early Wittgenstein was an idiot and I've religiously converted to this different point of view. But there is a particular thing, which is: how do you get to the notion of understanding truth? Truth is the dynamic of discovery through language, and it has to have some explicit external conditions, so that it isn't my truth versus your truth. There is only, to some degree, our truth, or the truth, in various ways. And how do you get to that? For early Wittgenstein, the truth condition was that it cashes out into a state of possibilities and actualities in this logical space of possibilities, which includes physical space but is broader than that. Then later Wittgenstein said, well, actually, in fact, this modeling of logical possibility is not the way this works, right? We're not actually grounding it that way. The way we're grounding it is in the notion of how we play language games, make moves in language. And the way that's grounded is, to some degree, sharing a certain biological form of life by which we recognize that's a valid move in the language game, and this is not a valid move in the language game.

Now, this is what's interesting when it gets to large language models, because you go, well, large language models: are they the same biological form of life as us, or are they different? And how does that play out? I think Wittgenstein would have found that question utterly fascinating, and really would have gone very deep trying to figure it out. And by the way, the answer might be some and some, not 100 percent yes or 100 percent no. Because the argument in favor is that the large language models are trained on the corpus of human knowledge and language and everything else, and they're doing language patterns on that. Some might even argue that some of their patterns are very similar to the patterns of human learning and brains. Others would argue that they're not. But then you'd say, well, it's also not a biological entity, and it actually learns very differently than human beings learn. And so maybe its language game, which looks like the human language game, is actually different in significant ways, and therefore its truth functions are actually very different.

And in a sense, that's what we're trying to do when we're modifying and making progress on how we build these LLMs: to make them much more reliable on a truth basis. We love the creativity and the generativity, but for a huge amount of the really useful cases, in terms of amplifying humanity, we want it to have a better truth sense. I mean, the paradoxes in current GPT are when you can tease it out with very simple questions around prime numbers, and you go, well, you got that answer wrong. It's like, oh yeah, I got it wrong, here's the answer. Well, that answer is wrong too. Oh, I got that one wrong too, here's the answer. A human being would understand: I'm just getting these things wrong, I get that I'm wrong. As opposed to: oh, I'm sorry, you're right, I got it wrong, and here's another wrong answer. We're trying to get that truth sense into it as we go, because we do have some notion of, oh, right, this is what mathematics characteristically gets us in very pure definitions of certain kinds of language games. It's one of the reasons why, centuries ago, people thought math was maybe the language of the universe, or the language of God, or the language of whatever: because it's the place where the purest truths that we know, like two plus two equals four, are kind of embedded. And we're still working that out as we play with how we create these language tools, these language devices. It's part of the reason I think this question is really interesting: you can actually map it to some of the actual, as it were, technological physics that we're trying to create when we're doing the next version. How do we get these things to be good reasoning machines, not just good generativity machines? They have some reasoning from their generativity, but the classic way of showing where they break is showing where their reasoning stops working in ways that we value and aspire to, in terms of what we try to do as human beings, at our best selves.
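To make the "predicting the next token" idea from this exchange concrete, here is a minimal sketch in Python, using only the standard library. It's a toy bigram model, an illustration of the statistical idea only, not how GPT is built: production LLMs use transformer networks trained on enormous corpora, with further steps such as RLHF layered on top. The corpus and names below are hypothetical.

```python
import random
from collections import Counter, defaultdict

# A toy corpus: a hypothetical stand-in for the vast text real models train on.
corpus = "the cat is on the mat and the cat likes the mat".split()

# "Training": count how often each token follows each other token (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:  # token never appeared mid-corpus; nothing to predict
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Generation: repeatedly predict the next token, one step at a time.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))  # e.g., "the cat likes the mat and the cat is"
```

The point of the sketch is the one Dan makes above: nothing in this loop consults the world. It only consults how words have followed other words, which is why grounding such systems in truth is a separate, ongoing effort.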
