Win At Business And Life In An AI World

Philosophy and AI: What is the Future of Creativity? (Episode 215)

Nick Bostrom is a Professor at Oxford University and the founding director of the Future of Humanity Institute. 

Nick is also the world’s most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI.

His work has pioneered many of the ideas that frame current thinking about humanity’s future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist’s curse, etc.), while some of his recent work concerns the moral status of digital minds.

He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list.

He has just published a new book called “Deep Utopia: Life and Meaning in a Solved World.”

What you will learn

  • Find out why Nick is spending time in seclusion in Portugal
  • Nick shares the big ideas from his new book “Deep Utopia”, which dreams up a world perfectly fixed by AI
  • Discover why Nick got hooked on AI way before the internet was a big deal and how those big future questions sparked his path
  • What would happen to our jobs and hobbies if AI races ahead in the creative industries? Nick shares his thoughts
  • Gain insights into whether AI is going to make our conversations better or just make it easier for people to push ads and political agendas
  • Plus loads more!

Transcript

Jeff Bullas

00:00:04 – 00:00:46

Hi, everyone and welcome to the Jeff Bullas Show. Today I have with me, Nick Bostrom. Now, Nick is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute.

Bostrom is the world’s most cited philosopher aged 50 or under.

He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI.

His work has pioneered many of the ideas that frame current thinking about humanity’s future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist’s curse, etc.), while some of his recent work concerns the moral status of digital minds.

His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world.

He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. He has an academic background in theoretical physics, AI, and computational neuroscience as well as philosophy.

He has just published a new book “Deep Utopia: Life and Meaning in a Solved World.”

Jeff Bullas

00:01:30 – 00:01:49

Nick, welcome to the show. It’s an absolute pleasure to have you here.

Professor Nick Bostrom

00:01:51 – 00:01:55

Yeah, thanks Jeff. I look forward to chatting with you.

Jeff Bullas

00:01:55 – 00:02:49

Right now, Nick is dialing in from Portugal, where he’s trying to get some seclusion. And we all like that. I’m in New York because I’m on the road a little bit, which is the opposite of seclusion. New York is this crazy big city that is full of interesting divergent people and ideas and minds. So Nick’s in seclusion, and I’m actually in the middle of chaos. So Nick, I have so many questions I want to ask you. And I suppose firstly, I want to know a little bit about the whole range of things you do. You’re into economics, physics, neuroscience, AI and philosophy. And I’m sure there are other topics surrounding that subtext as well. And you’ve written two books on AI, which is what I find interesting as well. So,

Jeff Bullas

00:02:49 – 00:03:18

What led you there? Because you are Swedish born, and I think you did your first degree in Sweden, was it? Yep. So what was that first degree, and where did that come from? In other words, there was a curiosity about something that called you to do that degree. And then I want to talk a little bit about where you moved on from there. So, what was your first degree, and what curiosity led you into the academic field?

Professor Nick Bostrom

00:03:18 – 00:03:45

Uh, it was actually a mix. One of the good things about the Swedish university system, at least back then, was that you had, as an undergraduate, a lot of freedom to compose your own curriculum; you could sort of pick and choose, if you liked, what to do. And so, yeah, I think that was mathematics and philosophy and some artificial intelligence, if I recall, that first

Jeff Bullas

00:03:45 – 00:03:55

degree. OK. And how long ago was that? AI has been around for a long time. So when was that first degree done, with AI?

Professor Nick Bostrom

00:03:56 – 00:04:43

Uh, well, I mean, it was, I guess, in the early nineties. This was a long time ago, this was the pre-internet era, or at least before I started using the internet. So back then it was possible to be interested in the topic and not know anybody else who was interested in the topic. For me, I had long been fascinated with these, I don’t know, big picture questions about humanity’s future, and whether the human condition could somehow be fundamentally transformed, for better or for worse, with technological advances. But I knew nobody else who was in the least bit interested in this back when I was

Professor Nick Bostrom

00:04:44 – 00:04:57

an undergraduate in Sweden. Unlike today, where obviously you’d just Google it and immediately find the newsgroups and YouTube channels and all the rest of it, right? So it was a different era in that respect.

Jeff Bullas

00:04:58 – 00:05:43

What I love about what you just told me is the fact that you could actually build your own degree. The thing I like about that is that quite often we’re shoehorned into doing degrees that aren’t quite in our curiosity, where there’s no compelling thing that we’re curious about. Instead of a box, you’re being given a smorgasbord, which I think is a great way to approach education. Maybe because I actually taught at private schools, high school, for a few years before I moved into technology, the education system is sort of a, you know, a subtopic of my interests, and I am curious about its evolution as well. But that’s another topic.

Professor Nick Bostrom

00:05:45 – 00:06:08

Oh, yeah. Yeah, I’m gonna resist, because, like, yeah, I enjoyed university, but I didn’t like school prior to that. And so being sort of liberated to study at my own pace, what I wanted, at university was a big change for me. But, yeah, well, let’s resist the temptation.

Jeff Bullas

00:06:08 – 00:06:24

We won’t go down that rabbit hole because that’s another topic of its own. It is a philosophical one as well. But so then, when did you move on after your initial degree? You’ve done multiple degrees and PhDs, is that correct?

Professor Nick Bostrom

00:06:25 – 00:07:09

Only one PhD. But yeah, I did study widely at that point, in fact, because I didn’t read, or was really not interested in, science and learning and literature as a child. It was only at age 15 that I discovered the world of learning, and then I felt I had wasted the first 15 years of my life. So I became very intensely focused on a program of self-study. And so, yeah, I did then pursue a wide variety of different topics, both within the university setting, but also on my own in parallel.

Jeff Bullas

00:07:10 – 00:07:48

Yeah, I totally get following your own topics. For me, I read about 60 to 70 books a year, and I just follow my curiosity, and I find that it just takes me to places, like your book would have been discovered through your curiosity. And there are two words that I rather like, that I discovered recently, that I’ve sort of joined together. One is: what are you curious about? And then: what compels you, out of this kaleidoscope of curiosities that we as humans quite often have? So that almost leads to a calling, doesn’t

Professor Nick Bostrom

00:07:48 – 00:08:24

it? Yeah, I mean, I think curiosity is a wonderful thing, and sometimes you need to interleave it with periods of more focused work to actually deliver stuff. I think you switch between a phase where you’re putting on the blinkers and getting stuff done, and then, when it’s done, you ideally take them off and explore. So I’m hoping to enter that second phase now, having finished the book, which required a sustained period of trying not to get interested in too many other things.

Jeff Bullas

00:08:24 – 00:08:31

Right. So, just a quick aside here: what is the reason for the seclusion? For doing some writing and thinking?

Professor Nick Bostrom

00:08:32 – 00:08:39

Yeah. Yeah. For getting some work done, and ideally some thinking as well.

Jeff Bullas

00:08:39 – 00:09:10

OK. Well, I’m very grateful that you’ve actually taken a little bit of time out from seclusion to talk to the world. This is great; I’m really, really thankful for that. OK, so when did you start leaning into AI? You’ve done physics, you’ve studied economics, I think, and now AI. So where did AI arise initially? It arose at university. So when did it really start to take hold? Actually

Professor Nick Bostrom

00:09:10 – 00:09:27

before that. Um, I remember when I was at the, uh, army conscription center. Back then there was conscription, and every, I guess, 17-year-old had to go into some center and

Jeff Bullas

00:09:27 – 00:09:29

It was two years, wasn’t it? Well,

Professor Nick Bostrom

00:09:29 – 00:10:26

you didn’t have to, I mean, you had to go to the center, and they did the whole battery of physical and psychological tests and stuff. But I remember, while I was waiting to step on the exercise bike or whatever, I was reading this book, Parallel Distributed Processing, which I had obtained on inter-library loan and which was very early work on neural networks. I found it very fascinating, and thought that it seemed to be on the path towards understanding the nature of human cognition and, ultimately, building artificial cognitive systems. Already at that point, I guess, I had this general sense that AI, the mechanization of cognition, would be not just another cool technology but much more profound:

Professor Nick Bostrom

00:10:26 – 00:10:54

in the final reckoning, the last invention that we will ever need to make. Because once you have machine brains that can do the invention activity better than we can, then obviously they would be carrying forward the scientific and technological enterprise at digital speeds. And the human brain would become obsolete as an instrument for further development. Yes.

Jeff Bullas

00:10:55 – 00:10:58

So it happened even before university, you were saying?

Professor Nick Bostrom

00:10:58 – 00:11:45

Yeah. Yeah. I think I was 15 or something when I first became interested in these things. So it’s been at the back of my mind, and I guess it’s guided some of my studies at university as well. More broadly, I felt I wasn’t sure about what the meaning of life was, or what I wanted to do with my life at that point. And so I figured a good interim goal would be to try to put myself in a better position to answer that, and so to study various topics that seemed potentially relevant: philosophy deals with fundamental questions, physics deals with fundamental questions,

Professor Nick Bostrom

00:11:46 – 00:12:36

and likewise psychology and neuroscience. So trying to pick up different ideas and insights from a bunch of different areas seemed good preparation: if you don’t know what you should do, then in the interim you could try to get yourself better at figuring out the answer to that question. So although it was maybe a seemingly random and motley assortment of topics to study, I think they actually had a common denominator, in that I felt no one discipline had a monopoly on important insights. And so I was kind of looking around for where I might gain some useful concepts. Yeah.

Jeff Bullas

00:12:36 – 00:13:06

Well, I think what interests me is that a lot of physicists become philosophers in their own right, because they’re looking at, you know, the universe, the world, and where man fits in it and how it all works. Sometimes it’s done in a mechanical way, but then the bigger questions arise out of that as well. So some of the best quotes you’ll find on philosophy come from physicists and scientists.

Professor Nick Bostrom

00:13:07 – 00:13:34

Yeah, to me this hard separation that is sometimes made between philosophy and science has always seemed more of an obstacle than a help, at least to my interests. I’ve kind of been pursuing them whether it’s called philosophy or physics or something else. I don’t really care so much about the labels, but just kind of chasing after

Professor Nick Bostrom

00:13:35 – 00:14:01

the prey wherever it runs. Maybe that’s a brutal metaphor, but you know what I mean: you’re chasing the question, rather than what many do, which is to train in a particular academic discipline, learn the methods of that discipline, and then apply those methods to the questions you’re supposed to ask in that discipline. And that’s a valid thing for people to do. Some people should

Professor Nick Bostrom

00:14:01 – 00:14:23

specialize in that way. But I think there is also room in the world for some people who roam a little bit more. And there are certain questions that you can’t really tackle if you confine yourself to a specific methodology; you have to grab whatever ideas and tools and concepts you can to try to get any traction at all.

Jeff Bullas

00:14:23 – 00:14:36

Yes. Yeah. Well, I think sometimes having a PhD, and following the tools that arise within that, and the structures and approaches, leads to a certain myopia, doesn’t it?

Professor Nick Bostrom

00:14:37 – 00:15:14

Uh, yeah. Yeah. That’s the déformation professionnelle, no? Where you become... you internalize the norms and expectations of your chosen profession, because you spend all your day doing that and talking to other people who do that, and then your whole outlook on the world is kind of filtered through the particular concepts of that discipline. But, I mean, we all have different deformations from our pasts, I guess. But

Professor Nick Bostrom

00:15:15 – 00:15:53

I guess it’s also a question of psychology. Some people find it uncomfortable to be uncertain about things and the unfamiliar, and other people, I mean, you mentioned curiosity before, are drawn to it. I sometimes find that the less I know about some topic, the more fun I often have learning about it. If it’s a topic I know very well, then it feels more like work, whereas if it’s some completely random different thing I’ve never had any clue about, just learning the basics of it feels quite relaxing and spontaneous and fun.

Jeff Bullas

00:15:54 – 00:16:01

I think that raises the quote, I don’t know if it’s true, but: if you love what you do, you’ll never work another day in your life.

Professor Nick Bostrom

00:16:02 – 00:16:46

Yeah. Yeah. I think that’s the ideal to hope for, if one can find a little slot in the world that allows one to do that. That is, unfortunately, a fairly rare privilege, because it can be hard to love what you do if what you do is, you know, strenuous or boring or ungratifying. There are a lot of jobs that are being done not because they are fun but because they have to be done. And that’s valuable too. But,

Jeff Bullas

00:16:46 – 00:17:06

yeah, it is. And there’s no escaping that. But I think pursuing that, having that intent to try and discover that, is not a bad life journey as far as I’m concerned. So, do you feel like what you do is almost play, Nick?

Professor Nick Bostrom

00:17:06 – 00:17:49

Um, yeah. Well, the distinction between play and work is also one of these dichotomies that doesn’t seem to match very well to my own personal experience; it blurs differently. Like, if I were trying to keep a time log, so, how many hours did you work today? I’d have no idea, really, because what I do for fun, a lot of that indirectly informs my work, and a lot of the work is also fun. And so, yeah, I think those categories aren’t very useful descriptors.

Jeff Bullas

00:17:50 – 00:17:51

they’re entangled, aren’t they?

Professor Nick Bostrom

00:17:52 – 00:17:54

Yeah. Yeah. Very much overlapping.

Jeff Bullas

00:17:55 – 00:18:42

Yes. So, AI is obviously a big focus of yours, because of the two books. I know Superintelligence was a New York Times bestseller, and I enjoyed it. It’s very philosophical, which I rather liked, because I think we need to ask better questions and explore uncertainty. And I think there are a lot of people who are not comfortable with uncertainty: give me a template, give me, you know, steps, and then I can become successful, whether as a philosopher or a teacher or an entrepreneur. So let’s lean a little bit more into AI now. When did AI really become a bigger focus for you? When did that really happen?

Professor Nick Bostrom

00:18:45 – 00:19:22

Yeah, I mean, throughout, really. I obviously wasn’t writing much about it as an undergraduate student, but it has seemed for a long time like a central issue for the future of humanity. Back in the nineties, I think I was doing a master’s degree at the time, I wrote this sort of manifesto, predictions from philosophy, kind of arguing, basically, what I said earlier: that we shouldn’t make this very hard boundary between philosophy and science, and that

Professor Nick Bostrom

00:19:22 – 00:19:53

even philosophical concepts can be useful, particularly in the preparation phase, where you’re tackling questions for which there is not as yet a clear methodology, where maybe it’s not even obvious exactly what the right way to ask the question is. Then, you know, these more conceptual skills that philosophers sometimes develop could be useful, particularly if combined with knowledge and insight from other disciplines. And if somebody understands

Professor Nick Bostrom

00:19:54 – 00:20:40

two fields, they sometimes can do things that you couldn’t do by, say, taking one person who understands one field and one person who understands the other field, sitting them at the same desk, and hoping that some great collaboration will happen. Well, it can happen, but often those kinds of forced interdisciplinary collaborations are very superficial. But having people with at least some degree of basic understanding of a wider range of questions can be quite fruitful. And later, at the Future of Humanity Institute that I set up at Oxford, fast forward some years, we had a lot of these polymath types. And I think

Professor Nick Bostrom

00:20:41 – 00:20:57

it just felt natural to be curious and to learn whatever you needed from different areas to be able to pursue the questions we were interested in. And I think that did create a very fruitful intellectual culture. Yeah.

Jeff Bullas

00:20:57 – 00:21:21

I think, you know, maybe one way to describe it is hard science and soft science. So philosophy versus, you know, chemistry or physics, for example: there’s data about those, but it’s very hard to get data about the human mind, isn’t it? To actually be hard and fast about it? Because we still don’t know how the human mind really works.

Professor Nick Bostrom

00:21:23 – 00:22:07

Um, yeah, I mean, we know a lot about it, but there’s also a lot we don’t know about it. In some sense, there’s nothing we know better than the human mind, in the sense that we have spent our whole lives inhabiting it and seeing from the inside how it operates; but matching that up to observations made in a neuroscience lab or something, that’s still hard. I do think that the current advances in AI create an additional lens through which to look at human psychology and human cognition. These neural networks that we’re now building, although they are

Professor Nick Bostrom

00:22:08 – 00:22:50

very simplified versions of biological neurons, you still see in these big neural networks some of the same phenomena that we can observe when humans are thinking. To an almost ridiculous degree in some cases: with these chatbots, sometimes you have to give them encouragement in the prompt, almost like a pep talk, and sometimes that increases their performance on a particular task; they seem to try harder, which is totally not what you would have expected in the days of good old-fashioned AI, right? But the anthropomorphism

Professor Nick Bostrom

00:22:50 – 00:23:29

of some of these approaches is quite striking. It could just be a coincidence, but I think there are probably some deeper shared structures between the way that the human brain works and how these large transformer models work. Obviously, there are still elements of the way that our cognition is organized that are not yet implemented. But the way they generalize, the way that they develop intuitions and see patterns from data, I think that probably does match, at the sort of basic-principles level, some of the things that are going on inside our own brains.

Jeff Bullas

00:23:29 – 00:23:46

You could almost describe it, I’m just thinking as we’ve been talking here, as AI being almost the bridge between machines and humanity that’s going to help us make sense of being human, I think, because it raises big questions. Don’t you think?

Professor Nick Bostrom

00:23:46 – 00:24:37

Yeah, it certainly raises big questions. I think the biggest question it raises is probably about the practical impact of AI as we develop increasingly powerful machine intelligences: how can we make sure to align them and make them safe? That was obviously a big focus of the previous book, Superintelligence. And then, assuming we can do that, or I guess kind of entangled with that, is the question of governance. If we can control these tools, how do we make sure we don’t use them, as we do with so many other technologies, to wage war against each other or oppress one another, et cetera. Then beyond that, there are ethics questions that have received less attention so far. For example,

Professor Nick Bostrom

00:24:38 – 00:25:17

some of these digital minds, I think, may well be candidates for having moral status, so that they’re not mere tools but beings in their own right that might have morally considerable interests, and getting that right is another big question. But then, you’re right, on top of these more practical questions, there is also the question of whether it changes how we see ourselves, or how we conceive of our role in the world, which is also interesting, I think. And I guess that gets us closer to the topic of this Deep Utopia book, which

Professor Nick Bostrom

00:25:18 – 00:26:12

considers what would happen if AI goes right. If we do solve these other practical problems, which is a big if, but let’s postulate that for the sake of argument, and everything goes well, then we develop these machine superintelligences that can do all tasks better than we can, and we solve the world to the extent that it can be solved, right? Then what, at that point, would give purpose and meaning to our human lives? What would we be doing all day long? I think there are superficial answers to that, but once you start to really dig in, this gets to some quite deep philosophical questions, which, yeah, I tried to explore.

Jeff Bullas

00:26:13 – 00:26:47

Yeah. I’m almost through your second book, after reading Superintelligence, and I suppose I’m just trying to distill the two books and your journey and your thinking and philosophizing about AI and humans, and how that impacts us as humans. What will bring us meaning and purpose if, you know, AI can actually produce great works of art and music? A lot of artists get great meaning, and writers get great meaning, out of crafting words and putting them together. So if the machine can do it,

Jeff Bullas

00:26:47 – 00:27:44

you’re going, well, what am I here for? Right. So your two books are interesting in the sense that the first one is a little bit more dystopian, and the second one is more utopian, or deep utopia, obviously. And that’s maybe simplifying things, but in the first book you set things up, as you talked about, with the singleton versus the multipolar, I think those are the two terms you used. In other words, the singleton, as I understood it, is one entity controlling AI for the world; the other is where you have multiple entities fighting for domination, a race to the bottom. So I found it quite interesting: where’s that gonna go? And you wrote this in 2014. So what are your thoughts on those, basically multipolar versus singleton, in 2024?

Professor Nick Bostrom

00:27:45 – 00:28:26

Yeah. Well, I mean, I actually wrote it in the years preceding 2014; it took, I think, six years. So these are two abstract properties of a world order, where singleton, as you say, means a world order in which, at the highest level of decision-making, there is a kind of one point of agency. In theory, that could be, you know, a world government or a world dictator, or it could be an AI that has taken over everything, or maybe a moral code that is strongly self-enforcing and sufficiently prevalent. So it’s kind of neutral as to, you know, whether

Professor Nick Bostrom

00:28:26 – 00:29:15

you have a good singleton or a bad singleton, or its precise form. But at an abstract game-theoretic level, it means that there are certain global coordination problems that can be solved in a world where you have a singleton. And the opposite of that would be multipolar: more than one decision-making entity at the highest level, with different goals and objectives. Then you can get conflict and war and competition. And each of these has its own ways of potentially going wrong. We are familiar, at the global level, with the failure modes of a multipolar world: throughout human history, we’ve had arms races and wars and destruction of the global commons, like overfishing in the oceans, or pollution,

Professor Nick Bostrom

00:29:16 – 00:30:04

greenhouse gases, et cetera. We don’t have experience with a global singleton; we have not yet had a world that has been integrated at that level. But it’s very easy to imagine how that could go wrong: if there’s one decision-making node and it gets captured by some malevolent actor, then there is nothing else that you could take refuge in, or that could overthrow it. So I think both of these are potentially still on the table with respect to our future. My guess is that the AI transition might increase the probability of us ending up with a kind of singleton, but it’s by no means a given.

Jeff Bullas

00:30:04 – 00:30:52

Mm, yeah. It’s interesting watching things unfold. I’ve been in technology and seen trends since the mid-1980s and the rise of the PC revolution, the rise of the web and the internet in the mid-nineties, the rise of social media in the early two thousands. And it’s interesting to see the evolution of those technologies. What’s exciting me about AI, and my real curiosity, and what’s compelling me now to write about it more and explore it and think about it more, is the intersection of AI and humanity. Because in 2022, which is only basically 18 months ago, AI got given a human face with a small search box, and that was ChatGPT,

Jeff Bullas

00:30:54 – 00:31:17

and the velocity of the uptake of that technology was remarkable. I found out that an amazing 100 million people signed up for it in about seven weeks. So had you envisioned AI being accepted and embraced by humanity, the general population, so quickly? Did you envision that or not?

Professor Nick Bostrom

00:31:18 – 00:32:03

Um, I mean, the speed of development at a certain point? Yeah, I do, in Superintelligence, consider different possible scenario classes. There are at least two different questions about timelines you could ask, right? One is: how far are we now from fully human-level general intelligence that can do everything that we can do? And a second question you could ask is: if and when we ever reach that point, how far will that be from some radical form of superintelligence that just completely outpaces us in all complex cognitive fields? And so on that second question,

Professor Nick Bostrom

00:32:04 – 00:32:51

at least at the time, this was sometimes referred to as the takeoff: in a slow takeoff, going from roughly human level to radical superintelligence might take decades. I thought that was less likely, and I still think it is less likely, and that more likely you would have either a fast or an intermediate takeoff. A fast one could be anything from hours to weeks or a few months, maybe; an intermediate one might be some months to a few years. And I think we are most likely going to have something like that. It was not very obvious back in 2014 to what extent you would get

Professor Nick Bostrom

00:32:52 – 00:33:37

significantly world-changing products coming out from AI prior to the takeoff beginning, as it were. You could imagine a scenario where you had AIs that were not really very useful for anything at all, and then some lab stumbles on a key insight and develops something internally that becomes superintelligent, and during this period there might not be much impact on the world. That would be one scenario. Now it looks more like we are in a scenario where you have important applications of AI even before we have fully generally intelligent AI systems, with these LLMs that we see today. And,

Professor Nick Bostrom

00:33:37 – 00:34:17

unless we get an intelligence explosion within the next few years, I imagine we will have more of these pre-AGI impacts on the world that can have an influence over ultimate outcomes. In particular, it makes it more salient to people that AI is coming. And so you see now, over the last couple of years, policymakers waking up to this. I was just in Brussels yesterday and there was a big conference there. We’ve seen statements from the White House, and the UK had its global AI summit some months back.

Professor Nick Bostrom

00:34:18 – 00:34:53

And this is a big difference from 2014. When we even talked about things like AGI, it was viewed generally as just science fiction futurism, a fringy thing. It wasn’t really taken seriously by most people. But now, of course, everybody sees that this is an important topic. And I think that is mainly due to the fact that we have seen impressive AI systems already, and people have started to have experience interacting with chatbots, et cetera. And we’ll see more of that over the coming years.

Jeff Bullas

00:34:54 – 00:35:39

Yeah, that leads me to the next part, which is, let’s talk more about your book Deep Utopia, which is your latest release. I’ve read 350 of its 500 pages. It’s a great read; it raises many questions and also provides possible scenarios. I was looking for themes within it, and basically you raise a lot of things, so there’s a lot going on in the book. And I detect, tell me if I’m right or wrong, that you not only have a very strategic mind, but you also love attention to detail. Is that correct?

Professor Nick Bostrom

00:35:40 – 00:35:50

Unfortunately, perhaps, because it does make producing something like this a much longer and harder process than if you just slapped something together.

Jeff Bullas

00:35:52 – 00:36:29

Yeah, that’s absolutely fine, and I detected that. I think you admitted in one of your books that you had this thing that controlled you quite often, which was a real attention to detail and trying to get to the real nitty-gritty of things, which raises some really great questions. So in Deep Utopia there are a few themes I want to look at a little bit more. You talk about redundancy, in other words, computers or AI making us as humans redundant. And then that raises the next part, which is: well, why am I human? Why am I here?

Jeff Bullas

00:36:32 – 00:36:46

OK, I’m sorry about that, but I’ll tell you what that is. We’ve had an earthquake in New York. Really. It’s an emergency alert. A 4.7 earthquake in New York. JFK.

Professor Nick Bostrom

00:36:47 – 00:37:22

4.7. OK. I mean, hopefully you guys should be all right. It’s a little bit like when they had snow in Britain. It’s not generally very cold there, but there was this classic phrase, “the wrong kind of snow”. Have you heard of that? The rail system was struggling, and they only get like one day of snow a year. So there was a little bit of snow, and the railways shut down, and people said: how can you not deal with snow like this? You should expect

Professor Nick Bostrom

00:37:23 – 00:38:12

that occasionally it snows. And then they had this lame excuse: well, it was the wrong kind of snow we had prepared for. And it’s a big contrast, because I spent a lot of time in Canada as well, and there they can get feet of snow and everybody just goes on with life. In England, there’s a couple of centimeters of snow and the whole country just shuts down. Kids stay home from school; they can’t walk to school if there’s a couple of centimeters of snow. But anyway, I’m sure for people who live in actual earthquake-prone zones, they might have a similar attitude to a 4.7.

Jeff Bullas

00:38:12 – 00:38:16

Well, you’ve never had an interview interrupted by an earthquake warning, have you?

Professor Nick Bostrom

00:38:16 – 00:38:18

no, I think that’s a first.

Jeff Bullas

00:38:19 – 00:38:50

Yeah. Well, just to talk quickly about the aside on snow. See, my ex-wife is Swedish, and you’re Swedish. I actually know a little bit about Sweden, in that at a certain time of year you’ve got to put snow tires on. Whereas in the USA you don’t do that; you’re not made to do that, because that would be an infringement of liberty and freedom. And because you have to put snow tires on, if it’s icy and you’re still driving and a bit of snow happens, you’re actually OK.

Professor Nick Bostrom

00:38:51 – 00:39:24

Yeah. It seems to work in both countries. I mean, people still manage to get on with it, which makes you wonder how many things we just assume are necessary. If there were only one country, imagine how many things they would convince themselves were necessary and the only possible right way of doing things, if there were not other places you could look at to see: ah, you could actually do it like that.

Jeff Bullas

00:39:24 – 00:39:27

and that comes down to singleton versus multipolar worlds, doesn’t it, really?

Professor Nick Bostrom

00:39:27 – 00:39:50

Yeah, it does come down to that. If there were a singleton, this is indeed one of the concerns with that kind of world order: you would have one orthodoxy, and then they would define some Overton window of acceptable opinion and acceptable ways of doing things, and there would be no counterexample. So I think a lot of

Professor Nick Bostrom

00:39:51 – 00:40:20

political progress around the world, like revolutions and so on, has happened because people under a despot could see that there were other people who didn’t have to live under a despot, and they could draw inspiration from that and say: well, hey, if they can have a democracy, why can’t we have a democracy? Or if they can have public healthcare, why can’t we have public healthcare? Being able to learn from people who do things differently, I think, has been an important driver of progress. Yes. Yeah,

Jeff Bullas

00:40:20 – 00:40:55

exactly. And being able to travel opens our eyes to that sort of world and shows us different options that we wouldn’t have thought about by hiding in our own country, which is great. So I want to get back to where we were rudely interrupted by an earthquake alert in New York: redundancy, and the questions that come out of redundancy. You talk about, I think, light and deep redundancy, or I might be using the wrong terms. But the thing is, AI could make us as humans redundant. Tell us a little bit about your thinking on redundancy.

Professor Nick Bostrom

00:40:57 – 00:41:38

Well, yeah, deep or shallow. So there are layers to this, and it’s a kind of journey that you can take to start to think about this. At the most superficial level, you have the consideration that if AIs become better, they could do more of our jobs. So then maybe you would have unemployment, and then there is a question of what you could do about that. You could maybe re-educate people to do the jobs that AIs can’t do, maybe have unemployment insurance, and there’s a set of questions there, right? At the next level, you realize that, well, ultimately it’s not just some jobs that could be automated but,

Professor Nick Bostrom

00:41:39 – 00:42:34

with certain exceptions, all of them. It’s not just that AIs could do the assembly in factories; they could also ultimately write the poetry and direct the movies and compose the music, et cetera. The main exception to that would be cases where consumers have a direct preference that the work be done by hand. Human-made goods might command a premium price, just as today some consumers might pay a premium for a trinket made by some favorite group, say some indigenous craftspeople, versus its being made in a sweatshop in Indonesia. Maybe you pay more for the handmade product because you care not just about the product but about its causal origin.

Professor Nick Bostrom

00:42:35 – 00:43:25

And so to the extent that future consumers do that, that might create demand for human labor. Or I might prefer to watch human athletes compete even if robots could run faster or box harder; it might just be more fun. But if we bracket that for the time being, then yeah, you could have a kind of full unemployment. So that’s a slightly more radical conception: a kind of post-work utopia, if you try to think of some good way that things could be in that condition. And that’s usually where the conversation stops, if it even gets to that point. But I think there are several steps beyond that that you could consider. You could have a more generally

Professor Nick Bostrom

00:43:26 – 00:44:07

post-instrumental condition, where it’s not just human economic labor that has been automated but a lot of the other things we spend effort on as well, at this condition of technological maturity, which I think would follow the invention of machine superintelligence. So if you think about it: if you didn’t have to work, what would you do? Well, some people would say, oh, that’s great, I like fitness, maybe I would spend more time working out or something. But then you think: well, in a technologically mature world, you would be able to pop a pill that would have the same physiological effects as spending

Professor Nick Bostrom

00:44:07 – 00:44:48

an hour on the cross-trainer or with the weights. So would there still be any point in going to the gym, if you like? And so then you think: well, I’d really like to understand mathematics better; now maybe I would have time to actually learn it, since I never really got to do this in school. But then again, at technological maturity, there would be the option to directly install new skills and knowledge in the brain. You could imagine either sort of nanobots infiltrating the human cortex and adjusting the synaptic weights

Professor Nick Bostrom

00:44:49 – 00:45:33

into whatever configuration constitutes knowledge and mathematical skill, or, if we are uploaded into computers, it would be even easier to directly edit our parameters. And so you can then go through, almost like a case study, the different types of activity that currently fill our lives and see whether in fact there wouldn’t, at technological maturity, be some shortcut to achieving the same outcome we’re seeking. And I think in a good many cases, in a big majority of them, yeah, there would be shortcuts. That would then at least put a little question mark above those activities: you could still do them, but it would sort of risk seeming pointless

Professor Nick Bostrom

00:45:33 – 00:46:13

if you had to put in all this effort only to get to a point that you could have reached equally easily by just pressing a button. So that then creates this more profound challenge of conceiving of a post-instrumental utopia. And I think there is one step beyond that as well, which is what I call a plastic utopia, where we don’t just have the ability to automate all the things we put effort into, but where we ourselves also become malleable. With these technological affordances, we could also modify our own

Professor Nick Bostrom

00:46:14 – 00:46:42

psychology, our own experiences, our own bodies. In the limit, we would just be able to choose what we feel and think, and our natures, and that would then put question marks over an even wider range of activities. For example, we do a bunch of stuff now because it’s fun. Maybe you climb the mountain; it’s strenuous and your feet hurt, but in the end you get this satisfaction of looking

Professor Nick Bostrom

00:46:42 – 00:47:17

all the way to the horizon, and you feel a sense of accomplishment. But if you are doing the whole thing only to get this jolt of joy at the end, well, you could get that jolt of joy at technological maturity with a drug or some direct brain stimulation, without the side effects and addiction. So you really do end up in this space where there’s a lot of stuff we could do, but at the same time, at least prima facie, it looks like we’d have no particular reason to do any of it. And so

Professor Nick Bostrom

00:47:18 – 00:47:53

what then? I mean, do we just become a kind of contented pleasure blob, enjoying drug ecstasies or direct brain stimulation of the nucleus accumbens? Or is there something more, other values that we nevertheless would have reason and ability to instantiate in such a kind of plastic world? And that then gets to where the book really starts. It does have a couple of chapters on the more mundane questions leading up to that, but that’s at the heart of the investigation.

Jeff Bullas

00:47:54 – 00:48:38

Yeah, I love the setup in terms of the redundancy, and then that raises the bigger questions. If the machines are doing it all, like you talked about with the plastic utopia, the reality is that a machine could produce the joy, the endorphins that we get from running, for example, via AI. And when the technology leads to redundancy, it raises the bigger questions for us as humans, which are purpose and meaning. And I’m going to read a quote, well, a quote I know about and have been fascinated by. It’s by Joseph Campbell, who said: follow your bliss.

Jeff Bullas

00:48:39 – 00:49:18

That’s where you discover purpose and meaning. In other words, don’t worry about defining what the purpose or meaning is, because just following your bliss will produce happiness and fulfillment. And the other key word that you discuss a lot in Deep Utopia is fulfillment, which is great. In other words, you don’t need to be able to define what the meaning or purpose is; following your bliss, Joseph Campbell said, is really enough, because that’s where you’ll discover real deep joy and meaning, just in that journey of following your bliss, which is an inclination to do things you’re good at.

Jeff Bullas

00:49:19 – 00:49:43

So let’s talk a little bit more about fulfillment, meaning and purpose, which come out of the question: I’ve been replaced by AI, what next? Who am I? That’s the question of an author, for example, a writer who thinks: ChatGPT can do this for me, so why am I actually needed? What am I here for? Let’s discuss purpose and meaning and fulfillment, because they’re big questions, aren’t they?

Professor Nick Bostrom

00:49:45 – 00:50:36

Yeah. So, I mean, maybe some people are led to their meaning if they follow their bliss. Other people are led to fentanyl. It might depend a lot on the circumstances you’re in and what you try first. And it is indeed a deep, not so easy question whether ultimately we have most reason to reject a purely hedonic conception of utopia. I mean, it’s easy to dismiss it because it seems kind of uninspiring. But the question is not which utopia is most interesting to look at, but which one is best to inhabit. These are two distinct questions, and we have to be very careful to separate them in our minds if we’re trying to give answers. Now, I think ultimately

Professor Nick Bostrom

00:50:37 – 00:51:29

it’s possible that we can and should instantiate other values in addition to hedonic well-being in utopia. We certainly could have that as well, and should have huge amounts of it. But it doesn’t have to exclude the future also being interesting and fulfilling and enchanting, and even purposeful. You can distinguish two different kinds of purpose: there is room for both natural and artificial purposes in utopia. Artificial purposes are just purposes we give ourselves for the sake of having purpose. We already do this: you play a game, you decide that you are going to try to get your piece to a certain square on the board or something like that, right? It’s a completely made-up purpose, but

Professor Nick Bostrom

00:51:30 – 00:51:57

it can still be a good thing to adopt this goal, and once you have that goal, it creates an opportunity for an activity which today maybe gives us joy. But you might also think it’s intrinsically valuable to be involved in these kinds of activities in ways that draw on your different faculties and your creativity, where you have to struggle for it. So certainly we could have unlimited artificial purposes

Professor Nick Bostrom

00:51:57 – 00:52:34

in utopia. If you have a problem just setting yourself a goal and being motivated by it, certainly neurotechnologies would be at your disposal to install different goals in your mind if that helps. So that’s a check in the box for artificial purposes: sure, the utopians can at least have that. But I think on top of that, they might also be able to have some forms of natural purpose. These would be purposes that exist independently of our desire to have purpose. And we have a lot of those today, right? You have to

Professor Nick Bostrom

00:52:35 – 00:53:31

get in to work if you’re going to have a paycheck, which you need to pay the mortgage, and you have to brush your teeth in order to have healthy teeth. We have a lot of instrumental reasons for doing things, which give us purposes in life. And there might be some of those natural purposes surviving into utopia, even though machines could automate a lot of the things we do. There might be some values that give us reasons for doing things that we can’t outsource, for example, the value of honoring your ancestors. It may require the honoring to be done by you: for you to spend time thinking about your parents or other people who have lived before you, and remembering them, and so on. Or continuing traditions: you might value tradition, and

Professor Nick Bostrom

00:53:31 – 00:54:08

for the tradition to continue, it might not count if you just create some machines that enact the rituals; it might have to be done by the people who started the tradition, or their descendants, depending on what the tradition is. There could be more subtle aesthetic values that require our direct participation for them to be realized. And there are social entanglements that we have. If somebody else happens, for whatever reason, to want you to do a certain thing, then

Professor Nick Bostrom

00:54:08 – 00:54:40

even though you could create a robot that would do it, that might just not be what they want. So if you happen to care about what the other person wants, and that’s what they want, then you now have a natural purpose: the only way for you to achieve your goal of satisfying their preference is by actually doing the thing. And I think there are more complex and subtle versions in which these value-based constraints might empower us to do things and give us some natural purposes as well.

Professor Nick Bostrom

00:54:41 – 00:55:05

So that’s the outlook for purpose. Now, I make a distinction between purpose and meaning, where purpose is the narrower term, which basically means doing something for the sake of achieving something else. And meaning maybe involves purpose, but it’s a special kind of purpose, or it has some extra attributes, which I also discuss. But

Professor Nick Bostrom

00:55:07 – 00:55:11

There, the situation is more complicated.

Jeff Bullas

00:55:11 – 00:55:46

Yes. And you talked about levels of purpose. You used examples like the reason you clean your teeth, or that you go to work to earn a paycheck. And then there are big purposes, which are much harder to define. I did love the analogy about golf. In talking about it, you said you don’t understand why people enjoy golf; I think that’s what you said. To be exact, what did you say?

Jeff Bullas

00:55:46 – 00:56:06

Well, I actually play golf, and I feel better if I hit the ball in the hole in par instead of a bogey or a double bogey. On the other hand, there’s a certain joy in just being there, having a conversation with your playing partner and enjoying nature. So there are different layers to purpose, aren’t there, really?

Professor Nick Bostrom

00:56:08 – 00:57:07

Yeah. I think I expressed my puzzlement at people who enjoy golf, but I think all of the things you mentioned play a part. It’s an example of a game, right, where we create artificial scarcity and limitations in order to enable an activity that we enjoy. I think probably with respect to golf it’s a lot about enjoyment. But that by itself is not sufficient to create natural purposes in utopia, since there would be shortcuts to enjoyment. Although there, too, the analysis becomes more complex, because you could imagine having certain values that would not want you to modify yourself. If you have those values,

Professor Nick Bostrom

00:57:09 – 00:57:41

then the only way you might be able to get the enjoyment, if you have another value that prevents you from availing yourself of drugs or other shortcuts, is that you might be stuck with having to try to generate little dribbles of pleasure by running around like a madman with a club and swinging at that little ball, or whatever tickles your jollies, in perpetuity. I mean, I think it’s not an either-or thing there. I think we could certainly upregulate our ability to take pleasure in things. But

Professor Nick Bostrom

00:57:41 – 00:58:24

it doesn’t need to go all the way to the other extreme, where you have some kind of junkie sprawled over a flea-infested mattress, still in bliss. You could have a combination of great levels of subjective enjoyment and also engaging in beautiful works of creation, in interactions with other people, exercising your different faculties to the maximum, being creative, et cetera. And there’s an interesting question: to what extent can you combine these different possible values so that they can be co-instantiated to a very, very high degree? That’s the bit that I’m kind of optimistic about,

Professor Nick Bostrom

00:58:26 – 00:59:12

that there is something there that could be created that would be wonderful. In fact, I think it is wonderful beyond our ability to imagine currently. But it’s not so much a book about conclusions as it is one about questions, trying to enable the reader to bring into focus some of these deep questions and then decide for themselves. So it’s a book that plays with different perspectives and ideas and clashes them together. And I think some people, especially if they’ve only heard something about the book or read a little extract, might not realize that some of these early things that seem unappealing are kind of intentionally presented to be unappealing. But nevertheless,

Professor Nick Bostrom

00:59:13 – 00:59:29

the idea is to run into that counterintuitiveness and collide it with other counterintuitiveness and then see what emerges from that, rather than trying to paper over the philosophical inconveniences. Let’s rip the wallpaper off and see what’s actually there.

Jeff Bullas

00:59:30 – 00:59:39

So what you’re asking people to do in this book is to live a considered life, which is where you ask the questions.

Professor Nick Bostrom

00:59:39 – 00:59:47

Yeah, or at least to think about these things. And ultimately, the hope is that if at some point some group of people

Professor Nick Bostrom

00:59:48 – 01:00:13

are tasked with deliberating about the future, which might happen if AI goes well, whether it’s a few people in some AI lab, or some government, or the voters of some country, or maybe some humanity-wide process, then ultimately, at some point, somebody needs to have some opinion about what kind of future we want to steer towards. And that would be a very hard deliberation, right? Like imagine trying to

Professor Nick Bostrom

01:00:13 – 01:00:41

somehow conceive of what this could be millions of years into the future. So I figured it could be useful for those people to have something to read in preparation for going into those deliberations, something that could help them think about these questions and maybe put them in the right frame of mind, a kind of playful, generous frame of mind, as they approach this. So that’s the kind of little secret purpose behind the book.

Jeff Bullas

01:00:42 – 01:01:13

And I think that’s fantastic, and for me it raises a whole lot. It’s a little bit like you’re not trying to provide a forecast; you provide different scenarios, I suppose, which are not answers, and the scenarios pose questions which I think we need to consider. But one of the questions I ask is: will AI enhance and amplify human creativity, or suppress it? I’d be interested in your thoughts on that.

Professor Nick Bostrom

01:01:15 – 01:02:00

Yeah. Well, how do we measure human creativity? I certainly think that in the near term there will be a lot of cool creative stuff that people will be able to use AI tools to do, from image generation to music. I just heard from somebody who does some AI music generation and makes something quite funky. So creative people will use this. Now, ultimately, I think all the things we do with our creativity will be doable by AI without reliance on human creative input. We saw this with chess, where at first humans were just better than chess computers, and then chess computers became better than humans. But

Professor Nick Bostrom

01:02:01 – 01:02:44

for a period of time, the very best performance came from combining a chess computer with a human grandmaster; together they could beat any other human or any other chess computer. But after about 10 years or so, the chess computers just became so much better that it doesn’t really help to have a human playing alongside; it’s just better to leave all the decisions to the chess computer. And I think the same will hold with respect to AGI: for a period of time you get the best performance by drawing on the strengths of the human and the strengths of the AI, since they have slightly different strengths, but ultimately the AI will just surpass us across the board, and then we’ll have to base our

Professor Nick Bostrom

01:02:44 – 01:02:52

sense of purpose and meaning and worth on something other than us being able to make some practically useful contribution. Yeah.

Jeff Bullas

01:02:53 – 01:03:32

Yeah. I think purpose and meaning fall apart for me if you try and put a destination on them: “I’ll be happy when”, rather than “I am happy now, doing something that I have skills and experience in and love doing”, which is the follow-your-bliss type scenario. So for me the question, what I’m trying to find the answer to myself, is what brings me deep joy. And I do know some of those things. Number one is having human conversations, like bumping into someone in a restaurant. In New York we’ve had quite a few of those, and you know what? They’re actually

Jeff Bullas

01:03:32 – 01:03:50

better than the goal of visiting the Empire State Building, for example. For me, it’s having that human conversation, where we are better communicators because we’ve got the time now, because the AI is doing the heavy lifting; where we can be better at listening, we can get to know a person, hear their stories. We’ve got the time. And what I’m grateful for is

Jeff Bullas

01:03:51 – 01:04:28

you are giving me the time to hear your story and share it, because that brings me a lot of joy. I am fascinated and curious about the human condition, and will continue to be. When you get to talk to a person, you get context, you hear their story. And, you know, part of the joy for me too is I like running experiments with the waiters and waitresses as they serve you. You ask them their name, and quite often they’ll be from South America, or from Kazakhstan, as happened the other day, and they will open up because you’ve seen them as a human,

Jeff Bullas

01:04:29 – 01:04:51

right? And then what you get is you hear their story, because you have seen them. That’s one of the things that brings me deep joy: the art of communication as humans. And AI might release us from the drudgery, so we can actually be better humans in the way we communicate. I don’t know. What are your thoughts on communication in an AI world?

Professor Nick Bostrom

01:04:54 – 01:05:51

Yeah. I think it’s striking that right now the leading AIs are actually talking to us; they are language models. Rather than interacting with them via, like, a C++ console or something, you’re actually just writing English, which does cast some of the issues related to AI alignment in a new light. There might be new affordances from the fact that we can formulate our goals and instructions in ordinary language. I think we will see multimodality coming online soon, and for better or worse, and probably both really, that will start to impact how human communication systems work.

Professor Nick Bostrom

01:05:52 – 01:06:36

There are things that seem obviously good, like automated translation and the like, and artists having the ability to create more easily what would previously have been a big-budget movie production with lighting guys having to run around. If that were in the hands of independent film producers, I think that would be really cool. So there’s an endless number of positive applications. But one also needs to think at a systemic level, just as social media have slightly shifted the dynamics of how our public conversations go by making certain types of messages proliferate more. And so you see a lot of

Professor Nick Bostrom

01:06:36 – 01:07:27

like, a lot of nastiness and hate and stuff on social media; it has that immediate kind of engagement factor. And shorter-form content seems to be doing well relative to when books were the main form, you know. So it does change, indirectly also, the kind of content that gets traction and gets attention. And most likely AI tools will also keep making different changes, like maybe individualizing content more to the particular user, in ways that might be kind of specifically designed and targeted to hit their trigger points. Like, marketers who know precisely which words they need to say to Jeff Bullas to get you to be excited about the product.

Professor Nick Bostrom

01:07:28 – 01:08:15

And political communication. The opportunities for censorship and propaganda are enormous, obviously, when you can not just record what everybody is saying, but you could then actually have AIs analyze all that they are saying and create some sort of scoring system that then directly ties into targeted communication to, you know, produce some quantifiable shift in public opinion about some particular topic. Like, all these tools have so many uses. But we don't have the kind of predictive social science that would enable us to say that if you do change some of the underlying knobs on the global communication system, what kind of equilibrium ultimately results from that. So we'll just have to see how this plays out and hope for the best.

Jeff Bullas

01:08:15 – 01:09:01

Yes. Well, we are complicated creatures, and we've discovered that AI, with its large language models, has just exhibited that to another degree. So, Nick, it's been an absolute joy. I'm aware of your time, so you can do some more thinking and writing in beautiful Portugal. Just one last question. Because I think you were almost saying we still don't know what meaning and purpose really are; it's still a question: what is meaning, what is purpose? But what brings you joy every day? If you didn't get paid, what would you do every day?

Professor Nick Bostrom

01:09:02 – 01:09:06

Um, I mean, honestly, the same as what I am doing.

Jeff Bullas

01:09:06 – 01:09:08

That would be the answer. Yeah.

Professor Nick Bostrom

01:09:08 – 01:09:28

Yeah. So that's a big privilege. Let's hope we can, yeah, use this. And I guess, I don't know for you whether the answer is the same. Like, if you were not able to, you know, monetize your YouTube streams and that sort of stuff, would you still be doing this, do you think?

Jeff Bullas

01:09:28 – 01:10:09

Absolutely. I am grateful to be able to have these conversations with incredible people from all around the world, Nick. For me, it brings me joy. I learn so much from them. And then also what brings me joy is being able to share it from my platform. Because the other thing, there are basically three things I think are important for us as humans. I think we are all innately creative as humans. Some are more creative than others; there are different degrees in different areas. But number one, I think, as humans: discover what you love doing. Ask that question. It doesn't mean watching TV, of course.

Jeff Bullas

01:10:09 – 01:10:28

Discover what you love doing. Create something from that; for me, that's writing and producing, you know, media, and then sharing it with the world. I think the other thing is to make a difference out of your own creation. For me, those are the three things, in sequence, that bring me joy.

Professor Nick Bostrom

01:10:29 – 01:10:37

Good. Yeah, I'm happy you're able to do that, and with good results as well, and other people can benefit from it, hopefully. So, yeah.

Jeff Bullas

01:10:38 – 01:10:43

Yeah, it really is. So for me, I would do this without getting paid.

Professor Nick Bostrom

01:10:44 – 01:10:47

Yeah, let’s count our blessings and

Jeff Bullas

01:10:48 – 01:10:57

Well, I'm very grateful, and I'm mindful of your time. One quick question. Can you sum it up: how do people prepare for an AI utopia?

Professor Nick Bostrom

01:10:58 – 01:11:50

Step one: read Deep Utopia. Yeah, I mean, from a practical point of view, obviously I think these are questions we should be thinking about, otherwise I wouldn't have written the book. But we need to remember that we are not there yet. So there's, like, a whole bunch of very pressing and very practical obstacles between here and this hypothetical future. So I think getting involved in trying to nudge AI developments, or other issues in the world, in a positive direction; you know, trying to spread love and positivity, and then trying to tackle, as best you can, the issues you see that are wrong.

Professor Nick Bostrom

01:11:51 – 01:12:22

Yeah, for different people that will mean different specific things. But we do have one thing: in most respects, the utopians would have it much better than we do, but there is one respect in which we might be in a uniquely advantageous position, which is the ability to have a practical, useful, beneficial impact on the world now. I mean, in a world where every problem is solved, you can't really contribute very much,

Professor Nick Bostrom

01:12:23 – 01:13:06

whereas now there are many problems that are not solved and that really need to be solved. And especially if we are actually not that far from this AI nexus, we might have huge consequences in the far future. But even aside from that, just from a mundane perspective, there are, like, so many screaming necessities in the world, and we are in a position to at least do something little here or there to help out in some way. And that is something that the utopians might not have, but that we have, and I think, from many different points of view, we should avail ourselves of these opportunities to try to leave the world a little bit better than it would have been without us, if we can.

Professor Nick Bostrom

01:13:07 – 01:13:14

So those are my admonitions and encouragement for the viewers.

Jeff Bullas

01:13:15 – 01:13:27

Yep. And that poses one last thing. There's a lot of information but not enough wisdom in the world. So, do you think AI could help us with more wisdom?

Professor Nick Bostrom

01:13:27 – 01:14:18

Well, yeah, I have to run off, but yeah, I think so, yes. Well, we just have to be above some threshold of wisdom to actually use the AI to help us become wiser. Like, a sufficiently foolish person would deliberately hamstring the AI to not tell them the truth or to not, you know, help them develop more. And so you might need to be above a certain threshold where you can sort of bootstrap yourself: you can think about your own errors and start to figure out ways to correct for them. And the question is whether collectively we are above that threshold or under that threshold as a species. I think that's an open question, but I think

Professor Nick Bostrom

01:14:18 – 01:14:36

we are either just above it, or possibly just below it. But that's, like, one of the key open questions, I think, for which the answer has not yet been written. All right. Good to

Jeff Bullas

01:14:36 – 01:14:49

talk to you, Nick. Thank you very much. Thank you for your time. It's been an absolute blast and a joy, and I'll be in London at the end of the year, so maybe we might be able to catch up. I'd love to meet you in person. So, do

Professor Nick Bostrom

01:14:49 – 01:14:50

Yeah, do stay in touch.

Jeff Bullas

01:14:51 – 01:14:52

Well, great. Have a great day. Bye.

Professor Nick Bostrom

01:14:53 – 01:14:54

Thank you very much.
