Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You've been working on AI safety for two decades at least.
Speaker 2 (00:02):
Yeah, I was convinced we can make safe AI, but
the more I looked at it, the more I realized
it's not something we can actually do.
Speaker 1 (00:09):
You have made a series of predictions about a variety
of different dates. So what's your prediction for twenty twenty seven?
Doctor Roman Yampolskiy is a globally recognized voice on
AI safety and an associate professor of computer science.
Speaker 2 (00:25):
He educates people on the terrifying truth of AI.
Speaker 1 (00:28):
And what we need to do to save humanity.
Speaker 2 (00:30):
In two years, the capability to replace most humans in
most occupations will come very quickly. And then in five
years we're looking at a world where we have levels
of unemployment we've never seen before. Not talking about ten percent,
but ninety nine percent. And that's without superintelligence, a
system smarter than all humans in all domains. So it
would be better than us at making new AI, but
(00:53):
it's worse than that. We don't know how to make
them safe. And yet we still have the smartest people
in the world competing to win the race to
superintelligence.
Speaker 1 (01:00):
But what do you make of people like Sam Altman's journey
with OpenAI?
Speaker 2 (01:04):
A decade ago we published guardrails for how to do
AI right. They violated every single one, and he's gambling
eight billion lives on getting richer and more powerful. So
I guess some people want to go to Mars; others
want to control the universe. But it doesn't matter
who builds it. The moment you switch to superintelligence,
we will most likely regret it terribly.
Speaker 1 (01:25):
And then by twenty forty five.
Speaker 2 (01:27):
Now this is where it gets interesting.
Speaker 1 (01:29):
Doctor Roman Yampolskiy, let's talk about simulation theory.
Speaker 2 (01:33):
I think we are in one, and there's a lot
of agreement on this, and this is what you should
be doing in it so they don't shut it down.
Speaker 1 (01:42):
First, I see messages all the time in the comment section
that some of you didn't realize you didn't subscribe. So
if you could do me a favor and double-check
if you're a subscriber to this channel, that would be
tremendously appreciated. It's the simple, free thing that
anybody who watches this show frequently can do to help
us here to keep everything going in this show on
the trajectory it's on. Please do double-check if you've subscribed,
and thank you so much because in a strange way,
(02:04):
you are, you're part of our history and you're on
this journey with us, and I appreciate you for that.
So thank you. Doctor Roman Yampolskiy, what is the mission
that you're currently on? Because it's quite clear to me
that you are on a bit of a mission, and
you've been on this mission for, I think, the best
part of two decades at least.
Speaker 2 (02:26):
I'm hoping to make sure that the superintelligence we're creating right
now doesn't kill everyone.
Speaker 1 (02:37):
Give me some give me some context on that statement,
because it's quite a shocking statement.
Speaker 2 (02:42):
Sure, so in the last decade, we actually figured out
how to make artificial intelligence better. Turns out, if you
add more compute, more data, it just kind of becomes smarter.
And so now the smartest people in the world, billions of dollars,
are all going to create the best possible superintelligence we can. Unfortunately,
(03:05):
while we know how to make these systems much more capable,
we don't know how to make them safe, how to
make sure they don't do something we will regret. And
that's the state of the art right now. Then we
look at prediction markets: how soon will we get
to advanced AI? The timelines are very short, a couple of years,
(03:28):
two, three years according to prediction markets, according to CEOs
of top labs. And at the same time, we don't
know how to make sure that the systems are aligned
with our preferences, so we are creating this alien intelligence.
If aliens were coming to Earth and you had three
(03:51):
years to prepare, you would be panicking right now. But
most people don't even realize this is happening.
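For context, the "add more compute, more data" observation he alludes to is usually described as an empirical scaling law: model loss falls smoothly as parameters and training data grow. One commonly cited parametric form (from Hoffmann et al., 2022), shown here only as an illustration with the fitted constants left symbolic, is roughly:

```latex
% Empirical scaling-law form; A, B, E, alpha, beta are constants fitted to experiments.
% N = number of model parameters, D = number of training tokens.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Lower loss here corresponds loosely to the "becomes smarter" he describes, which is why labs keep scaling both N and D.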
Speaker 1 (04:00):
So some of the counterarguments might be, well, these
are very, very smart people. These are very big companies
with lots of money. They have an obligation, a moral obligation,
but also just a legal obligation to make sure they
do no harm. So I'm sure it'll be fine.
Speaker 2 (04:14):
The only obligation they have is to make money for
the investors. That's the legal obligation they have. They have
no moral or ethical obligations. Also, according to them, they
don't know how to do it yet. The state of
the art answers are we'll figure it out when we
get there, or AI will help us control more advanced AI.
That's insane.
Speaker 1 (04:34):
In terms of probability, what do you think is the
probability that something goes catastrophically wrong.
Speaker 2 (04:40):
So nobody can tell you for sure what's going to happen.
But if you're not in charge, you're not controlling it,
you will not get outcomes you want. The space of
possibilities is almost infinite. The space of outcomes we will
like is tiny.
Speaker 1 (04:56):
And who are you and how long have you been
working on this?
Speaker 2 (05:01):
I'm a computer scientist by training. I have a PhD
in computer science and engineering. I probably started working on AI
safety, mildly defined as control of bots at the time,
fifteen years ago.
Speaker 1 (05:17):
Fifteen years ago, So you've been working on AI safety
before it was.
Speaker 2 (05:20):
Cool, before the term existed. I coined the term AI safety.
Speaker 1 (05:24):
So you're the founder of the term AI safety.
Speaker 2 (05:26):
The term, yes, not the field. There are other people
who did brilliant work before I got there.
Speaker 1 (05:31):
Why were you thinking about this fifteen years ago? Because
most people have only been talking about the term AI
safety for the last two or three years.
Speaker 2 (05:38):
It started very mildly, just as a security project. I
was looking at poker bots, and I realized that the
bots are getting better and better, and if you just
project this forward enough, they're going to get better than us, smarter,
more capable, and it happened. They are playing poker way
better than average players. But more generally, it will happen
(06:01):
with all other domains, all the other cyber resources. I
wanted to make sure AI is a technology which is
beneficial for everyone, so I started making AI safer.
Speaker 1 (06:14):
Was there a particular moment in your career where you thought,
oh my god.
Speaker 2 (06:19):
For the first five years at least, I was working on solving
this problem. I was convinced we can make this happen,
We can make safe AI, and that was the goal.
But the more I looked at it, the more I
realized every single component of that equation is not something
we can actually do. And the more you zoom in,
it's like a fractal. You go in and you find
ten more problems, and then one hundred more problems, and
(06:43):
all of them are not just difficult, they're impossible to solve.
There is no seminal work in this field where, like,
we solved this, we don't have to worry about this.
There are patches, there are little fixes we put in place,
and quickly people find ways to work around them and
jailbreak whatever safety mechanisms we have. So while progress
(07:05):
in AI capabilities is exponential, or maybe even hyperexponential,
progress in AI safety is linear or constant. The gap
is increasing.
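To make the "exponential versus linear" point concrete, here is a toy illustration with entirely invented numbers (it is not a measurement of anything): if capability doubles every year while safety work adds a fixed amount per year, the gap between them grows without bound.

```python
# Toy illustration only: the numbers are invented to show the shape of the argument,
# not to measure real capability or safety progress.
for year in range(11):
    capability = 2 ** year   # assumed exponential: doubles every year
    safety = 1 + year        # assumed linear: one "unit" of progress per year
    gap = capability - safety
    print(f"year {year:2d}: capability={capability:5d}  safety={safety:3d}  gap={gap:5d}")
```

Under these assumptions the gap is zero in year one and over a thousand by year ten, which is the sense in which he says the gap keeps increasing.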
Speaker 1 (07:16):
The gap between how capable.
Speaker 2 (07:20):
The systems are and how well we can control them,
predict what they're going to do, explain their decision making.
Speaker 1 (07:27):
I think this is quite an important point, because you
said that we're basically patching over the issues that we
find. So they're developing this core intelligence, and then to stop
it doing things, or to stop it showing some of
its unpredictability or its threats, the companies that are developing
this AI are programming in code over the top to say, okay,
(07:48):
don't swear, don't say that rude word, don't do that bad thing.
Speaker 2 (07:50):
Exactly, and you can look at other examples of
that, so HR manuals, right? We have those for humans. They're
general intelligences, but you want them to behave in a company,
so they have a policy: no sexual harassment, do this,
not that. But if you're smart enough, you always find
a workaround. So you're just pushing behavior into a different,
(08:11):
not yet restricted subdomain.
Speaker 1 (08:14):
We should probably define some terms here. So there's narrow intelligence,
which can play chess or whatever. There's artificial general intelligence,
which can operate across domains, and then superintelligence, which is
smarter than all humans in all domains. And where are we?
Speaker 2 (08:28):
So that's a very fuzzy boundary, right? We definitely
have many excellent narrow systems, no question about it, and
they are superintelligent in that narrow domain. So protein
folding is a problem which was solved using narrow AI,
and it's superior to all humans in that domain. In
terms of AGI, again, as I said, if we showed what
(08:50):
we have today to a scientist from twenty years ago,
they would be convinced we have full-blown AGI. We have
systems which can learn, they can perform in hundreds
of domains and be better than humans in many
of them. So you can argue we have a weak
version of AGI. Now, we don't have superintelligence yet; we
still have brilliant humans who are completely dominating AI, especially
(09:14):
in science and engineering. But that gap is closing so
fast you can see, especially in the domain of mathematics.
Three years ago, large language models couldn't do basic algebra.
Multiplying three-digit numbers was a challenge. Now they're helping
with mathematical proofs. They're winning mathematics Olympiad competitions, they are
(09:37):
working on solving Millennium Prize Problems, the hardest problems in mathematics. So
in three years we closed the gap from subhuman performance
to better than most mathematicians in the world. And we
see the same process happening in science and engineering.
Speaker 1 (09:54):
You have made a series of predictions, and they correspond
to a variety of different dates, and I have those
dates in front of me here. What is your prediction
for twenty twenty seven?
Speaker 2 (10:07):
We're probably looking at AGI, as predicted by prediction markets
and the top labs.
Speaker 1 (10:14):
So we'd have artificial general intelligence by twenty twenty seven.
And how would that make the world different to how
it is now?
Speaker 2 (10:22):
So if you have this concept of a drop-in employee,
you have free labor, physical and cognitive, trillions of dollars
of it. It makes no sense to hire humans for
most jobs if I can just get, you know, a
twenty-dollar subscription or a free model to do what
an employee does. First, anything done at a computer will be automated.
(10:43):
And next, I think humanoid robots are maybe five years behind.
So in five years, all the physical labor can also
be automated. So we're looking at a world where we
have levels of unemployment we've never seen before. Not talking
about ten percent unemployment, which is scary, but ninety nine
percent. All you have left is jobs where, for whatever
(11:04):
reason, you prefer that another human does it for you.
But anything else can be fully automated. It doesn't mean
it will be automated in practice. A lot of times
technology exists, but it's not deployed. Videophones were invented in
the seventies; nobody had them until iPhones came around. So
(11:25):
we may have a lot more time with jobs and
with a world which looks like this. But the capability to replace
most humans in most occupations will come very quickly.
Speaker 1 (11:38):
Okay, So let's try and drill down into that and
stress test it. So a podcaster like me, would you
need a podcaster like me?
Speaker 2 (11:52):
So let's look at what you do. You prepare, you
ask questions, you ask follow up questions, and you look
good on camera.
Speaker 1 (12:01):
Thank you so much.
Speaker 2 (12:02):
Let's see what we can do. A large language model today
can easily read everything I wrote and have a very solid understanding, better
than you; I assume you haven't read every single one of my books.
That thing would do it. It can train on every
podcast you ever did, so it knows exactly your style,
the types of questions you ask. It can also find
(12:23):
correspondences between what worked really well, like this type of
question really increased views, this type of topic was very promising.
So it can optimize, I think, better than you can,
because you don't have the data set. Of course, visual
simulation is trivial at this point.
Speaker 1 (12:39):
So it can make a video within seconds of me
sat here?
Speaker 2 (12:42):
And so we can generate videos of you interviewing anyone
on any topic very efficiently, and you just have to
get likeness approval whatever.
Speaker 1 (12:54):
Are there many jobs that you think would remain in
a world of AGI? If you're saying AGI is potentially
going to be here, deployed or not, by twenty
twenty seven, what kind? And okay, let's hold
off on the physical labor jobs for a second.
Are there any jobs that you think a human would
be able to do better in a world of AGI?
Speaker 2 (13:13):
Still? So that's the question I often ask people. In
a world with AGI, and I think almost immediately we'll
get superintelligence as a side effect, so the question really is,
in a world of superintelligence, which is defined as better
than all humans in all domains, what can you contribute?
And so you know better than anyone what it's like
(13:34):
to be you. You know what ice cream tastes like to you.
Can you get paid for that knowledge? Is someone interested
in that? Maybe not? Not a big market. There are
jobs where you want a human. Maybe you're rich and
you want a human accountant for whatever historic reasons. Old
(13:54):
people like traditional ways of doing things. Warren Buffett would
not switch to AI. He would use his human accountant.
But it's a tiny subset of the market. Today we
have products which are handmade in the US as opposed
to mass-produced in China, and some people pay more
to have those, but it's a small subset. It's almost
(14:16):
a fetish. There is no practical reason for it, and
I think anything you can do on a computer could
be automated using the technology.
Speaker 1 (14:27):
You must hear a lot of rebuttals to this when
you say it, because people experience a huge amount of
mental discomfort when they hear that their job, their career,
the thing they got a degree in, the thing they
invested one hundred thousand dollars into, is going to be
taken away from them. So the natural reaction for some
people is that cognitive dissonance: no, you're wrong, AI
can't be creative, it's not this, it's not that, it
(14:50):
will never be interested in my job, I'll be fine.
Because you hear these arguments all the time.
Speaker 2 (14:55):
Right, it's really funny. I ask people, and I ask
people in different occupations. I ask my Uber driver, are
you worried about self-driving cars? And they go, no,
no one can do what I do. I know the
streets of New York, I can navigate like no AI,
I'm safe. And it's true for any job. Professors are
saying this to me: oh, nobody can lecture like I do,
(15:16):
like this is so special. But you understand it's ridiculous.
We already have self-driving cars replacing drivers. It is
not even a question of whether it's possible; it's how
soon before you're fired.
Speaker 1 (15:30):
Yeah. I mean, I've just been in LA, yeah, yesterday,
and my car drives itself. So I get in the car,
put in where I want to go, and then I don't
touch the steering wheel or the brake pedals, and it
takes me from A to B, even if it's an
hour-long drive, without any intervention at all. I actually
still park it. But other than that, I'm not driving
the car at all. And then obviously in LA we
(15:50):
also have Waymo now, which means you order it
on your phone and it shows up with no driver
in it and takes you to where you want to go. Yeah,
so it's quite clear to see how that is potentially
a matter of time for those people, because we do
have some of those people listening to this conversation right
now whose occupation is driving, to offer them a... and
I think driving is the biggest occupation in the world.
(16:13):
If I'm correct, I'm pretty sure it is the biggest
occupation in the world, one of the top ones. Yeah,
what would you say to those people? What should they
be doing with their lives? Should they
be retraining in something? Or what time frame?
Speaker 2 (16:26):
So that's the paradigm shift here. Before, we always said
this job is going to be automated, retrain to do
this other job. But if I'm telling you that all
jobs will be automated, then there is no plan B.
You can't retrain. Look at computer science. Two years ago
we told people: learn to code. You're an artist,
(16:49):
you can make money, learn to code. Then we realized,
oh, AI kind of knows how to code and is getting better.
Become a prompt engineer. You can engineer prompts for AIs,
it's going to be a great job, get a four-year
degree in it. But then we're like, AI is
way better at designing prompts for other AIs than any human.
So that's gone. So I can't really tell you right now.
(17:11):
The hottest thing is designing AI agents for practical applications.
I guarantee you in a year or two it's going
to be done just as well. So I don't think
there is a "this occupation needs to learn to do this." Instead,
I think it's more like: we, as humanity, when we all lose our jobs...
Speaker 1 (17:30):
What do we do?
Speaker 2 (17:31):
What do we do financially? Who's paying for us? And
what do we do in terms of meaning? What do
I do with my extra sixty eighty hours a week?
Speaker 1 (17:42):
You've thought around this corner, haven't you?
Speaker 2 (17:45):
A little bit?
Speaker 1 (17:46):
What is around that corner in your view?
Speaker 2 (17:49):
So the economic part seems easy. If you create a
lot of free labor, you have a lot of free
wealth, abundance. Things which are right now not very affordable
become dirt cheap, and so you can provide for everyone's basic needs.
Some people say you can provide beyond basic needs. You
can provide very good existence for everyone. The hard problem
(18:09):
is what do you do with all that free time.
For a lot of people, their jobs are what gives
them meaning in their lives, so they would be kind
of lost. We see it with people who retire or
do early retirement, and for so many people who hate
their jobs, they'll be very happy not working. But now
you have people who are chilling all day. What happens
(18:32):
to society? How does that impact crime rate, pregnancy rate,
all sorts of issues nobody thinks about. Governments don't have
programs prepared to deal with ninety nine percent unemployment.
Speaker 1 (18:47):
What do you think that world looks like?
Speaker 2 (18:51):
Again, I think a very important part to understand here
is the unpredictability of it. We cannot predict what a
smarter-than-us system will do. And the point when
we get to that is often called singularity, by analogy
with physical singularity: you cannot see beyond the event horizon.
(19:11):
I can tell you what I think might happen, but
that's my prediction. It is not what actually is going
to happen, because I just don't have cognitive ability to
predict a much smarter agent impacting this world. When you
read science fiction, there is never a superintelligence in
it actually doing anything, because nobody can write believable science
(19:33):
fiction at that level. They either ban AI, like Dune,
because this way you can avoid writing about it. Or
it's like Star Wars: you have those really dumb bots,
but nothing superintelligent, ever, because by definition, you cannot predict at that level.
Speaker 1 (19:48):
Because, by definition of it being superintelligent, it
will make its own mind up.
Speaker 2 (19:53):
By definition, if it was something you could predict, you
would be operating at the same level of intelligence, violating
our assumption that it is smarter than you. If I'm
playing chess with a superintelligence and I can predict every move,
I'm playing at that level.
Speaker 1 (20:07):
It's kind of like my French bulldog trying to predict
exactly what I'm thinking and what I'm going to do.
Speaker 2 (20:12):
That's a good cognitive gap. And it's not just that he
can predict you going to work and coming back; he
cannot understand why you're doing a podcast. That is
something completely outside of his model of the world.
Speaker 1 (20:25):
Yeah, he doesn't even know that I go to work.
He just sees that I leave the house and doesn't
know that's where I go to buy food for him. What's the
most persuasive argument against your own perspective here?
Speaker 2 (20:36):
That we will not have unemployment due to advanced technology?
Speaker 1 (20:41):
That there won't be this French bulldog human gap in
understanding and I guess like power and control.
Speaker 2 (20:53):
So some people think that we can enhance human minds,
either through combination with hardware, something like Neuralink, or
through genetic re-engineering, to where we make smarter humans.
It may give us a little more intelligence. I don't
think we're competitive in biological form with silicon form.
(21:14):
A silicon substrate is much more capable for intelligence: it's faster,
it's more resilient, more energy efficient in many ways.
Speaker 1 (21:22):
Which is what computers are made out of, versus the brain.
Speaker 2 (21:25):
Yeah, so I don't think we can keep up just
by improving our biology. Some people think maybe, and this
is very speculative, we can upload our minds into computers:
scan your brain, make a kind of copy of your brain,
and have a simulation running on a computer, and you
can speed it up, give it more capabilities. But to me,
(21:46):
that feels like you no longer exist. We just created
software by different means. And now you have AI based
on biology and AI based on some other forms of training.
You can have evolutionary algorithms. You can have many paths
to reach AGI, but at the end none of
them are humans.
Speaker 1 (22:04):
I have another date here, which is twenty thirty. What's
your prediction for twenty thirty? What will the world look like?
Speaker 2 (22:15):
So we probably will have humanoid robots with enough flexibility
and dexterity to compete with humans in, you know, all domains,
including plumbers. We can make artificial plumbers.
Speaker 1 (22:28):
Not the plumbers. Wow, that felt like the last
bastion of human employment. So twenty thirty, five years from
now, humanoid robots. So many of the companies, the leading companies,
including Tesla, are developing humanoid robots at light speed, and
they're getting increasingly more effective. And these humanoid robots will
be able to move through physical space, you know,
(22:51):
make an omelet, do anything humans can do. But obviously
they'll be connected to AI as well, so they can think, talk.
Speaker 2 (23:00):
They're controlled by AI. They're always connected to the network,
so they're already dominating in many ways.
Speaker 1 (23:09):
Our world will look remarkably different when humanoid robots are
functional and effective, because that's really when, you know, you
have something like the combination of intelligence and physical ability.
It really doesn't leave much, does it, for us
(23:29):
human beings?
Speaker 2 (23:31):
Not much. So today, if you have intelligence, through the Internet
you can hire humans to do your bidding for you.
You can pay them in bitcoin. So you can have bodies,
just not directly control them. So it's not a huge
game changer to add direct control of physical bodies. Intelligence
is where it's at. The important component is definitely higher
(23:52):
ability to optimize, to solve problems, to find patterns people
can't see.
Speaker 1 (23:58):
And then by twenty forty five, which is twenty years from now,
I guess the world looks even more different.
If it's still around. If it's still around.
Speaker 2 (24:09):
Ray Kurzweil predicts that that's the year for the singularity.
That's the year where progress becomes so fast, with this
AI doing science and engineering work making improvements so quickly,
that we cannot keep up anymore. That's the definition of singularity:
the point beyond which we cannot see, understand, predict.
Speaker 1 (24:29):
See, understand, and predict the intelligence itself, or what?
Speaker 2 (24:33):
What is happening in the world, the technology that is being developed.
So right now, if I have an iPhone, I can
look forward to a new one coming out next year
and I'll understand it has a slightly better camera. Imagine now
this process of researching and developing this phone is automated.
It happens every six months, every three months, every month, week, day, hour,
minute, second. You cannot keep up with thirty iterations of
(24:58):
the iPhone in one day. You don't understand what capabilities
it has, what the proper controls are. It just escapes you.
Right now, it's hard for any researcher in AI to
keep up with the state of the art. While I
was doing this interview with you, a new model came out,
and I'll no longer know what the state of the
art is. Every day, as a percentage of total knowledge,
(25:21):
I get dumber. I may still know more because I
keep reading, but as a percentage of overall knowledge, we're
all getting dumber. And then you take it to extreme values,
you have zero knowledge, zero understanding of the world around you.
Speaker 1 (25:37):
Some of the arguments against this eventuality are that when
you look at other technologies, like the Industrial Revolution, people
just found new ways to work and new careers that
we could never have imagined at the time were created.
How do you respond to that in a world of superintelligence.
Speaker 2 (25:56):
It's a paradigm shift. We always had tools, new tools
which allowed some job to be done more efficiently. So
instead of having ten workers, you could have two workers,
and eight workers had to find a new job, and
there was another job; now you can supervise those workers
or do something cool. But if you're creating a meta-invention,
you're inventing intelligence, you're inventing a worker, an agent. Then
(26:20):
you can apply that agent to the new job too. There
is not a job which cannot be automated. That never
happened before. All the inventions we previously had were kind
of a tool for doing something. So we invented fire,
huge game changer, but that's it. It stops with fire.
We invented the wheel, same idea, huge implications, but the wheel
(26:43):
itself is not an inventor. Here, we're inventing a replacement
for the human mind, a new inventor capable of making new inventions.
It's the last invention we ever have to make. At
that point, it takes over and the process of doing
science research, even ethics research, morals, all that is automated
(27:04):
at that point.
Speaker 1 (27:06):
Do you sleep well at night?
Speaker 2 (27:07):
Really well?
Speaker 1 (27:09):
Even though you've spent the last fifteen, twenty years of
your life working on AI safety, and it's suddenly among
us in a way that I don't think anyone could
have predicted five years ago. When I say among us,
I really mean that the amount of funding and talent
that is now focused on reaching superintelligence faster has made
it feel more inevitable and sooner than any of
(27:33):
us could have possibly imagined.
Speaker 2 (27:35):
We, as humans, have this built-in bias about not
thinking about really bad outcomes and things we cannot prevent.
So all of us are dying, your kids are dying,
your parents are dying. Everyone's dying. But you still sleep well,
you still go on with your day. Even ninety-five-year-olds
are still doing games and playing golf and whatnot,
(27:55):
because we have this ability to not think about the
worst outcomes, especially if we cannot actually modify the outcome.
So that's the same infrastructure being used for this. Yeah,
there is a humanity-level death-like event. We happen to be
close to it, probably. But unless I can do something
(28:18):
about it, I can just keep enjoying my life. In fact,
maybe knowing that you have a limited amount of time left
gives you more reason to have a better life. You
cannot waste any of it.
Speaker 1 (28:30):
And that's the survival trait of evolution, I guess, because
those of my ancestors that spent all their time worrying
wouldn't have spent enough time having babies and hunting to survive.
Speaker 2 (28:39):
Suicidal ideation. People who really start thinking about how
horrible the world is usually escape pretty soon.
Speaker 1 (28:52):
You co-authored this paper analyzing the
key arguments people make against the importance of AI safety,
and one of the arguments in there is that there are other
things that are of bigger importance right now. It might
be world wars, it could be nuclear containment, it could
be other things. There are other things that governments and
podcasters like me should be talking about that are more important.
(29:12):
What's your rebuttal to that argument?
Speaker 2 (29:15):
So, superintelligence is a meta-solution. If we get superintelligence
right, it will help us with climate change, it
will help us with wars. It can solve all the
other existential risks. If we don't get it right, it
dominates. If climate change will take one hundred years to
boil us alive and superintelligence kills everyone in five, I
(29:37):
don't have to worry about climate change. So either way,
either it solves it for me or it's not an issue.
Speaker 1 (29:44):
So you think it's the most important thing to be
working on.
Speaker 2 (29:47):
Without question, there is nothing more important than getting this right.
And I know everyone says that. You take any class,
you take an English professor's class, and he tells you
this is the most important class you'll ever take.
But you can see the meta-level difference with this one.
Speaker 1 (30:07):
Another argument in that paper is that we will be
in control and that the danger is not AI. This
particular argument asserts that AI is just a tool,
humans are the real actors that present danger, and we
can always maintain control by simply turning it off. Can't
we just pull the plug out? I see that every
time we have a conversation on the show about AI,
someone says, can't we just unplug it? Yeah?
Speaker 2 (30:28):
I get those comments on every podcast I make, and
I always want to, like, get in touch with the
guy and say, this is brilliant. I never thought of it.
We're going to write a paper together and get a
Nobel Prize for it. This is like, let's do it,
because it's so silly. Like, can you turn off a virus?
You have a computer virus, you don't like it, turn
it off? How about bitcoin? Turn off the bitcoin network? Go ahead,
(30:49):
I'll wait. This is silly. These are distributed systems. You
cannot turn them off. And on top of it, they're
smarter than you. They made multiple backups, they predicted what
you're going to do. They will turn you off before
you can turn them off. The idea that we will
be in control applies only to pre-superintelligence levels, basically
what we have today. Today, humans with AI tools are dangerous.
(31:12):
They can be hackers, malevolent actors, absolutely, but the moment
superintelligence becomes smarter, it dominates. They're no longer the important part
of that equation. It is the higher intelligence I'm concerned about,
not the human who may add an additional malevolent payload,
but at the end still doesn't control it.
Speaker 1 (31:32):
It is tempting to follow you to the next argument that
I saw in that paper, which basically says, listen, this
is inevitable, so there's no point fighting against it, because
there's really no hope here. So we should probably give
up even trying and have faith that it will work
itself out. Because everything you've said sounds really inevitable. And
(31:54):
with China working on it, Putin's got some secret division,
I'm sure Iran are doing some bits and pieces, every European
country is trying to get ahead on AI, the United
States is leading the way. So it's inevitable, so we
probably should just have faith and pray.
Speaker 2 (32:11):
Praying is always good, but incentives matter if you are
looking at what drives these people. So yes, money is important.
There is a lot of money in that space,
and so everyone's trying to be there and develop this technology.
But if they truly understand the argument, they understand that
if you will be dead, no amount of money will be
(32:32):
useful to you. That incentive switches. They would want to
not be dead. A lot of them are young people,
rich people, they have their whole lives ahead of them.
I think they would be better off not building advanced superintelligence,
concentrating on narrow AI tools for solving specific problems. My
company cures breast cancer, that's all. We make billions of dollars.
(32:54):
Everyone's happy, everyone benefits. It's a win. We are still
in control today. It's not over. Until it's over,
we can decide not to build general superintelligences.
Speaker 1 (33:07):
I mean, the United States might be able to conjure
up enough enthusiasm for that, but if the United States
doesn't build general superintelligences, then China are going to have
the big advantage.
Speaker 2 (33:18):
Right. So, right now, at these levels, whoever has more
advanced AI has a more advanced military, no question. We see
it with existing conflicts. But the moment you switch to
superintelligence, uncontrolled superintelligence, it doesn't matter who builds it,
us or them. And if they understand this argument, they
also would not build it. It's mutually assured destruction
(33:40):
on both ends.
Speaker 1 (33:42):
Is this technology different than say, nuclear weapons, which require
a huge amount of investment and you have to like
enrich the uranium and you need billions of dollars potentially
to even build a nuclear weapon. But it feels like
this technology is much cheaper to get to super intelligence potentially,
or at least it will become cheaper. I wonder if
(34:04):
it's possible that some guy, some startup is going to
be able to build superintelligence in you know, a couple
of years without the need of you know, billions of
dollars of compute or electricity power.
Speaker 2 (34:16):
That's a great point. So every year it becomes cheaper
and cheaper to train a sufficiently large model. If today it
would take a trillion dollars to build superintelligence, next
year it could be one hundred billion, and so on.
At some point a guy with a laptop could do it.
But you don't want to wait four years for it to become affordable.
So that's why so much money is pouring in.
(34:36):
Somebody wants to get there this year and lock in
all the winnings, the light cone level reward. So in that
regard, they're both very expensive projects, Manhattan-level
Speaker 1 (34:48):
Projects, which was the nuclear bomb project.
Speaker 2 (34:51):
The difference between the two technologies is that nuclear weapons
are still tools. Some dictator, some country, someone has to
decide to use them, deploy them, whereas superintelligence is not
a tool at all. It's an agent. It makes its own decisions
and no one is controlling it. I cannot take out
this dictator and now the superintelligence is safe. So that's a
(35:14):
fundamental difference to me.
Speaker 1 (35:17):
But if you're saying that it is going to get
incrementally cheaper, I think it's Moore's law, isn't it, that
the technology gets cheaper. Then there is a future where
some guy with his laptop is going to be able
to create superintelligence without oversight or regulation or employees, etc.
Speaker 2 (35:33):
Yeah, that's why a lot of people are suggesting we need
to build something like a surveillance planet, where you're monitoring
who's doing what and you're trying to prevent people from
doing it. Do I think it's feasible? No. At some
point it becomes so affordable and so trivial that it
just will happen. But at this point we're trying to
(35:54):
get more time. We don't want it to happen in
five years. We want it to happen in fifty years.
Speaker 1 (36:01):
I mean, that's not very hopeful.
Speaker 2 (36:03):
Depends on how old you are.
Speaker 1 (36:04):
Depends on how old you are. I mean, if you're
saying that you believe in the future people will be
able to make superintelligence without the resources that are
required today, then it is just a matter of time.
Speaker 2 (36:18):
Yeah. But the same will be true for many other technologies.
We're getting much better at synthetic biology, where today someone
with a bachelor's degree in biology can probably create a
new virus. This will also become cheaper, and other technologies like that.
So we are approaching a point where it's very difficult
to make sure no technological breakthrough is the last one.
(36:42):
So essentially, in many directions, we have this pattern of
making it easier, in terms of resources, in terms of
intelligence, to destroy the world. If you look at, I
don't know, five hundred years ago, the worst dictator with
all the resources could kill a couple million people; you
couldn't destroy the world. Now we have nuclear weapons, we
(37:03):
can blow up the whole planet multiple times over, synthetic biology.
So with COVID you can very easily create a combination
virus which impacts billions of people, and all of those
things are becoming easier to do.
Speaker 1 (37:18):
In the near term, you talk about extinction being a
real risk, human extinction being a real risk. Of all
the pathways to human extinction that you think are most likely,
what is the leading pathway? Because I know you talk
about there being some issue pre-deployment of these AI tools,
like, you know, someone makes a mistake when they're designing
(37:39):
a model, or other issues post-deployment. When I say
post-deployment, I mean once a ChatGPT or something, an
agent, is released into the world and someone hacks into it
and changes it, reprogramming it to be malicious.
Of all these potential paths to human extinction, which one
do you think is the highest probability?
Speaker 2 (37:59):
So I can only talk about the ones I can
predict myself. So I can predict even before we get
to superintelligence, someone will create a very advanced biological tool,
create a novel virus, and that virus gets everyone or
most everyone. I can envision it, I can understand the pathway.
I can say that.
Speaker 1 (38:17):
So just to zoom in on that, that would be
using an AI to make a virus and then releasing it.
And would that be intentional?
Speaker 2 (38:26):
There are a lot of psychopaths, a lot of terrorists,
a lot of doomsday cults we've seen historically. Again, they
try to kill as many people as they can. They
usually fail. They kill hundreds, thousands, but if they
get technology to kill millions or billions, they would do
that gladly. The point I'm trying to emphasize is that
(38:47):
it doesn't matter what I can come up with. I
am not a malevolent actor you're trying to defeat here.
It's the superintelligence which can come up with completely novel
ways of doing it. Again, you brought up the example of
your dog. Your dog cannot understand all the ways you
can take it out. It can maybe think you'll bite
(39:08):
it to death or something, but that's all. Whereas you
have an infinite supply of resources. So if I asked your
dog exactly how you're going to take it out, it
would not give you a meaningful answer. It can talk
about biting. And it's the same with what we know: we know viruses,
we've experienced viruses, we can talk about them. But what
(39:31):
an AI system capable of doing novel physics research can
come up with is beyond me.
Speaker 1 (39:37):
One of the things that I think most people don't
understand is how little we understand about how these AIs
are actually working. Because one would assume, you know, with computers,
we kind of understand how a computer works. We know
that it's doing this and then this, and it's running
on code. But from reading your work, you described it
as being a black box. In the context
(39:58):
of something like ChatGPT or an AI, you're
telling me that the people that have built that tool
don't actually know what's going on inside it.
Speaker 2 (40:07):
That's exactly right. So even people making those systems have
to run experiments on their product to learn what it's
capable of. So they train it by giving it a lot
of data, let's say all of Internet text. They run
it on a lot of computers to learn patterns in
that text, and then we start experimenting with that model:
(40:27):
do you speak French, or can you do mathematics,
or are you lying to me now? And so maybe
it takes a year to train it, and then six
months to get some fundamentals about what it is capable of,
some safety overhead. But we still discover new capabilities in
old models. If you ask the question in a different way,
(40:48):
it becomes smarter. So it's no longer engineering how it
was the first fifty years, where someone was a knowledge engineer
programming an expert system AI to do specific things. It's
a science. We are creating this artifact, growing it. It's
like an alien plant, and then we study it to see
(41:10):
what it's doing. Just like with plants, we don't have
one hundred percent accurate knowledge of biology. We don't have
full knowledge here. We kind of know some patterns. We
know, okay, if we add more compute, it gets smarter
most of the time. But nobody can tell you precisely
what the outcome is going to be given a set
of inputs.
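As a concrete picture of what "running experiments on their product" can look like, here is a minimal sketch. Everything in it is hypothetical: ask_model is a stand-in for whatever interface a lab actually uses, and the probes are toy examples. The point is that capabilities are discovered behaviorally, by querying the trained artifact from the outside, rather than read off its weights.

```python
# Hypothetical sketch of black-box capability probing; not any lab's real evaluation suite.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the trained model; a real system would return generated text."""
    return ""

# Behavioral probes: each capability is tested by prompting and checking the answer.
probes = {
    "speaks_french": ("Traduisez en anglais : 'bonjour'", "hello"),
    "basic_arithmetic": ("What is 17 * 23? Answer with the number only.", "391"),
}

for name, (prompt, expected) in probes.items():
    answer = ask_model(prompt)
    verdict = "PASS" if expected.lower() in answer.lower() else "unknown/fail"
    print(f"{name}: {verdict}")
```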
Speaker 1 (41:31):
I've watched so many entrepreneurs treat sales like a performance problem,
when it's often down to visibility. Because when you can't
see what's happening in your pipeline, what stage each conversation
is at, what's stalled, what's moving, you can't improve anything
and you can't close the deal. Our sponsor, Pipedrive,
is the number one CRM tool for small to medium businesses,
(41:51):
not just a contact list, but an actual system that
shows your entire sales process end to end, everything that's live,
what's lagging, and the steps you need to take next,
so all of your teams can move smarter and faster. Teams
using Pipedrive are on average closing three times more
deals than those that aren't. It's the first CRM made
by salespeople, for salespeople, that over one hundred thousand companies
(42:15):
around the world rely on, including my team, who absolutely
love it. Give Pipedrive a try today by visiting pipedrive
dot com slash CEO and you can get up
and running in a couple of minutes with no payment needed.
And if you use this link you'll get a thirty-day
free trial. What do you make of OpenAI
and Sam Altman and what they're doing? And obviously you're
(42:37):
aware that one of the co-founders, was it Ilya? Yeah,
Ilya left and he started a new company called...
Speaker 2 (42:45):
Safe Superintelligence. Of course, if AI safety wasn't challenging enough,
he decided to just jump right to the harder problem.
Speaker 1 (42:55):
As an onlooker, when you see that people are leaving
OpenAI to start safe superintelligence companies, what was
your read on that situation?
Speaker 2 (43:06):
So a lot of people who worked with Sam said
that maybe he's not the most direct person in terms
of being honest with them, and they had concerns about
his views on safety. That's part of it. So they
wanted more control, they wanted more concentration on safety. But
also it seems that anyone who leaves that company and
(43:29):
starts a new one gets a twenty billion dollar valuation
just for having started it. You don't have a product,
you don't have customers, but if you want to make
many billions of dollars, just do that. So it seems
like a very rational thing to do for anyone who can.
So I'm not surprised that there is a lot of
attrition. Meeting him in person, he's super nice, very smart,
(43:53):
absolutely perfect public interface. You see him testify in the Senate,
he says the right thing to the senators. You see
him talk to the investors, they get the right message.
But if you look at what people who know him
personally are saying, he's probably not the right person to
be controlling a project of that impact. Why? He puts
(44:20):
safety second.
Speaker 1 (44:23):
Second to.
Speaker 2 (44:25):
Winning this race to superintelligence. Being the guy who created it,
guarding and controlling the light cone of the universe.
Speaker 1 (44:32):
Do you suspect that's what he's driven by,
the legacy of being an impactful person that
did a remarkable thing, versus the consequences that that might
have for society? Because it's interesting that his other startup
is Worldcoin, which is basically a platform to create
universal basic income, i.e. a platform to give us income
(44:55):
in a world where people don't have jobs anymore. So on
one hand, you're creating an AI company; on the other hand,
you're creating a company that is preparing for people not
to have employment.
Speaker 2 (45:05):
It also has other properties. It keeps track of everyone's biometrics.
It puts you in charge of the world's economy, the world's wealth.
They're retaining a large portion of Worldcoins, so I
think it's kind of a very reasonable part to integrate with
(45:26):
world dominance. If you have a superintelligent system and you
control money, you're doing well.
Speaker 1 (45:36):
Why would someone want world dominance.
Speaker 2 (45:40):
People have different levels of ambition. When you are a
very young person with billions of dollars and fame, you start
looking for more ambitious projects. Some people want to go
to Mars; others want to control the light cone of the universe.
Speaker 1 (45:54):
What did you say, the light cone of the universe?
Speaker 2 (45:56):
Light cone? So every part of a universe light can
reach from this point, meaning anything accessible you want to
grab and bring into your control.
Speaker 1 (46:05):
You think Sam Altman wants to control every part of
the universe.
Speaker 2 (46:10):
I suspect he might, yes. It doesn't mean he doesn't want,
as a side effect of it, a very beneficial technology
which makes all the humans happy. Happy humans are good
for control.
Speaker 1 (46:24):
If you had to guess what the world looks like
in twenty one hundred, if you had to guess.
Speaker 2 (46:35):
It's either free of human existence or it's completely not
comprehensible to someone like us. It's one of those extremes.
Speaker 1 (46:46):
So there's either no humans.
Speaker 2 (46:48):
It's basically the world is destroyed, or it's so different
that I cannot envision those predictions.
Speaker 1 (46:56):
What can be done to turn this ship to a
more certain positive outcome at this point? Is there still
things that we can do? Or is it too late?
Speaker 2 (47:07):
So I believe in personal self-interest. If people realize
that doing this thing is really bad for them personally,
they will not do it. So our job is to
convince everyone with any power in this space, creating this
technology, working for those companies, that they are doing something very
bad for them. Now, just forget the other eight billion people
(47:27):
you're experimenting on with no permission, no consent. You will
not be happy with the outcome. If we can get
everyone to understand that that's the default. And it's not just
me saying it. You had Geoffrey Hinton, the Nobel Prize winner,
a founder of the whole machine learning space. He says the
same thing. Bengio, dozens of other top scholars. We had
(47:48):
a statement about the dangers of AI signed by thousands
of scholars, computer scientists. This is basically what we think
right now, and we need to make it universal.
No one should disagree with this, and then we may
actually make good decisions about what technologies to build.
It doesn't guarantee long term safety for humanity, but it
(48:10):
means we're not trying to get there as soon as
possible to the worst possible outcome.
Speaker 1 (48:14):
And are you hopeful that that's even possible?
Speaker 2 (48:19):
I want to try. We have no choice but to try.
Speaker 1 (48:22):
And what would need to happen and who would need
to act? What is it? Government legislation? Is it?
Speaker 2 (48:28):
Unfortunately, I don't think making it illegal is sufficient. There
are different jurisdictions, there are, you know, loopholes. And what
are you going to do if somebody does it? You're
going to fine them for destroying humanity, like very steep
fines for it? Like, what are you going to do?
It's not enforceable. If they do create it, the novel
superintelligence is in charge. So the judicial system we have
is not impactful. And all the punishments we have are
(48:51):
designed for punishing humans: prisons, capital punishment. It doesn't apply to AI.
Speaker 1 (48:56):
You know the problem that happens when I have these conversations?
I never feel like I walk away with
hope that something's gonna go well. And what I mean
by that is I never feel like I walk away
with some kind of clear set of actions that
can course-correct what might happen here. So what should
I do? What should the person sat at
(49:18):
home listening to this do?
Speaker 2 (49:19):
You talk to a lot of people who
are building this technology. Ask them precisely to explain some
of those things I claim to be impossible: how they
solved it, or how they're going to solve it before they get
to where they're going.
Speaker 1 (49:34):
Do you know, I don't think Sam Altman wants to
talk to me.
Speaker 2 (49:37):
I don't know. He seems to go on a lot
of podcasts.
Speaker 1 (49:39):
Maybe he does; he doesn't want to go on mine. I wonder
why that is. I'd love to speak to him, but I don't
think he wants to. I don't
think he wants me to interview him.
Speaker 2 (49:55):
Have an open challenge. Maybe money is not the incentive, but
whatever attracts people like that. Whoever can convince you that
it's possible to control and make safe superintelligence gets the prize.
They come on your show and prove their case. Anyone.
If no one claims the prize or even accepts the
challenge after a few years, maybe we don't have anyone
(50:16):
with solutions. We have companies valued, again, at billions and
billions of dollars working on safe superintelligence. We haven't seen
their output yet.
Speaker 1 (50:27):
Mm. Yeah, I'd like to speak to Ilya as well,
because I know he's working on safe superintelligence here.
Speaker 2 (50:34):
Notice a pattern, too: if you look at the history
of AI safety organizations or departments within companies, they usually
start well, very ambitious, and then they fail and disappear.
So OpenAI had a Superalignment team. The day they
announced it, I think they said they're going to solve
(50:55):
it in four years. Like half a year later, they
canceled the team. There are dozens of similar examples.
Creating perfect safety for superintelligence, perpetual safety as it
keeps improving, modifying, interacting with people: you're never going to
get there. It's impossible. There's a big difference between difficult
(51:16):
problems in computer science, NP-complete problems, and impossible problems.
And I think control, indefinite control of superintelligence, is
such a problem.
Speaker 1 (51:26):
So what's the point of trying then, if it's impossible?
Speaker 2 (51:27):
Well, I'm trying to prove that it is.
Specifically, once we establish something is impossible, fewer people
will waste their time claiming they can do it and
looking for money. So many people go, give
me a billion dollars and two years and I'll
Speaker 1 (51:41):
Solve it for you.
Speaker 2 (51:42):
Well I don't think you will, but.
Speaker 1 (51:45):
People aren't going to stop striving towards it. So if
there are no attempts to make it safe, and there are more
people increasingly striving towards it, then it's inevitable.
Speaker 2 (51:54):
But it changes what we do. If we know that
it's impossible to make it right, to make it safe,
then this direct path of just build it as soon
as you can becomes a suicide mission. Hopefully fewer people will
pursue that. They may go in other directions. Like again,
I'm a scientist and an engineer. I love AI, I
love technology. I use it all the time. Build useful tools,
(52:15):
stop building agents. Build narrow superintelligence, not a general one.
I'm not saying you shouldn't make billions of dollars. I
love billions of dollars, but don't kill everyone, yourself included.
Speaker 1 (52:33):
They don't think they're going to, though.
Speaker 2 (52:35):
Then tell us why. I hear things about intuition, and
I hear things about we'll solve it later. Tell me
specifically, in scientific terms: publish a peer-reviewed paper explaining
how you're going to control superintelligence.
Speaker 1 (52:48):
Yeah, it's strange. It's strange to even bother if there was
even a one percent chance of human extinction. It's strange
to do something like that. If someone told me
there's a one percent chance that if I got in a car
I might not be alive, I would not get in the car. If you told
me there was a one percent chance that if I
drank whatever liquid is in this cup right now, I
might die, I would not drink the liquid, even if
(53:10):
there was a billion dollars if I survived. So the
ninety-nine percent chance is I get a billion dollars, the one
percent is I die. I wouldn't drink it. I wouldn't
take the chance.
Speaker 2 (53:20):
It's worse than that. Not just you die; everyone dies. Yeah.
No one would let you drink it at any odds. That's
for us to decide. You don't get to make that choice.
Speaker 1 (53:30):
For us.
Speaker 2 (53:31):
To get consent from human subjects, you need them to
comprehend what they are consenting to. If those systems are unexplainable, unpredictable,
how can they consent? They don't know what they are
consenting to, so it's impossible to get consent by definition.
So this experiment can never be run ethically. By definition,
(53:51):
they are doing unethical experimentation on human subjects.
Speaker 1 (53:55):
Do you think people should be protesting?
Speaker 2 (53:58):
There are people protesting. There is Stop AI, there is PauseAI.
They block offices of OpenAI. They do it weekly, monthly,
quite a few actions, and they're recruiting new people.
Speaker 1 (54:08):
Do you think more people should be protesting? Do you
think that's an effective solution.
Speaker 2 (54:12):
If you can get it to a large enough scale,
to where the majority of the population is participating, it would be impactful.
I don't know if they can scale from current numbers
to that, but I support everyone trying everything peacefully and legally.
Speaker 1 (54:27):
And for the person listening at home, what
should they be doing? Because
they don't want to feel powerless. None of us want
to feel powerless.
Speaker 2 (54:35):
So it depends on what scale we're asking about, what time scale.
If we're asking, like, this year your kid goes to college:
what major to pick? Should they go to college at all?
Should you switch jobs? Should you go into certain industries?
Those are questions we can answer. We can talk about the immediate future,
what you should do in five years, with this being
(54:55):
created. For an average person, not much, just like they
can't influence World War Three, nuclear holocaust or anything like that.
It's not something anyone's going to ask them about. Today,
if you want to be a part of this movement, yeah,
join PauseAI, join Stop AI. Those organizations are currently trying to
build up momentum to bring democratic powers to influence those individuals.
Speaker 1 (55:23):
So in the near term, not a huge amount. I'm just
wondering if there are any interesting strategies in the near term.
Should I be thinking differently about my family? I
mean, you've got kids, right? You've got three kids, I
don't know how old. Yeah, how are you thinking about parenting
in this world that you see around the corner? How
are you thinking about what to say to them, the
advice to give them, what they should be learning?
Speaker 2 (55:44):
So there is general advice, outside of this domain, that
you should live every day as if it's your last.
It's good advice no matter what. If you have
three years left or thirty years left, you lived your
best life. So do not do things you hate
for too long; do interesting things, do impactful things. If
(56:07):
you can do all that while helping people, do that.
Speaker 1 (56:10):
Simulation theory is an interesting sort of adjacent subject here,
because as computers begin to accelerate and get more intelligent
and we're able to, you know, do things with AI
that we could never have imagined, in terms of, like,
imagine the worlds that we could create with virtual reality.
I think it was Google that recently released, what was
(56:30):
it called, like the AI worlds.
Speaker 2 (56:35):
Where you take a picture and it generates
a whole world.
Speaker 1 (56:38):
Yeah, and you can move through the world. I'll put
it on the screen for people to see. But Google
have released this technology which allows you, I think with
a simple prompt, to make a three-dimensional world
that you can then navigate through. And that world
has memory. So in the world, if you paint
on a wall and turn away, then look back, the wall
is still painted. It's persistent. And when I saw that, Jesus, bloody hell.
(56:59):
This is like the foothills of being able
to create a simulation that's indistinguishable from everything I see here.
Speaker 2 (57:07):
Right. That's why I think we are in one. That's
exactly the reason: AI is getting to the level of
creating human agents, human-level simulations, and virtual reality is
getting to the level of being indistinguishable from ours.
Speaker 1 (57:20):
So you think this is a simulation.
Speaker 2 (57:22):
I'm pretty sure we are in a simulation. Yeah.
Speaker 1 (57:26):
For someone that isn't familiar with the simulation arguments, what
are the first principles here that convince you that we
are currently living in a simulation?
Speaker 2 (57:34):
So you need certain technologies to make it happen. If
you believe we can create human-level AI, yeah, and
you believe we can create virtual reality as good as
this in terms of resolution, haptics, whatever properties it has,
then I commit right now: the moment this is affordable,
I'm going to run billions of simulations of this exact moment,
(57:55):
making sure you're statistically in one.
Speaker 1 (58:00):
Say that last part again, you can run. You're going
to run.
Speaker 2 (58:02):
I'm going to commit right now: once it's very affordable, it's
like ten bucks a month to run it, I'm going
to run a billion simulations of this interview. Why? Because
statistically that means you are in one right now. The
chance of you being in the real one is one
in a billion.
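A quick aside on the arithmetic behind that claim: if there are N indistinguishable simulated copies of a moment plus one original, and you have no way of telling which observer you are, the chance of being the original is 1/(N+1). A minimal sketch, with the billion-simulation figure taken from the conversation and everything else purely illustrative:

```python
# Sketch of the self-location argument: N indistinguishable simulated
# copies of this moment plus one original. If you cannot tell which
# observer you are, each copy is equally likely, so the chance that
# you are the original is 1 / (N + 1). Figures are illustrative only.
def probability_of_being_original(num_simulations: int) -> float:
    return 1 / (num_simulations + 1)

print(probability_of_being_original(1_000_000_000))  # ~1e-09, about one in a billion
```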
Speaker 1 (58:19):
Okay, So to make sure I'm clear on this.
Speaker 2 (58:22):
It's a retroactive placement.
Speaker 1 (58:24):
Yeah. So the minute it's affordable, then you can run
billions of them and they would feel and appear to
be exactly like this interview right now.
Speaker 2 (58:33):
So, assuming the AI has internal states, experiences, qualia;
some people argue that they don't, some say they already
have it. That's a separate philosophical question. But if we
can simulate this...
Speaker 1 (58:45):
You will? Some people might misunderstand. You're not saying that
you will. You're saying that someone will.
Speaker 2 (58:55):
So I can also do it, I don't mind. Of course,
others will do it before I get there. If I'm
getting it for ten dollars, somebody already got it for four thousand;
that's not the point. If you have the technology, we're definitely
running a lot of simulations, for research, for entertainment, games,
all sorts of reasons, and the number of those greatly
(59:16):
exceeds the number of real worlds. Look at
all the video games kids are playing. Every kid plays ten
different games. You know, a billion kids in the world, so
there are ten billion simulations to one real world.
Even more so, when we think about advanced AI, superintelligent systems:
their thinking is not like ours. They think in a
(59:38):
lot more detail. They run experiments. So running a detailed
simulation of some problem at the level of creating artificial
humans and simulating the whole planet would be something they'd
do routinely. So there is a good chance this is
not me doing it for ten dollars; it's a future
superintelligence thinking about something in this world.
Speaker 1 (01:00:03):
So it could be the case that a species of
humans or a species of intelligence in some form got
to this point where they could affordably run simulations that
are indistinguishable from this, and they decided to do it,
and this is it right now, and it would make
(01:00:26):
sense that they would run simulations as experiments or for
games or for entertainment, And also when we think about
time in the world that I'm in, in this simulation that
I could be in right now, time feels long relatively,
you know, I have twenty four hours in a day,
but in their world it could be relative. Yeah, it
could be a second. My whole life could be a
(01:00:47):
millisecond in there.
Speaker 2 (01:00:48):
Right, you can change the speed of the simulation you're running, for sure.
Speaker 1 (01:00:53):
So your belief is that this is probably a simulation.
Speaker 2 (01:00:56):
Most likely. And there is a lot of agreement on
that if you look around. Turning to religions:
every religion basically describes a superintelligent being, an engineer,
a programmer, creating a fake world for testing purposes
or for whatever. If you took the simulation hypothesis paper,
(01:01:16):
went to the jungle, talked to a primitive local tribe,
and in their language told them about it, then
came back two generations later, they'd have a religion. That's basically
what the story is.
Speaker 1 (01:01:29):
Religion. Yeah, it basically describes simulation theory: somebody creates
the world.
Speaker 2 (01:01:34):
That was the first theory we had, and now with
science more and more people are going, like, I'm giving
it non-trivial probability. Few people put it as high as
I do, but a lot of people give it some credence.
Speaker 1 (01:01:45):
What percentage are you at in terms of believing that we
are currently living in a simulation?
Speaker 2 (01:01:49):
Very close to certainty.
Speaker 1 (01:01:52):
And what does that mean for the nature of your life?
If you're close to one hundred percent certain that we
are currently living in a simulation, does that change anything
in your life?
Speaker 2 (01:02:02):
So all the things you care about are still the same. Pain
still hurts, love is still love, right? For those things it makes
no difference; they still matter. That's
what matters. The little one percent difference is that I
care about what's outside the simulation. I want to learn
about it. I write papers about it. So that's the
only impact.
Speaker 1 (01:02:22):
And what do you think is outside of the simulation?
Speaker 2 (01:02:24):
I don't know. But we can look at this world
and derive some properties of the simulators. So clearly, brilliant engineer,
brilliant scientists, brilliant artist, not so good with morals and ethics.
Speaker 1 (01:02:40):
Room for improvement in our view of what morals and
ethics should be.
Speaker 2 (01:02:44):
Well, we know that there is suffering in the world. So
unless you think it's ethical to torture children, I'm
questioning your approach.
Speaker 1 (01:02:54):
But in terms of incentives: to create a positive incentive,
you probably also need to create negative incentives. Suffering seems
to be one of the negative incentives built into
our design to stop me doing things I shouldn't do,
like putting my hand in a fire. It's going to hurt.
Speaker 2 (01:03:09):
But it's all about levels, levels of suffering, right? Unpleasant
stimuli, negative feedback, don't have to be at
negative-infinity hell levels. You don't want to burn alive
and feel it. You want to be like, oh, this
is uncomfortable, I'm going to stop.
Speaker 1 (01:03:24):
It's interesting, because we assume that they don't have great
morals and ethics, but we do the same: we take
animals and cook them and eat them for dinner, and
we also conduct experiments on mice and rats.
Speaker 2 (01:03:36):
But to get university approval to conduct an experiment, you
submit a proposal, and there is a panel of ethicists
who would say you can't experiment on humans, you can't
burn babies, you can't eat animals alive, all those things.
Speaker 1 (01:03:48):
That will be banned in most parts of the world.
Speaker 2 (01:03:52):
Where they have ethical boards, yeah. Because some places don't
bother with it, so they have an easier approval process.
Speaker 1 (01:03:59):
It's funny, when you talk about the simulation theory, there's
an element of the conversation that makes life feel less
meaningful in a weird way. Like, I know it doesn't matter,
but whenever I have this conversation with people, not on
the podcast, about are we living in a simulation, you
almost see a little bit of meaning come out of
(01:04:21):
their life for a second, and then they forget and
then they carry on. But the thought that this is
a simulation almost posits that it's not important, or...
I think humans want to believe that this is the
highest level, that we're the most important, that
it's all about us. We're quite egotistical by design. It's
(01:04:41):
just an interesting observation I've always had when I have these
conversations with people, that it seems to strip something out
of their life.
Speaker 2 (01:04:47):
Do you feel religious people feel that way? They know
that there's another world, and the one that matters is
not this one. Do you feel they don't value their
lives the same?
Speaker 1 (01:04:56):
I guess in some religions, they think
that this world was created for them and that
they are going to go to heaven or hell,
and that still puts them at the very center of it.
But if it's a simulation, you know, we could just
be some computer game that a four-year-old alien is
messing around with while he's got some time to burn.
Speaker 2 (01:05:18):
But maybe there is, you know, a test, and there
is a better simulation and a worse one you go to,
maybe very different difficulty levels. Maybe you want to play
it on a harder setting.
Speaker 1 (01:05:29):
Next step. I've just invested millions into this and become
a co owner of the company. It's a company called
Keytone IQ, and the story is quite interesting. I start
talking about ketosis on this podcast and the fact that
I'm a very low carb, very very low sugar, and
my body produces keytones which have made me incredibly focused,
have improved my endurance, have improved my mood, and have
(01:05:49):
made me more capable of doing what I do here.
And because I was talking about it on the podcast,
a couple of weeks later these showed up on my
desk in my HQ in London, these little shots, and oh
god, the impact they've had on my ability to
articulate myself, on my focus, on my workouts, on my mood,
on stopping me crashing throughout the day was so profound
(01:06:11):
that I reached out to the founders of the company,
and now I'm a co-owner of this business. I
highly, highly recommend you look into this. I highly recommend
you look at the science behind the product. If you
want to try it for yourself, visit ketone dot com
slash Stephen for thirty percent off your subscription order, and
you'll also get a free gift with your second shipment.
That's ketone dot com slash Stephen. And I'm so honored
(01:06:33):
that once again a company I own can sponsor my podcast.
I've built companies from scratch and backed many more, and
there's a blind spot that I keep seeing in early-stage
founders: they spend very little time thinking about HR.
And it's not because they're reckless or they don't care,
it's because they're obsessed with building their companies, and I
can't fault them for that. At that stage, you're thinking
about the product, how to attract new customers, how to
(01:06:55):
grow your team, really how to survive, and HR slips
down the list because it doesn't feel urgent. But sooner
or later it is, and when things get messy, tools
like our sponsor today, Justworks, go from being a
nice-to-have to being a necessity. Something goes sideways
and you find yourself having conversations you did not see coming.
This is when you learn that HR really is the
infrastructure of your company, and without it, things wobble.
(01:07:18):
Justworks stops you learning this the hard way. It
takes care of the stuff that would otherwise drain your
energy and your time, automating payroll, health insurance, benefits, and
it gives your team human support at any hour. It
grows with your small business from startup through to growth,
even when you start hiring team members abroad. So if
you want HR support that's there through the exciting times
(01:07:38):
and the challenging times, head to justworks dot com. Now
that's justworks dot com. And do you think much about
longevity? A lot?
Speaker 2 (01:07:47):
Yeah, that's probably the second most important problem, because if
AI doesn't get us, that will. What do you mean?
You're going to die of old age.
Speaker 1 (01:07:56):
Which is fine, and that's not good.
Speaker 2 (01:07:58):
You want to die? I mean, you don't have to.
It's just a disease; look at it and cure it. Nothing stops you
from living forever, at least as long as the universe exists, unless
we escape the simulation.
Speaker 1 (01:08:12):
But we wouldn't want a world where everybody could live forever, right?
Surely that would be...
Speaker 2 (01:08:16):
Why? Who do you want to die?
Speaker 1 (01:08:19):
Well, I don't know. I mean, I say this because
it's all I've ever known, that people die. But wouldn't
the world become pretty overcrowded?
Speaker 2 (01:08:25):
No, because you stop reproducing. You
have kids because you want a replacement for you. If
you live forever, you're like, I'll have kids in a
million years, that's cool, I'll go explore the universe first. Plus,
if you look at actual population dynamics, outside of like
one continent we're all shrinking, not growing.
Speaker 1 (01:08:43):
Yeah, this is crazy. It's crazy that the richer
people get, the fewer kids they have, which aligns with
what you're saying. And I do actually think, I think,
if I'm going to be completely honest here, I think
if I knew that I was going to live to
a thousand years old, there's no way I'd be having
kids at thirty.
Speaker 2 (01:09:00):
Biological clocks are based on terminal points, whereas your
biological clock could be an infinite one.
Speaker 1 (01:09:06):
One day. And you think that's close, being able to extend
our lives?
Speaker 2 (01:09:11):
It's one breakthrough away. I think somewhere in our genome
we have this rejuvenation loop, and it's set to basically
give us at most one hundred and twenty. I think
we can reset it to something bigger.
Speaker 1 (01:09:23):
AI is probably going to accelerate that.
Speaker 2 (01:09:26):
That's one very important application area. Yes, absolutely so.
Speaker 1 (01:09:30):
Maybe Bryan Johnson's right when he says, don't die now.
He keeps saying to me, he's like...
Speaker 2 (01:09:35):
Don't die now, don't die ever.
Speaker 1 (01:09:38):
He's saying like, don't die before we get to the technology.
Speaker 2 (01:09:40):
Right, longevity escape velocity. You want to live long enough
to live forever. If at some point every
year of your existence adds two years to your existence through
medical breakthroughs, then you live forever. You just have to
make it to that point of longevity escape velocity.
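A rough sketch of the escape-velocity arithmetic described here: each calendar year uses up one year of remaining life expectancy, while medical progress hands some years back. If progress adds at least one year per year, the balance never runs out. The gain rates and starting figures below are hypothetical illustrations, not forecasts:

```python
# Toy model of longevity escape velocity. Each calendar year consumes one
# year of remaining life expectancy, while medical progress adds back
# `gain_per_year` years. If gain_per_year >= 1, the balance never reaches
# zero. All numbers are hypothetical.
def years_survived(remaining: float, gain_per_year: float, horizon: int = 500):
    for year in range(horizon):
        if remaining <= 0:
            return year            # ran out of runway after this many years
        remaining += gain_per_year - 1
    return None                    # still alive at the horizon: "escape velocity"

print(years_survived(40, gain_per_year=0.5))  # ~80: progress too slow, you still die
print(years_survived(40, gain_per_year=2.0))  # None: life expectancy outruns the clock
```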
Speaker 1 (01:09:58):
And he thinks that longevity escape velocity, especially in a world
of AI, is pretty... it's decades away minimum, which means...
Speaker 2 (01:10:06):
As soon as we fully understand the human genome, I
think we'll make amazing breakthroughs very quickly, because we know
some people have genes for living way longer; they have
generations of people who are centenarians. So if we can
understand that and copy it, or copy it from some
animals which live forever, we'll get there.
Speaker 1 (01:10:24):
Would you want to live forever?
Speaker 2 (01:10:25):
Of course. Reverse the question: let's say we lived
forever and you ask me, do you want to die
in four years? Why would I say yes?
Speaker 1 (01:10:34):
I don't know. Maybe you're just used to the default. Yeah,
I am used to the default.
Speaker 2 (01:10:37):
And nobody wants to die. Like, no matter how old
you are, nobody goes, yeah, I want to die this year.
Everyone's like, no, I want to keep living.
Speaker 1 (01:10:43):
I wonder if life and everything would be less special
if I lived for ten thousand years. I wonder if going
to Hawaii for the first time, or, I don't know,
a relationship, or these things would be way less special
to me if they were less scarce, and if there
are just, you know...
Speaker 2 (01:11:03):
It could be individually less special, but there is so
much more you can do. Right now, you can only
make plans to do something for a decade or two.
You cannot have an ambitious plan of working on this
project for five hundred years. Imagine the possibilities open to you
with infinite time and an infinite universe. Gosh. Well, you
can, because it's a big amount of time. Also, I
(01:11:26):
don't know about you, but I don't remember like ninety-nine
percent of my life in detail. I remember big highlights.
So even if I enjoyed Hawaii ten years ago, I'll
enjoy it again.
Speaker 1 (01:11:37):
Are you thinking about that really practically? In the same
way that Bryan Johnson is. Bryan Johnson is convinced that
we're maybe two decades away from being able to extend life.
Are you thinking about that practically? And are you doing
anything about it?
Speaker 2 (01:11:50):
Diet, nutrition. I try to think about investment strategies which
pay out in a million years.
Speaker 1 (01:11:55):
Yeah? Really, yeah?
Speaker 2 (01:11:57):
Of course?
Speaker 1 (01:11:57):
What do you mean? Of course? Of course?
Speaker 2 (01:11:59):
Well, why wouldn't you? If you think this is what's
going to happen, you should try that. So if we
get AI right now, what happens to the economy? We talked
about Worldcoin, we talked about free labor. What's money?
Speaker 1 (01:12:10):
Is that? Now?
Speaker 2 (01:12:11):
Bitcoin? Do you invest in that? Is there something else
which becomes the only resource we cannot fake? So those
things are very important research topics.
Speaker 1 (01:12:20):
So you're investing in bitcoin, aren't you? Yeah, because it's
the only scarce resource.
Speaker 2 (01:12:29):
Nothing else has scarcity. Everything else, if the price goes up,
we'll make more of. I can make as much gold as
you want given a proper price point. You cannot make
more bitcoin.
Speaker 1 (01:12:41):
Some people say bitcoin is just this thing on a
computer that we all agreed has value.
Speaker 2 (01:12:44):
We are a thing in a computer, remember.
Speaker 1 (01:12:50):
Okay, so I mean not investment advice, but investment advice.
Speaker 2 (01:12:55):
It's hilarious. That's one of those things where they
tell you it's not, but you know it is immediately.
It's a bit like "your call is important to us":
that means your call is of zero importance. And investment
advice is like that.
Speaker 1 (01:13:05):
Yeah, yeah. When they say not investment advice, it's definitely
investment advice, but it's not investment advice. Okay, so you're
bullish on bitcoin because it can't be messed with.
Speaker 2 (01:13:15):
It is the only thing for which we know how much
there is in the universe. So gold: there could be
an asteroid made out of pure gold heading towards us,
devaluing it, while also killing all of us. But with bitcoin,
I know exactly the numbers, and even the twenty-one
million is an upper limit. How many are lost, passwords forgotten?
(01:13:37):
I don't know what Satoshi is doing with his million.
It's getting scarcer every day while more and more people
are trying to accumulate it.
Speaker 1 (01:13:47):
Some people worry that it could be hacked with a supercomputer.
Speaker 2 (01:13:50):
A quantum computer could break that algorithm. There are strategies
for switching to quantum-resistant cryptography for that, and quantum
computers are still kind of weak.
Speaker 1 (01:14:02):
Do you think there are any changes to my life that
I should make following this conversation? Is there anything that
I should do differently the minute I walk out of
this door?
Speaker 2 (01:14:11):
I assume you already invest in bitcoin heavily.
Speaker 1 (01:14:14):
Yes, I'm an investor in bitcoin.
Speaker 2 (01:14:15):
Is this financial advice? No, just... you seem to be winning.
Maybe it's your simulation. You're rich, handsome, you have famous
people hanging out with you. That's pretty good, keep
it up. Robin Hanson has a paper about how to
live in a simulation, what you should be doing in it,
(01:14:38):
and your goal is to do exactly that. You want
to be interesting. You want to hang out with famous
people so they don't shut it down, so you are
part of something someone's actually watching on pay-per-view
or something like that.
Speaker 1 (01:14:48):
Oh, I don't know if you want to be watched
on pay-per-view, because then it might even be this.
Speaker 2 (01:14:52):
Any time, they can shut you down. If no one's watching,
why would they play it?
Speaker 1 (01:14:57):
I'm saying, don't you want to fly under the radar?
The guy just living a normal life, that's what most people are.
Speaker 2 (01:15:02):
Those are NPCs. Nobody wants to be an NPC.
Speaker 1 (01:15:07):
Are you religious?
Speaker 2 (01:15:08):
Not in any traditional sense, but I believe in the simulation hypothesis,
which has a superintelligent being.
Speaker 1 (01:15:13):
So, but you don't believe in, like, you know,
the religious books?
Speaker 2 (01:15:18):
So, different religions: this religion will tell you don't work Saturday,
this one don't work Sunday; don't eat pigs, don't eat cows.
They just have local traditions on top of that theory.
That's all it is. They're all the same religion. They
all worship a superintelligent being. They all think this world
is not the main one, and they argue about which
(01:15:38):
animal not to eat. Skip the local flavors and concentrate on
what all the religions have in common, and that's
the interesting part: they all think there is something greater
than humans, very capable, all-knowing, all-powerful. When I
run a computer game, for those characters in the game,
I am that: I can change the whole world, I
(01:16:00):
can shut it down, I know everything in the world.
Speaker 1 (01:16:05):
It's funny, I was thinking earlier on, when we started
talking about the simulation theory, that there might be
something innate in us that has been left by the creator,
almost like a clue, like an intuition, because that's what
we tend to have through history. Humans have this intuition
that all the things you said are true, that there's
somebody above.
Speaker 2 (01:16:24):
And we have had generations of people who were religious,
who believed God told them and was there and gave
them books, and that has been passed on for many generations.
This is probably one of the earliest generations not to
have universal religious belief.
Speaker 1 (01:16:40):
What if those people are telling the truth? I wonder
about the people that say God came to them and
said something. Imagine that. Imagine if that was part of this.
Speaker 2 (01:16:46):
I'm looking at the news today: something happened an hour ago,
and I'm getting different, conflicting reports. Even with
cameras, with drones, with a guy on Twitter right there,
I still don't know what happened. And you think three
thousand years ago we have an accurate record through translations? No,
of course not.
Speaker 1 (01:17:05):
You know these conversations you have around AI safety, do
you think they make people feel good?
Speaker 2 (01:17:12):
I don't know if they feel good or not, but
people find it interesting. It's one of those topics. I
can't have a conversation about different cures for cancer
with an average person, but everyone has opinions about AI,
everyone has opinions about simulation. It's interesting that you don't
have to be highly educated or a genius to understand
those concepts, because...
Speaker 1 (01:17:33):
I tend to think that it makes me feel not positive,
and I understand that. But I've always been of the
opinion that you shouldn't live in a world of delusion
where you're just seeking to be positive, to have sort of
positive things said, and avoid uncomfortable conversations. Actually, progress
(01:17:58):
in my life often comes from having uncomfortable conversations, becoming
aware of something, and then at least being informed about
how I can do something about it. And so I
think that's why I asked the question, because I assume
most people, if they're normal human beings, will
listen to these conversations and go, gosh,
(01:18:18):
that's scary and this is concerning. And then I keep
coming back to this point, which is, what do
I do with that energy?
Speaker 2 (01:18:28):
Yeah, but I'm trying to point out this is not
different from so many conversations we could have: there
is starvation in this region, genocide in that region,
people are dying, cancer is spreading, autism is up.
You can always find something to be very depressed about
and nothing you can do about it. And we're very
(01:18:49):
good at concentrating on what we can change, what we're
good at, and basically not trying to embrace the whole
world as our environment. So historically, you grew up
with a tribe. You had a dozen people around you.
If something happened to one of them, it was very rare;
it was an accident. Now, if I go on the internet,
(01:19:09):
somebody gets killed everywhere all the time. Somehow thousands of
people are reported to me every day. I don't even
have time to notice. It's just too much. So I
have to put filters in place, and I think this
topic is one people are very good at filtering out, as in:
this was an entertaining talk I went to, kind of
(01:19:31):
like a show, and the moment I exit, it ends.
So usually I would go give a keynote at a
conference and I tell them, basically, you're going to die,
you have two years left, any questions? And people are like,
will I lose my job? How do I lubricate my
sex robot? All sorts of nonsense, clearly not understanding
(01:19:52):
what I'm trying to say there. And those are good questions,
interesting questions, but not fully embracing the reality. They're
still in their bubble of local versus global.
Speaker 1 (01:20:03):
And the people that disagree with you the most, as
regards AI safety, what is it that they say?
What are their counterarguments?
What are their counterarguments?
Speaker 2 (01:20:11):
Typically, so many don't engage at all. Like, they have
no background knowledge in the subject. They've never read a
single book, a single paper, not just by me, by anyone.
They may even be working in the field, doing
some machine learning work for some company maximizing
ad clicks, and to them, those systems are very narrow.
(01:20:35):
And then they hear that, oh, AI is going to
take over the world. Like, it has no hands, how
would it do that? It's nonsense, this guy is crazy,
has a beard, why would I listen to him?
Speaker 1 (01:20:45):
Right?
Speaker 2 (01:20:46):
Then they start retreating a little bit. They go, oh, okay,
so maybe AI can be dangerous, yeah, I see that,
but we always solved problems in the past, we're going
to solve them again. I mean, at some point we
fixed a virus or something, so it's the same. And basically,
the more exposure they have, the less likely they are
(01:21:07):
to keep that position. I know many people who went
from super careless developer to safety researcher. I don't know
anyone who went from "I worry about AI safety" to
"there is nothing to worry about."
Speaker 1 (01:21:29):
What are you closing statements?
Speaker 2 (01:21:31):
Let's make sure there is not a closing statement we
need to give to humanity. Let's make sure we stay
in charge, in control. Let's make sure we only build
things which are beneficial to us. Let's make sure people
who are making those decisions are remotely qualified to do it,
that they are good not just at science, engineering, and business,
(01:21:52):
but also have moral and ethical standards. And if you're
doing something which impacts other people, you should ask their
permission before you do that.
Speaker 1 (01:22:02):
If there was one button in front of you and
it would shut down every AI company in the world
right now, permanently, with the inability for anybody to start
a new one, would you press the button.
Speaker 2 (01:22:15):
Are we losing narrow AI or just superintelligence?
Speaker 1 (01:22:18):
The AGI part... losing all of AI.
Speaker 2 (01:22:21):
That's a hard question, because AI is extremely important. It
controls the stock market, power plants, hospitals. It would
be a devastating accident; millions of people would lose their lives.
Speaker 1 (01:22:35):
Okay, we can keep narrow AI.
Speaker 2 (01:22:36):
Oh yeah, that's what we want. We want narrow AI to
do all this for us, but not a god we don't
control doing things to us.
Speaker 1 (01:22:45):
So you would stop it, you'd stop AGI and superintelligence.
Speaker 2 (01:22:48):
We have AGI. What we have today is great for
almost everything. We can make secretaries out of it. Ninety-nine
percent of the economic potential of current technology has not
been deployed; it came so quickly it hasn't had time to
propagate through the industry, through the technology. Something like half of
all jobs are considered BS jobs, bullshit jobs; they don't need
to be done, so those can be not
(01:23:11):
even automated, they can be just gone. But I'm saying
we could replace sixty percent of jobs today with existing models.
We haven't done that. So if the goal is to
grow the economy, to develop, we can do it for decades
without having to create superintelligence as soon as possible.
Speaker 1 (01:23:28):
Do you think globally, especially in the Western world, unemployment
is only going to go up from here? Do you
think, relatively, this is the low point of unemployment?
Speaker 2 (01:23:36):
I mean, it fluctuates a lot with other factors at work.
There are economic cycles, but overall, the more
jobs you automate, and the higher the intellectual bar
to do a job, the fewer people qualify.
Speaker 1 (01:23:50):
So if we plotted it on a graph over the
next twenty years, you're assuming unemployment is gradually going to go
up over that period.
Speaker 2 (01:23:58):
I think so. Fewer and fewer people would be able
to contribute. Already we kind of understand it, because we
created the minimum wage. We understood that some people don't contribute
enough economic value to get paid anything really, so we had
to force employers to pay them more than they were worth.
And we haven't updated it.
(01:24:19):
It's about seven twenty-five federally in the US. If it kept up with the economy,
it should be like twenty-five dollars an hour now,
which means all these people making less are not contributing
enough economic output to justify what they're getting paid.
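For what it's worth, here is the back-of-the-envelope version of that minimum wage point. The $7.25 federal rate dates from 2009; the "$25 an hour" figure is the speaker's claim, and the implied growth rate below is simply whatever makes that figure come out, not an official inflation or productivity statistic:

```python
# Back-of-the-envelope check: what constant annual growth rate would take
# the US federal minimum wage from $7.25 (set in 2009) to roughly $25 by
# 2025? Purely illustrative arithmetic, not an official economic series.
base_wage, base_year = 7.25, 2009
claimed_wage, claimed_year = 25.00, 2025

years = claimed_year - base_year
implied_growth = (claimed_wage / base_wage) ** (1 / years) - 1
print(f"Implied annual growth over {years} years: {implied_growth:.1%}")  # ~8.0% per year
```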
Speaker 1 (01:24:35):
We have a closing tradition on this podcast where the
last guest leaves a question for the next guest, not
knowing who they're leaving it for, And the question left
for you is what are the most important characteristics for
a friend, colleague, or mate.
Speaker 2 (01:24:52):
Those are very different types of people, but for all
of them, loyalty.
Speaker 1 (01:24:58):
Number one, And what does loyalty mean to you?
Speaker 2 (01:25:03):
Not betraying you, not screwing you, not cheating on you,
despite the temptation, despite the world being as it is,
the situation, the environment.
Speaker 1 (01:25:18):
Dr Roman, thank you so much. Thank you so much
for doing what you do, because you're you're starting a
conversation and pushing forward a conversation and doing research that
is incredibly important, and you're doing it in the face
of a lot of a lot of skeptics. I'd say
there's a lot of people that have a lot of
incentives to discredit what you're saying and what you do
because they have their own incentives, and they have billions
(01:25:39):
of dollars on the line, and they have their jobs
on the line potentially as well. So it's really important
that there are people out there that are willing to,
I guess, stick their head above the parapet and come
on shows like this and go on big platforms and
talk about the unexplainable, unpredictable, uncontrollable future that we're heading towards.
So thank you for doing that. This book, which I
(01:26:01):
think everybody should check out if they want a
continuation of this conversation (I think it was published in
twenty twenty four), gives a holistic view on many of
the things we've talked about today, preventing AI failures and
much, much more. And I'm going to link it below
for anybody that wants to read it. If people want
to learn more from you, if they want to go
further into your work, what's the best thing for them
to do? Where do they go?
Speaker 2 (01:26:21):
They can follow me, follow me on Facebook, follow me
on X just don't follow me home.
Speaker 1 (01:26:25):
Very important. Okay, so I'll put your Twitter, your X
account as well below so people can follow you there.
And yeah, thank you so much for doing what you do.
It's remarkably eye-opening and it's given me so much
food for thought, and it's actually convinced me more that
we are living in a simulation. But it's also made
me think quite differently about religion, I have to say,
because you're right: all the religions, when you get away
from the local traditions, they do all
(01:26:47):
point at the same thing. And actually, if they are
all pointing at the same thing, then maybe the fundamental
truths that exist across them should be something I pay
more attention to, things like loving thy neighbor, things like
the fact that we are all one, that there's a
divine creator. And maybe also, they all seem to have
consequence beyond this life, so maybe I should be thinking
more about how I behave in this life and where
(01:27:09):
I might end up thereafter. Roman, thank you, man.