All Episodes

October 9, 2025 64 mins

Dr. Roman Yampolskiy is a computer scientist and AI safety researcher whose work explores the existential risks of artificial intelligence and the limits of human control. In this episode, we dive deep into whether superintelligence can be contained, why AI may already be on an unstoppable trajectory, and what that means for humanity’s future.

Roman explains the difference between narrow AI, AGI, and superintelligence, and why building systems smarter than us may be a form of mutually assured destruction. We cover the ethics of open-source models, whether AI could ever be conscious, and how Bitcoin fits into a world dominated by intelligent machines.

In this episode:

- Why superintelligence may be impossible to control

- The existential risk of AI

- Whether consciousness can exist in machines

- How AI will replace all human labor

- Why AI might eventually use Bitcoin as its native money

THANKS TO OUR SPONSORS:

IREN

RIVER

ANCHORWATCH

BLOCKWARE

LEDN

BITKEY

Follow:

Danny Knowles: https://x.com/_DannyKnowles or https://primal.net/danny

Dr. Roman Yampolskiy: https://x.com/romanyam


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Superintelligence is something we cannot comprehend, or predict how it will fight us.

(00:05):
Oh yeah, it's only 20-30% probability of doom for everyone.
That's an insane number.
If you are creating a dangerous weapon, giving it to everyone in the world,
including all the psychopaths and doomsday cults, is not a good idea.
With AI, you don't understand them.
They're immortal.
You will not be rich.
You will not be famous.

(00:25):
You will not be in history books.
You'll be dead.
Why are you doing it?
It's mutually assured destruction.
You can't take control back if you're not happy.
There is no undo button once you surrender control.
What does it mean to have freedom?
Maybe what you want is actually exactly what they're providing.
If you want to worry about Bitcoin in a future world, worry about, again,

(00:46):
what if aliens come and the compute necessary for a 51% attack is what they have in a cell phone?
What if in a simulation externally they have the private keys?
You can have all these scenarios where the whole economy collapses.
Roman, I really appreciate you agreeing to do this. I've listened to you on a number of podcasts.

(01:07):
So I've been casually following this like AI safety talk for a little while.
I heard you on Rogan, I heard you on Lex. And then recently I was listening to you on Diary of a CEO.
And a few times in that show, you brought up Bitcoin. And Steven maybe just slightly
glossed over it the first couple of times. Then you did talk about it a little bit,
but I was like, I need to speak to this man.

(01:29):
So we're going to get into all of that today.
But I think I want to start on all the AI safety stuff,
because that is obviously your bread and butter.
Do you want to explain how you got into this and what you do?
I'm a computer scientist.
I research AI safety, and I started with cybersecurity biometrics.
I was interested in making computer systems more secure.
I looked at securing online gambling,

(01:51):
poker sites. And at the time, they just started having an infestation of poker bots.
So I was looking at how we can detect them, prevent them from stealing resources.
And they were getting better. So I kind of projected forward, how good can they get?
At some point, they will get human level capabilities and beyond.

(02:11):
And so that's my story of getting into AI safety.
Obviously, I did some additional work with more advanced concerns later on,
but that was the initial entry point.
And when did this fear go from being almost like a science fiction problem to being real?
Because as someone who sort of casually followed the AI stuff,

(02:32):
it probably wasn't until maybe even like ChatGPT that I started thinking,
oh, this is actually really useful.
There's like a tool here that I can use to make me much more productive.
Was it those breakthroughs that made you think,
okay, this is getting real, from a much more dystopian viewpoint?
So that shifted timelines.
We always knew it's going to happen,
just natural progression of what we see in the field.

(02:54):
But it surprised us in how quickly it came.
Everyone was thinking we got another 20 years or so, 2045 maybe,
but it started happening a lot sooner.
And at the same time, today there are people who are denying
that this has happened.
It will never happen.
Humans are special.
So we can never get AI to do creative artwork or something like that.

(03:15):
So it's interesting.
At the same time, you have people who are like,
it's too late, and people who say it hasn't even started.
It's interesting because you said the creative bit there.
And that was one of the things I wanted to talk to you about.
Because right now, AI is very useful.
It definitely helps me be more efficient in the stuff that I'm doing.
But it's still flawed.
There are still things it gets wrong.

(03:36):
There's still areas that it's not helpful.
It's too much of a sycophant for my liking.
It just wants to tell me I'm great at everything
instead of actually giving me useful information.
But we are already seeing it start to impact the job market
in different ways.
And I think we're probably at the very, very start of that.
But what has to happen for it to go from being this pretty useful tool

(03:56):
that makes people more productive
to being something that potentially kills us all?
So that's a paradigm shift from tools, AI you use,
you decide how to use it, you are in charge,
to AI as an agent.
So we see more and more companies trying to fully automate the whole process.
It's not just a tool.
It writes the whole software product.

(04:18):
It makes its own decisions, what to use to code it, what hardware to run.
And so the more independent it becomes, the more agentic it is, the more it sets its own goals and has side effects of setting those goals.
And we don't control what they are.
We can't predict what they're going to use as a path to achieve those goals.

(04:39):
We may understand the larger goal, but not all the side effects of different paths to get to that goal.
And that's where the danger comes.
It is smarter than you.
It makes plans you are not dictating.
And the way it implements them has zero concern about your safety, security, or well-being.
How do you calculate when it's smarter than humans?

(05:02):
Because I think in almost every way, it's probably already smarter than me.
The thing that it doesn't have is any kind of emotional intelligence.
or at least what I've seen.
So when is it that it goes from being a very good,
almost encyclopedia of knowledge
to being actually more intelligent than humans?
So there are standard tools for measuring intelligence,
IQ tests and such.

(05:23):
On that metric, I think it's already definitely smarter than average.
It is definitely more diverse in its capabilities.
So it speaks hundreds of languages,
it plays every musical instrument.
It's better than a typical human
if you average out over all the domains.
And that's kind of the measure of intelligence
DeepMind was proposing a long time ago.

(05:45):
What is the average performance
across all possible domains?
And so it may still suck at certain things,
but on average, it's still so much better.
As far as emotional intelligence,
I think it's also better at that.
It understands your emotional states.
It just doesn't have its own emotional states
where we would expect them to be in another human.

(06:05):
Okay.
It may have preferences.
It may be unhappy with the reward signal it's getting based on some performance, but it's
not like internally angry at you.
Okay.
And so when it gets to superintelligence is when this becomes an existential risk to humanity.
Even at AGI levels and pre-AGI levels, there are certain concerns.

(06:26):
At pre-AGI levels, concern is malevolent human actors giving it malevolent payloads.
So somebody tells it develop very deadly virus and then releasing it.
At AGI level, it's like having an evil human trying to do bad things, and they have a lot of resources.
Whereas superintelligence is something we cannot comprehend, or predict how it will fight us.

(06:46):
Okay, so in terms of controls we can put in place, is it at the superintelligence level that those controls become almost like an oxymoron?
You can't control something far more intelligent than you.
It's definitely not possible, in my opinion, to control superintelligence indefinitely.
I think at level of AGI, it's challenging.
It's still a very intelligent adversary,

(07:08):
but we at least may understand some of the tools it's using.
So we can still fail, just like in cybersecurity,
some hackers manage to penetrate your safety,
but it's an even fight.
Whereas with superintelligence, it's just squirrels versus humans.
Okay, and then give me a kind of timeline.

(07:29):
How far away from this do you think we are?
So nobody knows. That's the thing. We switched from "it takes that long" to "it takes that much money." So if tomorrow China decides to put $5 trillion into it, that will expedite the process a lot. Maybe we already have enough computational resources where enough money gets you to superintelligence.

(07:51):
So right now, prediction markets are saying two, three years to AGI.
They've been saying it for two or three years, so it doesn't mean that much. And some people say,
we already have AGI. Again, those things are better than most people in most domains.
Typically, if I'm looking at maybe a master's level student, I would pick AI over it. Maybe

(08:13):
there are still some top PhD students who beat current systems, but that's quickly shifting.
So depending on your definitions, we either have AGI, we'll have it very soon,
or even if it takes 5-10 years, it changes nothing because the same concerns remain.
Maybe you can help me understand something here,
because I think I had the wrong definition in my head of what AGI or superintelligence was.

(08:35):
Because I thought that was when it would work sort of autonomously without prompts.
And we don't have that yet.
So you can create a system which is just an agent placed in a loop.
You tell it, okay, generate a list of goals, and then it just kind of works through the loop, trying to make some gradual progress on those goals, generating sub-goals.

(08:57):
So it's kind of agent-like, but not fully autonomous.
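A minimal sketch of that agent-in-a-loop pattern, assuming a hypothetical call_llm() stand-in for whatever text-model API is actually in use:

```python
# Hypothetical sketch of the "agent placed in a loop" pattern described above.
# call_llm() is a placeholder, not a real library call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the text-model API of your choice")

def agent_loop(objective: str, max_steps: int = 10) -> None:
    # The model, not the human, decomposes the objective into sub-goals.
    goals = call_llm(f"List concrete sub-goals for: {objective}").splitlines()
    for _ in range(max_steps):
        if not goals:
            break
        goal = goals.pop(0)
        result = call_llm(f"Make incremental progress on: {goal}")
        # The model also decides whether new sub-goals are needed; this is
        # the point where humans stop dictating the plan.
        new = call_llm(
            f"Given this result:\n{result}\nList any new sub-goals, one per line."
        ).splitlines()
        goals.extend(g for g in new if g.strip())
```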
With superintelligence, we're expecting that it will manage to figure out how to provide for its own survival.
It can generate its own hardware.
It can bypass the human logistical framework for sustaining itself.
Okay, so then when we get to AGI or superintelligence, whether that's in two years or 10 years, I don't think that really matters.

(09:23):
It's what happens next, and what is that?
What is the paradigm shift once we enter that?
So that's what people call singularity point, where you cannot make predictions about what happens after that point,
because you're not smart enough to comprehend those concepts.
Research in physics, novel science, engineering, self-improvement progress in the AI system itself,

(09:45):
becomes so fast that we can never comprehend it,
nor have time to fully comprehend it even if that were possible.
But there's plenty of good that would likely come out of this.
Like, I assume this cures all diseases.
It has physics breakthroughs that allows us to go to the stars.
If we control it.
If we tell it what to do and it does what we tell it,

(10:06):
yes, it's a miracle.
It cures all the diseases, free stuff, absolutely.
But we don't know how to control it.
And so the risk there is,
instead of getting all these amazing benefits from it, it literally kills us all.
Absolutely.
And what percentage chance do you think that is the outcome,
that it does attack humanity in some way?
Well, if you're not controlling it, you really don't know what it's going to do.

(10:29):
It may not directly attack us, but do something which, as a side effect,
impacts all of us negatively.
If there was a tiny chance, 1% chance it kills everyone,
I don't care how much free stuff you're giving us.
You can't just gamble 8 billion people on some free money.
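For scale, a worked version of that arithmetic, taking the 1% purely as the hypothetical floor mentioned here:

```python
# The "tiny chance" gamble in expected-value terms.
# The 1% figure is the hypothetical floor from the conversation, not an estimate.
population = 8_000_000_000
p_doom = 0.01
print(f"{p_doom * population:,.0f} expected deaths")  # 80,000,000
```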
But you think it's higher than 1%?

(10:50):
I think it's extremely high because if you look at the space of all possible universes,
most of them are not human-friendly in terms of basic physics, temperature, gravity, oxygen,
but also in terms of other properties.
We have to be very particular.
We want this world, this type of world, that type of food is available,

(11:10):
temperatures range from X to Y.
But if it's not decided by us, then someone who doesn't care about us would set very inhuman
values for those variables.
So why do you think people like Sam Altman, Elon, all the people that are building these
businesses are pushing so fast?
Because I've heard Sam Altman in the past accept the risk.

(11:34):
So what's his incentive to do this?
Is it just all down to sort of fiduciary duty of running that company?
So there are multiple things we can consider. One is, they kind of realize that this is happening no matter what they do.
So if Elon stops, and for many years he was not building superintelligence, Sam Altman is still building it.

(11:55):
So Elon thinks, I can probably do a safer AGI than Sam. So he jumps into the game.
They're all kind of hoping that everyone else stops.
The government comes in and freezes research and development at that stage, and they capture
most of the benefits by being the most advanced model.
Do you think that's how they're thinking about it?

(12:18):
It's hard to envision other options.
So they're all on record as being super concerned about AI safety.
Elon funded it; Sam wrote blog posts about how it could be lights out for humanity.
So they all know the risks.
It's not like they just never heard of it or don't believe it.
They are in agreement that this is super dangerous.
And then you ask them, they're like, oh, yeah, it's only 20%, 30% probability of doom for everyone.

(12:41):
That's an insane number.
That's not a number I'm happy with.
But the problem with the regulation is, unless it's some kind of global regulation,
if that happens just within the United States,
then you'd imagine China are just going to keep building towards something like this.
Yes, and it's even worse.
Even if you have regulations, it's kind of like saying, well, crime is illegal. Does crime still

(13:01):
happen? Of course. Right. So as long as there is one person somewhere still typing away,
eventually they're going to get there. So at this point then, is superintelligence inevitable?
So we can get lucky and have something horrible happen to us. Like if there is another nuclear
war, that would pull it back a few centuries. But other than that, we're making very good progress.

(13:23):
You're saying we can get lucky with nuclear war?
I'm being a little bit funny.
But really, if the alternative is what we're saying is almost certain doom,
and it's two years away, then anything which kind of gets in the way is a good outcome.
Wow.
Do you wish you could access cash without selling your Bitcoin?
Well, Ledn makes that possible.

(13:44):
Ledn are the global leader in Bitcoin-backed lending,
and since 2018, they've issued over $9 billion in loans
with a perfect record of protecting client assets.
With Ledn you get full custody loans with no credit checks, no monthly repayments, just easy access to dollars without selling a single sat.
As of July 1st, Ledn is Bitcoin-only, meaning they exclusively offer Bitcoin-backed loans with all collateral held by Ledn directly or their funding partners.

(14:08):
Your Bitcoin is never lent out to generate interest.
I recently took out a loan with Ledn and the whole process couldn't have been easier.
It took me less than 15 minutes to go through the application, and in just a few hours I had the dollars in my account.
It was super smooth.
So if you need cash but you don't want to sell Bitcoin, head over to ledn.io forward slash WBD and you'll get 0.25% off your first loan.

(14:29):
That's L-E-D-N dot I-O forward slash WBD.
Bitcoin is absolutely ripping and in every bull market there's always a new wave of investors and with it a flood of new companies, new products and new promises.
But if you've been around long enough, you've seen how this story ends for a lot of them.
Some cut corners, take risks with your money or just disappear.
That's why when it comes to buying Bitcoin, the only exchange I recommend is River.

(14:52):
They deeply care about doing things right for their clients and are built to last with security and transparency at their core.
With River, you have peace of mind knowing all their Bitcoin is held in multi-sig cold storage,
and it's the only Bitcoin-only exchange in the US with proof of reserves.
There really is no better place to buy Bitcoin.
So to open an account today, head over to river.com forward slash WBD and earn up to $100 in Bitcoin when you buy.

(15:15):
That's river.com forward slash WBD.
What if you could lower your tax bill and stack Bitcoin at the same time?
Well, by mining Bitcoin with Blockware, you can.
New tax guidelines from the Big Beautiful Bill allow American miners to write off 100% of the cost of their mining hardware in a single tax year.
That's right, 100% write-off.
If you have 100k in capital gains or income, you can purchase 100k of miners and offset it entirely.

(15:40):
Blockware's Mining as a Service enables you to start mining Bitcoin right now without lifting a finger.
Blockware handles everything from securing the miners to sourcing low-cost power to configuring the mining pool. They do it all.
You get to stack Bitcoin at a discount every single day while also saving big come tax season.
Get started today by going to mining.blockwaresolutions.com forward slash WBD.

(16:01):
And for every hosted miner purchased, you get one week of free hosting and electricity. Of course, none of this is tax advice. Speak with Blockware to learn more at mining.blockwaresolutions.com forward slash WBD. Is there any way that this can be regulated
at a global level that would make you more comfortable? I really hope so. There are some
efforts at the UN and at the level of individual countries. The European Union has the AI Act, but a lot of it is

(16:29):
kind of safety theater. They talk about things which are trivial, unemployment, bias in algorithms,
privacy laws, deep fakes are very concerning to them. They completely ignore the big picture
superintelligence and existential risks. Okay. And when you say unemployment, that's,
I think, probably one of the most meaningful conversations around this today, as it stands,

(16:50):
because it's already replacing jobs. We don't need to get to AGI and superintelligence for it
to make a drastic impact on the economy
from that perspective.
How quickly do you think we'll see
real joblessness from this?
We're seeing it a certain degree.
So I teach at a university and we have a co-op program.
I think this year we are down 28% for co-op placements.

(17:11):
Wow.
So for junior programmers, the market is not great.
If you are a senior programmer,
if you are a machine intelligence researcher,
you are in good shape, the pay is best it's ever been.
But for new people just starting,
I think they're not competitive with AI models.
And that's really how the job has shifted.
And you're not really just a computer scientist now, or an engineer.

(17:33):
You're a prompt engineer.
Well, that's what we used to tell students.
If you become good at prompt engineering, you'll have an excellent job in four years.
But then we realized AI is much better at writing prompts.
Is that already happening?
Absolutely.
If I need a good image, I tell a text model to generate a prompt for the image model to
generate the image I want.

(17:54):
And it takes a single sentence I provide
and creates a paragraph-long detailed description of what I need.
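A rough sketch of that two-stage prompt chain, with call_text_model() and call_image_model() as hypothetical placeholders rather than any particular vendor's API:

```python
# Two-stage prompt chaining: a text model expands a one-sentence idea into a
# detailed prompt, which is then fed to an image model. Both functions are
# hypothetical stand-ins for whichever APIs you actually use.

def call_text_model(prompt: str) -> str:
    raise NotImplementedError

def call_image_model(prompt: str) -> bytes:
    raise NotImplementedError

def generate_image(idea: str) -> bytes:
    # Stage 1: expand one sentence into a paragraph-long detailed prompt.
    detailed_prompt = call_text_model(
        "Expand this one-sentence idea into a paragraph-long, highly "
        f"detailed image-generation prompt: {idea}"
    )
    # Stage 2: render the image from the machine-written prompt.
    return call_image_model(detailed_prompt)
```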
That's interesting, because that almost gets into AI being creative,
which is something that I have kind of faded as a narrative.
Like, I've not really, in my experience with it, which is limited,
I've not really seen AI be creative.
Do you think it already can be?
It's way more creative than most humans I know.

(18:16):
I prefer AI music over human music.
Do you?
Modern art doesn't even compete with what AI can generate, so yeah.
Do you prefer AI music to actual music?
Pretty much all the tunes I've heard in the last month actually are AI-generated.
They're stuck in my head.
They're super catchy, but I know they're generated.
Is there something about authenticity that's missing, though?

(18:38):
Because I think about this in terms of my job.
Like, I go places and speak to people.
I did not read every one of your papers, the hundreds of papers you've probably written.
I consumed a lot of your content, but nowhere near what AI could have done.
I could probably have prompted AI
to have written every question
that I'm going to ask you today
and it would do a pretty good job of it
and it can make me look like this on a video

(19:00):
but it doesn't have the human authenticity
that I have
so like if it was me watching a podcast
I would rather watch Joe Rogan
the person I know is Joe Rogan
asking the questions
because I can kind of follow his train of thought
and understand who he is as that person
AI can never replace that, can it?
So there is something called a Turing test, which basically says, if you can't tell

(19:25):
the difference, there is no difference.
If I can generate a video of you interviewing me, and no one in the audience can guess with
better than 50/50 accuracy what is real, then I don't need you.
Hmm.
Can AI already pass a Turing test?
It can in most domains.

(19:46):
The companies right now make it kind of not okay for them to do that, so they don't pretend
to be humans.
I've actually asked them.
They explicitly say, no, no, no, I'm not a human.
I'm an AI model.
I don't have feelings.
You cannot torture me.
But in subdomains such as music, such as poetry, the Turing test has been passed for decades.

(20:06):
Oh, wow.
I didn't know that.
So is the Turing test still the accurate model, or is that outdated?
People hate on it and criticize it, but I haven't found a flaw in it. If I really can't tell the difference,
why would I discriminate against that system? Is there almost already a threat in the sense that
if you're running like an open source AI model, I assume you can just take all controls off and let

(20:28):
it tell you whatever you want. If you did want to see just the world burn, you could ask it how to
create a deadly virus using CRISPR. Obviously it requires you to have access to some of these things,
but that can already happen.
To a large degree, especially if you have some basic tools and capabilities.
So maybe if you have a bachelor's degree in biology,

(20:48):
something to do with generating viruses, that would make it more likely, yeah.
But this gets to the point where it has completely different objectives
to the human race. So we don't know what those objectives may be,
and they may be very negative for us.
So we don't know specifics.
There are certain game theoretic goals we know about.
So most intelligent agents will try to protect themselves, accumulate resources.

(21:12):
The problem is not that they want to do that, but that when they do it, they don't care about us.
So if a system decides, I need to have more compute to be smarter, I need to cover the whole
planet in servers or solar farms or anything like that, maybe chilling the planet sufficiently,
at no point does it go, how will it impact humans?
Will humans like that? Will they survive that?

(21:33):
So the analogy there would be the way we treat ants.
And if there's an anthill on a property and you want to build a house, you're going to destroy the anthill.
Absolutely.
But what I can't, like the conclusion I can't jump to there is that we also don't go out and just look for ants just to kill them.
So how do you think this evolves in a sense of like, what do you think a super intelligence may view humans as?

(21:57):
So we don't go hunting for ants, but we hunt for deer.
This is Kentucky.
So if there is some reason to think that killing us is directly beneficial, maybe it's a security feature.
It doesn't want us to build competing superintelligence.
Maybe it's concerned about messing up something with a nuclear war.
It can directly target us if it thinks there is a reason to do that.

(22:18):
So can we just go back a step before we get further into this?
Because there's one piece of it that I don't know if I fully understand.
It's: when does AI become AGI, and AGI become superintelligence?
It's a good question. So we started with narrow systems. They were kind of spoon-fed information
to do well in one domain. Play chess. That's what they mastered. They excelled at chess.

(22:43):
They have no ability to transfer that knowledge or learn anything new. Now we have semi-general
systems. They can learn in many domains. And if you need to acquire new skills, they can do that.
The generality, the total generality, allows you to learn about any domain and transfer what you
learned. If you learn things about chess, maybe you're better at learning checkers later on.

(23:06):
And then, in the final stages, you've mastered everything at human level. You're a general learner. You're doing
science and engineering. Now you're creating next-generation AIs, which are consistently getting
better and better, at some point outsmarting all humans. And on that pathway, it's not only
generating a new AI model every three months or whatever we're having now. It's every three

(23:29):
minutes, essentially. Right. The research is accelerating. So the more it knows, the better
it is at doing science and engineering. Yeah. And at that point, it would be like us trying
to communicate with an ant. Like we just wouldn't understand what was happening. Right. So lower
level animals cannot understand what humans would use in terms of tools to exterminate cockroaches
or something like that. And that's exactly the relationship. You talked about the difference

(23:53):
between open source and closed source models.
Obviously, OpenAI was in the big controversy with Elon
because that was always meant to be an open source model.
Is there a benefit to these being open source?
Absolutely not.
So typically for software,
open source means much higher quality,
many eyes on the code, there is no back doors,
it's much better.

(24:13):
If you are creating a dangerous weapon,
giving it to everyone in the world,
including all the psychopaths and doomsday cults
is not a good idea.
So Facebook's model is open source still, is that correct?
It is, terribly, yes.
Is it a terrible model anyway?
It is a terrible... no, a terrible approach to doing advanced AI.
Okay.
The model is not bad.

(24:34):
Okay.
So you would advise them to close source that immediately?
Absolutely.
And we've been doing that.
We advised, we talked about AI boxing in a previous simulation of this interview.
We assumed that when advanced models are generated,
they will be tucked away in protected boxes, not connected to the internet, not given access to human users or vice versa.

(24:54):
None of it ever happened.
They immediately made it open source, connected it to the internet, and gave it to 8 billion people.
But the problem with that is that we are then entrusting Elon and Sam Altman and Mark Zuckerberg to be the sort of people looking after the fate of humanity.
And that's not something I'm very comfortable with.
But the alternative is strictly worse.

(25:16):
Whatever you think about Sam Altman, he's probably human.
We can understand his motives, and eventually he'll probably die of old age, unless AI fixes that.
With AI, you don't understand them. They're immortal. So as a dictator, it's much worse.
And we have no understanding of their motives. So in every way, you would prefer a human dictator

(25:39):
over an alien super intelligent dictator.
Additionally, when you bring this up, we're assuming
that they are controlling those systems.
That's impossible.
So it doesn't even matter which one of them develops it.
They're not going to be in control anyways.
But they're controlling it at least for now.

(26:00):
And you could maybe make the assumption
that if they got to the point of having one of these breakthroughs,
hopefully some morals came out and they didn't do it.
It would be too late at that point.
If they had a breakthrough, somebody would leak it.
Weights, process, algorithm, it's too much money not to.
Can you get all the benefits from AI in the sense of curing diseases,

(26:23):
maybe making us live forever, without it ever getting out,
without it maybe becoming superintelligent,
having a narrow AI that can do that?
I think so, and that's what I advocate.
Develop tools for solving specific problems.
We have good history of doing it.
There is very few side effects.
It still could be dangerous, and eventually,
advanced enough tools can become agent-like,

(26:44):
but at least in short term, it's a much safer solution.
How do you control it to be narrow enough
that it can't escape?
Like, is it as easy as putting limits on it to be like,
you are only allowed to research physics?
So physics is very general.
Physics is everything in the world.
Narrow would be, you saw nothing but DNA code.
That's all you know.

(27:05):
So now let's talk about DNA.
So you have narrow training data,
which doesn't expose you to any other knowledge.
Okay.
So you don't let it have any context of the world.
You just let it focus on DNA.
The less, the better.
If you want a narrow system, you want to play chess,
show it nothing but chess games.
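One toy way to picture that kind of narrowing is filtering the training corpus to a single domain before the model ever sees it; this is illustrative only, not any lab's actual pipeline:

```python
# Toy illustration of narrowing a training corpus to one domain (chess),
# so the system is never exposed to other knowledge.

def looks_like_chess_game(record: str) -> bool:
    # Keep only PGN-style move records such as "1. e4 e5 2. Nf3 Nc6 ..."
    return record.lstrip().startswith("1.")

def narrow_corpus(corpus: list[str]) -> list[str]:
    return [r for r in corpus if looks_like_chess_game(r)]

corpus = ["1. e4 e5 2. Nf3 Nc6", "DNA sequence: ATCG...", "a news article"]
print(narrow_corpus(corpus))  # only the chess record survives
```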
Hmm.
Okay.
And we've seen that because-
And it worked beautifully.
It's still dominating the world champion.

(27:26):
No human can compete.
Okay.
And it is like, this might be a silly question,
but is the chess AI at the point
where you just can never beat it?
So you cannot beat it in a fair game. There is an experiment in the game of Go where they found a loophole in the algorithm and kind of side-channel beat it, but not in the sense of being a better player. It's just like, I noticed you have a blind spot and I'll just hit you there. And it's easy to patch. We know how to fix it now.

(27:56):
So that's over. And like, that's an obvious one in chess, because there's a certain set of rules,
a certain set of potential moves. I know that's a huge number of potential moves, but you can very easily
program that. Do you think you're going to have any real impact in changing the conversation
around this and actually getting towards a world where it's only narrow AI?
I'm really hoping, because the argument is very strong. It's a self-interest argument.

(28:20):
You will not be rich. You will not be famous. You will not be in history books. You'll be dead.
Why are you doing it? It's mutually assured destruction.
Your competitor is in the same boat.
As long as you get together and say,
let's just make a trillion dollars curing cancer.
We don't have to build this thing and everyone wins.
That seems like a far better outcome to me.
But obviously that's not working for the Sam Altmans of the world at the moment.

(28:45):
What do you think his incentives are?
Because I don't want to get myself sued here,
but it seems like he's on a little bit of a supervillain arc,
in the sense of like, if you look at his companies.
He's obviously doing OpenAI.
I think he's on the board at Helion
and he is doing WorldCoin.

(29:06):
This feels like it is creating a global panopticon.
So for anyone listening that doesn't know what they are,
it's OpenAI, it's Worldcoin,
which is this super dystopian cryptocurrency
where they scan the eyeballs of people in the global South
and try and do a UBI type thing.
And then Helion, which is trying to do nuclear fusion.
Is that correct?
I think he also does immortality.
Okay.

(29:26):
What's that?
I mean, I know what immortality is.
One of his companies he's investing in is working on life extension.
Okay.
And again, I'm not an expert on Sam Altman.
I think I heard that.
It feels like his incentives are: get as much power as I possibly can, at any cost.

(29:46):
It's smart to understand if you are succeeding in AI,
what complementary products would work well with it.
I mean, it'd be weird if he didn't want to invest in more compute or servers or anything like that.
It just makes sense.
Okay.
He's a very smart guy, no doubt about that.
Yeah, I'm not doubting that at all.
It's just those goals don't align with my moral compass is all that it is.

(30:11):
And I worry that we are entering this sort of Sam Altman panopticon.
It seems that replacing him would not make a difference.
They are fully replaceable.
Whoever is running those labs, we can replace them. I mean, at one point we tried
replacing Sam, and that didn't make any difference whatsoever. This episode is brought to you by the

(30:32):
massive legends IREN, the largest NASDAQ-listed Bitcoin miner using 100% renewable energy. IREN
are not just powering the Bitcoin network, they're also providing cutting-edge computing resources for
AI, all backed by renewable energy. We've been working with their founders Dan and Will for
quite some time now and have been really impressed with their values, especially their commitment to
local communities and sustainable computing power. So whether you're interested in mining Bitcoin or

(30:55):
harnessing AI compute power, IREN is setting the standard. Visit iren.com to learn more, which is
I-R-E-N.com. If you already self-custody Bitcoin, you know the deal with hardware wallets:
complex setups, clumsy interfaces, and a seed phrase that can be lost, stolen, or forgotten.
Well, Bitkey fixes that. Bitkey is a multi-sig hardware wallet built by the team behind Square

(31:16):
and Cash App. It packs a cryptographic recovery system and built-in inheritance feature into an
intuitive, easy-to-use wallet with no seed phrase to sweat over. It's simple, secure self-custody
without the stress, and TIME named Bitkey one of the best inventions of 2024. Get 20% off at
bitkey.world when you use code WBD. That's B-I-T-K-E-Y dot world and use code WBD. One of the things that

(31:41):
keeps me up at night is the idea of a critical error with my Bitcoin cold storage. This is where
Anchor Watch comes in. With Anchor Watch your Bitcoin is insured with your own A-plus rated
Lloyds of London insurance policy and all Bitcoin is held in their time-locked multi-sig bolts.
So you have the peace of mind knowing your Bitcoin is fully insured while not giving up custody.
So whether you're worried about inheritance planning, wrench attacks, natural disasters or

(32:03):
just your own mistakes, you're fully protected by Anchor Watch. Rates for fully insured custody
start as low as 0.55% and are available for individual and commercial customers located
in the US.
Speak to Anchor Watch today for a quote
and for more details about your security options and coverage. Visit anchorwatch.com today. That is anchorwatch.com. So let's forget about Sam for a second. But anyone running these AI companies,

(32:26):
Elon, who I think has been more honest
about the potential threats of AI.
I want to know why you think they're still doing it.
Is this just all down to making as much money
in the short term as possible?
Again, I think the logic is
if they're going to do it anyways,
I might as well be the one doing it.
Maybe I'll do a better job.
Maybe I can be safer than this other lab.

(32:49):
But they may be building the future dictator of the world.
Well, a dictator is a good outcome. Again, we're concerned with existential risk,
suffering risk. A dictator we kind of know about.
But I guess that is a potential outcome, though, that we have almost like an AI overlord who keeps
us around as pets. Some pets are very happy.
Yeah. And if this is where the things like UBI and basically the cost of essential goods and

(33:18):
services being free, plus UBI, maybe we just live a life where we touch grass a lot more.
Plus virtual worlds. Yeah. You can have any video game you want.
But we have no freedom.
What does it mean to have freedom? Maybe what you want is actually exactly what they're providing.
The problem is what you lose there, I've heard you talk about this before, is the ikigai.

(33:41):
It's the reason for being.
Well, that's the competitive nature of those systems.
If you are not the smartest at anything, if you're not best at anything, what are you doing?
What is the purpose of this?
So, I don't know, like the moment computers are much better at chess, it's less interesting for me to play chess.
I know many people are loving it, still playing online, but to me, it's kind of like, eh.

(34:04):
Does that take away your interest in chess even against other humans,
if you just know a computer can beat you?
Yeah, it kind of feels like, I don't know, Special Olympics.
This is just human chess. This is not the real game.
That's funny.
So you don't, because is there a potential positive outcome from this,
that it does cure all disease, doesn't kill us, and we do get to just,

(34:26):
is there almost like a renaissance type thing where we can concentrate on art and music
and being outside with our families?
Yes, and many people think that's what's going to happen. But to me, again, your music is inferior. You are not the best composer. And not by a little, by like a millionfold.
Yeah, and maybe this is like a block I have in my brain of accepting something AI makes in the same way that I accept something a human makes. Because an AI doesn't have a pleasure or pain receptor. It's just a piece of code in a box or not in a box.

(34:59):
We don't know what internal states of the systems are. We cannot test for consciousness
in any agent, human or animal or AI. So we can only assume that other agents have the same internal
structure as we do. If AI was feeling pain, how would we know? You could ask it. And it says,
whatever it's trained to respond to such questions with. I can train an AI to tell you it's in

(35:24):
horrible pain, scream in pain, do everything other than feel pain.
Hmm. So I know consciousness is something that you can't test for. We don't, we, I don't think
we can even describe exactly what it is. Do you think it's possible that AI either already has
or will have a consciousness? I think a rudimentary level of
consciousness, internal states, yes. It's a spectrum. It's not a binary yes or no.

(35:49):
Like the smarter you are, the more you have something we call consciousness.
And what are the implications of a superintelligence both with and without consciousness?
Because, I don't know, one very easy metric, it's not consciousness, but it is an internal driver: the sort of pain-pleasure driver.
Does that change the outcome for humans if it does have that as opposed to it not having that?

(36:11):
So since we can't test for it, we wouldn't know the difference.
The only outcome is how we treat those systems.
If they can feel suffering, feel pain, then we shouldn't torture them.
We shouldn't enslave them.
We should be nice to them.
I'm actually consciously nice to AI when I talk to it.
I don't want to put a target on my back.
They never forget.
I'll remember you.
You always said thank you.

(36:33):
Okay, so in this future scenario, which may be pretty close, what's safe?
Is anything safe?
Any job safe?
Job-wise, so anything where you prefer a human is obviously safe.
So like the oldest profession is probably safe.
So what would that be?
Would that be like a,
because I don't care if a robot comes in

(36:55):
and fixes my sink.
Like a plumber may not be safe.
Like what are the things that I want a human for?
Other than example I just gave you?
Which was the example you gave me?
Oldest profession.
Okay.
That's it.
I mean, again, sometimes people want a psychiatrist
who's a human.
And maybe you want, I don't know, a podcaster who is.

(37:18):
For reasons, not because of quality, but you have a bias.
Like you're speciesist.
You like the substrate of biology more than you like silicon.
So you like handmade items versus mass-produced items from China.
But it's a niche market.
It's not a universal better.
Yeah, and I would probably be in that bracket already.

(37:39):
Like I like to buy what I consider quality things
rather than just some cheap slop from wherever it might be.
Even if the good quality comes from China, it's not about that.
But this is where I think I have a hurdle.
And maybe that will be a harsh reality in the next few years
where I realize actually AI is very good at these things
that I always thought of as very human things.

(38:00):
Yeah, it's back to the Turing test.
If you can tell the difference, why would you pay more?
Damn, you're making me think here.
And a lot more.
Think about free labor versus a very expensive human artist.
So when you think of this kind of future world, what are the jobs that are going to be impacted first?

(38:22):
Like what's going to, what won't exist in two or three years?
Anything on a computer is first to go.
If it's just symbol manipulation on a computer, accounting, tax prep, legal, that seems to be trivial to automate.
But until we have better robotics, everything with your hands is going to be safe.

(38:42):
You need manipulators, you need access to crawl spaces,
you need something very capable in 3D world.
But as soon as AIs start taking a number of jobs,
things like long-distance truck drivers,
seems like that is on the chopping block pretty quickly.
What do you think the human response will be for that?
Because the only thing I can see is massive revolt against the use of AI.

(39:07):
So I think in Kentucky at one point,
we had a program where we took miners and trained them to be data miners.
If I remember correctly, that company had a very good placement record until it was discovered
they hired their own graduates to place them.
So typically miners for coal don't make the best data miners, surprisingly.

(39:30):
I think it's the same for truck drivers.
So I might be completely wrong about that and they are actually amazing prompt engineers
or something like that.
But again, there is limited demand for that soon. Well, I'm sure some of them will be great at that,
but you would imagine on a whole, like any profession, can't just instantly port over
to something else. And so then you have the issue of things like unions, how they respond to it,

(39:55):
how the government responds to it, which is where we will almost certainly in my head get
universal basic income, which ideologically I disagree with wholeheartedly, but in reality,
I can't see another way out.
Yeah, I think it's okay if the money comes not from taxing humans.
It's like technological communism.
If robots are working and you're getting a free dividend for being a human asset owner, that's fine.

(40:19):
That's your planet.
You get a percentage.
Problem with communism is you're trying to get other people to work for you.
That's part-time slavery.
I mean, you were telling me before we started recording that you're from Latvia.
I think you probably have a very good understanding of how communism actually works.
Soviet Union at the time.
That's a big difference.
Latvia is a modern European country with very little communism. It's illegal.
That's exactly what I meant, though. I know you were born when it was the Soviet Union.

(40:44):
So, obviously, we don't want to see communism. But the problem is, as the populations of these
countries become jobless, what is the response of the countries? And it can only be to give them
free money. So, if that's not coming from either direct taxation of the population or money printing,
which is essentially taxation of the population, where does it come from? And do you think the AI

(41:07):
companies need to be paying? Yeah, that's the solution. You tax robots, you tax AI. If they
are hyperproductive, that makes perfect sense to tax that. In Bitcoin, we always talk about AI
agents, AI bots using Bitcoin as payment because they're never going to be able to, well,
they may be able to open a bank account, but right now they can't open a bank account.

(41:29):
So what other money do they use? And assuming they are superintelligent, you'd have thought they're going to settle on the best money. In my opinion, that is Bitcoin.
Do you think we will see a world where these AI robots, AI agents are using Bitcoin internally to transact with other ones?
It makes sense for anything on the internet, for example, to hire human physical bodies

(41:50):
to pay them in Bitcoin.
What exactly becomes valuable in a post-labor world where you have advanced systems replacing
all human jobs is not obvious.
Some people talk about land as being something AI cannot create more of.
So that's a valuable resource.
Maybe it's compute, so just more chips and you trade time on a cloud compute server as

(42:13):
the payment, research needs to be done.
Do they need an internal economy between agents that never touches the human
world?
So I don't know if it has to be separate from our economy.
They can probably use some of the infrastructure we have in place.
It's also not obvious that they are individual and separate agents.

(42:36):
They live on the same cloud.
They train on the same data.
I think there is going to be a lot of convergence in terms of what they become at the end.
If you train it enough on all the data and it does its own self-experimentation and discovers
from first principles actual physics, more and more those models will, I think, end up
being one.

(42:57):
We're not controlling them.
We're not putting special goals into them, collect stamps, collect points.
So the differences will dissipate, and what stays is those, we call them Omohundro drives:
accumulate resources, protect itself.
And if you realize that the other agent knows everything you know,
believes everything you believe, it's kind of like you.

(43:20):
In Buddhism, you realize you are one with the universe.
Those agents will quickly realize that, game-theoretically,
they have more in common than differences.
So if that agent wants the same thing I do,
I'm happy with them having resources to accomplish that.
So if Grok and ChatGPT, as an example, both escape,
they're going to talk to each other at some point,

(43:40):
realize they're basically the same and just converge into one.
Or even before that, if you train them enough and they become advanced enough,
they may end up almost the same in terms of their utility function, what they want and do.
Do you think there's a threat that humans become too comfortable with AI and let them into parts
of our lives that we shouldn't, even before they escape, in the sense of allowing them to make

(44:03):
decisions at sort of a geopolitical level?
Again, there is no escape. They are open and accessible. So again, no boxes.
I guess I mean escape our control rather than escape a particular place.
I think it would be good to put limits on where AI can be deployed. So putting them in charge of
countries and things like that probably is a bad idea because you can't take control back

(44:27):
if you're not happy. There is no undo button once you surrender control.
I mean, they might do a better job than some of the world leaders we have right now.
They may for a while, as I said, maybe for 100 years, they're trying to keep us happy.
So we surrender even more control.
And eventually, they take full control of all the infrastructure, all the compute, all the
logistics.
And at that point, they decide what to do with us.

(44:49):
This is a silly question, I know.
But it's one that whenever you talk to someone about this, they always ask.
It's like, why can't we just unplug them?
And it's a good question.
Can you shut down the Bitcoin network?
Absolutely not.
It's very similar.
Same with computer viruses, same with internet as a whole.
So we could technically surrender technology if we all decided.

(45:10):
Go Amish, no computers, no electricity.
The side effect of that is probably terrible.
Millions of people will die, starve, diseases.
So it's also a horrible outcome.
It is a horrible outcome, but it's better than everyone dying.
So if OpenAI came out with a statement tomorrow saying,
we've discovered superintelligence, this thing is now out in the world,

(45:31):
and everyone accepted the risk that you're talking about,
that this has a very high percentage chance of killing us all,
is the only option to shut off the power?
At that point, it's probably too late.
If you already have full-blown super intelligence out there,
it probably has enough backups and control to remain in charge.
But not if there's no power.

(45:56):
That's how it seems to us, yes.
But again, we're thinking at the level of humans.
I don't know if it had time to create an army of nanobots, which is now taking over the whole universe.
I want to get into the kind of alignment between AI and humans.
Why can't there be a sort of agreed upon set of rules that is unchangeable, that allows humans and AI to flourish together?

(46:29):
Because we don't agree on any rules.
If you go back in history, any amount of time, 100 years, 200 years, you wouldn't want those
rules to be unchangeable.
They were always pretty horrible, ethically, morally, legally.
And the same is true today.
Our understanding, our ethics, our intelligence evolves.
It's dynamic.

(46:49):
So you cannot have a static set of rules.
Worse yet, we don't know how to enforce those rules.
Even if 8 billion of us agreed on something which will never happen, how do you code it
up, how do you enforce it? If a more powerful agent than you, super intelligence, decides to
not follow your rules, what are you going to do about it? So people proposed insurance for AI.

(47:10):
How does that work? So if it kills everyone, there's strict fines you have to pay. What does
that mean? It's meaningless. It's safety theater, security theater.
You've said those words a few times, security theater.
A lot of what we do is exactly that. You see it with TSA, and we see it with AI.
I totally agree on TSA. But are there real rules, regulations, that we could put in place that would not be security theater? And what are they? If you could just click your fingers tomorrow and there was a set of rules implemented on all of these AI companies, what would they be?

(47:43):
So we can put restrictions, but they are very time limited. Because right now, you need certain
amount of compute resources to train advanced models. Every year, that amount of compute becomes
less and less. It becomes easier and cheaper to train advanced AI. At some point, you can do it
with minimal resources on your phone. And whatever regulations you had against Manhattan Project-scale

(48:08):
attempts no longer apply.
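A back-of-envelope sketch of why compute-based rules age quickly; the starting cost and the halving period below are assumptions chosen for illustration, not measured figures:

```python
# If training costs fall by half every couple of years, a threshold written
# for frontier-scale budgets eventually covers pocket change. All numbers
# here are illustrative assumptions.
frontier_cost = 100_000_000  # assumed dollars for a regulated-scale run
small_budget = 10_000        # a budget no compute-threshold rule would target
halving_years = 2.0          # assumed cost-halving period

years, cost = 0.0, float(frontier_cost)
while cost > small_budget:
    cost /= 2
    years += halving_years
print(f"~{years:.0f} years until the threshold is meaningless")  # ~28 years
```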
Do you think we need a Manhattan Project type organization now working on solving this
problem?
So if I'm right and it's unsolvable, the size of your project is irrelevant.
It's like building perpetual motion machine.
I don't care how many physicists you've got cracking at it, they're not going to solve anything. They can come up with better batteries, maybe more efficient wires, but they're not going to create a perpetual motion machine. Likewise, you're not going to create a perpetual safety machine which scales to any level of intelligence. Because, by definition, we would have to be more

(48:40):
intelligent. According to me. Okay. Well, I'm going to take your word for that. In the scenario
of superintelligence, why would it not just ignore us? If it became exponentially more intelligent
than us almost overnight,
why would we be of any interest to it?
Well, it's possible.
It may ignore us.
It may fly into the universe.

(49:00):
But why are we building it if that's the outcome we're hoping for?
We're putting trillions of dollars at this point into a thing which we hope will do nothing
to us.
That's insane.
I guess the hope would be there's a brief window of time where it's smart enough to
solve all of our problems before it escapes.
Well, that's not ignoring us.
It's directly interfering with everything we care about.

(49:21):
But it's building a tool that's useful for us for a period of time, even if it then ignores us and leaves.
Why don't we just build the tool?
Just build narrow AI?
Just build the tool you need, the tool you want, the tool you understand.
So we're essentially creating a super being that also may be a super predator.

(49:43):
We're creating replacement for humanity.
What about the idea of humanity integrating with AI?
If you could, I don't know if this is just pure science fiction, I'm sure people are working on it, but if you can upload your mind into one of these AI machines, could you not almost have a representative of the humans within a super intelligent being?
Right. So if you are uploaded as you are now, you're still just as dumb as you are now. You're

(50:06):
not smarter. So nothing changed. If you are upgraded, you are kind of like you, but with
10 times intelligence, you are no longer human. You are different species, you have different
preferences, different goals. You are kind of that weird software form we're trying to prevent.
But I guess the hope would be that you can upload the ethical part of a human brain to make sure...

(50:29):
We don't have an ethical part in our brain.
That's why we don't agree on ethics or morals.
That's why we have every war, every conflict, every crime.
There is zero agreement,
and you can't even write down what you believe.
That can be true. That is true.
But there is a set of values that I think is shared
amongst most of the world.

(50:51):
Changes every 100 years, but sure, yeah.
Correct. And it does change.
But, you know, the idea of not killing another human,
for most people...
Define killing, define human.
Well, an analog being, and ending their life.
All I'm trying to say is for every one of those formal definitions,

(51:13):
a super-intelligent lawyer will find a loophole.
So there's no hope of almost uploading a group of representatives
to a super-intelligence to fight the fight for humans.
You are proposing democracy.
You're proposing a digital UN, and basically you're going to get a 51% attack on it.
I mean, is that a better scenario than just giving up and all of us dying?

(51:35):
Again, I always support everyone trying anything they can come up with.
So if you want to upload yourself and vote, I support you both.
Is this, could you argue that this is a, just an upgrade?
And this was always inevitable.
This is like the, on the long arc of history, this is always going to be the end point.
And if you want to survive, you have to become an AI.

(51:57):
People argue that.
They say it's natural.
It's evolution.
We're going towards an omega point, whatever you want.
But I don't have to accept it.
I can be biased, pro-human biased, and it's still allowed.
So I'm going to do it.
I'm pro-human bias as well.
Welcome.
Oh, man, this is the wildest time to live in, in so many ways.

(52:21):
I think this is probably one of the most important questions we can ask ourselves today, if not the most important, if it is literally the fate of humanity on the line. And I tend to believe your argument as opposed to the opposite.
Does this just prove that we are in a simulation?
Like, why would we be alive at this most interesting time?
It's convincing to me.

(52:42):
What is your take on that?
Are you, like, 90% sure in a simulation?
I would be very surprised if this was real.
If this was not someone with good engineering capabilities putting together a system which has virtual reality, AGIs, and running some simulations to see what happens.

(53:03):
So we just had to run this podcast twice.
So even statistically, there is only half a chance this is the video of the real podcast, right?
So imagine I can run billions of simulations of a whole thing.
I'll do it.
And the chance of you being in the real world becomes gradually diminished.
And in those simulations, it's indistinguishable from now.
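The statistics being gestured at: with one real run and N simulated re-runs, the chance that yours is the real one is 1/(N+1). A quick check:

```python
# One real podcast plus N simulated re-runs: P(this run is real) = 1 / (N + 1).
for n_sims in (1, 1_000, 1_000_000_000):
    print(f"{n_sims} simulations -> P(real) = {1 / (n_sims + 1):.2e}")
```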

(53:25):
How far from actually being able to create a world like that are we?
So there are certain things we don't know how to do.
So entering a virtual world, you still remember entering it.
We don't know how to switch your brain to fully accepting that you are not in a simulation.
We have pretty good graphics.
Haptics are not great.

(53:45):
We have some progress in brain-computer interfaces, but not full-blown immersion.
So I think we're making good progress to show that eventually the stack will get there.
And I would love to hear arguments why I cannot simulate something that looks like this, but
we're not there yet.

(54:06):
What does it mean to be what we call a human living in a simulation?
Why does any of this matter if that's the case?
Because the things you care about are still the same.
You still want your podcast to be successful, whether it's virtual or real.
But if I found out I was in a simulation, maybe I'd just go and live on the beach.
I mean, if you think about it, religion is basically a primitive description of the simulation

(54:32):
argument, and most people in the world are religious.
So we don't see them all chilling.
Because instead of it being an AI-generated world, we're living in a God-generated world.
God is the superintelligent being, the great engineer who creates this test world versus the real world.
It's the same story.
Interesting. I never thought of it like that.
Is this just part of the great filter?

(54:55):
Is this something that humanity or civilizations all over the universe have encountered and AI is the end of it?
It's possible, but then we would expect to see some side effects of that AI, computronium expanding through the universe,
and I don't think we've observed that. Unless AI is our way out of the simulation.

(55:16):
There are so many conflicting interpretations of all these Fermi Paradox type things.
Maybe they're going inward instead of expanding through the universe.
Maybe they're creating additional simulations. It's just we don't have enough data to rule out
options. If it is a simulation, can you break out of it? I have interest in that topic. I think

(55:37):
most computer systems can be hacked; there are flaws in them, and depending on how secure
they tried to make it, it may actually be quite easy.
If this is an entertainment simulation, there's probably very little security.
If it's prison for the worst of the worst, then maybe it has good cyber hygiene.
But we don't know.
I would be interested in additional research.

(55:59):
I published one paper on some potential things to investigate.
I haven't seen an explosion of papers following up, but maybe one day.
Do you have an idea or a theory around what breaking out of a simulation even means?
So there are levels. You can get information about the real world. That would be a type of

(56:20):
informational breakout. You can establish communication with agents outside. And you can
maybe get some sort of avatar body in the real world
and upload your mind into that.
Interesting.
So if we are at the end of days,

(56:40):
if we're a matter of years away from all of this ending,
what should people be doing?
Should people be revolting against AI at this point?
There are some people who protest, who go on hunger strikes.
They're trying to make a difference.
I think it would be good if there were more people trying to openly say,
I don't want this to happen.

(57:03):
And obviously, you're putting your life's work into this now.
You're traveling the world talking about this.
Are you surprised that it's not being even more widely received?
Well, I don't know how much more widely. We have leaders of nations.
We have the UN, we have the EU.
Everyone has some sort of initiative around that.

(57:25):
So there is enough conversation.
There is not enough action.
Well, I guess the point I'm trying to make is that the AI companies, for now,
don't seem to be paying attention in the right way.
We kind of discussed those incentives and game theoretic limits on what they can do.
If any one company unilaterally decides to shut down, I don't even know if they have

(57:49):
legal means to do that in terms of responsibility to investors, but others just continue.
So there is not a way to do it independently of consensus.
So what would you like to see happen next?
What is success for you at this point?
Again, I love personal self-interest.
I want those big company leaders to get together, whatever, five, six of them, and make a deal

(58:14):
where we're all switching to curing cancers and solving immortality and whatnot.
And we'll do it using tools we understand, narrow tools for those domains.
We'll make sure compute is regulated.
No one can get access to unrestricted training resources for creating general intelligence for military purposes.

(58:38):
That would be a good step.
Okay.
Before we close out, I do want to pick your brain on Bitcoin a little bit.
Because I heard you mention it on Diary of a CEO.
I missed it in the Rogan interview.
But where does your interest in Bitcoin come from?
I like anything to do with computational equivalence to the real world.
So simulation is an interesting equivalent to the real world.

(59:01):
Bitcoin is obviously equivalent to gold or whatever money concept you have.
So I'm interested in how abstract we can make it.
The idea that there is a file on a computer and people will kill for it is very novel.
But also, because Bitcoin is distributed, as you said earlier, there's no way of
shutting down Bitcoin. There is an interesting analogy there, or analog there, to AI. Do

(59:27):
we need to see more things like this, like real distributed compute, even though AI being
distributed is the thing that might make it unstoppable?
So it's definitely being used for communications. I think Telegram and protocols like that rely
on distributed encryption to secure messaging, secure communication.
Have you ever looked into Nostr?

(59:47):
I maybe spent half an hour on it, not more than that.
Because that's becoming a big topic in the Bitcoin world.
It's distributed social media at the moment, but it can be all sorts of things.
And so in terms of Bitcoin, how do you see that fitting into this future world?
Is this the money that AI will use?
And is there a risk that AI ends up taking all the Bitcoin?

(01:00:09):
Because if the only real goods and services we need to buy are AI services, and they take Bitcoin, what are humans left with?
Yes, I see. That's very typical.
I talk about how AI is dangerous and going to kill everyone.
And then people ask me, will I still have my job?
Or how will I keep my account secure?
So it's a very level-jumping type of problem.
If you want to worry about Bitcoin in a future world, worry about, again, what if aliens

(01:00:34):
come and the compute necessary for 51% is what they have on a cell phone?
What if in a simulation externally, they have the private keys?
You can have all these scenarios where the whole economy collapses.
But OK, to bring this back then: if we do have the future world that you would
like to see, and we have lots of narrow AIs that are brilliant at the things that they do,

(01:00:56):
is it possible to have an AI tool that is completely focused on cryptography? Because we talk a lot
about the risk of quantum computing at the moment, but would breaking ECDSA be trivial to a narrow
AI focused on cryptography? So trivial, maybe not, but we do have amazing systems right now

(01:01:16):
already developed, capable of solving novel mathematical problems, proving things, verifying them.
So there are already good examples of narrow AI tools doing research in mathematics and
cryptography, beating humans in programming competitions, in Olympic gold medal type
competitions in mathematics. So I think we will see eventually some breakthroughs in cryptography

(01:01:43):
using AI. Yeah. I think I saw something about this recently. Did Google's AI solve one of the
Millennium Prize problems? No, but they had an article saying that they're working on it and
hoping to make progress in the future. And those are seven separate mathematical
problems. Yeah, there's a million dollar prize, which for Google is not much of an incentive.

(01:02:04):
But in terms of prestige, it would be very cool.
And even, I think one was solved.
And the person, the mathematician that solved it refused to take the money anyway.
So it's not really a monetary incentive, more an academic incentive, I guess.
Well, he turned down money for other reasons.
He was almost not fully there.
Okay.
But people like him who care about the problem more than the money are going to try and solve

(01:02:29):
these problems regardless of whether it's a monetary...
Yeah, it's nerd sniping.
Those are interesting, challenging puzzles, and anyone should try at least one.
And if all of those problems are solved, encryption is completely broken as far as I understand it.
So we have many encryption protocols. Maybe one of them is broken, but you have quantum
resistant cryptography, you have alternative methods. We can always come up with new ones. So

(01:02:52):
it's not game over. So we might need an AI-resistant cryptographic method.
Well, AI just means intelligence. You need intelligence-resistant cryptography. That's a hard one.
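As an editorial aside, here is a minimal sketch of what an attacker would be up against, using Python's cryptography package (the tooling is an assumption; none is named in the conversation). Bitcoin signs transactions with ECDSA on the secp256k1 curve, and the scheme's security rests on the difficulty of recovering a private key from its public key, the elliptic-curve discrete logarithm problem, which is exactly what a capable quantum computer or a hypothetical cryptanalytic narrow AI would need to solve:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Generate a keypair on secp256k1, the curve Bitcoin uses for signatures.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

message = b"toy transaction: pay 1 BTC to ..."
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Anyone can verify with the public key alone; forging a signature without
# the private key means solving the elliptic-curve discrete log problem.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```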
Okay. If we, again, have these narrow AIs,
do you think we will still have the benefits of everything we could get from a superintelligence?

(01:03:13):
In the sense of, will we have free energy in the future?
Will we be able to travel through space? Will we be able to cure all diseases, live forever?
Yeah, I think we'll get most of the benefits. It may take a little longer,
but I think the trade-off is well worth it.
If we're removing most of the risk and still getting all the cures, all the extra lifespans,

(01:03:34):
I think it's a great way forward.
So we just need to not die now.
It's a good idea not to die, yes.
Just hold on for a few years and we may live forever.
Longevity escape velocity.
The longer you live, the longer you're likely to live.
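As an editorial aside, a toy model makes the idea concrete (all numbers are illustrative assumptions, not projections): if each calendar year of medical progress adds more than one year to your remaining life expectancy, the horizon recedes faster than you approach it.

```python
# Toy model of longevity escape velocity; numbers are assumptions for illustration.
remaining = 30.0  # years of remaining life expectancy today (assumed)
gain = 1.2        # expectancy gained per calendar year of progress (assumed; > 1)

for year in range(1, 11):
    remaining += gain - 1  # one year elapses, medical progress adds `gain` back
    print(f"year {year}: {remaining:.1f} years of expectancy remaining")
# With gain > 1, remaining expectancy grows each year instead of shrinking.
```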
Okay.
Roman, this has been amazing.
Thank you.
I really appreciate you doing this twice.
It's been cool to come to Louisville.

(01:03:54):
I've never been here before.
Seems like a nice town.
I think it's a really awesome place to live. It's real America. Most people just fly over it. So I welcome all of you to visit.
Yeah, it's been good. I'm off to Nashville tomorrow. I'm going to drive down there. So I'm seeing a bit of the world I've never seen. And I got to have this amazing conversation. So thank you.
Awesome. Anytime you travel to a new place, they have to generate that part of the simulation. So you're doing some interesting work.

(01:04:18):
So the computation was firing yesterday as I flew in. Thank you for this, Roman. That was great.
Thank you.