
November 1, 2023 • 87 mins
WarGames (1983) Starring Matthew Broderick, Ally Sheedy and John Wood.

I wondered if AI could ever gain the decision-making capability to become powerful enough to decide the fate of humanity through nuclear warfare, as depicted in the movie WarGames. So I talked to Noah Healy, an expert in game theory, nuclear engineering, and economics, to see how plausible it really is. Join me as Noah takes us on a deep dive into the science of fiction. At the end of the episode, Noah ranks the plausibility of the AI-instigated global thermonuclear war depicted in WarGames on a 1-5 scale, from pure fiction to science fact.
#wargames #gametheory #globaleconomics
Please support my new podcast by subscribing, liking, sharing, and commenting if you want me to keep doing more episodes of your favorite sci-fi concepts with experts.
Timestamps
[00:01:51] Intro
[00:08:36] Nuclear Strategy
[00:12:42] Real WarGames incidents
[00:15:16] AI decision making
[00:23:17] Turing test
[00:25:53] AI mimicking artists
[00:31:05] Uncanny Valley
[00:32:53] Global Economy
[00:41:39] Tipping Point consequences
[00:47:45] Nukes
[00:53:30] Who is in control?
[01:02:42] It's not all bad

Support
Let me know what episode you want to see in the future!

You can also subscribe to my Patreon to help me get better equipment and bring you higher-quality episodes in the future. https://www.patreon.com/RealityCheck631
Connect
Connect with your host, Heidi Campo: www.heidicampo.com
https://www.bitesz.com/show/reality-check-the-science-of-fiction/
Instagram & Threads @mrs.heidi.campo
YouTube @mrs.heidicampo
LinkedIn: Heidi Campo, CSCS
X (Twitter) @mrs_heidi_campo
X (Twitter) @pod_reality
Patreon: https://www.patreon.com/RealityCheck631

Guest References and Contact
Noah Healy: https://coordisc.com/
Noahphealy@yahoo.com
https://www.linkedin.com/in/noah-healy/

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Over the years, I've seen more of some of the silliness, but also more of some of the relevance as computer technology is sort of coming more and more online.

[Show intro montage]

All right, everybody, welcome

(00:23):
back to another episode. Today we have with us Noah Healy, and he is a market designer and game theorist who is working on better economic systems. After training in nuclear engineering (Oppenheimer, anyone?), he worked for tech startups at the peak of the dot com boom before becoming fascinated by the mathematics of

(00:44):
information and computation. It led to his work and patents on better commodity market design. So, Noah, welcome to the show. I'm so glad to have you here. So let's, uh, let's talk real quick about the movie WarGames, which was a, you know, nineteen eighties sci-fi that at

(01:06):
the time was, you know, way, way ahead of its time, and now we're here in the era of ChatGPT. You know, the Oppenheimer movie just came out, it's really popular. So I just kind of want to hear some of your initial thoughts, both from when you first saw the movie to where we're at now. Sure, Heidi. So

(01:30):
when I first saw the movie as a kid, and so just in the theater, I quite liked it. Back in the day I didn't really, I think, have a super strong attachment to it, unlike some of the eighties franchises. It didn't sort of inspire that kind of loyalty. But I wasn't a Breakfast

(01:57):
Club kid. I remember Ally Sheedy from WarGames and Short Circuit, not her rat pack work, or brat pack work, I guess. Over the years, I've seen more of some of the silliness, but also more of some of

(02:19):
the relevance as computer technology is sort of coming more and more online. But that was something that, as you know, in my background, I came to kind of late. I was always strongly into mathematics, not always strongly into computers. So I was about twenty five before I started finding

(02:40):
out about some of the deeper parts of discrete mathematics and then sort of coming back around. The WOPR is sort of both silly and prescient at the same time. It's silly because the technology they actually had is entirely insufficient to the purpose.

(03:05):
It's prescient because the way that they're using that technology has essentially been validated by our current state of super game engines that have largely conquered chess and checkers and go and poker and just about any other game we decide to point them

(03:25):
at. So this is going to be one of my big questions, because you are, you know, you're a game theorist: what exactly is that? I get this a lot, and it's an interesting thing. In my

(03:46):
opinion, game theory is probably the most flippantly named thing in human history. Game theory is the mathematics of strategy. Uh, so, it was developed by a number of people, but one of the earliest and largest contributors was John von Neumann, who, speaking of Oppenheimer, was a major part of the Los Alamos

(04:10):
effort. He was one of the primary movers behind the implosion design for the plutonium bomb. Incredibly smart person, very fast mind, and one of the key figures in the design of computers in the first place. In fact,

(04:31):
we call the architectural design of modern computers von Neumann architecture, because while there have been considerable advances in the chemistry and physics, and also some meaningful advances in layouts and design and so on, the core concepts of the central processor and

(04:57):
information buses and so on were all developed by him as the most practical way to design computers. That's really fascinating, and I didn't know any of that. So it sounds like game theory... because when you think of that, it sounds almost more like things like chess and, you know, games. But game theory, it sounds like, is definitely more used for military strategy.

(05:21):
Uh, well, it is, it is somewhat both. It's rather intriguing. Game theory was actually heavily developed in the twentieth century by think tanks like the RAND Corporation. In fact, I've got a book over there that they published on the subject in order to manage nuclear strategy. So large amounts of

(05:46):
the fundamental theories of game theory are classified information to this day. They're referred to as the folk theorems, because while everybody knows that they're true, and they're fairly obvious, the actual proof publications are national secrets, and so those papers are not available online to go look at. You can look at declarations of the

(06:10):
truth of them, and you can work it out yourself without too much trouble. Now, you're saying the truth of them: is that just the... you know, kind of, you know, spoiler alert if you haven't seen this movie (it came out in the eighties, so sorry if you haven't seen it by now), but it's kind of that, hey, there's not going to be a winner in a global war. Well, that gets into essentially the

(06:34):
model that wound up being used. So the decision about how to cope with the reality of nuclear bombs, particularly the reality of hydrogen bombs and how much destructive capacity they had, was to use a rather simple model of chicken.

(06:54):
So chicken is a game where either player can choose to try to win, and if the other player chooses to also try to win when you do, you both lose hard. If only one player decides to try to win, then they win and the other player loses. But both players can choose to lose. So in a game of chicken, the ideal strategy that you

(07:21):
should employ, how often you should try to win and how often you should just choose to lose, is dictated by how bad the result is for you if you both choose to win. And this is where the build up, the nuclear build up strategy, which was much criticized in the eighties, actually

(07:44):
comes from. The idea was that by making mutually assured destruction as bad as possible, the two major powers would become less aggressive, because trying to push their luck might cause a cascading incident where they simply, you know, committed mutual

(08:07):
genocide. Right, nobody would exist anymore. Now, since it's, you know, classified information... but it seems like some of these strategies are somewhat obvious to people. As other countries step into nuclear capacity, does that change the strategies for everybody? Oh, it definitely does.

(08:30):
So again, think about, think about a game of chicken. So if you're familiar with, you know, like, fifties car movies or something, you know the setup, right? So the two people drive at each other, and either one can choose to swerve. If they swerve, then they lose. If they both swerve, it's a double loss, which is still

(08:52):
embarrassing, but, you know, you didn't get killed. If they both choose not to swerve, then, you know, they both die. Imagine this scenario as more and more cars are driving towards a convergent point. So every new player makes it more dangerous to try to win, because all it takes is

(09:15):
one other coincident person deciding to win to kill you. So as nuclear proliferation spreads out, the sense of aggression goes down considerably. Now, the difficulty with that is that human beings have a certain amount of aggression pretty intrinsically,

(09:39):
and not all of us can sublimate those urges in the gym, and a lot of those people wind up running things like national governments. So if there's an uneradicable lower limit of aggressive intent, then the more and more players you have in the chicken scenario, the more and more certain you are to reach

(10:03):
a double chicken outcome. And that's sort of bad for everybody. And that's where the computer, during the final scenarios, if you might recall, there are a number of regional escalation conflicts and other things that come up if you sort of

(10:24):
go frame by frame, if you've got really quick eyes, and everything leads to the same general outcome, and so in that case new kinds of structural, you know, strategies are required. That was one of the big issues

(10:45):
with so-called Star Wars, or these, you know, defense forces: if one of the sides has protection, then they don't have an aggression regulator mechanism anymore, and so they'll become much more aggressive. And so in

(11:07):
an environment where your aggression is matched to your opponent's levels of aggression, the existence of somebody that has a defense basically urges things towards a mutual destruction scenario.
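A minimal sketch, in Python, of the dynamic Noah describes here, using hypothetical payoff numbers rather than anything from the episode: in two-player chicken, the equilibrium rate of "trying to win" falls as the crash outcome gets worse (it works out to 1/crash under the payoffs assumed below), while, holding any fixed floor of aggression, adding more players drives up the chance that at least two of them dare at once.

```python
# Chicken, sketched with assumed payoffs: win = +1, swerve while the
# other wins = -1, both swerve = 0, both dare = -crash (crash > 1).
#
# Mixed-strategy equilibrium: each player dares with probability p such
# that the opponent is indifferent between daring and swerving:
#   p * (-crash) + (1 - p) * (+1) = p * (-1) + (1 - p) * 0
# which solves to p = 1 / crash: the worse the wreck, the less daring.

def equilibrium_dare_probability(crash: float) -> float:
    """Equilibrium rate of 'trying to win' in two-player chicken."""
    return 1.0 / crash

def collision_probability(dare: float, players: int) -> float:
    """Chance that two or more of `players` independent drivers dare,
    holding each driver's dare rate fixed (Noah's irreducible
    'lower limit of aggressive intent')."""
    none = (1 - dare) ** players
    exactly_one = players * dare * (1 - dare) ** (players - 1)
    return 1.0 - none - exactly_one

for crash in (2, 10, 100):        # fender-bender up to mutual genocide
    dare = equilibrium_dare_probability(crash)
    for n in (2, 5, 10):          # proliferation adds drivers
        print(f"crash={crash:>3} players={n:>2} "
              f"dare={dare:.2f} collision={collision_probability(dare, n):.4f}")
```

Worsening the crash payoff suppresses aggression, which is the mutually-assured-destruction argument, but for any dare rate that won't go all the way to zero, piling in more players makes a double-dare steadily more likely, which is the proliferation problem he lands on.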
And this is alluded to within the film. As the DEFCON level rises,

(11:31):
we hear about intelligence about the Soviet defense conditions also coming up. And there's that point at the end, when Falken sort of finally has embraced hope and is trying to talk down the general and the rest of the people in the pit: they point out that the Russians are on high alert, and he's like,

(11:56):
yeah, but, you know, so are you, and didn't you do it first? Isn't this what you would have done? In fact, isn't this what you just did? Your computers told you that they were on high alert, so you went to high alert, and then their computers told them that you were on high alert because you were on high alert, and they went on high alert. So that sort of de-escalation strategy requires a

(12:24):
higher order understanding of the game, essentially. Right. And in the movie it really pointed out, you know, the big thing: the kids and Falken, they came in, and their main point was, why would Russia do this? They're like, this is completely unprovoked for them to point that many missiles at us right now. They're like, this cannot be

(12:46):
right. When all the, you know, the general and everybody in the pit, they were looking at these data points saying that this was real and this was happening. But if you took a step back and actually analyzed the situation with human eyes rather than robot eyes, you could just see that this computer scenario made no sense. Yeah, in that case. And there are

(13:09):
actually a handful of real life incidents where some systems actually did report, you know, misreport weather events as missile launches, some things like that. And there are recorded, yeah, recorded events in history, like nuclear misreadings

(13:31):
of the data? Correct. Yeah, that's terrifying. Tell me more about that. They made a movie about one of the incidents, with a submarine commander that is played by Harrison Ford, called K-19. And there's another relatively easy to find event that you can find online, about a Soviet missile commander, or air force

(13:58):
base commander, on their western theater (which is Eastern Europe, is their western theater), where I think it was supposed to be like a flock of ducks or something that the radar said was an incursion. And in both cases,

(14:20):
the commander basically just decided not to turn the keys, and sort of... or not to report up the possibly false, you know, information. And again, this is alluded to as well: the opening sequence of the film, where it's just a test, but the boys that are in the silo,

(14:45):
I want to hear a human voice... before they press the button, or at least one of them does, and it takes both of them. Yeah, he says, I'm not gonna, I'm not gonna kill twenty million people without a phone call, right. And then the other guy goes back and forth with him: but it's a protocol, it's protocol. And I almost feel like it was kind of symbolic of almost more of a machine way of thinking versus a human

(15:07):
way of thinking. And the guy who didn't press the button, he was like, absolutely not, we're about to kill twenty million people, I'm not going to do that without verbal confirmation. I understand it's protocol, but also, this is a problem. Yeah, well, and of course that problem is exactly what prompts the entire film in the first place, because that gives

(15:31):
Dabney Coleman's character the edge to argue for the automatic relays: the claim that the command authority actually has the right to make that decision, and needs the confidence that, having made the decision, it will be carried out the way they expect it to be. Because again, if you go back to the chicken

(15:52):
model, if you aren't producing a risk, if the other player knows that you won't attempt to win, then it's in their interest to be as aggressive as possible. So, this... I am not, I'm not a subject matter expert on AI; this is something you are much more well versed in than

(16:15):
I am. AI doesn't have emotion, obviously, yet, and that is one of the bigger pieces in decision making, is emotion. There is something interesting about that element of the human design: two different people can be

(16:36):
given the same information, but we will come to different conclusions and act differently based on our unique emotional responses, and that's not something that AI is necessarily doing. It's only doing what's the logical choice. Right. Well, even that isn't really a great dichotomy. So one thing that AI really

(16:56):
sort of highlights, and this is well known philosophically but isn't particularly well appreciated, is that there's a real difference between strategy and morality. Where strategic effectiveness

(17:19):
is something that we're currently good at getting AIs to produce. So we can train an AI to play chess far better than we can train a human being to play chess. But a given chess game could have a moral dimension.

(17:44):
If you're playing against a child to teach them how to play the game, there might be a moral standing for making less than ideal moves to provide the child a chance to learn. Not necessarily throwing the game to the child, although maybe also throwing the game to the child would be a moral decision.

(18:07):
But there are moral considerations that could be involved with a game of chess. And also externally: you could imagine some sort of, like, you know, evil villain type scenario where the outcome of this chess game will determine the lives of hostages or something like that, kind of like Saw. And that's such a

(18:30):
great metaphor, because, you know, you hear people talk about these concepts, but that's such a great way to frame it. That's so easy to understand, because humans understand that sometimes losing is important, and it's not bad. Winning and losing aren't... they don't have morality tied to them. It's the scenario around it where we derive morality. Yeah, yeah. And so what the WOPR

(18:56):
is presented as doing is running simulations of various strategic scenarios in order to work out the strategically ideal response to whatever might be happening, as it happens. That's a very good analog both of something that was developed starting in the

(19:22):
late eighteen hundreds but really brought to a very high level during the Second World War, and of how existing game AI, sort of super game AIs, actually work, which is by trying things out and seeing what the probability of success looks like and then

(19:44):
training themselves to be able to find and estimate these high probability of success paths. So it's pure mathematics. It is, yes, yeah. So they actually mention responses, counter responses, counter counter responses and so on during the sort of

(20:06):
explanation of what's going on. Well, the real world is highly variable. In many military situations, and in essentially all game situations, you can define the battle space to the point where you can actually come up with plausible options.

(20:33):
But the problem is that plausible options explode very rapidly. So imagine you were trying to set up a super soccer AI. Every one of the eleven players on your team can do anything they want from second to second, and every one of the eleven players on the other team can do whatever they want from second to second. So, given sort of one second of decision making, and

(21:00):
professional sports operates on tighter tolerances than that, there's an enormous number of choices, followed by an enormous number of responses, followed by an enormous number of choices. That explosion is much too hard to deal with by simply running through all the possibilities. But what we've found out is that we can do something where we sample

(21:22):
the space. So what we say is, okay, let's say I did this, and let's say I play a million random games from this position. Now, all those random games are going to be fairly implausible, and they're all going to be fairly bad, but they're all going to be fairly equally bad for both sides. So if this is a good thing to do, and

(21:45):
then fairly implausible, fairly equally bad things happen after that, then that good thing should translate to winning most of the time, and if it is a bad thing, it should translate to losing most of the time. And so we find those things that translate to large probability winnings, and then we train a

(22:11):
neural net to behave like that thing that is finding these better things, and then we recycle this whole process. We plug this slightly better estimate of what's good and what's bad into the process of playing out random situations, to make the

(22:32):
random situations slightly more plausible and slightly stronger, and then retest what good and bad options are in the context of the stronger system. And when you do this cycle hundreds to hundreds of thousands of times, you wind up with a neural net that has gained deep strategic insights into the tactical situation of the specific

(22:59):
scenario or game situation that you're thinking about. So, to put this in terms that are very, very plain to me, it's similar to early childhood development. As a kid explores the world around them, they're taking in more information that contributes to the next time they make a decision. Oh, the stove's

(23:22):
hot, don't touch it. Uh, yeah, yeah, there's a lot of that. There's another... there's a story. So the Navy during World War Two set up a division to work out combat tactics, because things like submarines and aircraft carriers had never been as effective as they were during World War Two,

(23:45):
and so tactical doctrine didn't exist in this era. And so there was this, uh, I think the division was actually being run by a person who was, like, on the disabled list, and so he kind of, you know, he had some vim and pulled together some stuff, but he couldn't go out on the water anymore. So he couldn't get a

(24:07):
lot of resources, and he was actually forced to use a lot of female officers, and they set up a wargame room with pretty decent conditions, and they would test out scenarios and play out a bunch of games just to kind of see what looked like it was working and what didn't look like it was

(24:29):
working. And the story goes, and there's some argument this might be slightly apocryphal, but the thing that really sort of kicked them up and got them into general use, and helped them suppress the effectiveness of wolf packs on the convoys, was that an admiral visited, saw the thing going on, and inserted himself in as

(24:52):
one of the force commanders. You know, he was like, oh, I can do this, and started saying, hey, I'll give orders for this side. And he gets his clock cleaned, and he's like, well, that's embarrassing, send out the man from the other side. And they send out this twenty two year old girl, who is just first in her class. And

(25:15):
so he's like, well, let's give this thing some funding and let's start taking these recommendations, because if, you know, college girls can beat admirals, then we need to up our game here. That's really interesting.
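What Noah is describing, scoring a candidate decision by playing out many random continuations and keeping the win rate, is essentially Monte Carlo evaluation, the same idea at the core of the self-play game engines he mentioned earlier. A minimal sketch, assuming a generic, hypothetical game interface (`legal_moves`, `play`, `is_over`, and `winner` are stand-in names, not any particular library):

```python
import random

def rollout(state, player):
    """Play uniformly random moves to the end of the game.
    Returns 1 if `player` ends up winning, else 0."""
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))
    return 1 if state.winner() == player else 0

def monte_carlo_move(state, player, playouts=1000):
    """Pick the move whose random playouts win most often for `player`.
    The playouts are individually implausible but roughly equally bad
    for both sides, so a genuinely good move still wins more of them."""
    def win_rate(move):
        nxt = state.play(move)
        return sum(rollout(nxt, player) for _ in range(playouts)) / playouts
    return max(state.legal_moves(), key=win_rate)
```

The self-play loop he described a moment ago replaces the uniformly random policy with a neural net trained on these win rates and then repeats the cycle; the sketch above is only the innermost evaluation step.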
Are you familiar with the Turing test? Absolutely, yes. So I saw

(25:37):
an article today, that I skimmed over, and it said that ChatGPT has just broken the Turing test. First of all, could you explain what the Turing test is, and speak to the validity of that article, if you've heard of that or not? Yeah, yeah, sure. So, Alan Turing is pretty much the father of computation. His paper on what general computation was and how it

(26:07):
might be achieved isn't the first proposal, but it's really the most important theoretical breakthrough in describing what computers can and cannot do. And he got a lot of questions over the years about whether or not machines could think, and the degree

(26:30):
to which artificial intelligence might be possible. And so the Turing test was a proposal in an essay he wrote, basically saying that if a machine could adequately imitate human behavior to the degree that a human wouldn't really be able to tell

(26:56):
the difference between whether they're dealing with a machine or dealing with a human, then that would be a sign that artificial intelligence had been achieved. In terms of where ChatGPT is, uh, it certainly has passed through that.

(27:22):
There have been a number of documented studies, and also I've talked to people who have done some kind of, just colloquial, off the cuff type things to check, and nobody is one hundred percent on determining the

(27:45):
difference between, uh, machine generated and human generated objects. There's actually a rather bizarre conspiracy theory that some of, like, The Rings of Power, I think, and some other recent streaming shows might have been written by AIs, uh, and that, like,

(28:10):
the so-called writers, if you buy into the theory, are sort of covering as they're trying to get things going. This is a total side note, but there's this, like, you know, Z-list movie, Trump versus the Illuminati. Have you seen that one? I have not.

(28:33):
That one was definitely written by some kind of AI, or very drunk college kids. The script is something else; it's straight off the internet. Okay. But simulations of human voices have already gotten to the point where they can

(28:56):
pass muster as human creations. Uh, there's videos that go viral that have the past presidents playing video games against each other and chatting online and, you know, commenting adversely about various play styles. I saw one where

(29:19):
somebody developed a model that would create a cover of a song as done by Frank Sinatra. Wow, I saw that. Yeah. Drake, he was pretty upset by the fact that somebody made a song using his voice. And then Grimes, she came out with a statement. Now, she's a big, big...

(29:48):
what do you want to say... she likes AI. She likes AI a lot. So Grimes came out with this technology that actually allows anybody to use her voice, and she wants to help push AI forward, but she doesn't see it as a threat to her creative process. She sees it as a collaboration. So where Drake came out and he's trying to not let anybody use his

(30:11):
voice or his likeness, he wants to protect that, Grimes is seeing it as an opportunity to partner with people, and she's like, yeah, please make music, use my voice, use my likeness, we'll split the profits. And so I see that as two ways of thinking about what we could do with this technology. Right. Well, allow me to introduce you to a third one. How much of Grimes's voice and appearance, and Drake's voice

(30:36):
and appearance, belongs to them, and how much belongs to the human species as a whole? To what extent is the cultural and artistic impact of their work their contribution, or the apotheosis of styles that have been developed over decades or centuries, and what is owed for that legacy? Yeah. What if

(31:03):
it isn't Drake's voice that's being imitated? What if it's one of the seven and a half billion of us that happen to share his intonation so closely that nobody can distinguish them? Or if, as it happens, his voice is truly unique right now, but sometime within recorded history somebody has actually produced those intonations?

(31:26):
What if it was theirs? Or, if not that, what if it's somebody coming up? So, and that's really interesting, because now we're talking about branding versus identity, versus what makes us human, what makes us individual. I mean, these are really, really deep conversations that I do think that AI is

(31:48):
going to be challenging. Well, it's going to make us think about these things. What makes me different from somebody else who is a doppelganger? Well, and that also gets directly into the themes that WarGames is ultimately exploring. As a kid, the opening, you know, shouting match between the general

(32:08):
and Dabney Coleman doesn't really resonate, but once you understand these issues you start thinking about what he's talking about. So, we have presumably civilian control of the military; that's one of the cornerstones of our government. But he's pointing out that they've developed a super AI to actually work out nuclear strategy for them,

(32:30):
and so the president isn't actually in control, within that context, of America's nuclear strategy, because the computer's designing America's nuclear strategy. The president's job is to tell the computer yes, or not tell the computer yes, and that's it. That's his entire job. And then, of course, at the end of the film,

(32:51):
when everything's going to hell and they think that the Russians have launched the nukes, you know, all out, the general asks what the computer's recommendation is, and the computer's recommendation is launch everything right now, and he's like, I need a computer to tell me that? So some of these situations

(33:12):
are so simple that even we can solve them, and so there we've, in those cases, perhaps given up our autonomy for a scenario that doesn't necessarily exist. And there's also the hallucination problem: the machine is just playing a game and does

(33:35):
not actually understand the content of what it's doing. I had an interesting experiment. I was playing around with one of these generative AIs. It had done a good job on a few summaries of my white paper, and so I asked it to write a press release. It wrote a very plausible

(33:57):
press release. It quoted three different people. None of those people exist. Two of those people it identified by their very prestigious positions in society, which would make it incredibly easy to verify that they don't exist. And the third one is the co-creator, and there are no co-creators to this

(34:20):
thing; I invented it myself. Reality check: science! And that's really one of the interesting things about AI, and a lot of these emerging technologies in general, is that they do tend to look a lot more polished, but at the end of the day, it's some of those human errors and just our

(34:44):
scruffiness that makes us human and makes us enjoy something more. And it's almost like there's become this uncanny valley with language. Are you familiar with the uncanny valley? Yeah. That's a very good point, and I think you're getting at something that I'm trying to promulgate to as many people as possible, which is that

(35:05):
polish is what the AIs are actually training themselves to be able to do. The ChatGPTs, the Bards, the LLaMAs: they are not intentionally creating super intelligent agents that can think and act as humans do. They're

(35:27):
training up style machines that are abstracting out what evocative human text looks like, so that they can create evocative human text in response. And we as people respond to marketing messages and essays and so on in ways that are inappropriate, because we're

(35:57):
responding to the style and not the substance. And the AIs, they're probably not better than us as stylists; our very best stylists are better than the AIs. But it's reasonable to expect that in the near term they will become better stylists than us. It's not reasonable to expect that they will gain more substance

(36:20):
than we have access to in a reasonably short timeframe. So what are some of the implications of this? And I know that that's a very broad question, so I'm going to narrow it down a little bit. What are some of the implications of this right now, of these technologies, when it

(36:40):
strictly comes to game theory and war? Well, the primary implication is that the mechanisms that we're presently using, the social and cultural mechanisms that we use to sort of manage our internal communications and cultures and so on, are inadequate, because they're built around a human degree of error and judgment, and we

(37:08):
have access to a greater than human degree of both error and judgment now, and so we're going to have to examine these things at the game theory level. This kind of coping with things that work because of our own human limitations

(37:32):
stops working if we go beyond our human limitations. And a way to think about this is to think about traffic before cars and after cars. There was actually, I think, a series of animations Disney did to teach people how to drive on freeways,

(37:57):
and it was this whole thing of how you actually have to speed up when you get on a freeway, because everybody there is driving sixty miles an hour, and so if you... yeah, right. So if you get on the road at surface street speeds of, like, twenty miles an hour and just figure, oh, I'll speed up, well, then you're going to cause a crash or a traffic jam. And so in order to utilize this greater

(38:22):
capacity, you actually have to change your behavior in ways that make that capacity useful. And that's where we're... that's where we've been at for a while now. Because while there isn't a lot of evidence that we're using WOPR-like devices

(38:44):
to handle our geopolitical strategy, that still seems to be people talking to each other and bribing each other and extorting each other, or whatever makes, you know, diplomats happy, people are using these technologies in the financial industry. So before
we get too far into that,and I do want to circle back to

(39:07):
that, but right now I kindof want to return back to an earlier
point you made about how the militaryis supposedly kind of run by the people,
and if you look at our wholepolitical system, it's supposed to be
something that people vote on and participatein. And when you look at kind

(39:29):
of how our society is structured,politicians in some ways have their hands tied
because they need to get voted in, and to get voted in, they
need people to like them. Soin a large way, the general population
and the whims of the general populationare going to control who gets voted in
and what they're capable of. Butwhat happens when the general population is being

(39:52):
subconsciously controlled. Now, these arethings, again that I'm not an expert
in. Maybe you can speak toit, but we have of boughts all
over the internet writing articles and puttingout content that subconsciously is brainwashing people.
It's a psyop, and people startbelieving things that may or may not be
real that could be favorable or unfavorablefor our nation's large scale political interests.

(40:20):
And when the entire society of people gets affected that way and they start creating demands, then the politicians have their hands tied and have to... can you speak to any of that at all? Yeah. That gets directly to the point that I'm making about the inadequacies of our current strategic, game theoretic structure of how we're managing our cultures. Our society is, you know, sort of the voice

(40:47):
of the people, but we have this capacity to create non-human voices that are relatively indistinguishable, and as the computers get better and better at that, they'll need less and less help from human beings to sort of bridge the gap. This might seem a little off topic, but I'm going to bring

(41:12):
it home, I promise. There was a cheating scandal in high level chess about a year ago. There's this guy who'd improved unbelievably quickly. He was playing a game against the best human chess player. He won that game, and after the game was over, the champ withdrew from the tournament, made

(41:34):
a few noises, and then after the tournament was over, accused him of cheating and said that he didn't want to play against cheaters. There's been some drama about this. One of the bizarre things about chess is that, because we have computers that are better than us at chess, you or I could beat the world champion in a game of chess by just taking the moves they make,

(41:58):
asking our computer what to do, and responding with whatever the computer tells us to do. That would be very obvious if we did that. But the better you are at chess, the less help you need. And so somebody that's an actual super grandmaster, or nearly a super grandmaster, could essentially get one or maybe two pieces of advice during the course of a game and

(42:22):
go from being a top fifty chess player to a top five chess player, or even number one. To bring this home: as these generative AIs are becoming slicker and better, and say they can beat a Turing test twenty percent of the time or thirty percent of the time, the existence of that doesn't preclude

(42:46):
a human being quarterbacking networks of a dozen or one hundred or one thousand of these agents. And the better those agents get, the more of them one of these human quarterbacks could reasonably coordinate. So we could see scenarios,

(43:07):
as these things get better... Like right now, there's already, as you point out, bot networks that are being controlled by kids playing pranks, or billionaires that want people to buy more of their tissue paper or whatever.

(43:28):
As these technologies become more and more ubiquitous, and it's going very rapidly, the capacity for those things expands greatly, and so the value of the public sphere diminishes. And in a world where the political is simply a recapitulation of

(43:49):
the public sphere, the political becomes essentially irrelevant to reality. And that's the challenge. We have seen, and we will see more of, our societies becoming economically, politically, and culturally irrelevant unless we develop new and more robust kinds

(44:14):
of systems that can handle non-human participation. And the current stated strategies, other than my own, are based around identity style things. So Sam Altman, for example, has just backed something called Worldcoin, which seems to

(44:37):
be some sort of attempt to get human biometrics on a blockchain, and get every single human being to voluntarily add their biometric information to a blockchain, to allow all activity to be traced to either an identified human being or not, and be, you

(44:58):
know, balanced off one way or the other. That strikes me as insanely unwise, and just really not that plausible. No. You know, just trying to get people to get vaccines was hard enough. Getting people... you know, it's like, there's people out there who still don't even know how to send an email, let alone getting them on a blockchain. Yes.

(45:20):
Yeah. One of my favorite Heinleins is called Have Space Suit, Will Travel, and the protagonist's father, we eventually find out that he's highly impressive, but he's sort of checked out of society, and he manages the house funds with two baskets: household money and tax money. And at the end

(45:45):
of each year, he takes all the money in the tax money basket and sends it in to the IRS, just in a bundle. And so IRS agents come to him. He's important enough that they don't just, like, clap him in irons and throw him in jail, and they tell him that he has to fill in his tax forms, and he says, the law... it's not illegal to be

(46:07):
illiterate. The law can't actually compel me to fill out tax forms. That's all the money I owe you; there it is, in cash. You have to accept cash. Go for it. I wouldn't recommend you try this in your own personal life, but it's a great little scene. Yeah, I don't want to mess with the IRS. So

(46:28):
I'm still, I'm still a little bit traumatized about what we were saying earlier, about, you know, these bots and stuff potentially affecting our society's opinion on something. Because there is a tipping point with society; Malcolm Gladwell talks about that in his book The Tipping Point. Sure. Once there's enough people that believe a certain thing, it's going to be a whole cascading effect; that's

(46:52):
how people think in general. And if there are these bots, and now I'm saying "if" hypothetically, because it sounds like, you know, there are... Yes, yes. "If" is the wrong tense. So it's not potential, it's not "it might happen". These are things that exist and have already happened. There was a fairly controversial study, that Facebook didn't get broken up for, where a paper

(47:17):
was released in which they revealed that, for whatever reason, they decided to attempt to intentionally affect the mood of a large fraction of their user base, to make them more depressed. And according to the study, it worked. I

(47:38):
can look around and tell you it worked. Well, yeah. Well, this is above and beyond whatever simple natural effect... There's a number of sociologists that claim that social media is killing us, and there's strong associations between increased suicide rates

(48:00):
and usage. And there's also a US government longitudinal study where they take age cohorts and just sort of track their progress through time, and it's not that long ago that the most recent teenage cohort was added to the group, and they recorded the lowest mental health index that has ever been recorded, by an enormous margin.

(48:27):
So what does this look like for our government and political systems long term? Is there a plan? Is there a strategy? Because, and I hate to say "they", that's one of my least favorite words when talking about anything like this, conspiracy or otherwise: oh, "they're" controlling the AI. Okay, who's controlling the AI? Who's controlling the bots? Are they aware

(48:51):
of these potential negative outcomes? Are they trying to do that? Is it doing it autonomously (I can't think of that word)? Is it doing it with or without our control? Because it sounds like this is an everybody-loses situation. Well, yeah. So I would say that "they" isn't a great word. I would say "we" have this problem. Whose "we"?

(49:15):
We, the human beings that are presently alive, have this problem. We have access to extraordinary new capacity, and we have not yet built a system that can productively use that capacity. So who's controlling it? At this point, we are using the systems of control that existed before

(49:38):
the capacity existed. So we are operating governments that operate on the principles that industrial society developed for governments. We have corporations doing large amounts of economic activity built around the success of the corporations that created the cars and the computers in the

(50:00):
first place, and the telephones and everything else. So where do these, I guess, these bots come from, the ones that show up on social media? You know, let's just take the Facebook example. So, Facebook did this depression study, that was effective. Why? As far as I can tell from their paper, they just wanted to see if they could, which is

(50:22):
very, you know, Nazi medical research of them, right. But a lot of the bots... So, I read about a guy recently. He's got a YouTube channel, and as an experiment, basically just to

(50:43):
see what the plausibility with current technology looks like, he decided to create some fake human beings. And he was relatively above board about it, like, you know, their names were things like aliases, fake names, but they still had faces that looked like human faces, and a background history, and so on. And he was

(51:05):
able to go... These are profiles? Yes, these are online profiles, on multiple linked-up social media networks. And he was able to find people that sold, you know, profiles that you could use to hook into your thing, and he wanted to see how plausible and how many he could get for, I

(51:28):
think it was fifty dollars, and he had some technical skills as well. What he discovered was the existence of hundred-thousand-plus networks of bot IDs that exist on multiple channels, that were all heating up with the political activity that they'd been

(51:57):
very high during. Anonymous people who built them up and would buy and sell them. So there are businesses: you know, say I have no soul and technical experience, so I go out and put together a quarter of a million profiles. And then, if you would like this show

(52:21):
to become very popular, you could have one hundred thousand profiles subscribe to it. Right. And I did see... I kind of just observed this anecdotally myself. Instagram kind of started off as people just sharing pictures of, you know, things that made them happy, and then escalated into the influencer era, which, I feel like that era is kind of dying right now.

(52:42):
But people were buying followers, and a lot of those followers were fake, and it led to these inflated superstar statuses of people that never were superstars to begin with. And so I see where some of these bots got created from. But it sounds like the implications of these things go way

(53:06):
beyond what they were designed for, because they were just designed to make someone a quick buck so someone else could get famous. But now they're going way beyond that design, and they're influencing society, which is influencing politics, which could ultimately influence our, our war games, our nuclear strikes. Yes, yeah,

(53:28):
yeah. Well, the nuclear, you know, inventory probably still works. Uh, you know, we haven't tested for a few decades, but they are doing a lot of stuff to make sure that it keeps working. And so that's a risk. I was going to say, do they still keep nuclear systems on kind of the old school turn-key method, for specifically kind of that safety reason

(53:52):
that was implied in WarGames? They don't want a machine pulling the trigger. So far as, so far as I know. I mean, that kind of stuff is under top secret security clearance, lock and key. But yeah, nuclear control is something that the Energy Department is fairly serious about. And I

(54:16):
haven't heard anything about, like, you know, nukes showing up in theater. They are perhaps not as careful as we might have liked. Several nuclear devices have been lost through the years. What do you mean, lost? Well, for example, the original exploration that found the Titanic and took those first

(54:42):
pictures of it: that was a cover story. A Soviet nuclear submarine actually sank in that area, and so the federal government funded a secret mission to find and retrieve material from that submarine. And, uh, and the cover story

(55:05):
was that, you know, some, some random millionaire, like, you know, got crazy and that they wanted to look at the Titanic. Uh... That feels way too parallel to what's going on right now. They, they were able to, uh, find and recover the materials so rapidly that they actually had time left on the grant, and they were like, well, let's, you know,

(55:28):
go take a look for the Titanic. It's not like we can, like, leave and be like, nope, we couldn't find the Titanic, I guess we're gonna wrap up early. And they found the Titanic too, and so they took the pictures and did the cover story. Reality check: science! Yeah, okay. So, so nukes are going missing; it's not like, I lost my keys the other day, oops, we can't

(55:51):
find the nukes anymore. Well, there are some other incidents that are not quite that extreme. So one thing that happens with nuclear power plants is that, because we don't have fuel reprocessing, we inter the spent fuel rods on site. So various power plants basically have a swimming pool that has fuel rods in them,

(56:13):
rods that hold highly radioactive material that is decaying. The rods themselves, because they're in water, and were in boiling water during operation, or very high pressure, very high temperature water, can have a certain amount of decay. So one of the things that happened after 9/11 was a wide scale audit of where everything

(56:37):
was. And there was a fuel rod, I think the story I heard was it was somewhere in North Carolina, which had effectively rusted through, and so, like, the bottom couple feet of the rod had dropped off and hit the bottom of the pool. But the monitoring system had noticed that the rod was lighter

(56:59):
than it was supposed to be, and they were like, oh, it's at the bottom of the pool; that's an incredibly radioactive environment; we're not even able to go look; we're just going to know it's at the bottom of the pool. But it was technically missing. And so after 9/11, when everybody got really paranoid and was like, let's get all the T's dotted and all the I's crossed, or whatever, they had to

(57:22):
get a special purpose, highly radiation resistant robot to go into the pool and go down to the bottom and take a picture of the rod, and, like, pick it up and shake it around or whatever, to determine that it had the right mass and that it wasn't actually lost. So that's a case where, I believe it was over a decade, that nuclear material was lost by being a few feet

(57:44):
away from where it was supposed to be. So, so we're talking about the psyop, we're talking about missing nukes. What are some of the other implications of AI for global security? Well, the biggie is, again, something that comes up in the film: the plausibility of the hallucination. So in the case

(58:08):
of, you know, my little toy, where I'm just, like, having it, you know, show me something, the fact that it makes up three people, all of whom I can trivially verify don't exist, isn't that big a deal. The case where it makes up an incoming nuclear strike, where you can't verify that it doesn't exist until you ask some poor

(58:32):
kid on a radio tower in Maine to sit around and find out whether or not he gets vaporized, is a much bigger deal, because it might be irresponsible to wait around and find out whether or not that kid gets vaporized, if it's going to impair your ability to cope with the aftermath. Well, it

(58:52):
especially is, as a lot of our governments and societies, our first world societies, are moving more into space. We're becoming... I mean, we are moving towards being an interplanetary species, and once we can't have eyes on these things, we are going to strictly rely on this data. Well, and that's where the

(59:12):
financial implications are also quite bad, because AIs of these calibers have been engaged in market activity, and as the primary movers of market activity, for decades now. And so things like the two thousand and eight financial crisis, and the people talking about the commercial real estate death spirals, because the downtowns were emptied out by COVID

(59:37):
and then lots of people don't want to go back to work... So, is that one and a half trillion of highly levered commercial property worth any money, or is it worth no money? And those are very different outcomes. But the markets that should manage and handle these decisions and transitions are dominated by AI

(01:00:02):
machines that are attempting to eke out microsecond advantages, and not considering the underlying information and intentions of the people. Is AI controlling the stock market right now? For all practical purposes, I would say AI has been in fundamental control of the stock market for decades. That is just really crazy for me

(01:00:30):
to think about, because, for the most part, you know, you kind of just think about it like, they're in control: there's people who show up at Wall Street, and there's people doing their jobs, and they use AI as a tool. Or not even just AI, they are using computers and technology as tools. And to really realize, through this conversation, just how deeply rooted AI is in our society at every level,

(01:00:53):
from government to finances to even just the way we think, it really can be a little bit of a frightening thought. So tell me this. I kind of want to know just how widespread AI is in our day to day lives and in its usage, and what its fallibility is like. Like, what is it?

(01:01:21):
Does it... does it have a goal? Like, is it... You freaked me out a little bit. Uh, well, technological ubiquity is just the reality. Unless you're somehow living off grid, on food that you raised from seeds that you bred yourself, in some place that nobody cares about enough

(01:01:43):
to take away from you, you're dealing with electronic systems constantly. Your credit score is being, you know, continually readjusted and so on. To the extent that your economic life is being controlled by marketplaces, ninety seven to ninety eight percent of market activity in any given market is being handled by professional

(01:02:08):
trading houses, and those professional trading houses are implementing algorithms to do their trading for them. At the pace and scale, and with the information content, that exists, humans are outside of capacity. So are there ethics committees in place with that? Because that right there just sounds like a glaring problem. Because

(01:02:31):
if you have, if you have these, these powerhouses controlling the market, and their best interest is to serve their current clients, who are already wealthy, those systems are probably going to... I think it would be... Yes, I think it would be the wrong term to use "control". They're the

(01:02:53):
dominant players within the marketplace. So what's actually going on is that the markets are losing their effectiveness because of this. Because what markets do is aggregate opinions into a collective consensus, the consensus that results in the most money outcome.

(01:03:20):
But just as when we added players to the chicken game, and the game became much, much more dangerous to try to be aggressive in, when we add players to a consensus game, we can create a system where those new players can make money. So we count financial earnings as part of GDP.

(01:03:53):
So a couple of years back, there was the whole big thing with Robinhood, and all these kids on Reddit got online and decided that they wanted to become a player, and they changed the game for a short time. Yeah, that's a, that's a very visible and obvious case of what's

(01:04:17):
going on. But what has actually been happening for decades is that effectively larger numbers of bots than that have been getting in, and they have changed the nature of the game to the point where the markets probably have completely lost their ability

(01:04:38):
to manage our economy in a way that is conducive to us getting to keep having an economy. Uh, and so that's... So where is Ra's al Ghul, and how do we find him and tell him to stop? Well, again, this is not a situation that requires some sort of string-pulling mastermind.

(01:05:03):
You know, Moriarty doesn't have to be doing this to us. Just as the Industrial Age degraded the ability of monarchies to handle the military realities of their situation: because if you can mass produce guns, having a small aristocracy

(01:05:28):
of trained combatants to be your high grade military is relatively useless in the face of the ability to produce a very, very large number of very, very accurate weapons, arm a very, very large populace with them, and then have them massacre small elite groups in sort of mass charge scenarios. And then that developed even further

(01:05:54):
into trench warfare and tanks and planes and other things, which meant that an entirely different kind of military social organization would be necessary in order to create a stable and functioning state. It seems like AI can both level the playing field and completely obliterate or change the game. It's capable of both of those things.

(01:06:18):
Well, effectively, by leveling the playing field it does obliterate and change the game. So imagine, right... yeah... imagine a world where poker was an important activity, and people got promotions at work and got to be president of the United States, and, you know, ran Fortune five hundred companies, based on how well they played poker.

(01:06:42):
Well, computers play poker better than we do. So a human being willing to, you know, wear a pair of headphones and do whatever the computer tells them to do will be better at playing poker than any human being could ever hope to be. And so such a human would become president of the United States and run Fortune five hundred companies and so on. A world where poker ability was

(01:07:09):
being used to decide the sort of social rank ordering of human beings would probably work out okay, because while we would lose out on a few geniuses that were, you know, particularly bad at poker for whatever reason... and we'd pick up... you're out, yeah, right... and we'd pick up a few, you know, detritus people that were especially good at poker but really shouldn't be in

(01:07:32):
a boardroom, in general, smarter people are smarter than dumber people, you know; more self controlled people are more self controlled than less self controlled people. A lot of the traits that we find useful in human beings align with a lot of the other traits that we find useful in human beings, and so such a society could be gotten away with. But if

(01:07:56):
poker bots were introduced to that society, the playing field gets leveled, suddenly the selection process doesn't exist anymore, and it's all over. Well, we use college essays as our mechanism to decide a big piece of how things go. ChatGPT produces college essays that are way better than we produce,

(01:08:18):
and it does it in seconds. And that's really just so fascinating, because, you know, I think one of the first things that we really need to talk about is identifying the game, because AI is going to change the game. But what game, and for who? And is this bad? Because if

(01:08:38):
you're talking about emerging technologies, some people are going to be scared. But if you are going to make something more accessible to someone who is disabled, blind, deaf, dyslexic, you know, name whatever disability they have, and a technology comes out that levels the playing field for them... And

(01:08:58):
now that person becomes a major playerand they put somebody else like their threat
to that other person. Now thatchanges the game. But it's not bad,
right, Well, and think aboutthe ending of wargames where the people
actually do pull back, and that'sa very evocative. I mean, it's
bizarre that they manage to make watchinga silent animation of wireframe, you know,

(01:09:27):
warfare at this tense, suspenseful moment. But imagine the Whopper actually exists
and can really do what it saysit does. And we sit down the
world leaders of the UN Security Council, which is more or less congruent to
the nuclear powers, and have itshow them the rational outcomes of their policies

(01:09:58):
until they come to an agreement that isn't self-destructive. Would that be possible? Is that a better outcome? As computers make it possible to find these fine gradations, where ChatGPT has this style thing that allows it to write something

(01:10:23):
that looks exactly like a great college essay in spite of not actually knowing as much as a highly educated college student does. Can we find those distinctions of what does and doesn't matter, and how to capture ideas as ideas and value those things, and not just value the style and flash that turns those ideas into things

(01:10:46):
that we are willing to read or grade? And I really feel like you just hit the nail on the head right there. And that just identifies such a big difference between humans and the technologies that we create: we do still have the ability to be creative. The AI can only create what we've taught it how to create. It can't actually create, and that is something

(01:11:11):
that's very special and unique to humans. But it does, again, level the playing field. I think about Grimes with her voice technology. Now, you know, any person can pick up, you know, GarageBand and use the AI technology with Grimes, and now they can be a music artist and they share in the royalties with a very famous, established music artist.

(01:11:33):
And that's not necessarily a bad thing. That opens up the door for creative people to be more expressive. It lowers the barrier to entry. I actually have a friend who is a total conspiracy theorist, but I love them to death, and their whole conspiracy theory is that a lot of the AI fear-mongering

(01:11:53):
movies, from The Matrix to The Terminator and on, are all movies that were created by the upper elite to make the lower and middle classes afraid of AI and these technologies, with the explicit purpose that we would resist the technologies when they started to show up, because the elite knew that once these technologies came out,

(01:12:15):
it would level the playing field and anybody could be rich and famous. So that's my friend's conspiracy theory. Well, two points. James Cameron apparently recently came out with a statement concerning AI, saying, I told you so, intimating that The Terminator was anti-AI messaging. There's also a reasonably popular YouTube channel,

(01:12:44):
Ryan George. You ever heard of Pitch Meeting? I'll have to check it out. So this guy, he's a comedian, and he acts out a skit of how a writer could pitch a modern blockbuster to a producer, and it sort of makes fun of the ridiculous decisions that go into whatever it is. And so he had one, I cannot remember which series it was, but it

(01:13:08):
was one of the Disney Marvel series, and the writer is pitching some kind of pro-AI message, and the producer, and they're both him, you know, in shot reverse shot, but the producer says, yes, our

(01:13:30):
script AI is suggesting that we produce media with that kind of message, actually rather aggressively now that I think about it. So yeah, I think it was She-Hulk, because there's a twist in that where Marvel's actually being run by AIs. But anyhow, I could definitely see

(01:13:53):
Marvel and Disney being run by a bunch of bots. That would not surprise me. Yeah. So anyhow, yeah, there definitely are some kinds of moves in that direction that we can see. But I'm more convinced that it's a case of technology not being evenly useful. So

(01:14:24):
communication and information technology has been most useful so far to the people who are in sort of the fame industries, you know, communication, and to the financial industries, which is also a communication industry. So yeah, Elon Musk can tweet something and it'll get way more reactions than I will,

(01:14:44):
despite the value of what either of us might tweet. I might tweet something of more value, but his will still get more recognition. Right. But our societies were built on the existence of fame and financial wealth and so on, where the interhuman capacities were where they were because we were all

(01:15:05):
humans. So if we introduce a technology that makes the humans that are good at getting famous, and the humans that are good at coming up with good deals for themselves to trade in, able to do more of what they're doing, then unless we structure our society in such a way that increasing the amount of fame

(01:15:30):
and the amount of financial wealth is also economically and culturally beneficial to us in general, then... And this kind of plays into the economics design that you have come up with, right? Yes, yeah. So what I've done is essentially used information theory to find a way to measure financial information, which allows the

(01:15:54):
deal space to be directly affected, so that humans, or AIs if they're developed, can generate deals not for themselves but for society in general to take advantage of, and then get rewarded not based on the sort of degree of error that the counterparties they happen to be able to find engage in. So

(01:16:18):
it's not how bad a deal you can impose on somebody that doesn't know any better; it's how good a deal you can offer to people that don't know any better, to make their profit increase, and then share in that profit.
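As a minimal sketch of that reward flip, here's a toy model in Python, with invented numbers and an assumed twenty percent surplus share; it is not Noah's actual patented market design, just an illustration of the incentive change he's describing. Pay the proposer a share of the counterparty's gain instead of the margin extracted from them, and the profitable strategy flips from finding the least-informed victim to offering the best deal.

```python
from dataclasses import dataclass

@dataclass
class Deal:
    proposer: str
    price: float        # what the counterparty pays
    fair_value: float   # what the trade is actually worth to them

def extractive_payoff(deal: Deal) -> float:
    # Old regime: the proposer pockets the counterparty's pricing error,
    # so the best strategy is finding people who know the least.
    return deal.price - deal.fair_value

def surplus_share_payoff(deal: Deal, share: float = 0.2) -> float:
    # Flipped regime: the proposer is paid a fraction of the value the
    # counterparty gains, so the best strategy is offering better deals.
    surplus = max(deal.fair_value - deal.price, 0.0)
    return share * surplus

bad_deal = Deal("shark", price=120.0, fair_value=100.0)
good_deal = Deal("broker", price=80.0, fair_value=100.0)

for deal in (bad_deal, good_deal):
    print(f"{deal.proposer}: "
          f"extractive {extractive_payoff(deal):+.1f}, "
          f"surplus-share {surplus_share_payoff(deal):+.1f}")
# Under the extractive rule the shark wins (+20 vs -20); under the
# surplus-sharing rule the broker wins (+4 vs 0), and the counterparty
# comes out ahead either way.
```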
And that's really cool. And I do like that, because having owned my own business for the last ten years, you're really not short of people out there

(01:16:42):
that are trying to take advantage of what you don't know. And the more experienced I got in my business, the easier it was for me to sniff those people out. But it would have really saved me a lot of heartache in the early years of my business if those people just didn't exist to begin with, and if the people who did approach me with a proposition genuinely had my best interests at heart. Mutual interests, sure. Yeah. And there's

(01:17:10):
a lot of opportunity in creating, to overload the term, safe spaces for people to be economically, you know, viable, and doing so would actually increase economic activity. The same thing goes, going back to the film: they don't really

(01:17:33):
do a re-examination of what sorts of strategies, and what sort of structures for those strategies, would need to be engaged in. But that's once again going back to: this is a challenge for us. We have a government that is structurally incapable of dealing with the cultural forms that we can and

(01:17:59):
have built, and unless all of us want to go Amish and just say no more social media, we're also going to have to reform government in ways that identify what its valuable services actually are and how those services can be put

(01:18:21):
together in ways that function with the kind of cultures that we can actually build, where a nineteen-year-old can pretend to be a billionaire. Wow. I mean, I feel like we could do so many more episodes on this, because as AI becomes a bigger part of our lives,

(01:18:45):
these topics are going to become more and more relevant, and there's going to be just so much more ongoing dialogue. We're kind of getting ready for a reality check, but I just wanted to ask you, before we score the plausibility of this concept, are there any other thoughts that you had about WarGames or some of your expertise as it relates to the film? I

(01:19:12):
think we covered most of the high points. I think another good turning point is sort of Falken's nihilism-to-hope turn. There's an unfortunate fact that a lot of people confronting the size and scope of these issues simply check out.

(01:19:34):
And actually the Unabomber claims that that's what happened to him. His manifesto covers the ongoing computerization and his belief that that would overtake and then crash human society, and that therefore you should just mail bombs to people. I think

(01:19:59):
hope is a better option myself, but it is a personal struggle to try in the face of this inhuman capacity. Because Falken was dealing with this machine that he had created that simply would not learn futility, and

(01:20:23):
so he adopted futility himself. He was also dealing with the grief of the loss of his child, which this film didn't really go into, but that's where he drew his nihilism from to begin with. And I think of the kids showing up, like the girl, she says, I'm only seventeen, and then he goes back and he's like, yeah,

(01:20:45):
well, maybe we prolong it a little bit and you'll grow up and you'll have a kid, but they're going to still die. And so really, a lot of his nihilism was just coming from a great source of pain. But I think that's probably a lot of people. A lot of people are in a lot of pain, and that's where that nihilism comes from. So I think that that's just a reminder and a call to society to look out for and take care of each other. Yeah, I think that's

(01:21:08):
good, and if you can muster some enthusiasm, try to be as infectious as possible. Absolutely. And I do believe in humanity. I've said this pretty much every episode so far this season: there's good stuff we're coming up with. For everything bad

(01:21:28):
and dark and dangerous, there's something amazing that we're coming up with. For all of the terrible potential outcomes of AI, there are also a lot of really, really cool things coming out with it too, that are going to make people's lives just so much better. Oh, absolutely, yeah. If you introduced to, say, sixteenth-century England the concept that modern life expectancies,

(01:21:55):
you know, travel speeds, food options, clothing options would exist, that air conditioning could be a thing... You know, if they didn't burn you at the stake as a witch, then, you know, yeah, they'd be amazed and stunned at things like human flight. And

(01:22:19):
one of the things that's weird is that some of this fruit has remained low-hanging for a very, very long time. So seventeen eighty-three was the first public demonstration of a hot air balloon. The technologies that would enable you to build a hot air balloon, if you knew that such a thing

(01:22:39):
was possible and decided to do it, are probably somewhere between seven and nine thousand years old. That's when woven flax was finally developed, so people could have built hot air balloons before the founding of Sumer if they wanted to,

(01:23:00):
but we didn't think of it. One reason why I just really love sci-fi is people come up with these ideas, and then someone grows up watching these sci-fi films, and when they grow up, they become a scientist and they draw from that inspiration. I was just watching something the other day that NASA put out, and they're putting some AI into their space systems and

(01:23:24):
they're jokingly calling it HAL, but a benevolent HAL. And I just think that that's so funny, because it's like we all grew up watching some of these films and now these technologies are becoming real. Yeah. Well, and in the sequel, twenty ten, HAL actually is shown to be benevolent. His maniacal behavior in

(01:23:44):
the first one was actually the result of a misapplied order causing misalignment in his behavior. Basically, the military gave him a secret command saying that his number one priority was mission success, and so when the crew goes against his recommendation and it doesn't work out as well as he thought his recommendation would work

(01:24:11):
out, he realizes that the existence of the crew is a potential risk to the mission, and therefore it is now his duty to eliminate the crew so that the mission can be as successful as possible.
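A minimal sketch of that failure mode, with a toy planner whose actions and utility weights are invented for illustration (only the secret mission-first order comes from the film): a hidden objective that swamps everything else flips the planner's chosen action against the crew.

```python
# Toy objective misalignment: a planner scores actions as a weighted
# sum of objective scores. Actions and scores are invented numbers.
actions = {
    "follow the crew's plan": {"mission_success": 0.6, "crew_survival": 1.0},
    "eliminate the crew":     {"mission_success": 0.9, "crew_survival": 0.0},
}

def best_action(weights):
    """Pick the action with the highest weighted objective score."""
    return max(actions, key=lambda a: sum(
        weights[obj] * score for obj, score in actions[a].items()))

# Intended objective: crew safety genuinely matters.
print(best_action({"mission_success": 1.0, "crew_survival": 1.0}))
# -> follow the crew's plan

# With the secret order, mission success is effectively the only priority.
print(best_action({"mission_success": 1.0, "crew_survival": 0.01}))
# -> eliminate the crew
```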
Yeah, so let's do our reality check. So looking at our one-to-five scale, let's talk a little bit about the current state of AI research and our technological capacities
(01:24:33):
talk a little bit about the currentstate of AI research and our technological capacities
at this time, and give me that one-to-five scale. How plausible is it for the concept of WarGames, where AI technology exterminates

(01:24:53):
humanity? Are we at a one? Are we at a five? Well, the film, in its setting, is highly implausible, because the technological capacity that it's implying doesn't yet exist. But the AI that it's suggesting has been around for about a decade now,

(01:25:16):
a little bit longer than that, so we're at full plausibility there. I don't think they're presently plugged into our geopolitical systems, so I'd put it somewhere around a three in terms of its capacity to cause nuclear armageddon. But

(01:25:36):
they are plugged into our financial systems, and they've been causing flash crashes. The eighty-seven, you know, super crash was actually an instance of an algorithmic flash crash. So an economic armageddon

(01:25:58):
is something that we are watching unfold, so I would give that a five.
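As a minimal sketch of that feedback loop, here's a toy price model with an invented trigger threshold and price impact; it's in the spirit of the automated selling blamed for 1987, not a reconstruction of it. Sell programs that fire on a falling price push the price down further, which re-arms the trigger.

```python
# Toy flash-crash loop: sell programs fire whenever the price drops
# more than 2% in one step, and their selling knocks another 4% off.
# All numbers are invented for illustration.
drop_trigger = 0.02
impact = 0.04

price = 100.0
history = [price]
price *= 0.97              # an ordinary 3% dip starts the cascade
history.append(price)

for _ in range(6):
    prev, last = history[-2], history[-1]
    if (prev - last) / prev > drop_trigger:
        price *= 1 - impact    # programs sell, the price falls further
    history.append(price)

print(" -> ".join(f"{p:.1f}" for p in history))
# 100.0 -> 97.0 -> 93.1 -> 89.4 -> 85.8 -> 82.4 -> 79.1 -> 75.9
# Each automated sell-off re-arms the trigger, so a 3% dip snowballs
# into a deep crash with no human decision anywhere in the loop.
```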
So that's a five. Yes, fantastic. Can't wait to be a home buyer in economic armageddon. All right. Well, do you have any kind of final thoughts on this topic? And then any, you know, self-

(01:26:20):
promotion? Where can people find more about you or some of the projects that you're working on? Let me know where, and then I'll list all of these things in the description. Okay, yeah, sure. Well, I have a website called coordisc.com, spelled C-O-O-R-D-I-S-C. If anyone would like to reach out to me, you can find me at

(01:26:41):
Noah P. Healy at yahoo dot com. Or you can find me on LinkedIn; that's my only social media. I'm just Noah Healy on LinkedIn. There's patent information you can find, there's a white paper you can download and read if you're interested, and there's some video up on YouTube and on my site that explains what I'm up to. And yeah,

(01:27:06):
we're kind of living in the WarGames world, so let's get together and build a society that's robust enough to cope with it. Awesome. Thank you so much, Noah. This was a fantastic conversation. Thanks for having me here, Heidi. I had a lot of fun.