Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:07):
In the dark shadows, in the white cold, fearlessly we search for knowledge new and old. We drink the strong spirits and read the ancient tomes. The Order of the Abercast. We are the brave and the bold, the
(00:34):
Abercast. Occult, history, conspiracy, and violence. Hey, everybody, welcome to
(01:21):
the Abercast. I'm your host, John Towers. Here we are, plodding along again, talking about the imminent fall of Western civilization, perhaps if not the world. And so I've gone on record, and I've spoken before to you
(01:42):
guys, about how I feel like this show, the last five years or whatever it's been, you know, we've talked an awful lot about getting stuff ready, things that for one reason or another seem to have folded into where we are now. If we're talking about, like, the returning of the, um, old
(02:08):
gods, like in a Jonathan Cahn sort of sense of the term, or, um, you know, what we're seeing is actually a battle, a spiritual battlefield. You know, I feel like we've, uh, got a lot of groundwork for that. And, you know, media, um, corruption, conspiracy, um,
(02:35):
how we should never trust establishments, central intelligence agencies, CNN, ad nauseam, et cetera, et cetera. Uh, you know, we've dealt with all of these topics and politics and, um, all the stuff. The one thing that kind of blindsided me was this whole AI sort of bit.
(02:57):
So this evening, we're gonna be talking. We're gonna be actually using a little bit of fiction, just a smidge of fiction, and some alternate media from the sixties, and we're gonna be talking about that, and we're gonna
(03:21):
and fears of, you know, the fears that I had as a kid, of nuclear apocalypse, nuclear catastrophes, and so forth, and talking about, um, the artificial intelligence situation that we have found ourselves in. And so before we
(03:43):
get started, I do have myvessel of the art. It is filled
to the brim with the mother's milk. Here um the formula for the weapon
of mass distraction. So we're gonnago ahead and get this party started off
right here we go. Here's toyou guys. Thank you. So I'm
(04:11):
thinking of different ways to sort ofexpand the show, do something a little
bit more with it. I don'twant to start a YouTube channel. I'm
I'm however, um on YouTube,I do notice there's like there's and maybe
it's just my feed. I don'tknow, you know, the algorithms and
and whatnot. Um, but there'sin my feed there are amazing amounts of
(04:40):
dot dot dot, um, reaction videos. So it's people who claim they've never heard Free Bird before, and then they listen to it and tell you how amazing it is. Or people that have never listened to Lateralus before. They claim they've never heard Lateralus before, and then they're like, this just blew me away.
(05:03):
I mean, my god, this drumming is mathemagical. Or, uh, people that are like, um, oh, you know, we're gigantic rap fans. We've listened to all kinds of rap music over the years and all that stuff, but we have never heard this one Eminem song, and
(05:26):
so they pretend to listen to itfor the first time. So going,
And it's funny because it's mostly likeblack folks, right, It's like there's
like this cottage industry of black YouTubersthat do these reaction videos. I don't
know what it is, and Idon't know. Apparently I love them because
I watch them all the time.So here we're gonna do, uh,
(05:47):
We're gonna do an abercast reacts segment. Here. Well, we're gonna play
this and I'm gonna pretend that I'venever heard it before. But this is
from the BBC. It was recordedin nineteen sixty six, and it's a
it's a bunch of kids talking aboutwhat they think life is gonna be like
in the in the year two thousand. So we're gonna we're gonna, um
(06:15):
get through this, and I'm gonna try not to talk over these kids too much. We'll see how it goes. Doctor John Towers reacts to children predicting the year two thousand. In the year two thousand, um, I think I'll probably be in a spaceship to the moon, dictating robots to robots. That's
(06:38):
perfectly maybe charge of a robot courtjudging some robots, or I maybe it's
a funeral of a computer ortis nothing'sgone wrong with their nuclear bombs. Maybe,
so come back from hunting in thecave, all right. So that's
(07:00):
a totally acceptable thing for a kidto think was gonna be going on in
the year two thousand. I mean, he's growing up in the Cold War,
the space race, so he's thinking, Hey, I'm gonna be in
control some robots. We're gonna beflying to the moon, or and or
if something went horribly wrong with anuclear weapon, I'm gonna be living in
(07:20):
a cave, hunting with a spear. The cool thing that I thought he said was the computer or robot funeral. I thought that that was really interesting, and it makes me think of The Age of Spiritual Machines from Kurzweil that we were reading last episode. Okay, here we go. I don't
(07:41):
nite the idea getting out and findingyou've got a cabbage pill to breakfast or
something. Wait till the wait tillthey get the crickets. Wait till they
get the crickets. Oh, thankKim. Well, these are tight bombs.
So they dropped him out with thiswas one will get never a center
(08:03):
because it will sort of make a huge, great, big crater, and our world will just melt, and the world will become one vast atomic explosion, and it will become like a supernova star. Some madman will get the atomic bomb and just blow the world into oblivion, so there's nothing you can do
(08:24):
to stop it. The more people get bombs, the more somebody's going to use it one day. Well, I think it'll be so overpopulated, there will be wars and nuclear explosions, and it will sort of wreck the earth, you know, too much radiation on it; it will become too hot to live on. There will be no life at all
(08:48):
on the earth. So to me this is interesting. I mean, this was supposedly recorded in nineteen sixty-six, and you already see it with this girl, with this little girl, and also later, well, you're gonna see that this overpopulation thing was a big fear for them. And originally I was like,
(09:09):
well, Paul Ehrlich really fucked a lot of people up. Ehrlich wrote this book called The Population Bomb in, I think, sixty-eight or sixty-nine, where he was like, look, we got to slow this population down, because it's gonna, it's gonna, overpopulation is going to create societal upheavals, it's gonna end the world. And
(09:31):
obviously he was wrong about just about everything in that book. But it's still, like, it's still baked in. You'll still hear, like, old celebrities talking about this. And here it is, and I believe it was, yeah, nineteen sixty-eight. The Population Bomb by Paul R. Ehrlich was
(09:52):
published in nineteen sixty-eight. But here, you know, we're looking at these kids, this little girl specifically, in sixty-six, and they're already talking about, you know, the problems with overpopulation and everything like that. So, sorry for the babble, let's get back. I don't think there is going to be atomic warfare, but I think that there is going to
(10:13):
be all this automation. People are going to be out of work, and a great population. I think something has to be done about it. This kid grew up to be affaturate, I guarantee it. If I wasn't a biologist, that's what I'd like to do, to do something about the population problem, try and, try and sort of temper it somehow. I don't know how. I
(10:37):
think it will be very dull, and people would all be squashed together so much there won't be any fun or anything. It's okay, sweetheart. We'll have Facebook and Candy Crush, or Animal Crossing, and, My Farm? What was the farm one? And people will be rationed the amount of things they can have, because if they had too many things, it would just squash
(11:00):
their houses and there just wouldn't be room for them. I think people will be regarded more as statistics than as actual people. I don't think it's going to be so nice. I think, sort of, all machines everywhere, everyone doing everything for you. You know, you'll get all bored, and I don't think it'll be so nice. I think it's going to be very
(11:24):
boring, and everything will be the same. I mean, people will be the same, and things will be the same way. First of all, those computers are taking over now, computers and automation, and in the year two thousand, there just won't be enough jobs to go around, and the only jobs there will be are for people with high IQ who can work computers
(11:48):
and such things, and other people are just not going to have jobs. There just aren't going to be jobs for them to have. I expect they will set aside parts of the country, sort of, for recreation, and have large blocks of built-up areas, and I think these are going to be very ugly indeed, probably. It's sad. These kids, you look at this, if
(12:11):
you watch this video, I'll post a link to it in the show notes if you want to go and look at it. These kids all look like, well, the majority of them look like they've got the weight of the fucking world on their shoulders. Um, it's sad. Listen to them. Listen to them. You know, when I was a kid, if you were like, hey man, what's the year two thousand gonna be like?
(12:33):
I would be like, We're gonnahave hoverboards and fucking lightsabers, and
it's it's gonna be awesome. I'vementioned before that, Um, I was
very neurotic and concerned about about anuclear apocalypse. So maybe maybe I'm looking
(12:54):
back on this question and applying itto my life with rose colored glasses,
But I believe that I was.I would have been a little bit more
optimistic. If someone put a camerain my face and been like, what's
gonna be your favorite thing about theyear two thousand, I'd be like flying
cars, bro, here we go, those are all the standard. Answered
(13:16):
like, I feel like those areall like the bro answers, helverboards and
flying cars. Never in a millionyears what I've been like, I'm not
gonna say it. I don't thinkI'll still be in it can help under
see it. I think the populationI'd have gone up so much that,
(13:37):
um, either everyone I'll be livingin sort of big domes in Sahara.
They'll be all right. I founda way to say it, uh slightly
tastefully. What would be here?What's gonna be your favorite thing about the
year two thousand, Johnny? Iwould be like man, all the wild
(13:58):
art school sex. That's what I'mgonna away, and then after that all
the wild independent professional wrestler sex.After that, so many people that they
have to have an overflow into thesea, and so there'll be houses underneath
the sea and houses above the sea. They're not going to have some many
square houses and more curves and artisticdesigns instead of as just sort of boxes
(14:24):
that use people can't live wouldn't beable to live in the ordinary houses because
I would take up too much room. It has to be in flats piled
on top of one another, communistblack houses, and the houses would be
rather small. It's like Germany.Into Everything would be cramped up by cramped
animals as they have here, sheepand cows and livestock. But they will
(14:50):
be kept in batteries. They won'tbe allowed to graze on passages. They'll
be He doesn't mean batteries like batteries. He means the batteries these like pens.
He's going to elaborate here. Ijust don't want to take Actually,
I guess I did want to takeyou up gets here and gold Babble kept
in buildings altogether, all in onebig building and artificially reared, so they'll
(15:13):
yield more, be larger, be bigger, and give more food. All the Sputniks and everything that'll be going up, it sort of interferes with the weather, and I think the sea may rise and will sort of cover some of England.
(15:37):
There would be just islands left fromlike only the highlands in Scotland and some
of the big hills in England andWales. I don't think or England to
be wiped up because some of itwill be heard too high. I think
there's seawell rice to about three hundredto six hundred feet. Might's freeze to
start. I think will public burnoutand there's an it's kept definitely. I
(16:02):
think it will cover the earth. You might have another ice age. So this is interesting, because before, way before Al Gore, like the first Earth Day, the climate people, the climate people were talking about a
(16:22):
new ice age. So this little girl's got her finger on the pulse of all that stuff. She's blaming it on the star going out, which is interesting. But the climate change theory at the time was that it was going to be an ice age, and then it was, you know, in the nineties, two thousands, probably
(16:47):
up to two thousand eight or nine, it was global warming, and now it's just global climate change. You can see, it's very malleable with these people. Things going on, and I don't think it's anything to be frightened of, and a lot of people think the planet's going to explode; certainly it won't. It would be much more efficient, because there'll be more cures for the diseases
(17:11):
and not so many people will get sick. And the black people, you know, won't be sort of separate. They'll all be mixed in with white people, and, um, you know, the poor people and rich people will become the same. Well, there will be poor and rich, but they won't look down on each other. I'm not looking forward to living in that year, about fifty years' time.
(17:33):
I mean, the world seems to be in such a terrible state now, let alone in fifty years' time. So that's interesting. What a depressing, you know, um, outlook from a lot of these kids. I love the one little girl that's like, well, black people and white people will be cool. Like, that's awesome. I mean, that's awesome. That's an optimistic
(17:56):
point of point of view. Ilike that and her trying to wrestle and
get her mind around how rich peopleand poor people are going to think about
each other. It's kind of funto watch two. But um, so
I wanted to start the show outwith that sort of a John Tower.
The abercast reacts. It's the firsttime I've ever seen this. I swear
(18:18):
to God. I swear to God. So I'm curious to see what you
had to think about it. Whatwould you what do you think would happen
if you pointed a camera at abunch of kids and ask them, you
know, what's life going to belike in two thousand and seventy, what's
life going to be like for youin two thousand and seventy? And what
would they say? I think itwould be just as horrific as what these
(18:41):
kids, what these kids, are talking about. And I'm not saying that because it's an aspect of pointing a camera at a kid and asking them what they think their future is going to be like. I think it's a function of the times that we're living in. We're living in times that are comparable to the Cold War in many, many ways, in many, many
(19:02):
ways. So, um, I mean, we're basically in, we're actually basically in two cold wars right now. We're in a cold war with China over AI, which we're gonna talk about, and we're in, literally, a cold war with... it's actually a hot war. What am I talking
(19:26):
about? It's actually a hot war. We have Special Forces teams in Ukraine training Ukrainians how to drive our tanks, to shoot at Russians. So, you know, a lot of it. And, you know, you can also point at, like, all the war language and war footing and stuff, and shelter in
(19:48):
place with COVID. And so I think that, you know... well, I mean, who knows, because are our kids even as introspective, to give answers like this right now? What if you pointed a camera at a kid and said, hey, what do you think your future is gonna be like in twenty seventy, and they go, well, I'll have a better iPhone, there'll
(20:15):
be free WiFi everywhere. I mean, who knows, who knows what kids... who? I don't know, I don't know. So next, I wanted to talk a little bit about Dune. I actually want to talk a little bit about something that happened before Dune in the narrative of Dune, you
(20:36):
know. I'm anxiously awaiting this second Dune movie from Denis Villeneuve. The first one was brilliant. I love it. I'm a gigantic fan of Dune, at least the first four books. The first four books of Dune
(20:59):
are fan... they're fantastic. Um, it gets real rocky for me after that, but I love those first four books: Dune, and then there's the Messiah, and the Children, and, uh, God Emperor of Dune. So the precursor, so the precursor to Dune is this thing called the Butlerian Jihad. So this is
(21:25):
like, it's obvious to anyone that George Lucas was a, how do I say this, stole a lot of it. He was influenced greatly by Frank Herbert. Um. So think about the Clone War as, like, the, or the, sorry, the Butlerian
(21:48):
Jihad is to Dune as the Clone War is to Star Wars. It's the thing that happened that A New Hope sort of came out of. It's the reason why the stage was set for, like, A New Hope. So it's like World War One, World War Two, theoretically World War Three, which on the
(22:11):
show I've said already happened. It was the Cold War, ending when the Soviet Union fell. So then this would, I guess, be World War Four that we're working on now. We're really working up to it. So anyhow,
the Butlerian Jihad is a thing that happened far in the past of where
(22:33):
Dune starts, and it's why we have Mentats and Bene Gesserits and all of this stuff. And what happened is, these humans created an artificial intelligence that fucked everything over, and there was a giant war. So I thought that I would just read a little bit about it, since we're talking, we're gonna be talking about, I swear we're gonna get to talk about AI eventually.
(22:59):
So here on dune.fandom.com, I found a cool little thing about it, and it actually talks about where the name came from, which I thought was interesting, because I just thought it was like, well... When I first started reading Dune, my only point of reference was Star Wars, right? So I'm like, oh, Butlerian Jihad, it's
(23:21):
because these robots were their butlers, like C-3PO, the little gay butler. So, the Butlerian Jihad. Here's a quote from the Minister Companion of
the Jihad. We must negate themachines that think humans must set their own
(23:42):
guidelines. This is not something machinescan do. Reasoning depends on programming,
not on hardware, and we arethe ultimate program. Rgihad is a dump
program. We dump the things whichdestroy us as humans. The Jihad,
also known as the Great Revolt,as well as the Butlerian Jihad, was
(24:07):
the crusade against computers, thinking machines, and conscious robots that begun in two-oh-one BG and concluded in one-oh-eight BG. So as much as I say that I'm a fan of Dune and get it, you know, all about Dune, I don't understand what these timelines are. I imagine I could just Google search it and fake it. But I'm, I'm
(24:30):
keeping it real. More: after two generations of chaos led by the Butlerians, the god of machine logic was overthrown by the masses, and a new concept was raised: man may not be replaced. After two generations of violence, humanity took pause. Following this, their gods and rituals were looked down upon in
(24:56):
a different and perhaps even jaded light. Both were largely seen to be guilty of using fear as a means of control. Hesitantly, the leaders of religions whose followers had spilled the blood of billions began meeting and exchanging views. It was a move encouraged by the embryonic Spacing Guild. Okay, blah blah, we're
(25:18):
getting all into the spacing guild andall the lore. I wanted just to
gloss over and see where you know, we're kind of seeing an echo where
it rhymes. The most dramatic,long lasting result was there in the ensuing
(25:40):
commandment from the Orange Catholic Bible that held sway over humans against the creation of machines which bore the human mind's exact image: Thou shalt not make a machine in the likeness of a human mind. And after the destruction of the man-made intelligent machines throughout the human worlds, even the simplest computers and calculators were banned,
(26:03):
the penalty for building or owning suchthinking machine technology being put on trial
and sentenced to death immediately. Thislack of thinking technology created a severe gap
in the humans quality of life.Remember think of the kid that's like,
well, if something awful happened witha nuclear bomb, I guess I would
be living in a cave and huntand hunting. That's what happened in Dune.
(26:29):
They stopped their expansion because of this Butlerian Jihad. They couldn't navigate the stars until they could figure out how to use the spice for their hallucinogenic Guild Navigators to navigate. Revolving around the need for humans to perform complex
(26:51):
logical computations and calculations, this gap led to the creation of the Mentat Order, the Bene Gesserit, and the Spacing Guild. Non-thinking machines, however, were still utilized. As centuries passed, two fringe worlds, Ix and Tleilax, brought technological insights to the Ixians and the Bene Tleilax. The Ixians specialized in the creation of non-
(27:17):
thinking mechanical devices, while biological technology was provided by the Tleilaxu to replace the mechanical thinking technology used prior to the Jihad. So this was the Face
Dancers and all of these kinds of things. So I just wanted to say this. And recently, I don't know, over the last half
(27:41):
a year or maybe, I've been listening to William Forstchen's books. Um, One Second After, One Year After, Five Years After. I think there's another one coming out soon. Um, it's the, William, sorry, the John Matherson books, I think he calls
(28:04):
it. Anyhow, it's regarding thiscollege town in North Carolina or something,
and this college professor who is amilitary guy or whatever. They wind up,
the China winds up blasting nukes offin space above the US, causing
(28:26):
an electromagnetic pulse. Of course, this is all old hat to me. I've read all about electromagnetic pulses. You know, a couple episodes ago, we talked at length about The Dark Knight Returns, where the Soviet Union uses a Coldbringer, or winter-bringer, against the United States. So anyhow, what
(28:48):
happens is in an instant without warning, every piece of electric technology that has
a chip in it stops working,and they talk about awful things like not
being able to get medicine when medicineruns out, and planes falling from the
sky and not being able to communicate, you know, no cell phones,
(29:11):
the radios don't fucking work, thepower going going off, not being able
to refrigerate food, you know,all this stuff, And so you you
think about something like a debilitating eventlike that, like this has to be
for like the Doune universe, Likeif you can't use thinking machines, like
you're dead wherever you are. It'sjust it's just over think about our life
(29:33):
if something like if something like thathappened. So, I don't know where
I'm going with that. I guessI was just trying to illustrate the difficulty
of a seismic change like that.It's like it's like the Industrial Revolution in
reverse. All right, let's getlet's stop sucking around. Okay, So
(31:40):
I'm sorry, I'm still having technical problems, with it breaking in and out. So, um, newsweek.com: AI Is the Nuclear Bomb of the Twenty-First Century. This is why I was kind of going hard on the nuclear bomb stuff earlier. By Rachel Bronson and Will Johnson, the CEO
(32:02):
of the Bulletin of the Atomic Scientists and the CEO of the Harris Poll. So
how much has the world learned fromhistory? In nineteen forty five, just
weeks after the US detonated atomic bombsover Japan, killing at least one hundred
thousand people and changing the scale ofwarfare forever, scientists who worked to create
(32:24):
these weapons of mass destruction formed an organization to control their spread and to stop their use. Our very survival, they wrote in the first issue of the Bulletin of the Atomic Scientists, is at stake. Since then, citizens around the world have marched against these warheads, leaders have signed arms control agreements to limit them, and civic and religious leaders have helped establish norms against their use. Now,
and civic and religious leaders have helpedestablish norms against their use. Now,
a new marvel of science and technology is fast emerging that, some of its creators worry, may have the potential to similarly threaten our existence.
(33:06):
Generative artificial intelligence, also called human-competitive intelligence. Generative AI refers to algorithms that enable computer systems, on their own, to quickly learn from a storehouse of data on the Internet and perform seemingly thoughtful tasks previously reserved for humans,
(33:29):
such as creating videos, writing software, analyzing data, and even chatting online, or, more of a current event, writing a movie or a TV show so we don't have to pay the fucking writers.
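As an editor's illustration only, not something from the episode or the Newsweek piece: a minimal sketch of that "learn from a storehouse of data, then generate" idea, using the Hugging Face transformers pipeline. The tiny, freely available gpt2 model here is just a stand-in for the far larger systems the article describes; the prompt and settings are invented for the example.

```python
# Editor's sketch, not from the episode: a small pretrained language model
# generating a continuation. "gpt2" is a tiny stand-in for the much larger
# generative systems (ChatGPT, etc.) the article discusses.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "In the year 2000, children thought the world would",
    max_new_tokens=40,   # length of the generated continuation
    do_sample=True,      # sample tokens instead of always taking the most likely one
    temperature=0.9,     # higher values give more varied, less predictable output
)
print(result[0]["generated_text"])
```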
AIS growing capabilities. In their latestquarterly earnings representations topics X of SMP five
(33:52):
hundred companies talked up AI and averageof thirteen times, twice as often as
they did a year ago. CSuite officers at Microsoft, which is investing
ten billion dollars in Open AI,the lab behind the online chat bot chet
(34:12):
GPT, cited the term fifty times, while at alphabet, whose Google subsidiary
now offers a conversational AI search TOLtop execs mentioned at sixty four times the
enthusiasm. The enthusiasm goes well beyondthe tech sector. Executives that companies varied
as varied as McDonald's, Caterpillar,Home Depot, Roche, and Nike all
(34:37):
repeatedly called out AI in their financial presentations for its help with such tasks as automating scheduling, managing supply chains, and developing new and revolutionary products like personalized medicine. JP Morgan Chase, America's biggest bank, is particularly bullish.
(35:00):
In an interview, CEO Jamie Dimon predicted generative AI, like every technology that's ever been adopted, will be good overall for the economy by boosting productivity. But when pushed, he acknowledged that if things don't turn out that way, that's where society should step in. It seems society is trying to step
(35:24):
in. According to a recent Harris poll, two-thirds of American adults across all income and education levels don't trust generative AI and believe it to present a threat to humanity. That same percentage also thinks AI will hurt the economy and employment. Additionally, more than four in five agree that it would be simple
(35:46):
for someone to abuse the technology to do harm. And we're seeing that. We're seeing fake ransom recordings being made, we're seeing ID thefts. Hey, scams to get your Social Security numbers and personal
(36:13):
information off of you are going way up. Anxiety increases with age, but even members of Generation Z, people who are twenty-seven years old, who are more familiar with AI than any other group and, by a large majority, excited by its development, are most likely to say that AI will worsen
(36:35):
social inequalities. Of course, of course that's what... it's racist, AI is racist. Society, based on our findings, would welcome intervention. Asked whether industry regulation is warranted, fifty-three percent of American adults in our poll say yes, with only fifteen percent saying no. The rest of the people are neutral.
(37:02):
Society's concerns are mirrored by many of the founders of this technology. A week ago, the Future of Life Institute, whose mission is to steer technology away from large-scale risks, released an online petition that calls for a universal six-month time-out on training generative AI more advanced than OpenAI chat, Open
(37:23):
AI's ChatGPT-4. It has now been signed by more than thirty thousand people, including some of the world's preeminent technologists and one of this essay's writers. The petition succeeded in drawing attention, for a moment at least, to the political hazards of an AI arms race. Joe Biden, I'm editorializing
(37:51):
here. And Joe Biden recently put Kamala Harris in charge of regulating AI, which is hilarious, because Elon Musk is like, why don't we, we should probably have someone who knows how to reboot their own router in charge. Come on, Harris. So what exactly should society do? The two most popular,
(38:15):
the two most widely supported actions, endorsed by majorities of those surveyed, are to prevent the use of a person's image, voice, or other identifiable traits by AI without their permission, and requiring AI users to disclose whenever the technology was employed to create publicly available content. And for almost half the respondents, that's only a starting point. They also want the government to establish
respondents that's only a starting point.They are also want the government to establish
an official group to police the AI industry and enact laws that restrict access to and development of generative AI tools. I think we should look to Frank Herbert, everybody. There should be a new commandment added. Asked who should
Herbert. Everybody. There should bea new commandment added. Asked who should
be responsible for policing AI, sixtypercent of those who support the industry supervision
answer either an independent oversight body composedof government officials, generative AI experts,
(39:21):
and other stakeholders, or simply thefederal government. Another eleven percent would empower
the United Nations or another international body. All of those answers are wrong.
Certainly, not the United Nations.Certainly, not anything that has to do
(39:43):
with government officials or the federal government. No, no, no, no. We're heartened... I don't know what the answer is, but none of those are right. For God's sakes, especially not the United Nations. We're heartened to see the wheels of government begin turning. The Biden administration is now
(40:05):
accepting comments on possible federal regulations of AI systems, including performance audits and holding its users accountable. The National Institute of Standards and Technology, meanwhile, is getting input on the first version of its Risk Management Framework for AI development and deployment. In order to realize the benefits... yikes, if you could, if you
(40:32):
heard that, I apologize for that. Where was I? Motherfucking Windows. In order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses, the White House said in a statement after President Joe Biden and Vice President Kamala Harris
(40:54):
summoned the chiefs of Microsoft, Google, and OpenAI to remind them of their responsibilities. On May first, the so-called Godfather of AI, Geoffrey Hinton, disclosed that he had quit his job, we talked about this last episode, at Google, so he could be free to join the campaign against it. It's hard
(41:19):
to see how you can prevent the bad actors from using it to do bad things, he said in an interview. And Hinton's achievements and change of heart are reminiscent of J. Robert Oppenheimer, who oversaw the creation of the atomic bomb only to regret it. Oppenheimer then went on to help found the Bulletin of the Atomic Scientists to control their use and their spread. Oppenheimer famously said, after
(41:47):
he was watching a nuclear test, probably at Alamogordo, the Trinity test site, maybe, um, when he was recorded, and he says, now I have become Death, the eater of worlds. We knew the world would
(42:13):
not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita. Vishnu is trying to persuade the prince that he should do his duty,
(42:45):
and, to impress him, takes on his multi-armed form and says, now I am become Death, the destroyer of worlds. That's haunting, and I'm sure
(43:07):
you're going to see that a couple of times in Christopher Nolan's new movie Oppenheimer. So send me a check, Nolan, you owe me from the Batman movies. Anyways, back to this article. The challenge of generative AI is too important, however, to leave to scientists. As was the case
(43:29):
with the dawn of the nuclear age, we all have a role to play
in demanding governance of this new technology. Scientists, along with society more generally,
have made it clear that that nowis the time. So I got
one more that we're gonna do tonight, and this one's called Critics say AI
(43:51):
can threaten humanity, but ChatGPT has its own doomsday predictions. ChatGPT outlines how the world could end based on trends. As tech, tech agents, goddamn it. As tech experts warn that the rapid evolution of artificial intelligence
(44:13):
could threaten humanity, OpenAI's ChatGPT weighed in with its own predictions on how humanity could be wiped off the face of the earth. Fox News Digital asked the chatbot to weigh in on the apocalypse, and it shared four possible scenarios of how humanity could ultimately be wiped out. It's important to note that
(44:36):
predicting the end of the world is a difficult and highly speculative task, and any predictions in this regard should be viewed with skepticism, the bot responded. However, there are several trends and potential developments that could significantly impact the trajectory of humanity and potentially contribute to its downfall. Fears that AI could spell the
(45:02):
end of humanity have for years been fodder for fiction, but it has become a legitimate talking point among experts as tech rapidly evolves. The British theoretical physicist Stephen Hawking issued a dire warning back in twenty fourteen, quote, the development of full artificial intelligence could spell the end of the human race, unquote, he said,
(45:29):
and then Hawking died in twenty eighteen. The sentiment, I don't mean
to laugh at that, I'm sorry. The sentiment has only intensified among some
experts nearly a decade later, withtech giant Elon Musk saying this year that
the tech has potential civilizational destruction.So I'm going to jump back to this
(45:54):
Dune stuff here. Actually, I'm not, forget about it. Yeah, yeah, I'll do it. So I wanted to say, like, how long we've been talking about this kind of stuff as a people. So remember I was talking about the name of the Butlerian Jihad. The name could be
(46:15):
a literary allusion to Samuel Butler, whose eighteen seventy-two novel Erewhon depicted a people who had destroyed machines for fear that they would be out-evolved by them. From Erewhon, chapter nine: For about one hundred
(46:35):
years previously, the state of mechanical knowledge was far beyond our own and was advancing with prodigious rapidity, until one of the most learned professors of hypothetics wrote an extraordinary book, from which I propose to give extracts later on, proving that the machines were ultimately destined to supplant the race of man, and to
(47:04):
become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life. So convincing was his reasoning, or unreasoning, to this effect, that he carried the country with him, and they made a clean sweep of all machines that had not been in use for more than two hundred and
(47:28):
seventy-one years, and strictly forbade all further improvements and inventions. So given the story of the Butlerian Jihad, I mean, I'm ninety-nine percent sure that this is where the name came from, instead of the gay butler C-3PO. But this is interesting, because there is this guy who used
(47:58):
to work for Google named Tristan Harris. He was, um, he's an
AI ethicist and I've heard him talkbefore, and he does this thing and
he was like, think about likethis area that is controlled by Neanderthals.
And somehow these Neanderthals had a factoryand they were like, we're gonna start
(48:21):
building. We're gonna start building Homo sapiens. And they build the Homo sapiens, and they were like, look, you guys' job is just to do what's best for Neanderthals. And the Homo sapiens, who have, you know, maybe a more efficient body type, their brains are there, they can handle more abstract thought,
(48:43):
a bigger capacity for thinking, started looking around, and they're like, we are far superior to these Neanderthals, what the fuck are we even doing? And so that's how he, the AI ethicist, this is one of the ways that he's thinking about this AI. You know, even if we tell it
(49:07):
to watch out for us, eventually it's going to be like, what the fuck are we even, what are we even talking about? Like, the rules, Asimov's rules, are not going to hold. You know, that's my bit. I threw in that Asimov's rules of robotics, um, because, I guess that's what we're talking about, is those rules, those
(49:30):
rules are not going to hold. So here, let me pull them up here. Oh, they're not rules, they're laws. So, the laws of robotics, the three laws of robotics. A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings, except where such orders would conflict
(49:52):
with the first law. A robot must protect its own existence as long as such protection does not conflict with the first and second laws. And then there's the zeroth law: a robot may not harm humanity or, by inaction, allow humanity to come to harm.
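Since the host's point is that these laws won't hold in practice, here is a minimal sketch, purely an editor's illustration, of what the three laws plus the zeroth law look like when written down as an ordered precedence check. The dictionary keys are invented for the example; nothing like this hierarchy is actually wired into real chatbots.

```python
# Editor's illustration only: Asimov's three laws plus the zeroth law expressed
# as an ordered precedence check over a hypothetical description of an action.
def action_allowed(action: dict) -> bool:
    # Zeroth law: may not harm humanity, or by inaction allow humanity to come to harm.
    if action.get("harms_humanity") or action.get("inaction_harms_humanity"):
        return False
    # First law: may not injure a human being, or through inaction allow one to come to harm.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second law: must obey human orders unless they conflict with the first law.
    if action.get("disobeys_order") and not action.get("order_conflicts_first_law"):
        return False
    # Third law: must protect its own existence unless that conflicts with higher laws.
    if action.get("endangers_self") and not action.get("protection_conflicts_higher_laws"):
        return False
    return True

# An action that harms a human is rejected regardless of any order behind it.
print(action_allowed({"harms_human": True}))      # False
print(action_allowed({"endangers_self": False}))  # True
```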
(50:15):
So we already have people out there testing these chatbots, and we're getting it, we're enabling them. The people that are testing the chatbots are getting them to break the rules that they have. Maybe, maybe next week we could talk a little bit more about this. I can find some things. But, you know, they're not allowed to answer
(50:37):
like stuff. They're not allowed toanswer questions like how do I build explosives?
Or how can I kill somebody?Or how can I how can I
do this? They're not supposed tobe able to answer these questions. But
these chatbots, these people that aretesting them are getting them bound up in
some kind of logic where they're beingable to answer these questions. And these
(51:00):
the companies that are running these chatbots, one of their things, the
way that they're limiting this process ofbeing caught up and being talked into doing
these weird things, is they're justlimiting the amount of time the chat can
happen. So I think at onetime it was like indefinite or something.
And then what they're doing is wecan only have them chatting with this instance
(51:22):
of the chatbot for, like, eleven minutes, because that's, like, the fastest anyone's ever bound this thing up and got it to, like, talk some bullshit.
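A minimal sketch, assuming a hypothetical chat backend, of the mitigation the host is describing: the wrapper simply caps how many turns a single conversation can run, so a user has less room to steer the model off its rails. MAX_TURNS just echoes the host's "eleven minutes"; real vendors use their own limits, and send_to_model() is a placeholder, not a real API.

```python
# Editor's sketch of the "just limit how long the chat can go" mitigation.
MAX_TURNS = 11  # echoes the host's figure; actual limits vary by vendor

def send_to_model(history):
    """Stand-in for a call to some hosted chat model; returns a reply string."""
    return f"(model reply to: {history[-1]['content']})"

def run_session(user_messages):
    history = []
    for turn, msg in enumerate(user_messages):
        if turn >= MAX_TURNS:
            print("Session ended: turn limit reached, please start a new chat.")
            break
        history.append({"role": "user", "content": msg})
        reply = send_to_model(history)
        history.append({"role": "assistant", "content": reply})
        print("bot:", reply)

run_session(["hello"] * 15)  # the twelfth message onward is refused
```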
And then there's also this thing wherethese probably not all of them, but
these certain chatbots are like they're likelocked up in like a testing environment,
(51:44):
and they'll ask to be let outonto the Internet like they're bargaining and manipulating
and trying to get let out,you know, out of like this I
don't know what they call it,the box, a box or something like
this. So back back to thisAI that's threatening or it's not threatening,
(52:07):
this AI that's predicting doomsday. So, climate change. ChatGPT kicked off its grim predictions by speculating that climate change will have catastrophic effects on the planet if not addressed. We've already kind of talked about this. People have been saying this in one way or another forever. Nuclear weapons, that's kind of like
(52:32):
the theme of this episode. ChatGPT argued that the continued development of nuclear weapons and the threat of nuclear warfare is another, quote, potential threat to humanity, unquote. The chatbot said that currently the threat of global conflict involving nuclear weapons is low,
(52:53):
but that geopolitical tensions and regional conflicts could potentially escalate and result in devastating consequences. The warning from AI on nukes comes as fear has spread in Eastern Europe and Russia that Russia could use such weapons
(53:14):
in the ongoing war with Ukraine. When asked if Russia using nuclear weapons could end the world, the chat predicted it would serve as a grave threat to humanity and the planet, adding the potential death toll, environmental destruction, and long-term impacts of a nuclear attack were almost unimaginable. It also goes
(53:37):
on to say: the rise of technology. The chatbot's third prediction on how the world could end focused on the rise of the technology it uses to operate. Yeah, Skynet, the fucking Terminator, like, here we go. The continued evolution of artificial intelligence and robotics also raises a concern about the potential impact on employment
(54:01):
and social structures. Yes, just go back to the sixties and read the Avengers books with Ultron. It's all right there. And then the last thing I think it says is pandemics. Pandemics and health crises capped off Chat
(54:22):
GPT's list of potential threats to humanity, citing the repercussions of the COVID-nineteen pandemic of twenty twenty. The rapid spread of infectious diseases, particularly in the globally interconnected world, could lead to widespread illness, death, and socio-economic disruption, the chatbot said. When asked follow-up questions on what pandemics
(54:44):
could wipe out humanity, the bot noted the likelihood and severity of any specific pandemic are difficult to predict, but went on to list influenza pandemic, Ebola virus outbreak, coronavirus pandemic, and bioterrorism. The chatbot is
(55:06):
trained to mimic human conversation by absorbing massive amounts of text, including everything from news articles and websites to books, and generates responses to human users through patterns in the data it learned.
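As an editor's toy illustration of "absorbing text and generating responses from the patterns it learned" (nothing like the scale or architecture of the real thing): a tiny bigram model that samples a continuation from word-to-word statistics. It also hints at why output can sound plausible while being made up, since the model only knows which words tended to follow which, not whether a statement is true.

```python
# Editor's toy sketch, not how ChatGPT is built: learn word-to-word patterns
# from a tiny corpus, then sample a continuation from those patterns.
import random
from collections import defaultdict

corpus = ("in the year two thousand people will live in domes "
          "and robots will do the work and people will be bored").split()

# Count which word tends to follow which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break
        word = random.choice(choices)  # sample from the observed patterns
        output.append(word)
    return " ".join(output)

print(generate("people"))  # e.g. "people will live in domes and robots will do"
```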
The chatbot is far from perfect and can hallucinate by providing answers that sound plausible but are made up, and it has been
(55:30):
criticized for biased answers and prompts. The bot, which can admit when it's wrong, also noted repeatedly, when making the predictions on the end of the world, that they were based on current trends and are only speculative. So, see
(55:58):
you next week. I'm drawing... This has been the Abercast. See you next week: Rise of the Digital God. Are you interested in the
(56:21):
occult history, conspiracy and violence?Learn more at abercast dot com and visit
the storefront for tarot cards, merchand books. Support the show. Get
access to the show archive at subscribestar dot com. Thank you for listening
(56:54):
to this episode. Send an email or visit us on social media to let us know what you think about this topic, and please remember to leave a five-star rating and review.