
Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Media! Welcome back to Behind the Bastards. I don't
know why I did that, kind of like it was
like I was doing a Halloween opening. Didn't work at all,
horrible idea. But I'm very happy to announce our guest
today, back after a long hiatus from this show, but
not from our hearts, Ify Nwadiwe.

Speaker 2 (00:24):
How's it going? Good to be back.

Speaker 1 (00:27):
Yeah? Oh, so happy to have you back. Ify,
you are a significant part of Dropout, a wonderful
channel on YouTube that is the survivor of CollegeHumor,
which is kind of like we're ken in the comedy,
internet comedy world.

Speaker 3 (00:42):
Yeah, and you guys have been doing some really cool
stuff lately, like some of the funnest, like most interesting
like quasi game show comedy bits like I've watched hours
of it.

Speaker 2 (00:53):
Yeah, yeah, you know, I was talking to
Sam and Dave, the kind of, like, higher-ups there,
which is, you know, which is a testament to how
cool the company is, because, no, I haven't spoken to
the higher-ups at ICC or any of the other companies,
but I was like, oh yeah, this is like where

(01:13):
like panel comedy lives, and now After Midnight's back. But
I feel like Americans run away from panel comedy shows,
and I like an excuse to riff and goof with friends.

Speaker 1 (01:24):
Yes, I think, and I think that YouTube and the
kind of what people get out of a lot of
like streaming stuff too, like not like streaming TV, but like,
you know, streamers is that goofing with friends thing. Like
I watch Red Letter Media stuff for a lot of
the same reason, and I feel like there's a way
to save that stuff. It's just not putting it on
television at eleven thirty at night necessarily.

Speaker 2 (01:47):
Yeah, yeah, yeah, people should be a little more lucid
when they're taking in that kind of media. But yeah,
where else are you gonna watch, you know, a...
I'm trying to think of the name of the show.
It was the one with the guy, he has the,
it's not a top hat, it's like a fedora almost,
and he's a detective and all dads love it.

Speaker 1 (02:10):
Bosch. Yes, oh god. I don't even use the tools
that Bosch makes. They love Bosch, the Bosch heads. Yeah,
I would laugh. But I just got into Reacher, which
is like it's, you know, it's the stupidest thing on TV,
but I love it. It's like, it's like I used

(02:32):
to watch a lot of Walker, Texas Ranger. It's kind of the
inheritor of that, but not racist, or as racist, I
guess. Can't exactly say not at all.

Speaker 2 (02:40):
Look, that's why I'm not even going to judge, because
I remember one time, me and Em, we were up
in the cabin in Wrightwood and we needed to
kill time, and we were like, let's watch this 9-1-1
show, whatever, just to see. And we were
like seven episodes in, locked in. You're like, hold on, this
kind of goes. Angela Bassett knows what
she's doing.

Speaker 1 (03:00):
Mm hm oh yeah, no, she's she's a pro. And
you know, you can't get work like that or work
like you and your colleagues do at Dropout without
human beings actually doing things. Which is why today we're
here to talk about AI, and particularly we're going to
talk about how a lot of the conversation and a
lot of the fandom around AI has turned into more

(03:20):
or less a cult. So that's the premise of
this episode. Ify, I want to start by going back
to a place I was about a week ago as
we record this, the Consumer Electronics Show. Every year they've
been holding it for a long time now, and it's
kind of how the tech industry talks about itself and
what it has planned for the future. A lot of

(03:42):
it's hype, you know, they're kind of talking up what's
coming out that they're hoping we'll get buzzed, we'll make money,
but you also get an idea of like what do
you think we want and what are you trying to
get us excited about? And I think the most revealing
product that I saw this year was the Rabbit R1,
and it's a little square-shaped gadget with a screen,
it's got a little camera that can swivel, and it's

(04:03):
an AI that basically you talk to it like you
would an Alexa, but it can use your apps and
it's supposed to reduce friction in your life by basically
routing every move you make online through this machine intelligence.
So you tell it what you want to do and
it does it instead of like you using like physically
using your smartphone as much. You still have to click

(04:24):
it sometimes. And I want to play you a clip
of this, where this is the CEO Jesse Lyu's keynote speech.

Speaker 4 (04:31):
Our smartphone has become the best device to kill time,
instead of saving it. It's just harder for them to
do things. Many people before us have tried to build
a simpler and more intuitive computer with AI. A decade ago,
companies like Apple, Microsoft, and Amazon made Siri, Cortana, and Alexa.

(04:55):
With these smart speakers, often they either don't know what
you're talking about or fail to accomplish the task we
ask for. Recent achievements in large language models, however, or LLMs,
a type of AI technology, have made it much easier
for machines to understand you. The popularity of LLM chatbots

(05:20):
over the past years has shown that the natural-language-based
experience is the path forward.

Speaker 1 (05:27):
Now. I don't know that I entirely agree with that,
because I think the biggest influence that these chatbots have
had on me is that whenever I try to deal
with, like, an airline or something, I get stuck on
ChatGPT and it's a pain in the ass to
do anything.

Speaker 2 (05:42):
What's so funny about the rise in AI right now
is like, if we really think about it, and god
damn it, Robert, you threaded the needle right there. When
you really think about it, all AI is is just
the evolution of the shittiest part of calling. Yes, yes.
The one thing that we do as soon as we

(06:04):
call is like zero zero zero, let me get straight
to a human.

Speaker 1 (06:07):
Let me talk to a human.

Speaker 2 (06:09):
And these, these like eggheads were like, what
if we did more of that? What if we removed
your solace from this sad part of your life?

Speaker 1 (06:22):
And it's the kind of thing that seems so
off from my experience, where he's like, the problem with
phones is that it's too hard to do things. No,
it's too easy for me to order a bunch of
junk food and have a stranger deliver it. That's been
a problem for me, right, Oh yeah, it's too easy
for me to waste six hours on Twitter, Like that's
what the issue is.

Speaker 2 (06:42):
Do you know how much the easiness of everything has
made me wake up the next morning with cold food
on my porch that I never touched.

Speaker 1 (06:51):
Yes, because it's too easy, It was too easy.

Speaker 2 (06:54):
Sometimes we need those, those like bumps in the
road to keep drunk, stoned Ify from ordering, like, a
triple-double burger that I'm hoping gets to me
before I fall asleep. It never does.

Speaker 1 (07:09):
I've had so many problems with my phone, but
I don't think I've ever had the problem of, like,
this thing's just too hard to use. Yeah, that's just
not a problem I know that anyone has. But
Jesse complains that, you know, there's too much friction with
smartphones, and his device, the Rabbit, is going to let you,
like, you can just tell it, book me a flight

(07:29):
or a hotel on Expedia and you don't even have
to know, like it'll just pick a hotel for you
a lot of the time, or like what flight you
know it thinks is the most efficient. Jesse's goal is
to basically create AI agents for customers, which like live
in this little device you wear and act as you
online to handle tasks you normally use your phone to do. So,

(07:50):
you can tell your rabbit to book you an uber,
you can have it book you a flight, or you
can have it plan your trip to a foreign country.
No, that sounds really bad. It sounds so fucking bad.

Speaker 5 (08:01):
Rabbit's like, hey, how would you like a middle
seat on Spirit?

Speaker 1 (08:06):
Efficient as hell. Yeah, Spirit's so cheap. Now, you can
direct it more, but then that just seems like, well, yeah,
that's what I'm already doing on my computer. Why is
it easier just to, like, work through a vocal chatbot
that might not understand me, or at least will be
as much friction as, like, yeah, when I touch my
phone and it hits the wrong thing? Right, I just
don't see that I'm saving much here. No one also

(08:28):
seems to know how Rabbit's going to integrate with all
these apps, because that means their device has to have
access to them for you, and that's kind of a
big ask for all of these different companies. That said,
no one knows, by the way, how secure it's
going to be. But no one at CES was listening either,
because the first ten thousand pre-order models that opened
at CES sold out instantly. What does that mean? That

(08:51):
doesn't mean a lot of normal people are going to
buy it? It means a lot of tech freaks wanted
this thing. Yeah, that is the thing.

Speaker 2 (08:58):
Too, is like, yeah, if you're at CES, you're already
drinking the Kool-Aid. Yes. And you know,
it's so funny, because I'm very
much that guy, or I was very much that guy
who wanted to be on that cusp of technology. And
I feel like, you know, in my early twenties, it
seems so cool because you're like, yeah, I want to
be iron man, I want to just have full control.

(09:21):
And then you kind of get to the point where,
if you have enough self-reflection, you notice
that you're just kind of using technology to do
things that you could do much simpler and cheaper. Like,
for example, when you get these apps and then
I'm like sitting in a photo editor for like fifty

(09:41):
minutes just to make it look like it was shot
on film. And I was like, I can buy a
disposable camera, like, that exists. And what's great is,
now, when you develop them, they
email it to you so it can go on Instagram.
It's going to take longer, but arguably you might get
some post-flick clarity, that the pic that didn't look good

(10:01):
isn't now on the internet, or you know, you can
like spread it out. But yeah, I am getting.

Speaker 1 (10:08):
Very you are, like, you are so much more thoughtful
in that than like anybody involved in this product has
been over the course of their entire life. Now, a
couple of skeptics who have given reviews have already noted problems.
Richard Lawler of The Verge was like, this thing is
not built for left-handed people to use. Like, they
forgot that left-handed people existed, and so they designed

(10:29):
it in a way that's specifically a pain in the ass. Which, yes,
there's also they brag they have this camera that can
like move on its own, so it can cover stuff
in front of it or behind it, And a commenter
on Lawler's article was like, it's a pretty fundamental design
principle that you don't add moving parts if you don't
need them. And there's plenty of space in this for
a camera in the front and the back, which is

(10:50):
one less point of failure, one less thing for shit
to get gunked up in. This is actually bad design.
They're bragging about this, but it's a bad idea.

Speaker 2 (10:57):
Yeah.

Speaker 1 (10:58):
Yeah, there's a couple of other issues in there. You know,
we'll see it looks like it's going to be a
lot thicker than a smartphone. I just don't know the
degree to which a regular person... it's the same Google Glass issue, right?
Do you want a second thing that you have to
keep on you, like, alongside your, you know...

Speaker 2 (11:12):
That was just going to be my question, so I'm glad. So
is this something that should be replacing your phone,
or is this another thing that you are supposed to
be holding on to.

Speaker 1 (11:22):
I think the goal is for it to eventually replace it,
but at this point you will still need to have both,
so like carry another thing, you know.

Speaker 2 (11:31):
You know, walking around like a drug dealer. You have
two different... It's kind of a bit.

Speaker 1 (11:35):
It's a big device and it's it's not tiny and
it has a big screen. So it's just like, well,
you just made a different kind of smartphone. Again, what
am I gaining? Yeah. That's what he's holding in this video, correct? Yeah.

Speaker 2 (11:46):
Yeah, yeah, yeah, okay. Well, and what, you know, what more
is like that on-top-of-tech person? It's definitely
the person who carries a power bank with them. So
that is a third, and that's me. So that's three
things you're rocking with. I know, I have so many
power banks, and ADHD has blocked every single

(12:08):
one of them from being useful, because, you know, I
forget to charge them or I forget to bring them.

Speaker 1 (12:14):
It also just doesn't look comfortable. It's bigger than his hand.
Oh yes, and he's not a teenager. We'll get a better
look at it in a second, because I want to show
you, when he's talking, he's trying to make the case
for this, and he spends a significant
portion of his not-very-long keynote on him Rickrolling
himself. Ify, you've gotta fucking watch this.
It is so painful.

Speaker 4 (12:36):
R1 has an eye, an onboard camera designed for
advanced computer vision. It can analyze surroundings and take actions
in real time. To activate the eye, just double-tap
the button. Oh, funny seeing you here, Rick.

Speaker 1 (12:56):
It's a picture of Rick Astley that he points it at.

Speaker 6 (13:00):
Let me take a look. Never gonna give you up...

Speaker 1 (13:12):
Enjoy. What, am I getting Rickrolled in my own keynote?

Speaker 2 (13:20):
The next... okay, tech keynote speakers need to take improv.

Speaker 1 (13:27):
Classes, one-oh-one, like. And I don't recommend that to normal people.

Speaker 2 (13:33):
Because, like, you're not selling your own joke.
Just like, what, you're not even laughing at it? Why
am I gonna laugh?

Speaker 1 (13:43):
I will say this that was scripted almost exactly the
way Tim Robinson would have written it. Oh man, But
I bet Sam Richardson could have delivered that bit better.
So that is very funny. But I find this next
clip more disturbing because it shows this kind of desire

(14:05):
that the people that are the early adopters here have
not just for more convenience, but to hand over like
the power to choose to a robot that's basically just
pulling the first advertised result from Google. Like, it's kind
of messed up.

Speaker 4 (14:20):
Ooh, I can also use R1 to order food. Get me
a twelve-inch pizza from Pizza Hut delivered to here.
The most ordered option on the app

Speaker 6 (14:29):
is fine. Ordering a twelve-inch pizza from Pizza Hut.
Since you mentioned that the most ordered option is fine,
I will select that for you. I just created an
order for a twelve inch pizza. It's going to be
hand tossed with a classic marinara sauce and topped with
regular cheese. Please confirm your order.

Speaker 4 (14:51):
That sounds really good. I just confirmed my order.

Speaker 1 (14:53):
He has to click it again, just like on a smartphone.
He has to, like, look at the device and click it.

Speaker 2 (14:57):
Here's a freebie for any of you, even Richard Lawler, who
has been in the game for a long time, but
like any of you tech bloggers: just do a
side by side with this video and physically do everything
he's doing in real time, because a pizza would

(15:17):
have been ordered way faster if you would have just
pulled out your phone.

Speaker 1 (15:21):
Well and you could get the pizza you want. Yeah,
I like the twelve.

Speaker 5 (15:25):
inch classic marinara, most ordered pizza.

Speaker 2 (15:28):
I want this shit? I actually... like, that's so weird.

Speaker 1 (15:31):
Yeah yeah. Also, who orders a twelve inch pizza from
Pizza Hut?

Speaker 2 (15:35):
Nobody? Yeah?

Speaker 1 (15:38):
There's no way that's the most ordered product from
Pizza Hut. I don't believe it.

Speaker 2 (15:43):
It was definitely paid nonsense.

Speaker 1 (15:45):
Yes, yeah, it's just like, yeah. And there's the next clip.
I don't think we'll actually play it, but it is,
it's him saying, hey, plan out like a three day
vacation in London for me, And as far as I
can tell, the AI goes for like the first top
ten list of things to do in London that it finds,
which was probably written by an AI, and then makes
an itinerary based on those and it's like, first off,

(16:08):
are you that basic? Second, planning a vacation is fun?
Is that not a thing that you want to do?

Speaker 2 (16:14):
Yeah, you're so right. Why would you? Because, look,
the reason you would go to like a travel agent
is because they are experts at it. They're gonna find
the most fun thing for you. Outside of that, yeah,
I want to plan the cool stuff I'm gonna do,
you know. And yeah, what about people with fears,
you know? Or people without skills, which is definitely going

(16:35):
to be a large margin of people who do this.
So you're in London now, and then it takes you on a trip to Malta
to go scuba diving. You don't even know how to swim,
and now you're sitting there like, you already paid
for it, bro. Yeah.

Speaker 1 (16:51):
You let your AI fuck that up for you. It's on you. It's silly, right?
And I don't want to... I'm gonna say this
is not the most direct parallel to cult shit we'll have.
But watching this, I couldn't help but think about a
cult that was, like, the subject of our second episode
for this year, The Finders. And it was one of
those things. The guy Marion Pettie who ran it was
like running games is the way he framed it. And

(17:13):
people would join the cult and give up their agency,
and he'd tell them go take a job in this city,
or like go follow this guy and take notes on
him for a year, or like have a kid and
raise it this way. And stuff like this
is really common within cults. One of the appeals of
a cult to a lot of people is that you
both get a sense of purpose by following the cult

(17:33):
and whatever things it says it's going to do, and
you give up the burden of having to choose a
life for yourself. And this is such a common thing
in cult dynamics that psychologist Robert Lifton, who's kind of
one of the foundational minds in studying cults, described it
as voluntary self-surrender, and it's a major characteristic of
a cult. Many of the Finders, you know,

(17:54):
these are not dumb people. These are not like rubes.
These are not hillbillies as they're often portrayed in our
popular media. A lot of the Finders had Ivy League degrees.
One of them owned an oil company, and these guys
still handed their lives over to a cult leader. Haruki Murakami,
writing about Aum Shinrikyo, which is the cult that set
off a bunch of poison gas in the Tokyo subway,

(18:16):
noted that many of its members were doctors or engineers
who quote actively sought to be controlled. I found a
lot of this information on the fundamental characteristics of
what makes something a cult in an article by Zoe
Heller published for The New Yorker back in twenty twenty one.
At the time, she was kind of looking at QAnon
and trying to decide, there's not like a clear guy

(18:36):
and that's the cult leader, and there's not like a
geographic center to this, and usually there is with cults
in history. Does this still qualify? And I think a
lot of people would agree that, like, yeah, it does. I
think a lot of experts tend to agree that, yeah, it does.
And when she was looking at the QAnon movement as
a cult, Heller noted this. Robert Lifton suggests that people
with certain kinds of personal history are more likely to

(18:56):
experience such a longing, those with quote an early sense
of confusion or dislocation, or at the opposite extreme, an
early experience of unusually intense family milieu control. But he
stresses that the capacity for totalist submission lurks in all
of us and is probably rooted in childhood, the prolonged
period of dependence during which we have no choice but
to attribute to our parents an exaggerated omnipotence. And I

(19:20):
found particularly the bit where he's talking about, like, yeah,
an early sense of confusion or dislocation makes
people crave giving up this kind
of control and responsibility. The people running these AI companies
and maybe not necessarily the very top, because I think
those tend to be pretty cynical, realistic human beings. But
like a lot of the people who are in them,
and a lot of the people who are latching onto

(19:40):
AI as a fandom online are people whose childhoods and adolescences,
like all of ours, were shaped by 9/11,
the dislocation and change that that caused, and their young adulthoods.
A lot of these people, like us, will have come
of age around the time of the two thousand and
eight crash. Many of the people who are younger in
the AI fan base are you know, maybe zoomers and stuff,
and you know, a lot of them are people who

(20:01):
have really ugly ideas about like artists shouldn't charge for
shit or whatever. Yes, but also these are people who
a lot of them came into their careers, went into
STEM fields, because they were told coming up that the tech
industry is the safest place to make, you know, a
good living for yourself and that all fell apart a
couple of years ago, right? It started to, at least.
Tech layoffs began. So again: dislocation, chaos, the sense that

(20:23):
like what else am I going to entrust my life to?
I thought I had a plan and it fell apart.

Speaker 2 (20:29):
Yeah, you know, this is where Ify's going to get
real philosophical and big-brained.
But I just finished All About Love by bell hooks,
and you know, she often talks about, like, wandering
through life with lovelessness, and that searching for it
and not having it. And I feel like that

(20:49):
goes hand in hand with what you're saying right where
it's like I want a sense of belonging and I
want to feel like I'm a part of people, and
whether that is running into Target, trampling people for Stanley cups,
or it's being a part of like what you perceive
to be the next big thing. Like I think that
is the biggest kind of selling point for a lot

(21:10):
of these AI people who's like this is the future,
Like that is almost every person who starts a fifty
tweet thread with shitty examples of why AI rocks starts
itting with this is the future and you just got
to get over it. And there's so many people who
just want to be on the ground floor of that.
They want to be the people who were on it.
Because how many times even I, you know, when you

(21:33):
have that, like, time machine question, you're like, ooh, stock
in Starbucks, ooh, uh, in Apple. You just want to
be there before it gets big. So really,
at the end of the day, it all comes down
to commerce, you want to be at the top when
it all shifts. And that is actually the danger in
this for me is the commerce. I think about it
often, because you know, like you're saying, it, like, orders

(21:58):
the top Google search. Google is currently in court right
now fighting against, you know, basically shaking down companies to
see who would be the top one. So, like, the
future of this being actually, you know, a useful app
kind of lies now in that case, because if Google
wins and they can put whoever's on top, that's only

(22:19):
going to make it more valuable where they place who's
on top. Because people are using these weird rabbits, you know, yeah, exactly,
it's, yeah, it is to them. They see the beginning
of the future, and I feel like, to me, I'm
just looking at all the ways it can be abused,
because if we just look at everything that has come
before us, we have to think of the ways that

(22:42):
it has been abused.

Speaker 1 (22:43):
And all the ways it'll be a worse future, you know.
And I think, I really liked that you brought up
the panic they try to incite in the rest of us,
like the FOMO, where it's like, this is the future,
get on board or you're gonna get left behind, right?
That is that is the cult recruitment tactic, right, And
what they're trying to do. I just brought up that

(23:03):
a lot of the people who are most vulnerable to
this are the folks who like, yeah, they have this
sense of like insecurity, dislocation, and they see getting on
board with this early they feel like a sense of
security there. And by saying you're going to get left behind,
this is the only way forward, you won't be competitive
if you don't embrace this stuff that they're trying to
induce that sense of fear and dislocation to make people vulnerable.

(23:26):
And I want to read another quote from that
New Yorker article: The less control we feel
we have over our circumstances, the more likely we are
to entrust our fates to a higher power. A classic
example of this relationship was provided by the anthropologist Bronislaw
Malinowski, who found that fishermen in the Trobriand Islands
off the coast of New Guinea engaged in more magic

(23:46):
rituals the further out to sea they went. And I
think we all feel like we're getting further out to
sea these days, right? Like, it's not hard to see why.
It's like, yeah, when I'm near the shore, I
don't believe in anything but what's right in front of me.
And then, like, you can't see anything but water, and
you're like, no, there's a god and I can keep
him happy.

Speaker 2 (24:07):
Yes, yes, indeed, it's it's tough. Yeah, everyone's just kind
of grasping at what they can to just bolster themselves.
And sometimes yeah, you're grasping at some weird stuff.

Speaker 1 (24:21):
Yeah, yeah, yeah. And it's, you know, it's noted often,
by I think a lot of particularly atheists on the Internet,
that, like, church attendance is down, people who identify as
part of an organized religion, that that is at
its lowest level basically ever. And this is true. These
are real trends and they have real effects. But I
don't think the fact that less people are religious in
the traditional sense means they're less superstitious or spiritual than

(24:44):
they ever were. It's just that what they invest with that
belief has changed, in part because they've seen the world
dislocate so far out of what most priests and other
sort of like religious heads are capable of
explaining or comforting them over, right? It's like, oh, religion
is less comforting in a world as advanced as ours
for most people. Now, this may seem like a reach

(25:07):
still to kind of call what's going on around AI
a cult, and I get that. I ask you to
bear with me here, and I do want to note
there's nothing wrong with the inherent technology that we often
call AI, or at least not with all of it.
That's in part because it's used as such a wide banner
term for stuff. Like, just a text recognition
program that can listen to a human voice and create an
on-the-fly transcription, that's an AI. That's an example

(25:30):
of that kind of technology, right like it gets folded
in there. That's one of the things an AI has
to do, recognizing language. And, like, facial recognition too, recognizing faces.
If you're ever going to have an actual artificial intelligence,
those are two of the baseline capabilities that it needs.
Chatbots obviously are a big part of this, along with
like the sundry tools that are being used now to
clone voices, to generate deep fakes and fuel our now

(25:52):
constant trip into the uncanny valley. CES featured some real
products that actually did harness the promise of machine learning
in ways that I thought were cool, as I noted
on It Could Happen Here. There's, like, this telescope. It uses
machine learning to, like, basically clean up images that you
take with it at night when there's, like, a lot
of light pollution, so you can see more clearly. And
I'm like, yeah, that's dope. That's great, But that lived

(26:13):
alongside a lot of nonsense, you know. ChatGPT for
dogs was a real thing I saw, and like, there
was an AI-assisted Fleshlight to help you not be
premature.

Speaker 2 (26:25):
Because of course that's the one that popped up on my timeline.
And it was like, and then they gamified
it, where you go to different planets, you defeat the planet.
So I'm like, what? You keep talking about beating
the planets, so how do I lose? Is it when
I bust? Is busting a loss? Because you're now introducing shame

(26:45):
to sex again, and I thought we finally
moved past that. Yeah, I can't beat...

Speaker 1 (26:50):
the level. Those kinds of bad ideas, that's all par for
the course for CES. But what I saw this year
and last year, not just at CES, just over the
year in the tech industry from futurist fanboys and titans
of industry like Marc Andreessen, is a kind of unhinged
messianic fervor that compares better to Scientology than it

(27:11):
does to the iPhone. And I mean that literally. Marc
Andreessen is the co-founder of Netscape and the venture capital
firm Andreessen Horowitz. He is one of the most influential
investors in tech history, and he's put more money into
AI startups than almost anyone else. Last year, he published
something called the Techno-Optimist Manifesto on the Andreessen Horowitz website.

(27:31):
On the surface, it's a paean to the promise of
AI and an exhortation to embrace the promise of technology
and disregard pessimism. Plenty of people called the piece out
for its logical fallacies. For example, it ignores that a
lot of tech pessimism is due to real harm caused
by some of the companies Andreessen invested in, like Facebook.
What's attracted less attention is the messianic overtones of everything

(27:53):
Andreessen believes. Quote: We believe artificial intelligence can save lives
if we let it. Medicine, along with many other fields,
is in the Stone age compared to what we can
achieve with joined human and machine intelligence working on new cures.
There are scores of common causes of death that can
be fixed with AI, from plane crashes to pandemics to
wartime friendly fire. Now he's right that there's some medical

(28:15):
uses for AI. It's being used right now to help
improve the ability to recognize certain kinds of cancer, and
there's the potential for stuff like in home devices that
let you scan your skin to see if you're developing
a melanoma. And there's debate still over how useful it's
going to be in medical research. I've talked recently to
some experts and I've read some stuff, and like, there
are some reasons for caution too, for some of the

(28:36):
same reasons we should have caution with this everywhere. There's
also disinformation that spreads medically with AI, even to doctors,
and some of the patterns that using this stuff gets
medical professionals into can make them discount certain diagnoses as well.
So I don't say that to like say, there's not
going to be some significant uses for some of the
way this technology works medically. Some aspects of AI will

(28:57):
save lives. It's just the evidence right now doesn't suggest
it's going to completely revolutionize medical science. It's another
advancement that will be good in some ways, and there
will be some negative aspects of it too.

Speaker 2 (29:07):
Right.

Speaker 1 (29:08):
It's also very much not fair to say that, like
we're going to reduce deaths for human beings as a
result of AI, because right now the nation of Israel
is using an AI program called the Gospel to assist
it in aiming its air strikes, which have been widely
condemned for their exceptional, outstanding, in many cases genocidal
level of civilian casualties. Yes, it's just outrageous.

Speaker 2 (29:31):
Yeah, oh, one hundred percent. And you know, you know
that's exactly what's going on, is a genocide, and you
know, the language in a lot of these speeches says
as much. Yeah, even more so. Yeah,
like you're saying. Another thing I want to point out,
which you might have been about to say, and I'm
already jumping ahead, is how I think it was Chat

(29:52):
GPT that has quietly switched their terms of service to
say that it wouldn't be used for like weapons and.

Speaker 1 (30:00):
Oh yeah, to hurt people, and that's for sure.

Speaker 2 (30:03):
Yeah, and now it has been quietly
scrubbed from those terms of service, and we do
need to talk about that. Yeah, because there's just so
many things that we have grown accustomed to with tech
that I think is dangerous as we get into things
that have more room for error. Because we're used to

(30:24):
updated terms of service on our iPhone, right? Every time
we grab an update, it's like, here's a
new terms of service and you just kind of scroll
through it and you go yeah, because you're like, yeah,
you know, this is just a phone. It's not gonna,
you know, be used for anything weird, yet. Uh, so
you're comfortable. But like, when you're doing the same
thing with these ChatGPT machine learning situations, where you're

(30:46):
you're agreeing, and you're agreeing, Okay, I will help this
thing learn, and now you are just actively helping it
learn how to be an assassin. Like what happens there?

Speaker 1 (31:01):
Yeah, And it's again it's this back and forth where
on one hand, there is some technology like AI enabled
robots that can go run onto a battlefield and pick
up an injured soldier. I have no desire to see
some random private bleed to death in a foreign country,
fine with that. Or anti-missile missiles, right? Using AI
to intercept and stop a missile from blowing up in

(31:21):
a civilian area sounds fine, Like I don't. I don't
want random people to die from missiles, But it's also
going to be used to target those missiles. And to say, like,
based on some shit we analyzed on Twitter or whatever,
we think wiping out this grid square of apartment buildings
will really get a lot of the bad guys, and based.

Speaker 2 (31:38):
On we should blow them up exactly. It's crazy.

Speaker 1 (31:43):
It's just, it's certainly not fair to
say there won't be benefits, but it's absolutely unclear in
every field of endeavor whether or not they will outweigh
the harms, right? And even if they do, to what extent,
you know? Because a lot of what I'm saying suggests
that even if the benefits outweigh the harms in a
lot of fields, because of
the extent of the harms, in part, it's still not
going to be a massive sea change.

Speaker 2 (32:05):
Right.

Speaker 1 (32:05):
There are a lot of reasons for caution, but Mark
has no time for doubters. In fact, to him, doubting
the benefits of AGI, artificial general intelligence, is the only
true sin of his religion. Quote: We believe any deceleration
of AI will cost lives. Deaths that were preventable by
the AI that was prevented from existing is a form

(32:27):
of murder. And that's fucked up. That's really dangerous to
start talking like that. Oh yeah, and this is the
more direct cult comment here. I want you to compare
the claim Mark made above that slowing down AI is
identical to murder. I want you to compare that to
the claims the Church of Scientology makes. Because the Church

(32:48):
of Scientology, they have this list of practices and
beliefs that they call tech, right? And they believe that
by taking on tech, by engaging with it, people can
become clear of all of their flaws, and by
doing that you can fix all of
the problems in the world. Right? The Church of Scientology
on its website claims that its followers will quote rid
the planet of insanity, war and crime, and in its

(33:10):
place create a civilization in which sanity and peace exist.
How is that in any way different from Marc Andreessen
saying all of the shit that he's saying, right? That
it's going to, like, create this, this... We're going to
revolutionize medicine, we're going to, like, end friendly fire, we're
going to cure pandemics, we're going to stop car crashes.
What is the difference? Right? And Scientology uses that

(33:32):
claim that Scientology tech is so necessary it's going to
fix all these problems. So anyone who gets in the
way of the Church of Scientology and the deployment of
this tech for mankind's benefit is subject to what they
call fair game. A person declared fair game quote may
be deprived of property or injured by any means by
any Scientologist. And again, Marc Andreessen has not said that

(33:54):
in his Techno-Optimist Manifesto. In fact, he makes some
claims about how, like, no people are our enemies.

Speaker 2 (33:59):
Right.

Speaker 1 (34:00):
If you're saying you're a murderer for slowing this down,
it's not hard to see how some people might adopt
a practice like fair game eventually, right? Where
else does this go?

Speaker 2 (34:10):
I mean, what do we do with murderers? What...
I feel like the general rule across all creeds, yeah,
across all beliefs, is typically murderers are bad and should die.

Speaker 1 (34:20):
Or at least to be punished, right? There's a
punishment for murderers. Most people agree. Yeah, speaking of murder,
you know who has never committed a murder?

Speaker 2 (34:31):
You cannot say that, like, blanket, okay, about all of our ads.

Speaker 1 (34:36):
You know who I can't prove were involved in any murders?
Just like I have no evidence that Jamie Loftus had
any role with... okay, okay.

Speaker 5 (34:45):
What I'm saying is, one, my girl is innocent, and
two, you can't say that about our ads. Okay. Well,
we don't pick them.

Speaker 1 (34:55):
Ify, by the way, we're spreading a rumor online
that Jamie Loftus was possibly involved in a series of
murders in Grand Rapids, Michigan. It's a bit.
It's a good bit.

Speaker 5 (35:08):
So you know who knows innocent?

Speaker 2 (35:13):
Uh huh.

Speaker 1 (35:14):
Anyway, here's some ads. Oh, we're back. So the
more you dig into Andreessen's theology, the more it starts
to seem like a form of techno-capitalist Christianity. AI
is the savior. In the cases of devices like the Rabbit,

(35:35):
it might literally become our own personal Jesus. And who,
you might ask, is God? Quote: We believe the market
economy is a discovery machine, a form of intelligence, an exploratory,
evolutionary, adaptive system. Now, God through this concept of reality
is capitalism itself, and capitalize the C there because it's a deity,

(35:59):
hastening to bring artificial general intelligence into being. All the
jobs lost, all the incoherent flotsam choking our Internet,
all the Amazon dropshippers using ChatGPT to write
product descriptions, these are but the market expressing its will.
Artists have to be plagiarized. Children need to be presented
with hours of procedurally generated slop and lies on YouTube

(36:21):
so that we one day can reach the promised land
of artificial general intelligence. Ify, isn't it worth it?

Speaker 2 (36:29):
Oh my god, it so isn't. And it's, I didn't know.

Speaker 4 (36:32):
You know.

Speaker 2 (36:33):
One of the biggest criticisms with AI is that,
you know, it is one of the effects of when
creativity and commerce meet. Commerce will always try
and kill creativity, because commerce is more concerned
with the buck than it is
the outcome, or what it takes to get said buck.

(36:53):
And that was going to be a whole thing
I was going to drop at some point, and they
just said it for me. They just said it. Like,
I didn't... I thought it was more veiled. I
thought it was more hidden. But you know, that is
why you can try and say
that it is ethical to take from artists: because we're
making it easier for you. Your fingies hurt doing all that drawing.

(37:16):
But if it learns from you, and, you know,
uh, don't ask how you're gonna
get paid, but if it learns from you, we can
give the people what they want without causing
you all this labor. You know. But who gets the money?
It's it's always the dork behind the computer who did
the code that is just essentially stealing from all of

(37:38):
these people and learning from them and then just producing
this amalgamation of everything they've done.

Speaker 1 (37:46):
Yeah. Yeah, absolutely. Also, I should note, I'm trying to
be consistent about this. I wrote it down and then
I think slipped into it. It's Andreessen, Marc Andreessen,
and Andreessen Horowitz. Uh, it's just a weird name that
I'm not used to saying. I wrote this down and
then immediately forgot to correct myself at the start of
the podcast. Again, folks, hack and a fraud. But you
know who can... well, actually, I won't say only

(38:08):
humans can be hacks and frauds like that, because the
AIs absolutely mispronounce shit and get shit wrong too.
I guess that maybe they are getting conscious. Can they
build an AI that's as much a hack and a
fraud as I am? We'll see. So, no, thank you, Sophie,
I appreciate it. AGI is treated as an inevitability by

(38:31):
people like Sam Altman of OpenAI, who need it
to be at least perceived as inevitable so their company
can have the largest possible IPO. Right, there's a lot
of money on the line and in the people with
money believing all of the promises that Andresen is making.
This messianic fervor has also been adopted by squadrons of
less influential tech executives who simply need AI to be
real because it solves a financial problem. Venture capital funding

(38:55):
for big tech collapsed in the months before ChatGPT
hit public consciousness. The reason CES was packed with so
many random AI-branded products was that sticking those two
letters on a new company, it's like, they treat it
like a talisman, right? It's this ritual to bring back
the rainy season. You know, if you throw AI in
your shit, people might buy it. Yeah, And it's you know,
there's versions of this, like laptop makers are throwing AI

(39:17):
in everything they do now just because like laptop sales
soared during the start of the pandemic, but they plummeted
because people, for one thing, don't need to buy laptops
all the fucking time. Most people wear them out right, yes, exactly.
And again this comes in. If you can get people
in the cold, if you can get them scared that
they're going to fall behind without AI, then maybe they'll
buy a new AI-enabled laptop,
because they're like, well, this is what I got to

(39:38):
do to stay competitive. You know, the terminology that these
rich tech executives use around AI is generally more grounded
than Andreessen's prophesying, right, but it's just as irrational. The
most unhinged thing I heard in person at CES was
from Bartley Richardson, an AI infrastructure manager at Nvidia, who
opened a panel on deep fakes by announcing, I think

(40:00):
everybody has a copilot. Everybody's making a copilot.
Everybody wants a copilot. Right, there's going to be
a Bartley copilot maybe sometime in the future. That's
just a great way to accelerate us as humans, right.
And it's funny he's named Bartley. If you
know your old Star Trek and you can remember Barclay,
the sad ensign, he sounds like that guy and

(40:23):
resembles him.

Speaker 2 (40:26):
Oh man. What's funny about that speech is it
sounds like he's trying to convince himself too. That, yeah, yeah,
we're not wasting our time, are we?

Speaker 5 (40:38):
Yeah?

Speaker 1 (40:39):
Yeah, again. Later, in a separate panel, NVIDIA in-house
counsel Nikki Pope, who's like the only skeptic they
let on, cited internal research showing consumer trust in brands
fell whenever they used AI. This gels with research published
last December that found around twenty-five percent of customers
trust decisions made by AI less than those made by people.

(41:01):
No one on stage bothered to ask Bartley, like, okay,
you want to use this thing, we know your own
company has data that it makes companies less trustworthy. Are
you worried that if you use it people won't trust you?

Speaker 5 (41:13):
Like?

Speaker 1 (41:15):
Is that not in your head? And that was
kind of a pattern at CES. With some very, very
specific exceptions, most of the benefits of AI were
touted in vague terms. It'll
make your company nimble, it'll make it more efficient, you know,
it'll accelerate you. Harms, though, while they were discussed less often,
they were discussed with a terrible specificity that stood out

(41:36):
next to the vagueness. One of the guys in the
deepfake panel was Ben Coleman, and he's the CEO
of Reality Defender, which is a company that detects artificially
generated media. Right their job is like, let you know
if something's AI generated. And he claims that his company
expects half a trillion dollars in fraud worldwide this year

(41:57):
just from voice-cloning AI. Not all fraud from AI, just
the fake-voice AI. Half a trillion dollars.

Speaker 2 (42:06):
Yeah. Here's the thing too, that I think is
the scariest part of AI that I don't think we've
talked about yet, is that everyone can use it. You know,
this isn't a thing that is like,
oh well, with these companies it's too expensive,
people are priced out, and we just got to hope
that everyone's good. You're getting, like, SpongeBob rapping Kendrick lyrics

(42:28):
on TikTok. You know, my buddy Bennieam jokingly, like,
promoted his show and made it look like he was
having a FaceTime with Obama, and it was like pretty good.
The only reason you knew it was it was fake
was because of just the nature of the video. But like,
at what point is someone going to stop and go, hey,
we shouldn't have technology where someone can impersonate world leaders.

Speaker 1 (42:53):
Yeah, and you know, to be honest, that's not
even... because I think everyone is ready
for the idea that, like, yeah, people are faking Obama
or Biden, because we've done little versions of that for
years now. I think the scariest thing is people aren't
ready for their loved ones to be imitated by AI.
And that is a thing that is happening in twenty

(43:13):
twenty three. And this has happened to
a lot of people. There was a specific case that
kind of went viral of this mother who got a
call from what sounded like her kidnapped daughter. And like,
the AI generated the voice of her daughter, and then
a guy was like, give us money or we'll fucking
murder her, right? And her kid was never kidnapped. She
very nearly sent them money, because who wouldn't, right? Yeah,

(43:34):
like, if you don't know that that's a thing they
can do, who would not? Like, that's
a racket. And the AI was able to clone
her daughter's voice because her daughter has a TikTok, right?
It doesn't take that much, you know. And this is
why, by the way, people are talking about ways
to mitigate this. I think one of them is, like,
have a family password or something, where it's like, all right,

(43:55):
if I'm fucking kidnapped, I'm going to say the password
you know, so that you know some random person with
your TikTok won't know it or at least has to
try harder to guess it. So great, thanks to AI,
now we have to have passwords for our families in
real life. Cool.

Speaker 2 (44:08):
Yeah, if you want to see that Thanksgiving, you got
to know the password.

Speaker 1 (44:12):
Yeah. Fucking great. At CES and at the Substacks and
newsletters of all these AI cultists, there's no time to
dwell on problems like these. Full steam ahead is the
only serious suggestion they make. You should all be excited,
Google's VP of Engineering Behshad Behzadi tells us during a
panel discussion with a McDonald's executive. If you're not using AI,

(44:35):
Behshad warned, you're missing out. And I heard versions, variations
of the same sentiment over and over again. Right? Not
just, this stuff is great, but like, you're kind of
doomed if you don't use it. And I will say,
Nikki Pope was the only skeptic really who had a
speaking role at CES. It is not coincidental that she
was an academic and a black woman, because her background

(44:57):
is studying algorithmic bias in the justice system. Yes. And
so she had some really good points about,
like, the actual dangers this stuff had. She was on
this panel, it was about governing AI risk, and
kind of her, like, partner on the panel, the
guy she was talking with, was Adobe VP Alexandru Costin.
And she urged the audience, I want you to think

(45:18):
about the direct harm algorithmic bias could do to marginalized communities. Quote,
if we create AI that disparately treats one group, tremendously
in favor of another group, the group that is disadvantaged
or disenfranchised, that's an existential threat to that group. And
she was specifically like people talk about the existential threat
of an AI going crazy and killing us all, but

(45:38):
like that's not as realistic as what we know will happen.

Speaker 2 (45:41):
Yes, oh yeah. When you have these banks that like
automatically you know, are using AI to try and approve
you know, loans, and Deonte sends his name in and
they're like not approved.

Speaker 1 (45:54):
Yeah exactly.

Speaker 2 (45:55):
He's like nah, good, And.

Speaker 1 (45:58):
I am glad she was there. Again, she still,
you know, works for a company that's gonna make money
off of this. She's not like a doomer on it,
but like, at least one person was being like, could
we please acknowledge there are dangers?

Speaker 2 (46:10):
Know, because here's the thing is and I truly believe this,
And you were basically saying this earlier that AI as
a tool is fine. You know, yeah, when you win,
it is. And a tool is something that is always
held and used by a human, so that there's the checks
and balances. It is only as evil as the person
who's using it. And that is just any item, physical

(46:34):
or digital, will ever, you know, will always be under.
But the moment you're like, I'm going to give you
free rein based on information... and how many times has
an article gone online that was like, it scanned Reddit
or it scanned Twitter and it's racist now? You know?
And we still, like, went full steam ahead with

(46:56):
producing this and thinking we're right and we know, especially
when you see a lot of these tech leaders being
predominantly white men, and we know that in general, most
white men don't care about protecting marginalized people. They care
about getting their bottom dollar. They don't see they see
it as a as a rare occurrence because they don't

(47:18):
live that existence. They don't have the data pun intended
to build something to defend against it because it's not
a real problem to them because they don't see it
because and that is beyond just them being them and
more into as humans. A lot of times, if you
don't do the work to see it and understand what
happens to other people outside of your perspective, you're just

(47:41):
gonna believe that it's not real, and/or people
are exaggerating, or it's this and it's that. And when
you are gung ho and you have drunk the
Kool-Aid, that is the AI Kool-Aid, and you
are telling people that this is the future, we have
to do it, you're gonna push ahead. But, like, we've
literally seen a clear-cut example of what happens

(48:04):
when you push past safety and you just do
what you want to do, just because you have a
whole bunch of money and a Mad Catz controller. Like,
it gets dangerous.

Speaker 1 (48:15):
So as you started talking about, like, the dangers of
certain tools, right, and the value of the
tools, I literally looked over at the gun on my table, right?
And we all agree, even people who really like them,
there should be regulation. And I think the vast majority,
like, agree on more regulation. But they are like, again, not

(48:35):
to say that it's sufficient, but there are a lot
of laws about like where you can carry a gun legally,
how you can buy a gun, right? Because people
understand that, like, yeah, if a tool is that powerful,
there should be limitations, and things that you
can do that get them taken away from you forever.

Speaker 4 (48:54):
Right.

Speaker 1 (48:54):
Yes, I don't know how we do that with AI,
but I don't think that's a reason not to try. Yeah,
you do what they did.

Speaker 2 (49:02):
In that movie Hackers, and it's like, you're just banned
from the Internet till you graduate high school or
whatever it was.

Speaker 1 (49:08):
Yeah. Yeah. Now, Costin claimed that the biggest issue, again,
so Nikki Pope is like, yeah, I think this stuff
could really hurt marginalized communities, and Alexandru Costin from Adobe
responded that, like, well, I agree, but the biggest risk
of generative AI isn't fraud or plagiarism. It's not using AI.

Speaker 4 (49:28):
Right.

Speaker 1 (49:29):
He claims, like this is as big as the Internet,
and we all just have to get on board. And
then I'm gonna read verbatim how he ends this particular statement.
I think humanity will find a way to tame it
to our best interest. Hopefully. Great, cool, no worries, awesome, awesome,

(49:52):
And the whole week was like that. Again, these really specific,
devastating harms, and then vague claims of, like, yeah, we're
all just gonna have to adapt. And I brought up
Scientology earlier. But when I think about touting vague
claims of world-saving benefits alongside, and it's going to
hurt too, you have to accept the pain, I
think of Keith Raniere, right, the NXIVM guy. We

(50:12):
all remember Keith. You know, like most cult leaders, Raniere
promised his mostly female followers, you'll get all these benefits.
I'm going to, like, heal you. You'll be extra productive,
you'll be super good in your business, super good in
your career. But you have to follow my commands, because
I have to retrain you on some stuff, and so
it's going to be uncomfortable, right? And the end result
of this is a bunch of them branded their

(50:32):
flesh and partook in sex trafficking. You know, these
tech executives are not Raniere, but I think they
see money in aping some of his tactics. Right, the
benefits are so good, we just have to accept some pain.
You know I got to hurt you to rebuild you better. Now,
all of the free money right now is going to AI,
and these guys know the best way to chase it

(50:54):
is to throw logic to the wind and promise the
masses that if we just let this technology run roughshod
over every field of human endeavor, it will be worth
it in the end. This is rational for them, because
they're going to make a lot of money, but it
is an irrational thing for us to let them do.
Why would we want to put artists and illustrators who
we like out of a job? Why would we accept

(51:15):
a world where it is impossible to talk to a
human when you have a problem, and you're instead thrown
to a churning swarm of chatbots? Why would we let
Sam Altman hoover up the world's knowledge and resell it
back to us? We wouldn't, and we won't, unless he
can convince us that doing so is the only way
to solve the problems that scare us. Climate change, the
cure for cancer, an end to war, or at least an end
to the fear that we will all be victimized by
(51:36):
into the fear that we will all be victimized by
crime or terrorism. All of these have been touted as
benefits of the coming AI age, if we can just
reach the AI promised land. And we're going to talk
about some of the people who believe in that promised
land and what they think it'll be like. But first, Ify,
you know what is the real promised land, the only
actual paradise any of us will ever know? What you can buy

(52:00):
from the sponsors of this show, of course, all right,
here's an ad. All right, we're back. So I want
to talk about Silicon Valley's latest subculture, emphasis on the
cult: Effective Accelerationism, or e/acc. E-acc, I

(52:25):
think is probably how you could pronounce it. The gist
of this movement fits with Marc Andreessen's manifesto: AI development
must be accelerated without restriction, no matter the cost. E/acc
has been covered by a number of journalists, but
most of that coverage misses how very spiritual some of
it seems. One of the inaugural documents of the entire
belief system opens with the statement accelerationism is simply the

(52:47):
self awareness of capitalism, which has scarcely begun. Again, we
see a statement that AI has somehow enmeshed itself with
capitalism's ability to understand itself. It is in some way intelligent
and can know itself. I don't know how else you
interpret this, but as belief in a god built by
atheists who like money a lot. The argument continues that
nothing matters more than extending the quote light of consciousness

(53:10):
into the stars, a belief Elon Musk himself has championed.
AI is the force the market will use to do this,
and quote this force cannot be stopped. This is followed
by wild claims that next generation life forms will be
created inevitably by AI, and then a few sentences down
you get the kicker. Those who are first to usher
in and control the hyper parameters of AI slash techno

(53:32):
capital have immense agency over the future of consciousness. So
AI is not just a god. It's a god we
can build, and we can use it to shape our future,
the future of our reality, to our own whims. And again,
some of these guys will acknowledge maybe it'll kill all
of us, but as long as it makes a technology
that spreads to the stars, that's worth it, because we've
kept the light of consciousness alive. Wow. That's not, I

(53:53):
don't think, the mainstream view, but you can definitely find
people saying that shit, and they'll be like, if you
attempt to slow this process down, there are risks. And
they're saying the same thing as Andreessen: you stop it from
doing all these wonderful things. But also I do kind
of view that as a veiled threat, right, because if
AI is the only way to spread the light of
consciousness to the void, and that is the only thing

(54:16):
that matters, what do you do to the people who
seek to stop you, right, who seek to stop AI?
I actually am fine with extending the light of consciousness
into space. I'm a big fan of Star Trek. I
just don't believe that intelligent, hyper-aware capitalism is
the thing to do it. Again, too much of a
Star Trek guy for that. When I look at the

(54:36):
people who want to follow Marc Andreessen's vision, who find
what these e/acc people are saying not just compelling
but inspiring, I think of another passage from that New
Yorker article by Zoe Heller. Quote: Not passive victims, they
themselves sought to be controlled, Haruki Murakami wrote of the
members of Aum Shinrikyo, the cult whose sarin gas attack
on the Tokyo subway in nineteen ninety five killed thirteen people.

(54:58):
In his book Underground, Murakami describes most Aum members
as having deposited all their precious personal holdings of selfhood
in the spiritual bank of the cult's leader, Shoko Asahara.
Submitting to a higher authority, to someone else's account of
reality, was, he claims, their aim. Now, the e/acc manifesto
newsletter thing uses the term technocapital in conjunction with AI.

(55:19):
This is a word that you can find a few
different definitions of, because it's a wonky, like, weird
academia philosophy term, and there's a number of folks who,
you know, will argue about how it ought to be described.
But this is broadly kind of the same thing that
Andreessen is referring to when he talks about the market
as this intelligent discovery organism. Right? And while there are

(55:41):
a few different ways you'll see this defined, the e/acc
people and Andreessen himself are thinking about how philosopher Nick Land,
who's the guy who's generally credited with like popularizing the
term technocapitalism, defines it. Land is one of many advocates
of the idea of a technological singularity, the point where
technological growth driven by improvements in computing becomes irreversible, the

(56:02):
moment at which a superintelligent machine begins inventing more
and more of itself and expanding tech in a way
that humans can't. As one of Land's fans summarized in
a Medium post: a runaway reaction of self-improvement loops
will almost instantaneously create a coherent superintelligent machine. It is
man's last invention. The most notable of industries, AI, nanotechnology, femtotechnology,

(56:26):
and genetic engineering will erupt with rapid advancements, quickly exceeding
human intelligence. Now, obviously the way Land writes is, again,
kind of worth reading, but dense, perhaps too dense for
an entertainment podcast. So I'm going to read again from
a Substack called Regress Studies by a writer named
Santi Ruiz, kind of talking about this idea of techno

(56:47):
capitalism that Land has, from a more critical standpoint. Quote: Nick Land,
who coined the term, is a misanthrope. He doesn't like
humans much. So the idea that there could be an
entity coming, already being born, drawing itself into existence by
hyperstitionally preying on the dreams of humanity, cannibalizing their desires,
wearing the invisible hand as an ideological skin. He's into

(57:08):
that. Techno-economic interactivity crumbles social order in auto-sophisticating machine runaway,
as he would put it. And that's good. You're being
colonized by an entity that doesn't care about you, except
insofar as you make a good host. We'll talk about
hyperstition in a little bit here. So Land is the
guru of accelerationism. You might not be surprised to learn

(57:28):
that he has a devoted following among the far right.
This is because he is quite racist, anti-democratic, and
obsessed with eugenics. Now, his eugenics are not your
grandpappy's eugenics. For him, it involves gene editing, which will
be available to greater extents than ever thanks to AI.
Land claims to dislike white nationalists and conventional racists
because they don't see the whole picture. Quote, and this

(57:51):
is me quoting from one of Land's publications: Racial identitarianism
envisages a conservation of comparative genetic isolation, generally determined
by boundaries corresponding to conspicuous phenotypic variation. It is race
realist in that it admits to seeing what everyone does
in fact see, which is to say, consistent patterns of
striking, correlated, multidimensional variety between human populations or subspecies. Its

(58:15):
unrealism lies in its projections. That's pretty racist. Land is
listed by name in Andreessen's manifesto as someone you should
read for a better understanding of the wonderful, optimistic future
he and his ilk plan for us. He cites extensively
Gregory Cochran, who posits that space travel, spreading to the
stars, will solve our race problem because it's a natural filter,

(58:39):
basically saying some races won't make it into space, so
we don't need to be violent, like, we
just have to spread to space and that will do
the eugenics part of it for us. So that's cool.

Speaker 2 (58:52):
Yeah, you know, I'm stuck on this, like, you know,
journey-to-the-stars-through-eugenics thing, because, I don't know, Robert,
do you play Warhammer forty K?

Speaker 1 (59:04):
Oh God, Iffy, of course I play. I've
been playing Warhammer forty K most of my life.

Speaker 2 (59:11):
Okay, because this is sounding very Adeptus Mechanicus. Yeah right, yes, absolutely.
And I'm like, I'm like, what is going on here?
I'm deep in Rogue Trader, and now I'm like, no,
this is them.

Speaker 1 (59:24):
What is fun is that, like, Warhammer forty thousand, the
deep fluff, envisions a society that's like a hybrid
between the Federation from Star Trek and what these AI
e/acc people dream of. It is this utopia for
like ten thousand years, because they developed thinking machines, and
then all of the thinking machines turn on them and
murder everybody. And so in the future we just lobotomize

(59:46):
people who commit crimes and turn them into computers for
us because we can't have intelligent machines anymore.

Speaker 2 (59:53):
So we're just, we're going, that's what it looks like
we're marching towards with these folks.

Speaker 1 (59:58):
Yeah. And obviously the thing that the Warhammer people
are inspired by is like the Butlerian
Jihad in Dune, which is the more artful version of
that story with less orcs, which makes it inferior in
my mind. But I do love Dune.

Speaker 2 (01:00:12):
So Dune needs more red, because then everything, yeah.

Speaker 1 (01:00:16):
Red makes it go faster.

Speaker 2 (01:00:17):
Yeah.

Speaker 1 (01:00:18):
So Land concludes by imagining both racists and anti-racists
banding together in defense of the concept of race. Right?
That's what the result of AI is going to be.
The racists need race. You know, we're going to get
so good at gene editing, the racists will get angry
and the anti-racists will get angry, because they're so
in love with the concept of race. Well, we're just
going to improve people, annihilate racial differences, through moving to

(01:00:42):
the stars and the natural filter that that implies. By
the way, the name of the article Land wrote all
this in is Hyper-Racism. So, cool guy. Glad
Marc Andreessen cites him in his manifesto. Glad the
biggest venture capital guy in the country was like, yeah,
read this, dude. Yeah, that's the sequel to Flom's hyperrealism.
So that's, yeah, it is interesting. And these guys don't

(01:01:04):
tend to cite it as much. I think you get
it in some of the deeper stuff. They're all talking about
these technocapitalist concepts that Nick Land plays with. They don't
talk about what I think is actually one of his
most sort of insightful points, which is about a concept called hyperstition.
And in brief, hyperstition is like creating things in fiction
that become real and the process by which that happens.
I think about that a lot when I think about

(01:01:25):
things like the Butlerian Jihad, the war against the
intelligent machines in Dune, or, you know,
what happened in the Warhammer forty thousand universe. But I
also think about how part of why these people are
targeting creators, writers, actors, musicians, artists like pen and paper,
you know, painting artists and stuff, is because the only

(01:01:46):
way out of this future they have envisioned is in
imagining a better one and then making it real, right? Like,
and that is a thing that creatives have a role
in doing. So if you can kill that ability, hand
it over to the machines that you can control, maybe
you can stop people from taking this path of resistance. Motherfuckers,

(01:02:07):
I know.

Speaker 2 (01:02:08):
They're on there with it. They're like, you want something better?
We got to take it away from you.

Speaker 1 (01:02:11):
Yeah, fuck you. So anyway, I think that's going to
end it for us in part one. You know, this
whole investigation, in much more condensed form, just kind
of really focusing specifically on the argument that there's cult dynamics
in the fandom, is being published in an article in
Rolling Stone. I'll probably edit in like the title or
something here so you can find it. But check that out.

(01:02:34):
That's kind of the more easily shareable version, more condensed.
Iffy, where should people check out you and your stuff?

Speaker 6 (01:02:40):
Oh?

Speaker 2 (01:02:40):
Man, I'm Iffy Nwadiwe on Twitter and Instagram,
you know, so definitely peep me there. Listen to our
relationship pod with Iffy and Emmy if you want
to hear us talk about relationship stuff. And, uh, yeah,
you have Maximum Film for movies. But if you go
to Iffy Nwadiwe, you'll find all that stuff.
And of course watch Dropout, something absolutely is about to

(01:03:03):
be announced next week.

Speaker 1 (01:03:05):
Yes, watch Dropout, something cool is coming soon and
it's also an extremely human endeavor. Yes, yes, just like
this show is. So go with, uh, I don't know,
whatever god you worship or the machine god you plan
to meme into being. Goodbye.

Speaker 6 (01:03:28):
Behind the Bastards is a production of Cool Zone Media.
For more from Cool Zone Media, visit our website
coolzonemedia.com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
