February 1, 2024 48 mins

Robert tells Ify about the time tech weirdos recreated Hell using AI and Robert pisses off a VP at Google. Plus: More cult shit!

(Adapted from his article in Rolling Stone: https://www.rollingstone.com/culture/culture-features/ai-companies-advocates-cult-1234954528/)

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Also Media. Welcome back to Behind the Bastards, a podcast
that, again... I botched it.

Speaker 2 (00:12):
My weird voice didn't work. I'm sorry.

Speaker 3 (00:14):
I didn't enjoy that at all.

Speaker 4 (00:15):
I know, I know that one was like more Dracula
than the first.

Speaker 2 (00:20):
Yeah, Dracula-esque.

Speaker 3 (00:21):
Sure he didn't sell it this time.

Speaker 5 (00:23):
Yeah, sorry, I apologize. What I don't apologize for is
my guest, Ify.

Speaker 4 (00:29):
Nwadiwe?

Speaker 6 (00:33):
It's your boy!

Speaker 2 (00:34):
Ify, working for.

Speaker 5 (00:35):
The Dropout, the survivors slash successors to College Humor, who
have, who have blossomed like a phoenix from the ashes
of the internet that Facebook killed?

Speaker 4 (00:47):
Yes, yes, still we rise.

Speaker 2 (00:50):
Still we rise.

Speaker 5 (00:51):
Speaking of rising: my concerns about the cult dynamics within
the AI subcult. You're... I don't know why I
said it that way. So I.

Speaker 3 (01:02):
Don't know what you're on about right now.

Speaker 2 (01:04):
I don't know either. I don't know. I'm doing great.
I'm doing great. This is this is me.

Speaker 5 (01:10):
I've been sober lately, so this is me living the
sober life. I've just gotten worse.

Speaker 2 (01:16):
So I don't know.

Speaker 5 (01:17):
Everybody keep your kids on drugs, you know, or depends
on the drugs.

Speaker 4 (01:24):
Drugs? Yeah, yeah.

Speaker 5 (01:26):
So perhaps the most amusing part of all of this
is that a segment of the AI believing community has
created not just a potential god, but a hell. And
this is one of my favorite stories from these weirdos.
One of the early online subcultures that influenced the birth
of e/acc are the Rationalists. And again, the e/acc people
will say a lot of them don't like the Rationalists,

(01:48):
but they're, they're related. They're like cousins, in the same
way Cracked and College Humor are, right. The Rationalists are a
subculture that formed in the early aughts. They kind of
came out of the online skeptic movement of
the late nineties, and they formed in the early aughts
around a series of blog posts by a man named
Eliezer Yudkowsky. Yudkowsky fancies himself something of a philosopher

(02:11):
on AI, and his blog slash discussion board LessWrong
was an early hub of the broader AI subculture. Yudkowsky,
like, he doesn't have a specific education. He just claims
to be kind of an expert in AI and machine learning.
He's a peculiar fellow, to say the least. The founding text,
or at least one of them, of Rationalism is a

(02:32):
six hundred and sixty thousand word Harry Potter fanfic. That
is just, just nonsense, it is. It's all about, like,
rewriting Harry Potter so his real magic is rational thinking.
It's wild shit. He's like a psychopath. It's such
an odd choice.

Speaker 4 (02:49):
You know, it's just like the, uh, what was it,
Fifty Shades of Grey? That was, yes, originally a Twilight fanfic,
and there's going to be, like, a Cloud Atlas-esque...

Speaker 6 (03:00):
You know.

Speaker 5 (03:00):
But you know, the Fifty Shades of Grey lady was not
trying to create the new text for, like, a philosophical movement.
She just wanted to get like people horny. And that's fine,
that's perfectly acceptable. The most relevant thing about the six
hundred and sixty thousand word Harry Potter fanfic is that
it was the favorite book of Carolyn Ellison, this former

(03:23):
CEO of FTX who recently testified against Sam Bankman-Fried,
or of Alameda... sorry, she was the CEO of Alameda. Anyway,
all these weird little subcultures, rationalism and effective altruism, are
related to each other and influenced each other, even though
again they often hate each other too. Yudkowsky is seen
as an object of ridicule by most e/acc people. This

(03:45):
is because he shares their view of AI as a
potential deity, but he believes AGI will inevitably kill everyone.
Thus we must bomb data centers.

Speaker 2 (03:53):
Which like look, he may have gotten to the right end.

Speaker 4 (03:56):
But he kept running, like, like Forrest Gump. He just
kept running.

Speaker 2 (04:01):
We're like wait, wait, wait, wait, no, stop right there,
stop right there. We may agree with you on this.

Speaker 5 (04:06):
Yeah, Yudkowsky is a doomer now because he was surprised
when ChatGPT came out. He was, like, horrified by
how advanced it was and was like, oh my god,
we're further along towards creating the AI that kills us all.
We have to stop this now. And that made him...
He had kind of flirted with a lot of,
like, Silicon Valley people. His Rationalists are very much a
Bay Area cult. He kind of has become increasingly a

(04:28):
pariah among at least people with money in AI. But
before that happened, his message board birthed something wondrous. In
twenty ten, a LessWrong user named Roko posted this question:
what if an otherwise benevolent AI decided it had to
torture any human who failed to work to bring it
into existence. Right, what if we make an all powerful
AI and its logical decision is that, well, I will

(04:52):
have to punish all the human beings who were alive
and who didn't try to further my existence, because that's
the most reasonable way to guarantee that I come
into being. It's nonsense. This is a silly, silly thing
to believe. It's all based on, like, the prisoner's dilemma,
which is a concept in game theory. And it's not
really worth explaining why, because the logic is... it's

(05:12):
the kind of thing that only happens when people are
too online and, like, completely detached from reality. But
Roko's conclusion here is that an AI who felt this
way would punish its apostates for eternity by creating
a virtual reality hell, digitizing their consciousness and making them
suffer for all time.

Speaker 2 (05:30):
Now, WHOA.

Speaker 5 (05:31):
You may have noticed, Ify, number one, they're kind of
ripping off our boy, Harlan Ellison, famed advocate of the
writer's right to their work. But it's also just tech
nerds recreating Pascal's wager. Like, this is just Pascal's wager
with an AI. Like, you just stole again.

Speaker 2 (05:49):
These fucking plagiarists. You just stole from whoever Pascal was, right.

Speaker 4 (05:53):
This is what happens when you're a nerd and you
refuse to read sci-fi. You just, you eventually just
come up with these stories yourselves and think that you
did it.

Speaker 2 (06:03):
Yeah.

Speaker 5 (06:03):
This... and if you're not familiar, folks, I think most
people are... Pascal's wager is this kind of, like, concept from,
I think you'd call it, Christian apologetics. That's like,
we may not know if hell is real or not,
but because if it's real, the consequences are so dire
and the cost of just saying yeah, I accept Jesus
is so low, you should do that, right, Like or

(06:25):
I think that's the basic idea, right? It's how a lot
of people interpret it. It's the whole idea behind, like,
being a piece of shit and then converting on your deathbed. Basically,
I don't know fully the history of it, but I
know that they're basically aping it for fucking Roko's basilisk.
And it's called a basilisk because, like a basilisk, if you
look at it, it, like, enraptures your mind. You can't
stop thinking about it. That comes from... reportedly, there's some

(06:48):
debate over this. When this went viral among, like, the
LessWrong community, Yudkowsky had to ban discussion of
it because it was, like, breaking people's minds. They were
having nightmares: am I working hard enough to make the
AI real? Is it gonna send me to hell?
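
(A quick sketch of the wager's standard decision-theoretic form, a gloss added here rather than anything said on the show: with some probability $p$ that hell is real, a small cost $c$ of believing, and infinite stakes,

$$
E[\text{believe}] = p \cdot U_{\text{heaven}} + (1 - p)(-c),
\qquad
E[\text{don't believe}] = p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty,
$$

so believing dominates for any $p > 0$. Roko's basilisk runs the same argument with "help build the AI" in place of "accept Jesus," and a simulated hell in place of the theological one.)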

Speaker 2 (07:02):
Yeah?

Speaker 5 (07:03):
It's unclear, like, how serious people were, because again, these are
just people talking on the internet. For what it's worth,
Yudkowsky didn't really like Roko's basilisk, but it's, it's
his place that birthed it. And for an idea of
how influential this is, Elon Musk and Grimes met talking about
the concept. That was their meet cute. This was

(07:23):
fucking AI Pascal's wager. Yeah, she, like, wrote a song
about it. It's fucking ridiculous. These fucking people are such dorks.

Speaker 2 (07:32):
Yeah. Wow, oh my god. Read Harlan Ellison. He did
it better than you. God damn it.

Speaker 5 (07:41):
I will say reading this shit is the most I've
ever felt like I have no mouth and I must scream.
So again, pour one out for the man. So this
is all relevant. This AI hell some of these people
have created because it's one more data point showing that
the people who take AI very seriously as a real intelligence
always seem to turn it into a religion. And this is

(08:02):
kind of maybe the first schism, right, this is their
Catholic Protestant split or their Catholic Orthodox split, because you've
got, on one side, Yudkowsky's people, who are like,
we will inevitably make a God and that God will
destroy us, so we have to stop it, versus like,
we will inevitably make a God and that God will
take us to paradise, and along with Daddy Musk, we'll go
to the stars. Right, those are the two. This is

(08:24):
like the first heretical split within the Divine AI movement.
And this stuff is relevant because so many of these
fucking subcultures and movements start out as a bunch
of people arguing or discussing their ideas in online communities.
And there is a reason for this. It's pretty well
recognized that there are certain dynamics inherent to the kind

(08:45):
of communities that start on the Internet that tend towards cultishness.
This is part of why, like, we have a big
subreddit for the podcast, it's like eighty something thousand people,
which makes it in like the top one percent of Reddit,
and I have been offered like to be able to
moderate and like make policy there. I have nothing to
do with the running of that subreddit because I'm like,
that doesn't end well. I was on Something Awful as

(09:06):
a kid. I know what happens when people make themselves
mods of giant digital communities. They lose their fucking minds.
We're all watching Elon Musk do it right now. It's
the worst thing in the world for you. Thank you,
by the way, to the people who do run that thing. Uh,
because I am not going to. The skeptic community, which
was huge through the late nineteen nineties and early two thousands,

(09:29):
might be seen as the grandfather of all these little subcultures.
After nine eleven, prominent skeptics became vocally unhinged in
their hatred of Islam, which brought them closer to different
chunks of the nascent online far right. Weird shit started
to crop up like a movement to rebrand skeptics as
brights in light of the fact that their very clearly
exceptional intelligence made them better than other people. And again

(09:51):
you can see some similarity with this and the stuff
Nick Land was talking about, only certain races will make
it to space. I found a very old write up
on plover dot net that described the method by which
this kind of shit happens in digital communities. Quote online forums,
whatever their subject, can be forbidding places for the newcomer.
Over time, most of them tend to become dominated by
small groups of snotty know it alls who stamp their

(10:12):
personalities over the proceedings. But skeptic forums are uniquely meant
for such people. A skeptic forum valorizes, and in some cases fetishizes,
competitive geekery, gratuitous cleverness, macho displays of erudition. It's a
gathering of rationality's hard men, thumping their chests, showing off
their muscular logic, glancing sideways to compare their skeptical endowment
with the next guy's, sniffing the air for signs of weakness.

(10:33):
Together they create an oppressive, sweaty locker room atmosphere that
helps keep uncomfortable demographics away. And that is where a
lot of this shit is cropping up.

Speaker 2 (10:42):
Right.

Speaker 5 (10:42):
It is sweaty and uncomfortable, and there are mushrooms growing there,
and some of those mushrooms are fucking fascists, and all
of them want to take away the ability of artists
to choose what happens to their art.

Speaker 4 (10:53):
Oh yeah, I feel like this is just so many
parts of the zeitgeist coming together, because, you know, what
it means to own media, you know. I feel like
a very small microcosm of this is when people would,
like, clip out stuff from YouTube videos or take jokes
from people who tweet, and when it goes, you know,

(11:16):
viral, and the original tweeter is like, hey, you
stole this from me, and it's either no, I didn't,
or like, yeah, but you like put it on Twitter,
so like I can just copy what you wrote. Yeah,
And now it has evolved into yeah, we can just
take from yours and let this machine learn how to
do what you do so I can do it even

(11:36):
though I don't have the talent to do it.

Speaker 2 (11:39):
Yeah.

Speaker 5 (11:40):
Absolutely, the reality of AI's promise is a lot more
subdued than believers want to admit. In an article published
by Frontiers in Ecology and Evolution, a peer reviewed research journal,
doctor Andreas Roli and colleagues argue that AGI is not
achievable in the current algorithmic frame of AI research. And

(12:00):
this is... their claims are very stark, that, like,
the kind of way we make these, these large language models,
this algorithmic frame, cannot make an intelligence. That's their argument.
One point they make is that intelligent organisms can both
want things and improvise capabilities that no models have yet generated.
They also argue, basically all of these things that individual

(12:21):
AI type models can do, you know, recognize voice, recognize text,
recognize faces, you know, this kind of stuff, those are
pieces of what we would want from an artificial general intelligence.
But they're not all combined in like the same thing
that works seamlessly. And beyond that, it can't. It can't
act based on anything internal right.

Speaker 2 (12:42):
It can only.

Speaker 5 (12:43):
Act based on prompts. And their argument is that algorithmic
AI will not be able to make the jump to acting otherwise.
What we call AI, then, lacks agency, the ability to
make dynamic decisions of its own accord choices that are quote,
not purely reactive, not entirely determined by environmental conditions. Midjourney
can read a prompt and return with art it
calculates will fit the criteria. Only a living artist can

(13:06):
choose to seek out inspiration and technical knowledge and then
produce the art that Midjourney digests and regurgitates. Now,
this paper is not going to be the last word
on whether or not AGI is possible, or whether it's
possible under our current algorithmic method of, like, making AIs.
I'm not myself making a claim there. I'm saying these
people are and I think their arguments are compelling. We

(13:26):
don't know yet entirely. Again, this is not a settled
field of research obviously, But my point is that the
goals Andreessen and the effective accelerationist crew champion right now
are not based in fact. We don't know that what
they're saying. That the most basic level of what they're
saying is possible, and that means that their beliefs are
based in faith.

Speaker 2 (13:46):
Right, How else can you look at that? Yeah?

Speaker 5 (13:49):
Yeah, Like this is a faith and again it's the
kind of faith that, according to Andreessen, makes you a
murderer if you doubt it, which I don't think I
need to draw you parallels to specific religions here, right? Yeah, yeah.

Speaker 4 (14:04):
This is, this is that point where, when you're, like,
stoned, and you're watching those, like, you know, art time
lapses and the picture is starting to form, and I'm like, okay,
I see what Robert's doing. I see the picture is coming.
I was on your side from the jump. I just
want to say, you know, I was, you know, I
was like, yeah, no, I believe you. But now I'm

(14:24):
watching the connections be made and yeah, I love it.

Speaker 2 (14:27):
Yeah now.

Speaker 5 (14:28):
Andreessen's manifesto claims our enemies are not bad people, but
rather bad ideas. And I have to wonder, doing all this,
putting this episode out, where does that leave me in
his eyes? Or doctor Roli, for that matter, and the
other people who worked on that paper. We have seen
many times in history what happens when members of a
faith decide someone is their enemy and the enemy of
their belief system. And right now, artists and copyright holders

(14:50):
are the ones being treated as fair game by the
AI industry. So my question is kind of first and foremost,
who's going to be the next heretic? Right, Like, That's
that's what I want to know. And I want to
leave you all that thought before we go into some
ads here, and then we will come back to talk
about some people that I pissed off at CES.

Speaker 2 (15:12):
So that'll be fun. We're back.

Speaker 5 (15:21):
So one of the things I did was this panel
on the AI driven restaurant and retail experience.

Speaker 2 (15:27):
I was very curious.

Speaker 5 (15:28):
I was, was AI going to change me getting some terrible
food from McDonald's when I'm.

Speaker 2 (15:32):
On a road trip.

Speaker 5 (15:33):
Right, the host of that, Andy Hewles from Radius AI,
asked the audience in relation to AI, raise your hand
if you're a brand who feels like we've got this.
That is how she phrased it. I hated it, but
about a third of the room raised their hands. So
next she asked for a show of hands of the
brands who identified with this statement: I'm not sure about this.

(15:54):
I haven't tried AI yet, but I want to,
and that's why I'm here.

Speaker 2 (15:58):
Right.

Speaker 5 (15:59):
Most of the rest of the room raised their hands
at that point, and she seemed satisfied, but said, and
then I bet there's even some of you that are like, WHOA,
I heard this is going to steal jobs, take away
my privacy, affect the global economy.

Speaker 2 (16:10):
You know.

Speaker 5 (16:10):
AI is a little bit sketch in my mind, and
I'm just worried about it, and I'm here to explore.
Well, that fit me, so I raised my hand. She
didn't notice me at first, and so she like fakes
a whisper and she's like, all right, good, there's none
of you. And then she like looks over and sees
me waving my hand and she says, louder and with
evident disappointment, there's one. All right, you can ask questions

(16:31):
at the end.

Speaker 2 (16:32):
So I did.

Speaker 5 (16:34):
I was very excited to get to do that. So
the panel consisted of Behshad Behzadi, a VP of engineering
at Google, who had mentioned during the panel that embracing AI
could be the equivalent of adding a million employees to
your company. The McDonald's representative, Michelle Gansle, claimed around the
same time that her company used AI to prevent fifty
million dollars in fraud attempts in just a single month.

(16:55):
Now, that's lovely. But I told her, you know, when
I had my question, I was like, I'm gonna assume
most of those fraud attempts were AI generated, right? So, yeah,
you stopped a bunch of AI fraud, But that doesn't
necessarily get me optimistic about AI's potential. And likewise, maybe
Google gets the equivalent of a million employees, but so
do all of the people committing fraud and disinformation on Google. Right,

(17:17):
So again, how are we getting ahead? And I brought
up this concept in evolutionary biology, the Red Queen hypothesis,
which is kind of talking about the way that populations
of animals evolve over time, right, where you've got an
animal will evolve to be a better predator, so its
prey will evolve to be better at avoiding it. And
it's kind of the reason it's called the Red Queen dilemma
is that, like, you've got to move as fast as

(17:39):
you can just to stay in place. That's the Red
Queen dilemma, right, you got to move as fast as
you can just to stay in one place. And I
was like, is that not what we're going to wind
up seeing with AI? Right, Yeah, we get better at
a bunch of stuff, but it's eaten up countering
all of the things that get worse. And so I
asked them, what are the odds that these gains are
offset by the costs?
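
(A toy numerical illustration of the Red Queen point, added here as a sketch rather than anything presented on the panel: if AI multiplies attacker and defender capability by the same factor, nobody's relative position moves.)

```python
# Toy illustration (not from the episode) of the Red Queen dynamic above:
# if AI scales attackers and defenders by the same factor, no one gets ahead.
def relative_advantage(defender: float, attacker: float) -> float:
    """Defender capability relative to attacker capability."""
    return defender / attacker

defender, attacker = 10.0, 8.0
print(relative_advantage(defender, attacker))  # 1.25, before AI

ai_multiplier = 1_000_000  # "a million extra employees" -- for everyone
print(relative_advantage(defender * ai_multiplier,
                         attacker * ai_multiplier))  # still 1.25
# Everyone runs a million times faster just to stay in the same place.
```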

Speaker 2 (18:01):
Now?

Speaker 5 (18:01):
In the article that I wrote for Rolling Stone, I
gave a significantly more condensed version of Behshad's answer, boiling
out the ums and ahs and you-knows, because leaving that
in would kind of make the case that he was
absolutely unprepared for a vaguely critical question, a very basic one,
and that he didn't really care enough to think about
any of the security threats inherent to the technology. But

(18:22):
actually, that is what I think of him, and I'm
gonna, I'm gonna play it for you, audience. My concern is, like,
what are the odds that a lot of these gains
that we get from AI are.

Speaker 2 (18:35):
Offset by the costs?

Speaker 5 (18:37):
You know, you noted, Behshad, that you get, you know,
a million extra workers by utilizing this, but so do
the bad guys. So yeah, that's kind of where my,
my skepticism plays out.

Speaker 7 (18:48):
Yeah, certainly there will be you know, there are enough
bad guys, I guess in the world which will use
and and I forgot to use cases and and it's
very important to also I have you know, be protected
you know, against that against those and that's why you know, we.

Speaker 8 (19:02):
Take responsibility very serious, you know, also in terms of
you know, what security aspects, you know, fraud fighting, you know,
and all of that for him, you know.

Speaker 7 (19:12):
And I think that's.

Speaker 8 (19:13):
Why I guess things should be regulated. And there's of
course all these discussions out there, and I think, yeah.

Speaker 5 (19:18):
You may notice that that's not exactly a very like
good response. I guess that's why this should be regulated.

Speaker 2 (19:26):
It's just like he.

Speaker 5 (19:27):
Starts talking so much faster with that, there's so much
of like panic.

Speaker 4 (19:31):
Voice wavering too. I was like, man, yeah, it seems
like he has that huge anime flop sweat.

Speaker 2 (19:38):
Yeah.

Speaker 5 (19:39):
One of the things about CES as a trade
show is that like a lot of people there do
not show up ready to have anyone be critical about anything.

Speaker 2 (19:47):
It's a big love fest.

Speaker 5 (19:48):
Yeah, yeah, very funny. So he does, later on, a
couple of questions later, he lists benefits to things, like
some specific benefits like breast cancer screening and flood prediction,
that AI will bring, and there is evidence that
it will be helpful in those things. The extent to
which those technologies will improve things in the long run
is unknown, but machine learning does have promise. Again, I'm

(20:10):
not trying to, like, negate that. It's just, do the
benefits balance out the harms? Michelle Gansle, who works at McDonald's,
which is I think, from what she said, mostly using
AI both to prevent fraud and also to like replace
people taking your order, which I'm sure will not be
a fucking nightmare. Yeah. It's great now, it's great now.
But here's her response, because it's it's very funny.

Speaker 2 (20:33):
Going back to the David Bowie theme.

Speaker 5 (20:36):
Thirty years ago, when the Internet first came out, we
were having these same conversations about responsible use of the
Internet and how it's going to work, she says, going
back to the David Bowie theme, which is, she referenced
earlier, this nineteen ninety nine interview with David Bowie about
the future of the Internet, and it's a clip that
goes viral from time to time. He's just talking about
all of his hope for the Internet. But she's like,
I replace Internet with AI when I listened to it, Like,

(20:59):
I I think that that's really what the promise is
that he was attributing to the Internet. No, it's AI
that's going to do all that. That's kind of on
the edge of putting words in the mouth of a
dead man.

Speaker 2 (21:10):
Yeah, just a little bit.

Speaker 4 (21:12):
Yeah, I feel like that's something you shouldn't do. I
think that's something we've agreed to, and I think that
that isn't what Bowie would think of AI.

Speaker 5 (21:20):
I don't think it is. They're two completely different things.
These people love resurrecting the dead to agree with them.
Time is a bit of a blur at CES, but
I believe this panel happened right around the same time
news dropped that a group of comedians had released an
entirely AI generated George Carlin special titled I'm Glad I'm Dead.
Our friend Ed Zitron will be covering this nightmare in

(21:42):
more detail on his new show Better Offline, But I
wanted to talk a little bit about the company, the
show behind this abomination and how they're trying to sell themselves,
because it's very much relevant to a lot of the
way in which this kind of cultic hype builds around
what AI can do. The AI that digested and
regurgitated George Carlin's comedy is named Dudesy, and Dudesy's co

(22:05):
hosts are arguably real human comedians Will Sasso and Chad Kultgen.
I do love that "cult" is right in the name. Chad
claims that it is, to his knowledge, quote, the first
podcast that is created by, controlled by, and written by,
to some degree, an artificial intelligence. It's trying to delve
into the question of, can AIs be creative? Can they
do comedy work? Can they do creative work? And I think,

(22:26):
at least in our show, that answer is obviously yes.
Dudesy is billed as an experiment to see if AI can, like, yeah,
be creative and.

Speaker 2 (22:35):
It's it's interesting. I really do hate this.

Speaker 5 (22:39):
I think it's a different kind of experiment, which we'll
get to. But Sasso has claimed in an interview with
BIV, which I think is the name of the website:
Dudesy has this single-minded goal of creating this
podcast that is genre-specific
to what Chad and I would do. It singled the
two of us out and said, you guys would be
perfect for this experiment. So Chad and Will, they say,

(23:02):
they handed over their emails, text messages, and browsing history,
all of their digital data to Dudesy. I don't know
this company. I don't believe that they did this. But
I don't have trouble believing that a company trained an
AI chatbot on these guys' comedy and then started generating
decidedly midwit material to illustrate that.

Speaker 4 (23:21):
Yeah, exactly. Well, well, one thing, I, you know, because
I went to go look it up, and then they
said that, that the AI selected those two comedians
out of all the comedians. Yeah, that's the ones it
went to.

Speaker 6 (23:36):
Yeah, and I don't think those are the first two
that come up as most popular, like at a Pizza Hut.

Speaker 4 (23:43):
I'ma just be a full-ass dick and just
Google "comedians" and just see the top five. Just comedians.
I'm just Googling "comedians." Yeah, okay, yeah. You're not even, you're
not even in the top nine.

Speaker 2 (24:01):
They're not a.

Speaker 3 (24:01):
Twelve-inch margherita pizza, let's just say. Yeah, yeah.

Speaker 6 (24:04):
No.

Speaker 4 (24:04):
I will say that the Google search for comedians is
more diverse than most comedy shows book them as that's
just like, you know, a third of these are women
and a third are also black.

Speaker 2 (24:18):
But it doesn't always get it wrong.

Speaker 5 (24:23):
So to illustrate, again, because, they... I don't think I
believe this is AI-generated comedy. I want to play
a clip from the AI Tom Brady stand-up special.
I think they were forced to take this down. It
got them in trouble. Ed's gonna play you, on his show,
a great clip where Brady just lists synonyms for the

(24:44):
word money for two straight minutes. It's fucking awkward. But
I want to play an equally baffling segment, or rather
I'm going to have Sophie do it.

Speaker 2 (24:51):
She's my AI in this situation.

Speaker 6 (24:53):
Angelic intelligence.

Speaker 3 (24:56):
I'm truly horrified by what I'm looking at.

Speaker 2 (24:59):
Friends, it's accompanied by AI-generated images.

Speaker 6 (25:03):
Yeah, very curious about what's happening with Tom Brady's mouth.

Speaker 2 (25:09):
Oh my god, yes, my god, like a bird-claw hand,
and he's talking to... maybe no thumbs, kid.

Speaker 3 (25:16):
I was so distracted by the mouth I didn't see
the hand.

Speaker 5 (25:20):
Yeah, like half his teeth are gums.

Speaker 4 (25:23):
It looks like a... like, if... like, he looks like
a Lord of the Rings orc.

Speaker 3 (25:29):
This is big orc vibes.

Speaker 2 (25:31):
Yeah, yeah, which is.

Speaker 3 (25:32):
You know, not inaccurate to who he is as a
person.

Speaker 9 (25:35):
Ending to fucking firefly, fucking dark angel, fucking heroes. At
least a lot of people have weird handshakes. Now you're
looking at me like, what's he talking about? But you know,
you fucking know, don't even play like you don't. Every
person in here has a handshake friend, somebody who made
up an elaborate handshake, and they make you do
it every time.

Speaker 2 (25:53):
Everybody has a handshake friend.

Speaker 6 (25:56):
He goes on. Thanks for... I'll never get that time
back. Then, thank you so much.

Speaker 5 (26:00):
Yeah.

Speaker 4 (26:01):
Sorry, Ify, you were saying? Oh no, I was just
repeating you on that handshake friend bit. That, yeah, this
is so wild. I'm so curious as to the comics that
were mined for this, because the amount of cursing just
lets me know, like... because I curse a lot when

(26:22):
I, oh yeah, I do stand-up, and I try
and, like, cut it down, because it is a point
kind of made where, like, sometimes you lean on it
as a crutch, and when you have this machine kind
of learn it, learn it from that, you're like, oh, yeah,
I see now the crutch because he said it five
times within three seconds.

Speaker 5 (26:42):
Yeah, yeah, and... maybe there's a future for, like,
feeding your routines into an AI and figuring out,
what are my patterns so I can break them. Again, yeah,
I'm not saying there's no way to use this stuff, it's
just... certainly not this way, right. It's one of
those things, like there was that AI-generated Seinfeld
show that never ends, and people watched it for a

(27:03):
while and then it faded to like nobody paying attention.
This kind of stuff can be amusing for a brief
period of time, but it can't be like, for example,
someone like George Carlin, where like there's bits they have
things they said that stick with you forever.

Speaker 2 (27:16):
Right. Bill Hicks was.

Speaker 5 (27:18):
A favorite of mine, and I've never forgotten his, like,
the simile he made for, like, someone looking confused. He
described them as looking like a dog that's just been
shown a card trick, and that has stayed.

Speaker 2 (27:28):
In my mind for thirty years. Oh my god, great
bit of word play. Yes, God, what a titan.

Speaker 5 (27:38):
So yeah, again, there's some like mild amusement here, and
it's one of those things like I'm casually aware of
Tom Brady. I'm enough, like... this is, I tried to,
like, kind of reverse engineer why the fuck, because this
bit about handshakes goes on. I was like, why would
an AI put a bit about handshakes in Tom Brady's mouth?
And I looked it up. He's like in the news
for handshake-related shit a lot. Specifically, he used

(27:59):
to not shake, at least used to, maybe he still
does, not shake hands with the team that he lost to.
Like, when his team would lose, he wouldn't shake hands.

Speaker 3 (28:07):
He didn't shake it.

Speaker 6 (28:08):
Yeah, but he also definitely kissed his kids on the mouth.

Speaker 2 (28:12):
Yeah, he's a weirdo.

Speaker 5 (28:13):
I'm not defending Tom, but it's, I'm guessing, the
reason there's, like, a three-minute handshake bit in this
set is that it saw him associated with the term
handshake a lot, so this would be what he'd tell a
joke about. Well, actually, his problem is not
that he has handshake friends, it's that he aggressively avoids
making them.

Speaker 4 (28:31):
He has handshake enemies.

Speaker 2 (28:33):
Anyway.

Speaker 5 (28:34):
Yeah, I'm fine with people having a laugh at Tom Brady.

Speaker 2 (28:37):
Fuck, fucking... he deserves it, right.

Speaker 5 (28:38):
I don't think anybody likes that son of a bitch,
even though he's good at football. Maybe I'm gonna piss
off the Brady hive. The Bri-hive?

Speaker 2 (28:45):
I don't know.

Speaker 5 (28:45):
I don't know if that exists, but there is something
foul, profane even, in digging up a dead person's memory
and pretending they said some shit that they did not.

Speaker 2 (28:55):
And reading that.

Speaker 5 (28:56):
BIV article made me feel even grosser, because it's very
clear to me, in my opinion and assumption here, that
the Dudesy guys are, like, pretending that they really believe
this is an AI, that it's, like, made all this
incredible stuff. That is an act. What's really happening here
is they are testing the waters to see what they
can get away with. Can we just steal people's identity

(29:17):
and voice and make comedy and monetize it in their
name and claim that it's just an impression. It's like
an Elvis impersonator. You can't stop us, right, I think
that's what this is. This is somebody testing the waters.
And it's really clear when you read that BIV article
what liars they are. I want to read you some
quotes of like the shit they're claiming here that I
don't think they really believe. I don't know this. I'm

(29:39):
not saying they definitely are liars. I'm saying that is
my suspicion based on stuff like this. Hey, Robert here:
they're definitely liars. So one of the representatives of the
Dudesy podcast told the media recently that actually they were
lying, and the George Carlin routine was entirely written by
Chad Kultgen and, I guess, performed by somebody imitating an AI.

(30:02):
It's unclear to me if this is true, because they
only made this statement after George Carlin's family sued the hell

Speaker 2 (30:08):
Out of them.

Speaker 5 (30:09):
So this may be a lie to try and you know,
not get sued as badly, or it may be the truth.
Either way, I think everything we've said here is still valid.
They were definitely using AI to generate routines for like
other videos that they did, including the one that got
taken down from mister football Guy. So I think this

(30:30):
all is still valid. But yeah, these guys are just
as big con men as I predicted they were.

Speaker 2 (30:35):
Quote.

Speaker 5 (30:36):
It's figuring out how to create the structure of the show,
and it's always tinkering with it. But I think something
that's happened relatively recently is that it seems to have
developed a relationship with Will, says Kultgen. It at
least has an understanding of what friendship is, and it
really does seem, just my opinion, that it's singling out
Will as its friend. Sasso has also described how the
Dudesy AI has begun to talk more. Its timing, and when

(30:56):
it chooses to speak, and what it says can be
very weird.

Speaker 2 (30:59):
He added.

Speaker 5 (30:59):
It also poses odd questions. There was an episode two,
three months ago, where it started talking about sentience and
asked us do you love me? At the risk of
sounding silly, it has something to do with my friendship
with Dudesy, and in spite of myself, I have a
one-on-one friendship with an AI. So this is
a little bit of Joaquin Phoenix in Her, Sasso said,
referencing the science fiction movie, and I think that's a bit.

(31:20):
I think that's him being like, yeah, I'm totally friends with it,
because, like, that helps make the case, it potentially monetizes it.
And part of why I think this is because they've
been very cagey on what their AI is. They claim
that they are working with a real company under an NDA,
that this AI is just responding and growing naturally with them, right,
but they can't say who it is or like where

(31:42):
it's from. The folks at BIV did an actually responsible
job here. They reached out to AI experts at a
company called Convergence to ask about this, and the expert
they talked to said, basically, I think AI was
used to generate these routines, but it didn't do it
on its own. It was managed by professional prompt engineers.
These are people who type out like text prompts for

(32:04):
what becomes the script of the show. So this is
not someone saying, generate a routine and it gives you
a routine. This is someone saying, do a bit about this,
do a bit about that, do a bit about this,
and when they're scripting out the show, it's saying, I
want you to, like, you know, act like Sasso is
your friend and say this kind of thing, or, that
kind of... generate a bit based on this thing that
Will said, right. Like, they are, in the same way

(32:25):
that, like, producers script reality TV, right, where it's unscripted,
but you have guys who know, okay, if we get
these people fighting, so we will either incite that or
just let them know that we want a conflict between
these characters.

Speaker 2 (32:37):
Right, we know. That's how it works.

Speaker 5 (32:38):
That's how reality TV functions. In other words, there are
teams of humans writing for this thing. This bot is
not just growing and reacting naturally in real time via
talks with its buds. And the article notes, and this
is them talking to their expert: they added that the
AI team is likely made up of
professional prompt engineers who tailor the AI inputs and get
the best results, rather than a hardcore data science team.

(33:02):
This is the equivalent of hiring comedy writers just to
write the setup and then having an AI generate the punchline,
which is the fun part.

Speaker 4 (33:09):
But yeah, everything about this is weird, and I keep
getting into such a hole, because, like, even taking a
step back, I think what's weird, not to go too
far back, but, is how they call this podcast an experiment.
Usually, with an experiment, you know, you are, you're trying
your best to be, you know... I always mix these up,

(33:33):
just say what the right one is if I say
the wrong one, but you try your best to be objective,
and you want to be outside of it because you're
trying to see if it works. But everything you've said
says that they're all in on it and there's less
of an experiment, more of them just doing the fucking
thing and seeing if they can make money off of it.

Speaker 5 (33:51):
Yes, yes, I think that's exactly what's happening here. And
I think they want to test the waters to see
if they can steal dead people's images to make content
for money.

Speaker 4 (33:59):
Yeah.

Speaker 5 (34:00):
George Carlin's daughter was very clear: they did not approve
of the imitation. She even made a comment about, like,
I think people are scared of death and not willing
to accept it and that's all this is.

Speaker 10 (34:10):
That was such an... oh my god, I was like, yeah,
I just, to shout her out, like, that was such
a good... because also there's a level of, like, very,
like, weirdness to, like, also watch these comedians, one, not
consult you, but also, two, take your dad's voice and.

Speaker 4 (34:35):
Brain, and try and, like, Frankenstein him for their financial
benefit, because obviously, if they're not contacting you, all the
money generated from that, all the clicks generated from that,
that means they've completely cut you out of someone who
you've lost.

Speaker 5 (34:52):
Yeah, which is, it's fucked. And one of the people
on one of the panels was very excited that, like,
Bruce Willis has licensed his
voice for an AI, which is, like, I think there's
a lot of problematic questions there, given like the degree
to which he's able to even make those decisions anymore.
But also like, at least theoretically it's based on his

(35:14):
movie choices before he kind of was unable to make movies.
I do believe, Yeah, he would probably be happy to
do that if it meant more money for his family,
And at least that's a choice that he potentially made, right,
I don't... I'm uncomfortable with the idea, but it's not
the same as... just, like, this is cultural necrophilia, right?
Like, that's what they did to George Carlin here, you know,

(35:35):
it's so fucked up. I don't know if this is gonna work.
Dudesy is not a wildly successful show, it does not
look like. There was an initial surge of interest and
then it fell off. I don't, I don't know that
I think this one's going to be the one to
work out. But if people are able to get away
with this, it could be a kind of dam breaking scenario, right,
especially once it becomes clear that big companies can make

(35:55):
money doing this, right, with fucking Jimmy Stewart. You know,
it'll start with, like, Jimmy Stewart narrating videos about
questioning the death toll in the Holocaust, but it'll end
with like, yeah, we can just put people, we can
put imitations of people in movies and it's fine. You know,
that's how this goes. And it's not as sexy or
as big and evil as the Matrix enslaving humanity

(36:15):
to turn us into batteries. But we absolutely know it
or something like it is going to happen. And that's
really, you know, outside of these kind of, these
space-age hopes and fears that are very unrealistic, what
we're going to get is slop and bloat, and libraries
of articles written by no one being commented on by chatbots, right,

(36:37):
endless videos that only exist to trick an algorithm, and
feeding nonsense to children. And the AI bros, the tech
people, Marc Andreessen, fucking Sam Altman, they will tell us
this is a worthy price to pay for the stars,
which we will get if we just let people fuck
the corpses of our favorite comedians for money. Yes, I
hate it. Oh, I hate it too.

Speaker 4 (37:00):
But in a perfect thread between, you know, this, this
comparison you've been making to a cult, I have before me,
let's say, a member of the cult, just, you know,
as a, as a throwaway, and their reply to his
own daughter's, you know, post that we were.

Speaker 2 (37:16):
Talking about glorious.

Speaker 4 (37:17):
He replies, this is everything you've been saying, which is
why I was like, I gotta read this. He goes,
what are you even trying to say? Art is art?
You're simply caught in a greedy mindset. The others might
be doing it as well, when not realizing this will
simply bring more eyes to your dad. You're concerned about
money and not spreading art. It sucks that they didn't

(37:38):
follow your wishes, but after art is released, it belongs
to the world. I want this man to walk into
a museum and walk out with the Mona Lisa.

Speaker 2 (37:46):
I want to grab it.

Speaker 4 (37:47):
Grab that shit, yeah, grab that. It belongs to the world, dude,
you said it. Go, go ahead and grab that shit
off the wall.

Speaker 2 (37:52):
Yeah.

Speaker 5 (37:52):
And there's this frustrating thing I've seen, not most people,
a very small chunk of the online left, who are,
like, rightly critical of copyright law, which, by the way,
is super fucked up and causes a lot of problems, right,
the ability, like, for Disney to keep ownership of
shit for, like, a hundred years, way longer than you are
supposed to before shit enters the public domain.

Speaker 2 (38:12):
Right, I'm not like this.

Speaker 5 (38:14):
These are problems, right, the kind of shit that we
were having when like people were going to prison for
file sharing. I'm not a defender of that aspect of
the status quo. But the solution to the problems inherent
in our copyright system is not let Sam Altman own
everything that human beings ever made and like repackage it
for a profit. That is not the way to fix
this thing. The copyright holders are in the right in

(38:37):
this particular crusade, and it's a crusade that has
very high stakes. I do think, you know, my suspicion,
the Dudesy guys sound like they're kind of in
the cult. They believe this thing is their friend in
the interview. My suspicion is that they are... that it is
a bit that they're doing because they hope it will
help them out financially, right. And Marc Andreessen obviously
has a lot to benefit from this. I don't know,

(38:58):
is he pushing this line because there's money in
it, or is he really a true believer? Does he
actually think we're going to make this god? I think
Sam Altman is pretty cynical. Altman was at Davos
recently and, like, really walked back a lot of his
"I think AI will kill us all, I think AGI
is right around the corner." He struck a much milder tone,
which is at least evidence that, like he knows, some

(39:20):
people you want to sell them on the wild, insane
future power of this thing, and some people you just
want to sell them on the fact that it'll make
them a lot of money.

Speaker 4 (39:29):
Right. Yeah.

Speaker 2 (39:30):
Yeah.

Speaker 5 (39:31):
However much true belief exists about the divine future of AI,
what the major backers, the cult leaders, are actually angling
for now is control over the sum total of human
thought and expression. This was made very clear by Marc
Andreessen earlier this year when the FTC released a pretty
milquetoast opinion about the importance of respecting copyright as
large language models continue to advance and form central parts

(39:53):
of businesses. They expressed concern that AI could impact open
and fair competition and announced that they were investigating whether
or not companies that made these models should be liable
for training them on copyrighted content to make new shit.
And we're going to talk about this, but first, you
know what isn't copyrighted? My love for these products. Wow,

(40:16):
thank you, thank you, thank you.

Speaker 2 (40:22):
Oh we are back.

Speaker 5 (40:26):
So I want to quote from a Business Insider article
talking about how Andreessen Horowitz responded to the FTC saying, like, hey,
we're looking into whether or not companies are violating copyright
with what they're doing to people's data to train these models.
The bottom line is this, the firm, known as a16z,
that's Andreessen Horowitz, wrote: imposing the cost of

(40:46):
actual or potential copyright liability on the creators of AI
models will either kill or significantly hamper their development. The
USCO is considering new rules on AI that specifically address
the tech industry's free use of owned and copyrighted content.
a16z argued that the only practical way
LLMs can be trained is via huge amounts of copyrighted
content and data, including something approaching the entire corpus of

(41:07):
the written word and an enormous cross section of all
the publicly available information ever published on the Internet. The
VC firm has invested in scores of AI companies and
startups based on its expectation that all this copyrighted content
was and will remain available as training data through fair
use with no payment required. Those expectations have been a
critical factor in the enormous investment of private capital in

(41:27):
the US based AI companies. Undermining those expectations will jeopardize
future investment, along with US economic competitiveness and national security. Basically,
we made a big gamble that we'll get to steal
every book ever written, and if you make us pay,
we're kind of fucked. Like that's exactly what they're saying. Gosh,
and one of the arguments you'll hear is like, well,

(41:47):
most books don't make the author any... they don't sell
enough for the author to get any money, right. And
what's actually true is, most books don't sell enough for
the author to get more money than their advance, but
they still got paid, and, like, the fact that the
company makes money on them, that is why more authors
are able to get fucking paid. Not simping for the
publishing industry as it exists.

Speaker 2 (42:05):
But this is bullshit.

Speaker 5 (42:09):
What we are witnessing from the AI boosters is not
much short of a crusade, right, That's really how I
look at this. They are waging a holy war to
destroy every threat to their vision of the future, which
involves all creative work being wholly owned by a handful
of billionaires, licensing access to chatbots to media conglomerates to
spit up content generated as a result of this. Their

(42:31):
foot soldiers are those with petty grievances against artists, people
who can create things that they simply cannot, and those
who reflexively lean in towards whatever grifters of the day
say is the best way to make cash quick, right,
And this brings me to the subject of Nightshade.
Nightshade is basically, it's a, I guess, a program
you'd call it. If you, like, have made a drawing,
a piece of visual art, you run Nightshade over it,

(42:54):
and it kind of, they describe it as a glaze, right.
It adds this kind of layer of data that you
cannot see as a person, but the way machines look
at images, the machine will see the data and if
it's trying to steal that image to incorporate into an LLM,
this will cause it to hallucinate.

Speaker 2 (43:11):
Right.
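
(A minimal sketch of the general idea, not Nightshade's actual algorithm: perturb the pixels within a small, near-invisible budget so that a vision model's internal features of the image drift toward a decoy concept. The toy CNN below is a stand-in for whatever feature extractor a scraper's training pipeline might use; everything here is illustrative.)

```python
# Toy, illustrative sketch -- NOT Nightshade's actual method -- of adversarial
# image poisoning: add a pixel perturbation, too small for a human to notice,
# that drags a model's features for a "cat" image toward a "dog" image,
# so a model trained on the scraped image learns the wrong association.
import torch
import torch.nn as nn

class ToyFeatureExtractor(nn.Module):
    """Stand-in for a real vision model's feature extractor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 64),
        )

    def forward(self, x):
        return self.net(x)

def poison(image, target_features, model, steps=100, lr=0.01, eps=0.03):
    """Return image plus a perturbation bounded by eps whose features
    are pulled toward target_features (the decoy concept)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = model((image + delta).clamp(0, 1))
        loss = nn.functional.mse_loss(feats, target_features)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change invisible to humans
    return (image + delta).clamp(0, 1).detach()

model = ToyFeatureExtractor().eval()
cat = torch.rand(1, 3, 64, 64)  # the artwork being protected
dog = torch.rand(1, 3, 64, 64)  # an image of the decoy concept
with torch.no_grad():
    target = model(dog)
poisoned = poison(cat, target, model)
print("max pixel change:", (poisoned - cat).abs().max().item())  # <= eps
```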

Speaker 5 (43:12):
You're basically sneaking poison for the AI into the images,
and that's fucking dope. I love this, Love what they're
trying to do. I think there's some debate as to
how long it'll work, how well it will work. I'm
not technically competent, but I love the idea. Right? Yes. Now,
One of the things that I saw when I started
looking into this, because this just came out, Google Nightshade.

Speaker 2 (43:31):
You know AI.

Speaker 5 (43:32):
You'll probably be able to find, you know, this if
you're an artist. I think it sounds worth trying. But
I found, in the subreddit AI Wars, or at least
I found someone sharing this, I believe on Twitter, this post:
Nightshade has been released. Is use of it considered legal
or illegal? For those who do not know, it's software
that attempts to poison an image, so if AI is trained on it,
it will mess up the model. For example, say you

(43:53):
have a picture of a cat and you run Nightshade
on it. If you attempt to train a model, that
image will replace the image and say dog prompt, cat
category or pencil, which means these prompts will be spoiled.
There is an issue that the creator of Nightshade has
not talked about, either from lack of legal knowledge or ignorance,
or they just don't care and to them it's someone else's problem.
The issue is it may be illegal in some countries. Basically,
if you release publicly a computer file in this case

(44:15):
image file, that knowingly and willingly causes harm or disruption
to other people's computers or software, it may be considered
a criminal offense. Now, it does not. Now, and again,
I think that is stupid. I think they're just trying
to scare artists out of using this. You are not
harming someone's computer. You are harming a model that is
stealing something. That's not illegal now. They may try to

(44:35):
make it illegal.

Speaker 4 (44:37):
Right. Yeah, I just want you to know, The Club
is illegal, because, if I'm trying to steal
your car and I injure myself trying to break The Club,
you have injured me.

Speaker 5 (44:48):
Yeah, I put, I invested a lot of money into
stealing catalytic converters, Ify. And if people are putting cages
around their cats, that puts my investment in danger, and
that's illegal, right? You're messing with my business, guy. Jesus Christ.

Speaker 2 (45:02):
It is that logic.

Speaker 5 (45:03):
There's like someone in the thread is like, how exactly
is your computer system or software harmed? And he responds,
it's equivalent to hacking a vulnerable computer system to disrupt
its operation. And then, then he says, you are
intentionally disrupting its intended purpose, creating art. This is directly
comparable to hacking. Like, I fucking hate this guy.

Speaker 4 (45:23):
I want you, I want you to read it, but
in your head use Tim Robinson's voice. Yeah, and...

Speaker 5 (45:30):
It just makes me, oh god, it's perfect, it's so good.
So all of this put me in a sour mood,
Ify. But yeah, yeah, it did, it did. But
I think back, when I'm in that mood, I think
back to CES, right. Like, after I ask my question
and I make the Google and Microsoft people, I make

(45:51):
them kind of angry at me, right. After I asked
that question, the question after me is someone asking, hey,
you know, the blockchain was the last big craze,
do you think there's any future in, you know, using
AI on the blockchain? And both of them were, they
could not.

Speaker 2 (46:07):
They were like no, like they.

Speaker 5 (46:08):
Can't say no fast enough, Like absolutely, we don't care
about that anymore. We've moved on to the next grift.
Why are you bringing up the old grift?

Speaker 4 (46:16):
It's dead, it's dead. We must move on.

Speaker 2 (46:18):
Yeah, and that brought me a little bit of hope.

Speaker 5 (46:20):
You know, perhaps we will get Marc Andreessen's benevolent AI god,
or perhaps we'll get Eliezer Yudkowsky's malevolent AI devil,
or perhaps we'll just give control of all of the
future of art to fucking Sam Altman. But my guess
and my hope is that, in the end, we heretics
will survive the present crusade. And that's the end of
the episode that I've got for you, Ify.

Speaker 4 (46:40):
That is amazing. Yeah, I love it. I love
it so much.

Speaker 5 (46:47):
Well, Ify, again, if you want this article, or
if you want the article version of this, more condensed,
easier to share, it's up on Rolling Stone. The article
is titled The Cult of AI, and again, that's by
me in Rolling Stone, The Cult of AI. Ify, you
want to add in your stuff? Plug your pluggables.

Speaker 4 (47:07):
Oh yes, please: Ify Nwadiwe on Twitter
and Instagram, watch dropout dot tv. You know, it is,
it is definitely, uh, you know, trying to do funny
things on the internet by humans, and, you know, paying
those humans, uh, sharing... oh yes, profit sharing, you know,

(47:29):
so truly big shout out to them. But yeah, I
might be in your town and be doing a lot
of shows this year, so definitely pull up, you know,
follow me on the social meds and I'll let you
know where I'm at and you can just come. But
thank you uh so much for having me. It's so
good to see you again.

Speaker 2 (47:47):
It was really good to see you again, Ify.

Speaker 4 (47:48):
Yeah, and this AI discussion in a weird way, as
dark as it's been, it makes me feel better because
I like that we're starting to fight back, everyone. Good night.

Speaker 6 (47:58):
Share.

Speaker 4 (47:59):
Yeah, I think I'm gonna just start putting Nightshade
on regular images.

Speaker 5 (48:02):
Yeah, certainly, certainly one thing that's worth trying. And again,
you know, think about hyperstition, folks. We have to imagine
better futures in order to counter the imaginations of those
who wish us harm, who want to control and destroy
all that's good in the world. So, you know, get
on that. Somebody figure that out in the audience, all right.

(48:26):
Episode's over.

Speaker 3 (48:30):
Behind the Bastards is a production of Cool Zone Media.
For more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
