Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Ah, welcome back to Behind the Bastards, a podcast that
is uh, it's a podcast. You know what what what
do you want? That's honesty in advertising, can't argue with that.
Speaker 2 (00:12):
It is.
Speaker 1 (00:13):
It is a podcast. I'm Robert Evans, your host, and
I didn't think of a funny introduction for this episode,
So pretend I shouted atonally or screamed out the
name of a dead dictator, and let's welcome our guest
for the episode, Ben Bowlin.
Speaker 3 (00:34):
If you could be a dictator's name, that would be
most appreciated.
Speaker 1 (00:38):
Yeah, one of the hard to pronounce ones.
Speaker 3 (00:39):
Yeah, go for it.
Speaker 2 (00:40):
Oh okay, uh wow, that's that's definitely uh oh. There
are so many good ones though. Okay, Mahmoud.
Speaker 4 (00:48):
I'm.
Speaker 1 (00:50):
Ahmadinejad. I think you're right there. There you go.
Speaker 2 (00:53):
This is why you're the host, Robert.
Speaker 1 (00:55):
Yeah, yeah, I mean that's an interesting one, right because
he wasn't I don't know how you yeah characterize exactly
what he was, but he's become a darling of Twitter
since his time as the as the leader of Iran,
which is always fun as the air quotes leader. Yeah yeah, yeah,
wild stuff. I do love like the every now and
(01:17):
then you're like, this guy who had a very different
vibe you know when he first came to my awareness
via media is now like a completely different person in
a lot of people's eyes because he like went after
Donald Trump on Twitter. Always funny, Ben, how do you
feel about AI? Not the movie with Haley Joel Osment?
Speaker 2 (01:40):
Oh well, oh okay, Well, all my notes are trash then,
because I thought we were talking about this.
Speaker 1 (01:46):
Okay, I did say prepare for an episode on AI
and then sent you thirty-seven pictures of Gigolo Joe.
Speaker 2 (01:53):
So yeah, and thank you, thank you. I'm framing those, Robert,
I'm framing those. Yeah. Well, I got to give a
shout out to a lot of folks ethicists working in
the field who are asking about the nature of consciousness
and why why the term AI is somewhat you know, problematic,
(02:16):
like what makes an intelligence artificial?
Speaker 4 (02:19):
You know?
Speaker 2 (02:20):
But when I hear AI, if I'm like most people, what I'm thinking is, oh, I should have studied math. Numbers are scary.
Speaker 1 (02:31):
Yeah, I've been so, I've been kind of digging into
this as a new beat for myself as a reporter.
As we'll introduce today, like, there's some stuff
I've kind of gotten sucked into here, and as I've
kind of talked to more people who are on the
technical side of things here, I'm gaining both a deeper
appreciation for how complex this stuff is. You know, there's
(02:54):
been a lot. You've got this mix
of like incredibly dense academic papers and kind of the
the commentary of people who are actually like working with
the nuts and bolts of this stuff and not trying
to pump up the price of you know, the IPO
valuation of their company. Right, and then you've got kind
(03:17):
of the hype cycle, which is very different.
Speaker 5 (03:19):
Right.
Speaker 1 (03:20):
The people who are are kind of talking about, you know,
how models are actually constructed, and you know what the
ideal size for the models might be, and like how
the capabilities kind of will change as they sort of
ramp up or ramp down the scale, talk about like
the potential problems like model collapse and whatever. Versus the
people who are like an AI is going to take
(03:41):
control of all the nukes. No, no, this isn't just
me stealing the plot of Terminator 2 in order to
like hype up the power of the product my company's making.
I swear, it's real, swearsies. So.
Speaker 2 (03:56):
That's how you know it's legit. That is, that is some actual nomenclature used in the halls of power.
Speaker 1 (04:04):
Yes, yeah. And I think you can always trust when a bunch of people with a vested financial interest in making their AI products seem extra powerful recite the plot of Terminator 2 to you as if it's, you know, serious, serious business. So as we've kind of led into, the internet is awash right now in kind
(04:26):
of cheap takes on what AI means for the future,
you know, on one hand, you've got these kind of
Terminator-y fears. Kind of the best example of this recently was Vice put out this very dumb article that was
like a military AI killed its creator because it was
trying to stop it from launching a missile, and everyone
was like, oh my god, you know, we have to
(04:47):
stop this, and then it turns out, well, actually what
happened is like a bunch of human beings were sitting
around a table like kind of plotting out things that
might happen if they ever built one of these, and
someone was like, well, what if you did this, and
like that's all that happened like they were playing fancy
D and D. It's nothing to be scared of.
Speaker 2 (05:03):
Yeah, man, it's like a war game, like a simulation, right? It's planning out a hypothetical. Yeah, and that is, that's headline chicanery.
Speaker 1 (05:14):
And it's not it's not even a simulation where like
they didn't like even code you know, a fake AI
to, like, test it as a simulation. They were literally just, like, bullshitting, you know, like, nothing, nothing was built. Which
is not to say like I'm not concerned about the
possibility of like AI, you know, involvement in weapons targeting
and stuff and that kind of thing. Like there's a
lot to be concerned about there, but not this article.
(05:37):
And then kind of on the other end, on the
positive end of the hype stuff, you've got like this
this growing chorus of people who are like certain that
AI is going to save education and rescue students. Bill
Gates is like a big kind of he's one of
the big guys like kind of pushing this idea. I
found a thing recently where he was like very excitedly
talking about how OpenAI had taught a chatbot
(05:58):
to pass an AP test and why this might mean
something good for education. I'm kind of hesitant to say
that I buy that, just because I haven't really seen
any evidence that like an AI is actually good at
teaching people stuff, which is not to say I haven't
seen evidence that there are uses for it in education potentially,
but like, I think people are getting a little bullish
(06:19):
about this, and kind of somewhere in between those two
hype peaks, you know, the robots are going to nuke us all,
or the robots are going to replace teachers. Somewhere in
between those two hype peaks is a story I ran
across a couple weeks ago about an author who wrote
ninety-seven books in nine months using ChatGPT.
Speaker 2 (06:37):
Oh no oh no oh no, I think, oh gosh,
all right.
Speaker 1 (06:42):
I'm gonna guess a lot of people came across this
story and it's one of those things it sort of
dropped and a bunch of different like places like the
I think the Post was one of them. Insider was
one of these kind of like low-quality publications, or publications that have recently sunk in quality significantly, who are
(07:03):
chasing SEO shit right, And it's also kind of noteworthy
to me that like this story hit right as the
writer strike began, like the writers go on strike, and
then there's this story about like, well, this guy wrote
almost one hundred books in less than a year using
ChatGPT. You know, maybe we don't need all these writers.
Now if you actually look at this, this author quote
(07:24):
unquote, with the most quotation marks I can possibly add around that word, like a whole script that's just quotes around the word author. This is a guy named Tim Boucher, and the reality, if you actually look into what he did here, is he didn't write ninety-seven novels using ChatGPT. He had an AI generate like ninety-seven stories, each about two to five thousand words.
(07:49):
And if you don't know how novels work, generally speaking, the minimum word count before something's considered a novel is about fifty thousand words. And that's a short novel, right? You're talking about, like, I don't know, The Old Man and the Sea style shit or something like that. I don't actually know how many words The Old Man and the Sea is. But like my silly book with horny robot cyborgs and
(08:12):
stuff was one hundred and twenty thousand words.
Speaker 2 (08:14):
So also, it's a really good book and people should
read it.
Speaker 1 (08:17):
Thank you. That's very nice.
Speaker 2 (08:20):
It's not a compliment. I'm objectively saying it's a good book.
Speaker 1 (08:23):
Well, thank thank you. I agree that because like he's
not having it write novels. What he did was he
had it churn out short stories each with dozens and
dozens of illustrations in them. There's something like fifty or
sixty in each story, which is also too many for
a short story. Wait, Robert, did he draw the pictures himself?
He sure did not. And we'll talk so like if
(08:45):
you look at these, like I went to his web page,
and thankfully, one thing I will give Tim Boucher is
it looks like he's put most of these on a
personal web page rather than dumping them all on Amazon,
which is what we're going to talk about in a second.
But if you look at the shit he's got like
a bunch of different Like one of the stories is
Occupy AI, which is described as like a story about
(09:08):
protesters fighting cops because AI takes the jobs away or
something like that. There's not like characters given or any
kind of like sense of a narrative arc. But the
AI-generated art of the cops, like, beating up people makes them look like the bad guys in Spaceballs,
And I do appreciate that. So that's that's that's good.
(09:31):
Sophie can show you the Spaceballs illustration. Oh holy smokes. Yeah,
they found shit. So I don't I don't have a
lot of respect for Tim's story writing project, but I
also don't think it's particularly harmful, right, Like, other than that,
(09:52):
you know, this guy got a bunch of news attention.
His books are presumably geared towards adults, or at least
young adults, and I don't really think a lot of
people are gonna get swept up in it. But there's something,
there is something sinister in the world of people using
AI to generate stories. And this is where we get
into the meat of what we're actually going to be
talking about this week, because I have found myself doing
(10:15):
some journalism again. I don't do as much of that
as I used to, but I kind of got I
fell down a rabbit hole over the last two weeks
and I have been investigating something that I found really
fucked up and scary. And the short of it is,
the robots are coming for your children. If you've got
kids out there, there is a shady network of influencers
(10:37):
and conmen who are trying to warp your kid's brain
so that they can make a quick buck off of
Amazon KDP, which is Kindle Direct Publishing.
So that's what we're fucking talking about today, because it's actually a serious problem that people should be aware of.
And I haven't seen any real reporting on this. I
was keyed into this story when I ran across a
(10:57):
Reuters article like two weeks ago about a boom in
AI written ebooks on Amazon, and the piece zooms in
on this wanna be author named Brett Schickler. Brett is
a salesman from Rochester who heard that chat GPT could
write stories and he decided like he'd always wanted to
be a writer, but he just could never get anything
(11:17):
down on the page. And he was like, well, maybe
this will make it possible for me. He told Reuters,
the idea of writing a book finally seemed possible. I thought,
I can do this. Unfortunately, the book he put out.
Speaker 3 (11:30):
Is why is that so funny? I can do this?
Speaker 1 (11:33):
It's not so sad when you learn what he wrote, because his book is The Wise Little Squirrel: A Tale of Saving and Investing. It's a guide to financial literacy through the eyes of Sammy the Squirrel. And I'll
say this, everything we're about to talk about on this
episode is much worse than The Wise Little Squirrel, because
(11:56):
I will give him credit for one thing. Schickler, like, Brett has a thing he wants to get across to people.
Speaker 2 (12:03):
I'm not.
Speaker 1 (12:04):
I don't think he's done it well. I don't think the right way to teach kids financial literacy is to have, like, a robot generate a story about a squirrel so they learn how to invest. But he actually seems to care about, like, transmitting information to kids, which is, well, someone who does. Yeah. So I'll give him
some credit for that. Also give him some credit for
(12:26):
the fact that the cover of his book it doesn't
look good, Like this doesn't look like a real book.
I wouldn't say, but there's nothing, like, inherently off-putting
or terrifying about the squirrel drawing on the cover.
Speaker 2 (12:40):
I don't I don't know, man, I don't know.
Speaker 1 (12:42):
Robert Sophie find unsettling.
Speaker 2 (12:44):
Well, there's the other there's the other animal there, which.
Speaker 1 (12:51):
Wait, where's the other one? In the trees?
Speaker 5 (12:54):
Oh?
Speaker 1 (12:54):
Yeah, yeah, well, and you know, the perspective is pretty
fucked on that chipmunk. Yeah, I don't buy it. I
don't buy the perspective here.
Speaker 3 (13:02):
And where did you and where did they get that acorn?
We don't know.
Speaker 1 (13:06):
The acorn's massive.
Speaker 3 (13:07):
The acorn is bigger than the bear.
Speaker 1 (13:11):
Well, well that's just because the bear's further away. But
the acorn is at least the same size as the
squirrel's torso, yeah, which I think might be, you know,
maybe maybe the subtext here is that the acorn is
inherited wealth and and the squirrel, oh you know, it's
to yeah represent you know, whereas whereas the squirrel in
(13:31):
the tree behind him kind of represents the working class.
Yeah yeah, yeah, who has no acorn? Right, because the
inherited wealth of our of our anyway, whatever, I don't know,
why am I? Why am I? Why am I doing this?
So you nailed it, nailed it. Brett's book, which is
(13:51):
available paperback only. That's another thing I'll give him credit for.
Because the stuff that's actually really grifty is all being
done on Kindle. It's currently at about number thirty thousand in literature and fiction for children. One thing that does show is that, like, that sounds really low. That's not terrible, right? Like, given the number of books that are on Amazon, it's not like a runaway success.
(14:12):
I think that this is like probably because there was
a bunch of news stories around this, right? Because it's kind of the first AI children's book to go viral. It's not, again, any good, but I don't
see the harm. Like what we're about to be talking
about is like a bunch of people who have done
this in a much more harmful direction. So this story
(14:33):
kind of got me looking into what other people were doing with AI children's books. And part of why I thought
to do this was late last year I watched a
documentary on the Folding Ideas YouTube channel. If you're not aware of it, Folding Ideas is a channel run by a researcher and a writer named Dan Olson, who I admire
(14:53):
quite a lot. He's the guy who did that Line Goes Up documentary on NFTs that helped cause the NFT crash. He's very good at what he does, and
late last year he published an investigation into a group
of Amazon con artists who like charge several thousand dollars
so that they can teach people how to mass produce
low-quality books, both to have read by voice actors
(15:18):
for, like, Audible audiobooks, and to sell on Kindle through KDP,
which is again Kindle Direct Publishing. And it's one of
those things where like they're promising, they have all these
lurid stories about like you can make tens of thousands
of dollars in a few weeks. They're called the Mikkelsen Twins, the particular folks he's investigating, and they brag a lot about, like, how little effort
it takes to do this, you have to do
(15:39):
almost no work whatsoever. And it's one of those, like,
they're basically just having people look through to see what's
selling on Amazon, create topics and then hire some ghostwriter,
pay him a couple hundred bucks to write a full
book in two weeks, three weeks something like that, and
then find a voice actor and put it up. And
these books, like, they're nonsense in a lot of cases,
(16:03):
like, I think the book that Dan put together as kind of a thought exercise was about using self-hypnosis to deal with epilepsy or something. So
a lot of really irresponsible topics, all that sort of stuff.
But the topics, you know, what matters is not, like, what you're actually trying to get out. What matters is that what you're having these ghostwriters write includes keywords
(16:25):
that correspond to popular searches on Amazon. Right, it's an
SEO scam.
Speaker 2 (16:29):
I read about this. I read about this because, Robert, the immediate question is why are these guys now selling
this trick, right, this grift that they have exhausted, right,
Like why did they move to the next grift. It's
very it's it's very interesting. You know, I wonder if
(16:53):
their next thing is going to be self help books.
Speaker 1 (16:56):
Yeah, I think that, like, there were already some books sort of in that topic area that they were putting out.
I think mostly what it is is that, like you
can make money doing this. Your goal basically is to
trick people looking for real books on similar subjects into
buying your books because it's kind of hard to tell
on Kindle. And, you know, if you just
are putting enough stuff out there, the sheer number of
(17:17):
people searching means that you'll make you know, sales, and
since it doesn't take that much work, you know, if you get a couple hundred sales, it can be worth it. This is not, like, actually an easy path to making a lot of money,
which is why the real business is in conning the
people who think that they'll make money off of this
and getting them to pay you several thousand dollars to
(17:39):
teach them all this crap. Dan in his video calls
these people contrepreneurs, which is a clever bit of wordplay, but I do think the term actually kind of misses something important. Because the con Dan is talking about, the con that he is unraveling, is that, like, this is not actually a great business.
It's hard for people to really make money. They are
(18:01):
lying to folks about how profitable this is so that
they can take their money, and that is that is true.
That is a con. But there's an actual outside harm
aside from this con because like, the people they're conning
are not just passive rubes. They're trying to, like, unethically
make a quick buck. So to some extent, I'm like,
I'm not I don't have a huge problem with who
they're conning necessarily, But what I do have a real
(18:23):
issue with is that the byproduct of the cons they're
running is that the largest bookstore on the planet gets
filled with thousands and thousands and thousands of nonsense titles
that aren't real books that can spread disinformation, that can
just cause people to waste money. Like, there are a lot of issues. It makes it harder to
(18:44):
like find stuff that you're looking for that's of quality,
you know, on various topics. I've had to deal with
this a few times with bastards where I'm looking for
books on niche subjects and I find something that seems
to be about it, but it turns out it's like
one of these, like, crap books, where someone's at best paraphrased a Wikipedia entry or something.
Speaker 2 (19:00):
Is it, is it that prolific, though?
Speaker 1 (19:03):
Like, what, thousands and thousands of times? Really? I have actually reached out to Amazon for comment on, like, how many AI titles they have in their library, and the degree to which, or whether or not, they do anything to attempt to, like, figure out which titles are AI or not.
(19:23):
I've asked them a couple of questions on that about
like their plagiarism filters and stuff. I haven't heard back yet,
And if I do, I'll update this. I'll record it
update for you.
Speaker 2 (19:33):
Well, surely they can just they can just create a
large language model that will respond to your insightful questions
in novel form.
Speaker 1 (19:45):
Yeah, that would at least be a response. So far,
I haven't gotten anything from them because I think they're
they're following the Elon Musk route of just ignoring any
questions from media because there's there's a lot of money.
They make money anytime people upload this shit and anytime
it sells, so like they don't have a problem with
con men putting fake books up on the site. And
it's one of those things. Again. When when I watched
(20:07):
this video by Dan, I was like, well, this is
an interesting con. I think what's happening here is kind
of unsettling. And then, you know, like, the
AI explosion happened right, and suddenly this con gets supercharged
because the main barrier to entry and barrier to production
and profitability and the old way of doing things is
(20:28):
the ghost writing. You know, these people the original version
of this con You're hiring a human being to ghost
write a fake book for you, basically, right, And a
human being can only write fifty-thousand-ish words so fast,
even if all they're doing is like paraphrasing a bunch
of like Wikipedia and news articles, right, Like, there's a
hard limitation on how many of these
(20:49):
can get written. And it's going to cost money because people,
even people who will do this very cheaply, won't do
it for nothing, whereas ChatGPT and other large language
models will write thousands of words for you for effectively nothing.
So as soon as I kind of read this story
about this this children's book and thought back to Dan's video,
(21:10):
I was like, oh shit, I bet this is causing
I bet this is causing like a massive surge in
crap getting posted onto Amazon Kindle. And it's unfortunately like
more unsettling than I had initially thought, because with this
early scam, with, like, you know, these ghostwriting scams, the books tend to be, like, aimed
(21:32):
towards adults. Right? You're trying to get adults who might be looking for a topic or something to buy your book. But it's hard to get ChatGPT to write a whole proper nonfiction book. Like, getting it to write a fifty-thousand-ish-word book is almost impossible
unless you do a bunch of like really messy sort
of like prompt tricks, which is why all of Tim
(21:54):
Boucher's books were a couple of thousand words long. But
children's books are a very different story because children's books,
if you like have gone through books for little kids,
for toddlers or whatever, they're heavy on illustrations. There's an
illustration on every page, and there's often just a couple of
sentences or a paragraph of text per page, which is
(22:15):
exactly the kind of length that ChatGPT and other LLMs are perfectly capable of reproducing. Right, so the places
you'll go exactly exactly, and the places you're going to
go are like directly into the recursive loop of slop
feeding through, you know, an LLM, which is what the
(22:37):
investigation we're talking about is this week, because as soon
as I started typing in, like, how to write AI children's books and shit, my YouTube search revealed video after video with titles like Easy AI Money: Make One Hundred K Writing Children's Books with ChatGPT and Midjourney, and Easy Passive Income with ChatGPT and Midjourney Creating Children's Books. I watched way too
(23:01):
many of these fucking videos, all of which lay out
the same basic strategy. The first step is to pick a premise. An actual author might decide, for example, I want to teach children about the importance of conservation with a cautionary tale about the thoughtless exploitation of natural resources. Right? You know, that's The Lorax, one of my
favorite books as a kid. You know, I want to
write a book about a bat raised by birds that
(23:21):
kind of shows kids that bats are beautiful and complex
creatures and not just spooky set dressings for vampire stories.
That's Stellaluna.
Speaker 2 (23:28):
Right.
Speaker 1 (23:28):
Those are, you know, examples of things human beings might
try to transmit to children through, like actual human created
children's books. But chatbots and grindset influencers are both incapable
of feeling things or of wanting to transmit their feelings
to others. Right, Like, You've got people who are effectively
soulless using a robot that is effectively soulless in order
(23:50):
to try and transmit messages to children, and that gets
dark pretty quickly, Ben. But you know what's not dark?
Speaker 2 (24:00):
Is it, is it goods and services?
Speaker 1 (24:01):
Yeah, the products and services that support this podcast. I
don't know if you're aware of this, Ben, none of them are, you know? We have one, we take one promise from our advertisers: don't be evil, dot dot dot, unless it makes a lot of money.
Speaker 4 (24:19):
You hear ads unless you have Cooler Zone Media, our ad-free subscription channel, available on Apple.
Speaker 1 (24:26):
Podcasts, which is the most ethical way to consume this,
you know, I mean, it is true that every time
you get a Cooler Zone Media subscription, Raytheon fires a missile at a school bus in a foreign country. But we are working to fix that. Yeah, that is, it's not intentional.
Speaker 3 (24:45):
Yeah, that's a glitch.
Speaker 2 (24:47):
Yeah, I heard. I heard that every time someone subscribes
to Cooler Zone Media, they get a little coupon, and if we get enough coupons, we can finally subtract a day from Henry
Kissinger's life. I don't know which day.
Speaker 1 (25:09):
Yeah, I don't know.
Speaker 2 (25:10):
If it's the last day, it's.
Speaker 1 (25:12):
Likely one of the days he ordered bombings in Cambodia.
Speaker 2 (25:15):
You don't think it done us. Yeah, we gotta subscribe.
Speaker 1 (25:20):
Yeah, so get enough subscriptions and together we can kill
Henry Kissinger. Ah, we're back. So the process bin of
picking a topic for an AI kids book is mechanical
and bleak. It starts with a trip to the Amazon
sales rankings. People use extensions like Book Bolt, which show
(25:43):
which topics are selling well. And a mix of the videos that I found pointed out
that like, you want stuff that's reasonably high up on
the lists and has a good number of reviews, but
you also don't want it to have too many reviews.
The most successful AI kids books will only sell like
dozens, or maybe one hundred or so copies, and so
they kind of like a lot of the people making
(26:05):
these videos will disguise that by showing you like sales
calculation apps where they plug in books by actual human
beings to be like, you can make thirty two thousand
dollars a month, you know, just like this guy who
wrote this you know, storybook or whatever that's a bestseller did,
which you're not gonna like actually do if you're really
doing this for a business. But one of the creators
(26:29):
that I watched, one of these channels is called the Zinny Studio, they advocate copying the premises of real kids' books that have several hundred reviews but less than a thousand. They've, like, worked this
out to a science where like, if there's more than
a thousand reviews, you're gonna get lost in the mix. Right,
It's it's the stories are too big, So like whatever
you put out isn't gonna break through. But if they've
(26:52):
got a couple hundred reviews, that's enough that, like,
you know, there's a lot of interest. But also you
have a good chance that while your book's new, if
you can get a couple of early sales, you can
you can juke it up the spots. There's also a
strategy for price points. They note that, like, Amazon pays seventy percent in royalties if your book's between two ninety-nine and nine ninety-nine, but if it's cheaper than that, they
(27:14):
only pay out like half as much.
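[Editor's note: the two royalty tiers described above work out roughly like this. This is a simplified sketch, not Amazon's exact terms; real KDP payouts on the 70% tier also deduct per-megabyte delivery fees, which are ignored here.]

```python
def kdp_royalty(list_price: float) -> float:
    # Simplified model of Amazon KDP's two ebook royalty tiers:
    # the 70% rate applies to list prices from $2.99 to $9.99,
    # anything outside that window earns 35%. (Assumption worth
    # checking against KDP's current terms; delivery fees on the
    # 70% tier are omitted.)
    rate = 0.70 if 2.99 <= list_price <= 9.99 else 0.35
    return round(list_price * rate, 2)

# Pricing at $2.99 nets roughly three times what $1.99 does,
# which is exactly the pricing advice the videos give.
for price in (1.99, 2.99, 9.99):
    print(price, kdp_royalty(price))
```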
Speaker 2 (27:15):
We are in the wrong business. Yeah, we're in the
wrong business. I cannot wait to read because this is
this is like a dark side of cyborg philosophy, right, yeah, right,
so like, uh, Robert, Sophie, let's get on the SEO. Let's find, uh, let's find a topic, and let's have
(27:39):
our evil Pandora djinn kind of thing. Uh, right, write us a little Old Man and the Sea and just mad lib whatever the narrative is, right? I.
Speaker 1 (27:52):
Mean, I think the real answer Ben, because you and
I both have hundreds of hours of our podcast. You know,
there's Ridiculous History and Behind the Bastards and everything else
we've been on recorded. We feed all that into an
AI and then just like, look at what's trending on Twitter,
and then we have an AI generate like six hours
of you and me talking about how masks don't work
(28:13):
or something like that. Just like crap that stuff onto
the internet. Whatever Joe Rogan talks about that week, We'll
just generate an AI conversation between us about it, you know,
toss in like fifteen percent Elon Musk, and then we retire.
Speaker 2 (28:28):
Yeah. Yeah. What I also love about this first off
is the sincerity, and second off is the idea that
nothing could go wrong at all.
Speaker 1 (28:41):
No, it won't, Like I don't know, I feel like,
as long as you know, we need one human being
in there to keep a handle on things, so it
can just be like an AI version of you and
an AI version of me, and then human Sophie sitting
there and, like, trying to control the robot conversation before we start giving out, like, the ingredients for something.
Speaker 2 (29:07):
So uh yeah, you know, uh Sophie, it sounds like
you're cool with that, right, Yeah?
Speaker 3 (29:15):
No, I could never replace Robert with the robot.
Speaker 1 (29:18):
Yeah, you could. I mean, it would be a robot.
But you know, I got that like that AI Seinfeld
where it's just like a never ending mediocre Seinfeld episode.
We could, we could. We could slip Sophie in that too,
make you make you moderate AI brainless Seinfeld.
Speaker 2 (29:37):
So well, because you know, we're keeping things interesting, right.
The problem is that Behind the Bastards is an important,
uh and profound at times show. I'm kidding, yeah, okay, okay,
all right. The problem is that Behind the Bastards is
(29:59):
an important show, so it might be too high on
our SEO. We need to get some kind of spinoff
thing that appears to be irrelevant but works on a
search term that is that is kicking it for Jeff Bezos, right,
and then and then we just crap out hours of that.
(30:20):
Obviously this is behind the scenes, right, this is not
going out live, right.
Speaker 1 (30:25):
Yeah. I think we call it the Passive Income Diet
Hack Cast with Robert Evans and Ben Bowlin. That's that.
That's our that's our path to sixty seven million dollars.
Speaker 2 (30:37):
I love I love how it's so so brief and
so snappy. You know, there's a lot of immediacy. Take
that, Hemingway.
Speaker 1 (30:45):
Yeah, exactly. For sale: Ozempic, never drop shipped. I don't know, like, that's most of what you need in there.
Speaker 2 (30:55):
We got that freestyle. Do you like it?
Speaker 1 (30:57):
Yeah? Yeah, it's good. So yeah, this is kind of
how the process of, like, making these books starts. You use, like, these different extensions to kind of see what's
figure out a premise for yourself. And some creators will
take this research and they'll like try to make an
original premise for a children's book to feed into the AI.
But most of them suggest just collecting like they're not
(31:18):
even doing that for the most part. Most of these
people suggest collecting keywords by like whatever's popular, and then
plugging those keywords into chat GPT and asking it to
generate a story premise, which they then feed back to
it to have it generate a story, thus avoiding the
risk that human creativity might happen at any point in
this process.
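For what it's worth, the grindset workflow being described here amounts to nothing more than string assembly. A hypothetical sketch of the two-step prompt chain, where the function names and example keywords are mine and nothing actually calls a chatbot:

```python
# Hypothetical sketch of the keyword-to-book pipeline described above.
# No real chatbot is called; these helpers just assemble the text the
# grindset guides tell people to paste into ChatGPT.

def build_premise_prompt(keywords):
    """Step 1: turn scraped best-seller keywords into a premise request."""
    return (
        "Generate a children's book premise using these keywords: "
        + ", ".join(keywords)
    )

def build_story_prompt(premise, pages=6):
    """Step 2: feed the generated premise straight back for a full story."""
    return (
        f"Write me a children's story based on this premise: {premise}. "
        f"Make it {pages} pages and title each page."
    )

keywords = ["dinosaur", "treasure", "adventure"]  # made-up examples
premise_prompt = build_premise_prompt(keywords)
story_prompt = build_story_prompt("A girl hunts a missing treasure")
```

Note that a human only touches the keyword list; every creative decision is delegated to the model, which is exactly the complaint being made here.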
Speaker 2 (31:37):
The messy prompts you were talking about, it's
sort of arcane, right. Writing a prompt is a bit
like casting a spell.
Speaker 1 (31:45):
Yeah, yeah, casting a spell or a math equation. We'll
talk about that in a little bit. But I want
to play you a clip from this guide by the
Zeni Studio, which had about two hundred and sixty eight
k views when I watched it, kind of cautioning, attempting
to make it seem a little bit less soulless than
the process actually is. Sophie, you can play that.
Speaker 6 (32:03):
Now, you do research not to find books to copy.
You do research to understand what is working. It's important
to make your own original pieces, original story books, so
you could stand out from your competition. Okay, now, so
you could see that.
Speaker 1 (32:24):
She says that, right that it's you're not doing this
to like copy people. You know, you're trying to find
ideas for original work. Here's the actual prompt that she
uses to generate her children's book, Write me a Children's Story.
About a girl going on an adventure to find a
missing treasure. Lesson you achieve what you put your mind on?
Make it six pages, entitle each page, make his mind
(32:46):
blowing and intriguing. I'm trying not to go too
hard on the misspellings, but it does show you how
lazy you can be with these Like that's not a
story idea, like going on an adventure? What kind of adventure?
Finding a missing treasure? What kind of treasure? How does she
like the lesson you achieve what you put your mind on?
Like that, that's like that's a producer. That's a producer
(33:08):
walking in and saying, uh, you know, do a thing
with the sort of vibe and and I love that.
I love that Shyamalan at the end, Robert make
it mind blowing, yeah, mind blowing yeah. And it's it's
very funny because, like it is, all of these AI
people are the same as like the worst folks in Hollywood, right,
(33:29):
you know, there's good producers and bad producers, and nothing
gets made without the producers who do know what they're doing.
But there's a lot of folks who are like a
lot of corporate folks, you know. I'm thinking about like
David Zaslav and shit here who are like, Yeah, I'm
the idea. Why do we need writers? I already had
the idea, you know, write me, write me a story
about about a guy who's angry.
Speaker 2 (33:51):
Yeah, Max, Yeah, you guys heard about sharks? What what like?
So I saw a thing on Instagram about tornadoes. So
what if we what if we just do that?
Speaker 1 (34:04):
Yeah? Yeah, give me a Tornado thing with Bill Paxton.
You know that's the that's the hard work. I already
did it. You know, somebody can crap out the script anytime.
Speaker 3 (34:12):
D just call me Ishmael.
Speaker 1 (34:18):
Yeah, there you go. So I want to actually like
read you the story that or at least a chunk
of the story that gets generated by this this this
prompt because I love it again. People are flipping out
so much about like the quality of writing that these
things are capable of, and like, I think what you're
all going to learn is that like, this isn't writing
(34:40):
what it puts out. So Page one, The Missing Treasure.
Once upon a time there was a brave and curious
girl named Sarah. She lived in a small village at
the foot of a big mountain. One day, she heard
about a missing treasure that was hidden in the mountain
a long time ago. Sarah was very intrigued and decided
to go on an adventure to find the missing treasure.
So you see that, you say, like, first, I'm taking
(35:01):
a break here, like is she The prompt said like
make it intriguing, And what the robot did was like, well,
I'll just say that Sarah is intrigued. That means that
it's an intriguing story. Now, because I've used that word, we.
Speaker 2 (35:13):
Also see we also see repetition of phrases, right, Oh yeah.
Speaker 1 (35:18):
Yeah, it's it's very clear that it's like a mechanical turk,
you know. Page two, The journey begins. Sarah packed her
backpack with food, water, and a map and set out
on her journey. She hiked up the mountain and along
the way she met many animals who offered to help her,
but Sarah declined their help, saying I want to do
this on my own. Why are there animals offering to
help her? Like what what is it that she wants
(35:41):
to do on her own? Like why is this important
to her?
Speaker 2 (35:44):
Intriguing?
Speaker 1 (35:45):
Yeah, I'm intrigued by how incomplete this story is. Page three,
The Cave of Trials. After several days of hiking, Sarah
finally reached the top of the mountain. There, she found
a cave that led deep into the mountain. She entered
the cave and was met with many trials, including steep cliffs,
deep chasms, and dark tunnels. But Sarah didn't give up.
She kept moving forward, always remembering the lesson she learned
(36:07):
from her parents, you can achieve what you put your
mind to. Oh, now, her parents are characters in this,
her parents, well not even really characters, Like she's not
a it's you know, you can see what it's doing here, right,
it's a simulacrum of a plot, right where you've got like, well,
I know that plots have to have a conflict, you know,
and characters in a story are supposed to have trial.
So I will just say she went through trials, she
(36:30):
didn't give up because her parents said, you can achieve
what you put your mind to. Like, and that's what
the story views as like transmitting a lesson to kids, right,
is just like saying the lesson in there, which like,
that's not if you think about children's books, like good
children's books, the books that like you can still remember
as a kid, that's not how they get messages across, right,
(36:52):
Like it's deeper than that. Kids are actually not just
capable of understanding like much more complicated messages than like
you can achieve what you put your mind to. They
they crave that, which is why, I mean part of
why shit like doctor Seuss's books, you know, still sell
a century or whatever after, well, close to a
century after he fucking wrote them.
Speaker 2 (37:12):
And he's kind of a piece of shit, but good books. Good books.
Speaker 1 (37:17):
What author isn't? It's why like The Hobbit is, you know,
successful and shit like you don't just have Bilbo being
like I learned that you can always trust in the
power of friendship. No, you you show you know the
character like evolving and changing and entering conflicts and and
failing and then learning lessons and then succeeding. Like this
(37:39):
is not super complex stuff. You're not like writing fucking
Ulysses just because you put this in here. But the
chat bot like is incapable of kind of understanding this
because it's it's it's a calculation. It's an algorithm, right,
it's an it is not it can't want to tell
a story, and like a critical part of telling a
(37:59):
story is the intent you know, and and you know
kind of spoiler. That's what's most unsettling about all of
this to me and what we're kind of building too.
But you can see, you know, kind of how how
these how this story you know, technically meets the requirements
that the Zenny Studio or whoever like set out for it,
(38:22):
and none of these grindset people actually care about anything
beyond the fact that like it might be able to
pass muster on kindle, so like it's good enough, right,
we'll go for it. We'll take this script and turn
it into a book for little kids.
Speaker 2 (38:35):
Right right, right, But what first question, like, what do
you think Joseph Campbell would think of this? Because it's
clearly like again, it's a structural aggregate approach. It's a
very lazy mad lib that you know, there's the uncanny
valley to it. I agree with you there, it does
(38:57):
feel that it's missing a soul. Right, we don't even
know the animals, we don't even know the parents.
Speaker 1 (39:03):
Like wait, I think he'd probably be kind of interested
in the fact that sort of the again, it's kind
of the simulacrum of a story. It's not an actual story.
It understands some aspects of the shape of a story,
and it attempts to reproduce that, but without any kind
of understanding of what the original is. So I suspect
there's a degree to which he would find it intellectually fascinating.
(39:25):
I think he would probably be buying a gun right now,
towards the OpenAI offices he would be buying.
Speaker 2 (39:32):
He would be buying a gun, and he would like
DM Chekhov and say, what's happening?
Speaker 1 (39:38):
But yeah, let's get the fuck down there. So I
found an even lazier example of synthetic storytelling in a
video by a guy named Christian Heidorn, which has about
one hundred and twenty thousand views. His YouTube channel is
called Tokenized AI, and he had Midjourney
create what he calls a comic book. And boy,
I am using those words as fucking loosely as possible. It's not
(40:01):
a comic book, right, He's lying about what this is.
It's a book with illustrations and barely any text, but
it's not like anyway, it's not actually a comic book.
And he plugs in the plot for this. He gets
chat GPT to create a plot for him by kind
of plugging in an equation for story ideas that reads
(40:21):
like the brain of a Netflix executive. And I want
to read you this equation because I find it really interesting,
and Sophie'll show you it because it is it's physically
laid out like a math equation.
Speaker 2 (40:30):
Almost.
Speaker 1 (40:31):
Story. Please come up with a short story based on
in brackets theme that uses elements that are typical for
in brackets genre stories. Provide me with a three to
four sentence summary and store it in plot, and again,
like you kind of plug in a theme. I want
the theme to be an adventure, you know, or I
want the theme to be like, you know, yeah, an
(40:51):
adventure story in you know, the horror genre or whatever,
something like that, and provide me with a yeah. So
that's that's kind of the way the equation looks. And
Christian decides he wants a comic book that has kind
of like Indiana Jones uncharted style art vibes about like
a man and a woman who meet up by coincidence
(41:12):
and go on an adventure in like a dungeon to
get a treasure, right, And it's one of those things
I say. The artwork has the vibes of like Indiana
Jones or Uncharted because the vibes are purely a product
of the actual art itself. There's no actual plot in
the story that chat GPT generates for him, and he
has he has, he has the robot break down a
(41:33):
story page to page, but it's it's not a story.
Scene one. In a bustling Southeast Asian market, Maya and
Leo accidentally bump into each other while following the same
ancient map dialogue. Maya, hey, watch where you're going, Leo,
that's my line?
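A quick aside for anyone curious: the prompt "equation" described above boils down to ordinary string templating. A rough sketch of the idea, with the wording approximated from the transcript and the function name mine:

```python
# Rough sketch of the fill-in-the-blanks prompt "equation" described above.
# The bracketed [theme] and [genre] slots become ordinary template fields;
# "plot" is just the variable the chatbot's answer would be stored in.

STORY_TEMPLATE = (
    "Please come up with a short story based on {theme} that uses "
    "elements that are typical for {genre} stories. Provide me with "
    "a three to four sentence summary."
)

def make_story_prompt(theme, genre):
    return STORY_TEMPLATE.format(theme=theme, genre=genre)

# e.g. the Indiana Jones / Uncharted-style request from the video:
plot_prompt = make_story_prompt("a treasure hunt", "adventure")
```

The point is that the "equation" carries no story information at all; everything specific to Maya and Leo comes out of the model, not the template.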
Speaker 2 (41:50):
Whoa, whoa?
Speaker 1 (41:51):
That doesn't that's not like that, that's nonsense, Like it's
one of those like why is that his line? He's
never we've never met these characters before. We haven't heard
him say that to anyone else, Like would why would
that be the response that he would make to her?
Like you know again, and this is because he's like,
that's the setup, that's the background for these characters. You
(42:13):
get scene two and again this is like the second
page of the story. Maya and Leo reluctantly agree to
work together deciphering the maps cryptic riddles dialogue. Maya, fine,
but we split the treasure. Leo deal. Scene three, Journeying
through a dense jungle, the duo encounters a treacherous river crossing,
testing their teamwork skills. Dialogue Maya, we'll need to build
(42:34):
a raft, Leo on it?
Speaker 5 (42:37):
Oh four?
Speaker 2 (42:38):
Good?
Speaker 1 (42:39):
Yeah, classic Leo. Yeah, classic Leo. That's the Leo I know,
always raft building. Scene four, the pair solves a puzzle
in a hidden temple, revealing the entrance to the treasure chamber.
Dialogue Maya, we did it, Leo, Leo. Together, We're unstoppable.
Scene five. Inside the chamber, Maya and Leo find the treasure,
(42:59):
but also face a choice, greed or friendship. Dialogue Maya,
we could just take it all. Leo, but at what cost?
And this is in the video the videos, like I
want him to have to like choose between greed and friendship,
which like they like that. They don't act like it's
just an artificial choice. There's no like, you know again,
if you're doing like even a slightly more effortful version
(43:21):
of this, do an Indiana Jones thing. You know, they
pulled the treasure, but the room starts to collapse, and
like Maya's you know, able to get across a chasm
with the bag of treasure, but she, you know, she
can't help, Like Leo falls and he's like hanging by
a thread and she has to either take the treasure
and run or drop the treasure and save him, you know,
like something like that's that's that's that's not like again,
(43:42):
it's not like high art. But that's a conflict, right,
there's lab.
Speaker 2 (43:47):
There. Last Crusade was great.
Speaker 1 (43:50):
Yeah, if you're gonna steal from Indiana Jones, do it well, right,
like again, there's a whole, look, fucking Uncharted and shit
started as doing that like that worked out fine for everybody, Like,
but but it has to be a story. You know,
we will forgive a lot of derivative shit. The thing
about Star Wars, right, Star Wars, if you actually look
at the stuff that Lucas was pulling from, is actually
(44:12):
pretty derivative of a lot of different things that inspired
him as a kid, of a lot of different like
movies and shows and comic books that he had read
as a young man. And it's fine because it's it's
a it's a fucking rollicking good story, you know, but like,
this is not a story. This is like the AI
is checking every box, but it doesn't understand what the
(44:34):
boxes are, so there's.
Speaker 2 (44:36):
No, that's it, that's the quote.
Speaker 1 (44:38):
Yeah, there's no none of the characters want anything. Really,
there's no reason for them to be doing this. We
know nothing about their lives. They're not like people, which again,
this is like a little kid's story book. So I
think the the idea that these people have is that like, well,
kids don't notice that because they're dumb, and it's like,
yes they do. Like that is that's that's that's why
(45:01):
like fucking Pixar movies or the were the biggest thing
for kids for a while. Those are not like nonsense stories.
Those those all, like The fucking Incredibles or whatever
or this is not Pixar, but like one of the
most beloved children's movies of all time I saw when
I was like a little bitty kid, the fucking Iron
Giant and Shit or The Brave Little Toaster. You know,
those are stories with like sorrow and character and motivation
(45:24):
and like things are set up and paid off, none
of which the AI appears to be capable of doing.
Speaker 2 (45:31):
Yeah, you know what this reminds me of Robert Are
you familiar with the book Plotto? No? Okay, so back
in the days of the Humans, Can I say that
that way? No?
Speaker 1 (45:44):
Yeah, yeah, back before the robots destroyed us all sure.
Speaker 4 (45:47):
Yeah.
Speaker 2 (45:47):
In the days, in the days of the humans, there
was a guy named William Wallace Cook who wrote a
book called Plotto with two t's, and he was he was,
it's like an old school hack pulp novelist paid by
the word, et cetera, or paid by the letter. I
don't know how the math works out. He tried to
(46:10):
structurally calculate to quantify the amount of possible stories, and
he said, there are exactly one thousand, four hundred and sixty
two possible plots, and right right, you like the specificity, right.
Speaker 1 (46:26):
Yeah, I love it.
Speaker 2 (46:28):
Nineteen twenty eight he publishes his great epiphany in a
book called Plotto, The Master Book of All Plots, and
I am frankly baffled that the the new you know,
the new algorithmic overlords are forgetting their history. Let's do it.
(46:52):
Let's you know what, Let's fight the tires, light the fires,
kick the tires.
Speaker 1 (46:58):
Yeah. I just want to generate fourteen hundred and
sixty two novels then, and then we're done with books.
You know, people said that when James Joyce put out Ulysses,
this is the end of literature. But we can actually
make it so you know, AI can and we can
have Maya and Leo star in all of them. You
know they'll, they'll, they can do it. I want to
(47:19):
play you a clip from this fucking video making this
dog shit not a comic book. So Sophie's going to
do that.
Speaker 7 (47:26):
Now we need a third image though, because we need to
convey the fact that they've bumped into each other. In
this prompt, you'll notice that I start with a style,
but then I immediately set the scene first, only then
I add the characters and finally end with the activities.
As I said, you will need to experiment with what
works best for you. I've also changed the aspect ratio
(47:46):
because I'm creating a wider image panel. Okay, so this
completes our first scene. Whether you use just.
Speaker 1 (47:53):
One possible, you see what he's going into all this like,
we're trying to set the scene with these two characters
bump into each other, and you have to be very
you want to structure it this way is that the
AI knows and it generates you an actual, proper, you know,
image of this action happening. And like the picture we
get is this very vague, generic looking bazaar type background,
(48:14):
and then our two characters both staring directly at the
viewer on frame. They have not bumped into each other.
They don't, they aren't interacting, they don't seem aware of
each other. It is not an illustration of what he
asked it to write, but it's good enough.
Speaker 2 (48:28):
Like fuck it, you know, like.
Speaker 1 (48:30):
It's it's Uncharted-y. You know, it's for kids. Uh, yeah,
it's it's it's pretty good. So I want to We're
gonna get more into this, but you know, you know,
what also reminds me a lot of the uncharted video
games is the uncharted deals that our sponsors are throwing down.
(48:58):
We're back. So looking at a bunch of these stories
in a row kind of pulls the curtain aside on
AI storytelling. A lot of the dialogue it generates is
pretty nonsensical, and yeah, I can. I keep going back
to that, like bit in scene one where she's like, hey,
you bumped into me and he's like, that's my line. Yeah,
(49:20):
it makes no sense and my kind of my only
The only thing I think is that at some point
an actual human wrote a story or probably a few
people a number of people did where a character used
that line as a retort, right, uh, and ChatGPT's
calculating algorithm or whatever, was like, this is an appropriate response.
Plug it in even though it makes no sense, because
(49:41):
that's kind of the way these things work.
Speaker 2 (49:42):
Yeah. Also, also, they both have one ancient map and
it doesn't tell you where they got it. Yeah? Right?
Is it? Is it the same map?
Speaker 4 (49:54):
You know?
Speaker 2 (49:55):
Like like the do they have copies of this?
Speaker 1 (49:57):
Who made it? What is the ruins there? And again
you know Indiana Jones and The Last Crusade sets this
up very elegantly, quickly, by having River Phoenix go on
a boy scout adventure and then come home having nearly
been murdered and find out that his dad doesn't give
a shit because he's too busy like working on you know,
(50:18):
his notebook that has all the clues for the movie
that we're about to watch. Perfect elegant, actual storytelling, good
shit man movie rules.
Speaker 2 (50:29):
So wait, So the question that a lot of us
listening today are going to have is it feels like
we're talking a little bit about something profound, which is
you mentioned Ulysses earlier. Ulysses triggered such an awesome conversation
(50:50):
in US courts, right, over the nature of obscenity. Right,
I don't know what it is, but I know it
when I see it, right, So what like this does
feel like there's something we don't know what it is,
but we know when it's missing to a human story.
Speaker 1 (51:08):
I think what's missing? I keep bringing up the term simulacrum,
and like, the simulacrum is a great example of like
the kind of classic example is you, like, open up
your a word app type thing up at somewhere probably
in the top left hand corner, that's where it is online.
You will see the image that means you click to
save your document, right, And you and I and Sophie
(51:29):
are all old enough that we know that that image
is a floppy disc, right, Like that's the image, but
like those don't exist anymore. People don't use floppies anymore.
Huge numbers of the people who click on that every
day to save their documents never lived in a world
where like they touched a floppy disc. So the original
thing like is no longer around, right, Like it's a simulacrum,
(51:52):
you know, and like the kind of whatever kind of
original meaning that it had has been replaced by something else.
And like what we have here is like the simulacrum
of a story. You've got these ingredients of story structure,
these ingredients of pieces of pop culture, but there's no
there's nothing that's actually trying to be transmitted. Nobody has
like because there's not a person behind this, for one thing,
(52:13):
and there's not an understanding of what it takes to
make a story, so it can kind of look like
a story. It can certainly trick the like these AI
grindset people think it's close enough. But what's happening here
that's really fucked up is that they are taking this
thing that is not a story, and they are their
goal is to trick overworked parents into buying this for
(52:36):
children who will read it and be too young to
elucidate why something seems to be wrong with their new book.
And that's actually very frightening, right, Like there's there's we're
going to talk about learning in a little bit, how
kids learn and how they don't learn, but like, fundamentally
kids are little information vacuums, and this stuff is wrong
(52:59):
on a fundamental level, and they'll get that. They may
not get it why, but it's not only is it
kind of like off putting to them, but it has
the potential to do harm to them. And this is
you know, we've had versions of this. There's this thing
called elsagate, right where like a year or two ago,
there were suddenly all of these hundreds and hundreds and
hundreds of like what appeared to be AI generated videos
(53:20):
that were like involving Disney characters, Elsa being a big
one from I think Frozen, and they were they were
like singing.
Speaker 3 (53:28):
Real, what are you calling this character?
Speaker 1 (53:31):
Elsa right, Elsa, Elsa, Elsa, el Yeah, I said, Elsa right,
and whatever, okay, whatever. So you've and like a lot
of them would suddenly get like really weirdly sexual or
like fucking get tweaked into this kind of like uncanny,
unsettling almost like Lovecraftian weirdness, and they were
(53:53):
remember this, yeah, and these were they were all crafted
and used kind of the keywords that like when parents,
you know, you're you're you got to cook dinner, you
got to like do you know, clean the house or
some shit. Your kid won't calm down, You put them
in front of YouTube and you put on a children's
video and the algorithm eventually starts spinning, and so parents
start realizing like, well, I set them on. You know,
this video that was like a frozen song and now
(54:16):
there's like this weird pornographic thing that they're watching, Like,
where the fuck did this come from? A weaponized AutoPlay? Yeah, exactly.
And I don't think there's still a solid understanding of
where all these came from. There's a couple of different theories,
but it's really unsettling because part of the problem is,
like what does it do to kids to have that
kind of shit in front of their eyes? Like, cause
(54:38):
we're not even talking about like the danger of like
what if a kid reads an adult book or you know,
stumbles in when there's porn on the TV. You know,
this is a question of like, well, this is like
frightening nonsense that like is being fed into them automatically,
like at a at a scale that was like hundreds
of these videos. One of the channels for this
(54:59):
was like one of the twenty biggest channels on YouTube.
This is fucked up, really problematic shit, and this is
people Like what these grindset people are doing by pumping
as many of these AI books into kindle as they
can is they are creating the They're creating the the
the hard copy, the physical book version of this, where
(55:19):
parents are going to get tricked into buying this shit
and kids are going to be sat down in front of.
Speaker 2 (55:23):
This and and technology always outpaces legislation. Yeah what you're describing, man,
it sounds like this is a very very bad wild
West situation. Uh, it's a it's a Cormac McCarthy level
wild West situation, like the so there is no is
(55:49):
there is there blowback? I mean, children are some of
the most frighteningly intelligent things. Kids know bullshit. Uh that
that's super uncomfortable. There's you know, this reminds me of
one of my favorite lines from the television show Community
(56:10):
This, Robert, I posit to you. Sophie, I posit to you,
fellow Behind the Bastards folks: this shit is both
silly and evil, like a candy cigarette.
Speaker 1 (56:22):
Yeah, yeah, exactly, It's like the candy cigarette of literature, right. Yeah.
And it's one of those like one of the things
I found interesting. So when I was again watching a
bunch of these videos, you become aware of, like what
the problems for these AI hustlers are, like, what the
things they have to overcome is And one of the
problems they face is that like every single book generated
(56:42):
by the AIs they're using sets off plagiarism detectors, right,
and like Amazon uses I think Amazon uses, I believe
they use a plagiarism detector. I've asked them for specifics
on this again I haven't heard back. But so one
of the first things they'll do is they'll use something
like Grammarly to like run the text that they've got
through a plagiarism detector and it will inevitably tell them
(57:03):
significant plagiarism found. But there's a solution they've got a
way to solve for this, which is an app called QuillBot. Now,
if you go online and look up QuillBot, it's billed
by its creators as an online paraphrasing tool. So what
QuillBot does is you feed it text and it goes
through that text and it replaces adjectives and verbs in
the text with synonyms. So you know, if the original
(57:26):
text has the word courageous, it'll replace it with brave,
or if it's got the word resided, it'll replace it
with lived. Stuff like that. So you feed this
franken-text that you got from the AI into QuillBot and
then QuillBot does kind of a find and replace with
a handful of the words in there, and then suddenly
you've got something that won't trigger Amazon's plagiarism sensors and
that you didn't have to actually edit yourself. Right, you
(57:48):
still don't have to do any real work here. You
just let another robot mix it up a little bit more. Yeah. Now,
once you've got kind of passable text, the next step
for these grifters is to create image prompts for all
of the illustrations in the book. And again there's not
room for creativity here or for any kind of like
human influence. For each page, they just ask chat gpt
(58:12):
to generate descriptions of the images needed for that text,
and then they feed that text into Midjourney
or Leonardo, which is another AI image generator, and use it
to produce something like this. And this is from another
one of these. This is kind of like one of
the better case scenario looking illustrations and it's from like
one of these fucking AI book guides that I found
(58:34):
on YouTube, and it's you know, it's not remarkable, right,
Like it's I'll read you the text. This is from
the chapter one setting off in an adventure. This is
the whole chapter. Once upon a time, there was a
young girl named Sarah who lived in a small village
surrounded by rolling hills and lush forests. Sarah was an
adventurous spirit and love to explore the great outdoors. One day,
she heard a rumor that a long lost treasure was
(58:55):
hidden somewhere in the hills. The treasure was said to
be a great wealth of gold and precious gems, and
whoever found it would be incredibly rich. Sarah was determined
to find the treasure and put all her focus on
the task at hand, because the message here is that
you need to focus in order to achieve your goals.
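To make the QuillBot step from a few minutes ago concrete: the synonym swap is basically a dictionary lookup over the text. A toy version, where the tiny hand-made synonym table is mine and the real product is obviously more involved:

```python
# Toy version of the synonym-swap "paraphrasing" described earlier.
# A tiny hand-made synonym table stands in for whatever QuillBot actually
# uses; the point is only to show why the output no longer matches the
# original text word-for-word, dodging naive plagiarism detection.

SYNONYMS = {
    "courageous": "brave",
    "resided": "lived",
    "intrigued": "curious",
}

def paraphrase(text):
    words = []
    for word in text.split():
        # strip trailing punctuation so "resided." still matches
        core = word.rstrip(".,!?")
        tail = word[len(core):]
        words.append(SYNONYMS.get(core.lower(), core) + tail)
    return " ".join(words)

out = paraphrase("The courageous girl resided in a village.")
```

Nothing about the story changes; only enough surface vocabulary is shuffled that a string-matching detector stops flagging it.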
Speaker 2 (59:10):
Oh okay, yeah.
Speaker 1 (59:12):
And it's the art that accompanies it isn't bad. It
looks it kind of at first glance, you might assume
a human drew it. You can kind of tell, like,
for one thing, the perspective on the backpack on her
back is all fucked up. It's not like actually quite right.
It might be that her hand is in a pocket, but
she might actually just not have a hand. But like,
it doesn't look bad. Especially again, you're like an overworked
(59:33):
parent looking for a book.
Speaker 3 (59:35):
What you're saying.
Speaker 1 (59:36):
Yeah, yeah. If you're an if you're just like a
parent skimming through these, maybe you you don't catch this
right as like something problematic. You just see like, all right,
well that one works and by the standards of AI
children's book art this this one looks pretty good. And
I should say here if you want to look at
this stuff yourself. The text version of this article, of
the whole article that we're using as the script for this,
(59:59):
will be up as soon as the episodes drop on
my Substack, Shatter Zone. Just type Shatter Zone Substack into Google.
It'll be the first article that pops up. If you
click on that, it's free to you. You don't have
to give me your email or any of that fucking bullshit,
and you can see all of the images and find
links to everything if you want to, like track back
to the work that I did here. So that's kind
(01:00:21):
of the best case scenario for how these AI kids
books work. Look, the worst case scenario is shown in
this video, which has about four hundred and twelve thousand
views by Paul Marls. Now. Prior to the AI explosion,
Paul he was teaching. He had a bunch of videos
teaching people how to like con other folks through Amazon
KDP kind of the old fashioned way by like hiring
(01:00:43):
ghostwriters and shit. His videos had titles like six hundred
and thirty six dollars from thirty to sixty minutes work
exclamation point, exclamation point, make money online with KDP low
content books. This is like what he's advertising is like,
these are low content books. You don't have to put anything into them.
It's the laziest thing ever. But you can make this
(01:01:04):
weirdly specific amount of money if you follow my guides
and Paul Marls, if you look him up, he has
dozens of videos for sale on Amazon. I have reached
out to Paul. One of the ways you can
tell this guy's sketchy is he has a website that
like offers all of his different sort of like you know,
guidebooks and videos and stuff for people who want to
follow him in this grift, but it has no contact info.
(01:01:28):
I did, like send him a message on his Facebook,
which I don't think he will check, but you know,
I did reach out Paul. I don't like Paul. I
think he has a face in need of a fist here.
I'm not using that kind of language in the article
that I've written because I want to seem like an
objective journalist. But as you'll see when we play a
clip from this guy's video, he's pretty immediately off putting. Now,
(01:01:51):
Paul ups the ante on laziness here by cutting out
the text entirely and instead generating a children's coloring book
that he hopes will seo well because it's about dinosaurs
and turn a profit. It's even lazier, right, that's what
Paul is is like, this is already a lazy grift.
Paul's gonna make it lazier. Fuck all that text. We
don't need a story, like, we don't need to use
(01:02:13):
chat GPT. We'll just we'll make a coloring book super easy.
And you may think a coloring book would be less
problematic than some of this other shit we've seen. I
assure you it's not. So he enters this very half
assed prompt where he misspells the name of a dinosaur.
The prompt is just like coloring page for kids, Tyrannosaurus
(01:02:34):
rex in a jungle, and he doesn't spell it right.
And I'm gonna have Sophie play this guy describing how
he generates the images for his coloring book, and maybe
then you'll hate him as much as I do.
Speaker 5 (01:02:47):
And what we want our image to be of. Now
I'm going to do a dinosaur coloring book. That doesn't
mean to say you need to go out and do
a dinosaur coloring book. You can use this method to
create any number of different types of coloring book that
you want. So I'm going to put in Turanosaurus rex
in a jungle. The next instruction is going to be
(01:03:10):
what sort of style, and we're going to put in
cartoon style. Then I'm just going to tell it how
I want the lines, and I want this with thick lines,
low detail, and no shading. Now, I put in no shading,
but you'll find that it probably will create some images
with some element of shading. The next is we need
(01:03:31):
to put two minus signs and then ar for aspect ratio.
So we're just going to put in nine eleven and
then we enter and we can see here it's starting
to create the images.
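Assembled the way he describes it, the full prompt would read roughly like this (the dinosaur misspelling is his, and the "two minus signs and then ar" he mentions is Midjourney's aspect-ratio flag, so "nine eleven" would be written as 9:11):

```
Turanosaurus rex in a jungle, cartoon style, thick lines, low detail, no shading --ar 9:11
```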
Speaker 1 (01:03:44):
Okay, so stop it, Sophie, because now I want to talk.
I want to talk about these images. First. I want
to say, you'll note he's like, we don't want there
to be shading, because you know, if there's shading in
a thing that you're supposed to color in, it, like,
fucks it up; you can't, like, color in shading very well.
But he's like, sure there will be shading, but like whatever,
who gives you know, we don't care anything about this stuff. Now,
Sophie is going to show you. These are the images
(01:04:05):
that he generates with that prompt take a good look
at these dinosaurs. Can you can you tell me anything
that's wrong with these Tyrannosaurus rexes.
Speaker 2 (01:04:13):
Oh sure they're not t rexes.
Speaker 1 (01:04:14):
No, they're not at all t rexes. One of them
is a quadruped. It clearly has four legs, but like
a t rex head on this like weird stump of
a body. Another one, the t rex has like human
muscle arms and thumbs. One of them, the
one on the bottom left, like, not only is the
bottom jaw twisted away from the top jaw, but if
(01:04:36):
you notice, its front arm is actually its back leg.
The back of its body is merged with a tree.
And then like there's maybe two other sort of half
formed limbs there and then the bottom right one like
these, I cannot exaggerate to you. Like, go to
the Shatter Zone Substack. Look at these fucking t rexes. These are
(01:04:56):
such fucked up, janky tyrannosauruses. Like, it is offensive
to me, how shitty these t rexes look.
Speaker 3 (01:05:05):
The human hands are very funny.
Speaker 1 (01:05:07):
Yeah, the human hands on the bottom one is pretty great.
I do like how in the top one, like mid
journey's almost done like a rob Leaffield thing, where like
it can't figure out the so so hey, maybe AI
has at least reached Leaffield and levels of like illustrative potential.
Speaker 2 (01:05:29):
They need, they need never-explained belts of ambiguous cartridges
right around the waist.
Speaker 5 (01:05:38):
Yeah.
Speaker 1 (01:05:39):
Also, you know what, that's gonna be
another two iterations.
Speaker 2 (01:05:44):
Yeah, t rex with human hands that would have helped
him out.
Speaker 1 (01:05:50):
Maybe they would have been around, right, you know,
if the t rex had developed the fucking handgun, you know,
they could have fought off those fucking asteroids. Yeah, it's
a you know when you say that, I think about
a much better tyrannosaur drawing. Bill Watterson's classic Tyrannosaurs in
(01:06:10):
F-14s is still one of, like, the highlights of my childhood.
That part of why I'm so offended by this coloring
book is that, like the best parts of my childhood
all involved dinosaurs, and he is just he understands that
that kids are magnetically drawn to dinosaurs, always have been
and always will be forever more, and so that he's
(01:06:30):
grifting off that by providing, like, he just, he doesn't
even care enough to make sure that they look vaguely right,
like any kid. Kids know more about dinosaurs than paleontologists.
Any child is going to point out that these are
fucked up looking t rexes. So Paul's guide after this
kind of goes through how you lay out your coloring
book for publication, how you cobble together a cover, you know,
(01:06:51):
what kind of things you need so that it'll it'll
get through Amazon censors, and they'll let you upload it
to like not just like because it's a coloring book,
obviously you need physical copies and you can if you
upload it right, Amazon will just like print and sell
the books for you, the actual like physical books. And
again this gets to what I worry about is that
like parents who fall for these books are going to
(01:07:12):
be you know, overworked. They're not going to be focusing
on these books as much as their kids are, right,
They're just like, I want to get a dinosaur coloring
book for my kid or whatever they need a coloring book. Oh,
this is what like popped up on Amazon. The cover
looks fine. I am also worried. Particularly, what scares me
is both, like, parents who don't have a lot of time,
they're going for whatever is affordable. And also there's a
(01:07:33):
lot of charities that provide poor kids with free books,
and they do bulk orders from places like Amazon that
might get tricked by this kind of thing. And so
the thing that like the vision I can't get out
of my head is this like frightening ai future in
which like rich kids get to color in proper dinosaurs
from like coloring books that human beings drew, and poor
(01:07:54):
kids grow up thinking stegosaurs had no tail and the
earth used to have like a second moon that looked
like a nipple, which is another... Look at this image here,
Look at this fucking picture that that it drew that
some kids, some kid's gonna get this, some kid's gonna
get this fucking coloring book. Look at this fucking thing
that's not a stegosaurus. I don't even know. It's got
like spikes coming out of, like, actual
(01:08:16):
porcupine quills coming out of, like, the spines on its
back, and, like, it doesn't have a tail, there's like
five legs and the perspective's wrong, and then, like, the
world behind it looks like...
Speaker 2 (01:08:26):
Yeah, yeah, yeah, it's got spikes on the left, uh,
the left hind leg.
Speaker 1 (01:08:37):
Yeah yeah, it's like fucked up. It's like super fucked up.
Speaker 3 (01:08:40):
Like, it's very bad.
Speaker 1 (01:08:42):
Yeah, I hate this shit, and, like, again, that's
what's unsettling. Like there will always be high quality dinosaur
coloring books and storybooks and stuff for kids, and because
they're made by humans, they'll command a premium price. So
like the poor kids grow up thinking that like yeah,
I don't know, you know, I find it fucked up,
(01:09:06):
and maybe it'll be even bleaker than that, and nobody
will get good children's books. But this is kind of
like, what I'm suspecting is at least the
first thing that's going to happen is that like the
kids who start getting shafted with this low quality, unsettling
brain poisoning AI crap are like the kids who don't
have as much money, you know, or whose parents don't
have as much money, whose school districts don't have as
(01:09:27):
much money. Now, a cursory search on Amazon suggests that a
lot of people have followed Paul's advice because there are
a shitload of nearly identical coloring books. So, first off, been,
this is the cover of Paul's book. Here you can
see the dinosaur coloring book. You know, pretty simple, and
that he did pick for the front cover the image
that's like least fucked up looking like that almost looks
(01:09:48):
right, right? But then I found, as I started looking
through Amazon, several near identical copies, all sold under different names,
and you can tell they're all AI copies because like,
so this first one here, uh, the arms are fucked up,
like they're in the position kind of like that one
on the on the left is almost like set up
like a leg and they're both bent down so you
(01:10:10):
don't actually see the claws. You can tell the perspective's
fucked up, though. That's by Jared Mason, or it's not,
because, like, Paul and other people talk about how
you just create fake names to write these books under.
These could all be Paul, they could all be people
following Paul's advice. We don't know. Here's another book where
like what the fuck dinosaur is that supposed to be
(01:10:31):
that is, like... so again, folks, you really
owe it to yourself.
Speaker 3 (01:10:37):
Sorry, what is it? What is the legs situation?
Speaker 1 (01:10:43):
A tripod. Like, it's got, like, the body of a dog,
but like three legs, a t rex head and an
arm that's not an arm like it's a it's an
arm that looks like this. It looks like this animal
was like a Civil War veteran who was gonna say if.
Speaker 2 (01:10:58):
You date it, like this guy's been through some shit,
you know what I mean, Like and he's still uh.
Speaker 1 (01:11:07):
Wow, and it's... it shows how lazy these people
are that, like, anyone seeing that should be like, that's
not a dinosaur. That doesn't look right at all.
Speaker 2 (01:11:14):
What is happening here? Yeah?
Speaker 1 (01:11:17):
Oh good, there's more. And then this one, this
is a... look at how the claws are, like, its
fingers are like as long as its head. It's like
a fucking Salad Fingers the dinosaur. It's so fucking weird.
Speaker 2 (01:11:35):
I like the... yeah, I like the annihilation. I
like the... there's something, there's something very off about
the way the eyes are rolled up. Yeah.
Speaker 1 (01:11:49):
I love it. So despite featuring easily the worst dinosaur
drawing I've ever seen in my life, this coloring book
appears to be selling reasonably well. At the time this
article was written, it was number seventy five in the
teen and young adult drawing category. That's potentially decent money.
That's not low on Amazon now. That is like in
the teen and young adult drawing category. It's not like
(01:12:10):
a massive category, but like that's there are human coloring
books that are selling worse than this, I will guarantee you,
And that's potentially meaningful money for this person. I use
the term person lightly for whoever is, like, shitting this stuff out.
Maybe it's all Paul, maybe it's all people copying him.
But like this is already a problem, is what I'm
(01:12:30):
getting at, Because it's like this isn't like the worst thing,
you know, in a world where yeah, I don't know,
like the Catholic Church exists, fucked up coloring books aren't
the worst thing for little kids. But, like, this is... Seriously,
I don't think it's good. Like, kids pay attention
to stuff like this. Like, the sheer... for
(01:12:53):
one thing, I think the laziness in this reads off
to them. But, like, it's just, it's wrong. Like, you
shouldn't be shotgunning stuff that's wrong to kids because you're
too lazy to like do even the minimum work of
regenerating the images until they're right. I find this unsettling,
and there's a lot more to find unsettling because,
(01:13:14):
like part of what I did while researching this was
kind of a deep dive on literary education theory because
I wanted to have an idea of how some of
the stuff is going to affect children. But that's going
to be all for part one with us today, Ben,
because you know, time to take five. Time to take five,
do a little bit of a breather, and then we'll
(01:13:35):
come back on Thursday and I will finish this deeply
fucked up story. Ben, you got anything to plug?
Speaker 2 (01:13:43):
Yes, please check out and subscribe to Cool Zone Media.
Do your part in supporting accurate depictions of dinosaurs for children.
You can also find me hanging with my ride or dies,
Matt Frederick and Noel Brown on shows like Stuff They
Don't Want You to Know and Ridiculous History. You can
(01:14:06):
find me talking trash about MREs and French military rations
on Twitter, named, in a burst of creativity, at Ben
Bowlin HSW, or on Instagram at Ben Bowlin. That's
that's the whole thing. Yeah, I think that's it, right, No, good.
Speaker 1 (01:14:24):
Oh yeah, excellent. Yeah. I also love a good MRE
and a bad MRE. I've been making my way through
these MREs that Mountain House made, and it's like chili mac,
which is devastating for your stomach. Just anytime you try,
it's a nightmare. It's a nightmare. There's some good ones
that I got a nice bison stew that I think,
(01:14:47):
uh oh yeah, one of the companies I have made. Yeah,
there's some good ones out there. I had some when
I was out hunting for a couple of days in
the Cascades last year. I had this this freeze dried
biscuits and gravy that actually fucked reasonably hard.
Speaker 2 (01:15:01):
Did you uh, did you ever get done with all
that mac and cheese? You said something? Oh no, I
still have a ton of it.
Speaker 1 (01:15:09):
I have a couple of years worth of dried food
at any given time, just in case I have to
hole up in my house, you know, uh, firing wildly
at trespassers, uh, in a The Road-style situation.
Speaker 3 (01:15:21):
Oh okay, yeah, yep, Cormac.
Speaker 4 (01:15:25):
We also would like to plug the newest Cool Zone
Media series Sad Oligarch, hosted by Jake Hanrahan,
available on all the things. Robert, do you want to
give that link for where people can follow your substack
one more time?
Speaker 1 (01:15:41):
Hell yeah, it's Shatter Zone. So just go to shatter
zone dot substack dot com. Uh and you will. You
will find my thing. You don't have to you don't
have to sign up or anything. It's free.
Speaker 2 (01:15:54):
You're good, okay.
Speaker 1 (01:15:57):
By shabam Zo.
Speaker 3 (01:16:01):
Behind the Bastards is a production of Cool Zone Media.
For more from Cool Zone Media, visit our website Coolzonemedia
dot com or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.