Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
The heartbreaking videos and images from the ongoing fires in
Los Angeles are filling up social media. A number of
them are claiming to show the iconic Hollywood Sign on
fire last night.
Speaker 2 (00:22):
Earlier this year, a bunch of videos went viral showing
the Hollywood Sign on fire.
Speaker 3 (00:27):
Literally like the Hollywood Sign, it was burning, it was
burning yesterday, burning like what is going on?
Speaker 4 (00:32):
It looked like the apocalypse.
Speaker 2 (00:35):
They were posted in January when fires were raging across
LA where I live. This was a scary time and
those videos only made things worse.
Speaker 4 (00:45):
These viral images that we see of the Hollywood Sign
on fire actually started to spread on social media as
Los Angeles battles one of the worst fires in
its history.
Speaker 2 (00:56):
There were different versions. Some of the videos just showed
fire on the hill below the letters, and in other
videos the letters themselves were burning.
Speaker 3 (01:04):
When we saw the posts on Instagram, we were devastated.
We lived here all our lives and when we saw
that on Instagram, that broke our hearts. So we wanted
to come and see for ourselves, and we're really
shocked that there's no fires, there's no fires, and
they're faking it, basically.
Speaker 2 (01:20):
So those videos were all fake. They were generated by AI.
If you saw these videos when they first came out,
maybe you already knew they were fake. But what you
might not know is how much money these videos can
make for their creators. And if the online course that
teaches you how to make these videos is to be believed,
you can make over five thousand dollars a month. Videos
(01:42):
like the Hollywood Sign burning are part of a bigger
phenomenon that is probably filling up your feed right now.
There's a word for this: slop.
Speaker 5 (01:51):
I actually don't know where it came from. Like I
had been calling it AI spam, I was writing about
it before it was called slop, and then someone on
Twitter, I think, just started calling it slop, and
I feel like slop is a better term for it.
There's a lot of different types of AI slop at
this point, but basically it is an AI generated image
(02:14):
or video that is designed to go viral on social media.
Speaker 2 (02:19):
Jason Koebler is my friend and former colleague from back
when we both worked at Vice. Since then, he co-founded
a site called 404 Media, and pretty much
right out the gate this stuff turned into one of
his beats. You're one of the first people who was
really documenting AI slop before it really truly blew up.
Speaker 5 (02:39):
Yeah, I mean I'm obsessed with it. I've been obsessed
with it since the get go. I've just found it
to be like very fascinating to cover because I can
kind of like watch in real time as people lose
touch with reality. To be honest with you.
Speaker 2 (02:58):
I don't know if I'm quite as obsessed with with
this stuff as Jason is. But then again, he was
looking at golden Jesus helicopters and I'm here looking at
fake fires. But when I started investigating, I realized that
the stuff that he was seeing and the stuff that
I was watching, were all part of the same machine.
And I managed to talk to one of the people
(03:18):
behind it. From Kaleidoscope and iHeart Podcasts, this
is Kill Switch. I'm Dexter Thomas.
Speaker 6 (04:02):
Buy.
Speaker 2 (04:06):
When did you start writing about AI slop?
Speaker 5 (04:10):
Yeah, it was December twenty twenty three. One of our
readers posted this image of a guy in the United
Kingdom who carves dogs out of wood like with a chainsaw,
and this reader had noticed that there were all these
copycats of this one guy's dog wood carvings. Basically, there
(04:35):
were these Facebook pages that were posting dozens and dozens
of different versions of the original photos that he was uploading.
And so in some images it's like the guy
would have a goatee. In some images, the guy
would look Latino. In some images, the guy would be
a woman. In some images, the dog would be a
(04:55):
German shepherd instead of like, you know, a Golden Retriever.
And it was very obvious that they had been modified
by AI.
Speaker 2 (05:03):
This stuff sounds pretty innocuous, and I guess it was.
Back in twenty twenty three, image generators like
DALL-E and Microsoft's Bing Image Generator were still kind of novel,
and if someone wanted to use that to make variations
on existing images of chainsaw wood carvings, I mean, that's
not really my thing, but hey, if people are into it, whatever.
(05:24):
It was just kind of a.
Speaker 5 (05:26):
Lot at first. This meant a lot of people were
using these AI image generators and they were creating Facebook pages
and then posting I don't know, twenty thirty forty times
a day, and each page would have a theme. So
one of the themes would be like beautiful log cabins.
(05:47):
So a person would generate one hundred log cabins using
these AI image generators and then they would post them on Facebook.
Speaker 2 (05:55):
But as AI tools got better and these Facebook pages
became more viral, the AI slop became more targeted and weirder.
Speaker 5 (06:06):
The moment that this became really mainstream, in my opinion,
was this moment of shrimp Jesus. Shrimp Jesus? Oh,
we're going right into shrimp Jesus? I mean, I think so.
So for a while, there were all of these images
of Jesus, this sort of stereotypical white Jesus, long hair,
(06:28):
and people were just posting hundreds of images of Jesus
in different situations and then the caption would say please
give me an amen. They would be asking people to
comment on this, which would boost in the algorithm, and
none of it was real. It was all AI generated.
There was a lot of images of Jesus sand sculptures,
and then the caption would be like only true believers
(06:51):
will love my art or something like that. And then
there was this one image where Jesus was floating above
the ocean and he had the face of Jesus but
the arms of shrimp. It was basically like an arachnid
(07:13):
Jesus with like six arms and they were all made
of shrimp. And it was just like obviously insane, like
an insane image, something that a human I don't think
would probably ever think up on their own. And this
image had millions of likes on Facebook, and there were
a bunch of different versions of it. It just went
incredibly viral and weirdly, like a lot of the comments
(07:35):
were not even about the fact that Jesus was a shrimp.
It was just like, amen, I love God, things like that,
and so, I mean, I don't think people thought it
was real, but it clearly didn't matter to a lot
of people that this was just like an absurd image.
Speaker 2 (07:53):
It sounds like shrimp Jesus was kind of the turning
point for AI slop. I don't want to say the genesis necessarily,
I don't want to get too biblical with this, but
when I read your article on shrimp Jesus, I think
that is when I really truly realized, Okay, something really
really strange is happening here.
Speaker 5 (08:15):
Yeah, it was very interesting that this was happening on Facebook first. Like,
Facebook is a really old social network at this point,
it's not cool anymore. Like, I don't know a lot
of people who use Facebook in the same way that
they might have in like two thousand and eight, and
the people that I do know who use it are
(08:36):
generally older and therefore, I mean, perhaps more susceptible to
fake AI images. And then I wrote that article, and
there was also some different meme accounts on Instagram and
on Twitter who took screenshots of shrimp Jesus, and they
(08:56):
started saying Facebook is cooked. Facebook is cooked became a meme
meaning look at what's happening over on Facebook. All the
old people are getting tricked into liking this absurd stuff.
And so that is a moment where AI slop escaped Facebook.
Speaker 2 (09:18):
Shrimp Jesus went viral in April of twenty twenty four.
At the time, I remember thinking it was pretty funny.
I actually interviewed Jason about it and we basically spent
half an hour just laughing. But now, just a year later,
AI slop isn't only on Facebook tricking our parents. It's everywhere,
and it's not just bizarre or obviously fake stuff like
(09:40):
shrimp Jesus anymore. Some of those fake la fire videos
looked real, and they fooled a lot of people, so.
Speaker 3 (09:48):
We came right here just to check out the fires,
and there's nothing burning in Hollywood.
Speaker 5 (09:53):
The sign, like, it was shown on TV last night.
Speaker 3 (09:56):
You guys should help each other out instead of trying
to create false narratives and trying to promote BS, you know,
help each other out. It's not, it's not, it's
not right.
Speaker 2 (10:09):
This is the latest iteration of AI content, more realistic,
more targeted, and tied to real world events.
Speaker 5 (10:17):
That's the scary thing about it, right, is that when
I first started writing about this, there was almost nothing
that was political or tied to the news in any way.
It was almost all like, look at this crazy food,
look at this amazing futuristic sci fi scenescape, look at
(10:39):
shrimp Jesus, look at this sand sculpture. And then what
has happened is the people making this stuff have realized
that they can get more attention if they start tying
it to the news in some way.
Speaker 2 (10:56):
So one night, as the fires were still burning in La,
I opened up Instagram, and this was kind of everyone's
way of communicating, and it was also kind of a lifeline.
When you look at someone's stories, you never knew if
it was going to be them saying, hey, I'm okay,
I'm safe, or if it was going to be them
showing their house burning down. So I'm scrolling through and
(11:16):
I see this video of animals that are in a
burning forest and there's firefighters saving the baby animals. There's
a firefighter holding these two baby bear cubs and carrying
them out of the flames. Pretty quickly, I realized, wait
a second, this is fake. And then I look at
the view count. It's in the millions and there's thousands
(11:37):
of comments. I get curious and I look at the
username future rider US. That's kind of weird. So I
look at the post history, and most of the posts
were of these AI motorcycles that were riding through futuristic landscapes,
so the name makes sense for those. But I scroll
(11:58):
up and I see that as soon as the
fires started, they posted a fake Hollywood Sign burning video
that got them a million views, and that was their
biggest success to that point, and they just kept posting
fire content after that.
Speaker 5 (12:14):
The thing that's happened alongside of that is the AI
has gotten a lot better. The AI has gotten a
lot more convincing.
Speaker 2 (12:20):
Like I said, I was able to tell that this was fake,
but it took me a second. And the comments are
kind of mixed. There's some people who are completely unaware
that this stuff is fake and saying things like, Wow,
look at these brave firefighters. I'm so glad they're helping
the animals. But there's other people who are posting angry
comments like how dare you make fake content about this disaster?
Speaker 5 (12:44):
And so now you have a lot of people who
are making stuff that is designed to hit the news
cycle in some way, to upset people, to enrage people,
or to trick them. It's not just on Facebook.
It's on YouTube, it's on TikTok, It's on Instagram. You know,
I've even seen it on Pinterest and LinkedIn. So it's
(13:06):
really become a strategy for people who want to make
money on the internet at this point.
Speaker 2 (13:12):
Yeah, it's weird because it seemed like everything trickled down
to Facebook in the past few years. Good features from
somewhere else eventually make it over to Facebook. Information that
first happens on Twitter, because it's really real time, eventually
makes it over to Facebook. Now it's like this really
weird thing that has started on Facebook has spread out
(13:34):
to the rest of the world.
Speaker 5 (13:35):
It's the first time Facebook's relevant in like a decade.
Speaker 2 (13:40):
This is the innovation that's happening on Facebook. No joke,
it is AI slop. But when was it that you first
started seeing stuff that was realistic and was directly tied
to actual events that were happening in the real world.
Speaker 5 (13:55):
There was this moment, in Israel's invasion of Gaza,
when they went into Rafah, and there was an
AI generated image called All Eyes on Rafah.
Speaker 2 (14:11):
Can you describe the image?
Speaker 5 (14:12):
Yeah. So it was huge font and it said All
Eyes on Rafah. There was like a bunch of tents
and what looked to be like a refugee camp. And
this was shared like tens of millions of times on Instagram.
(14:33):
A lot of celebrities shared it. There was a bunch
of people writing about it. So I started looking for
where it came from, and I found this group on
Facebook for AI creators in Malaysia and they were testing
all of these different versions of AI generated images about
(14:55):
the war in Gaza, and the All Eyes on Rafah
image came from that Facebook group. So that was the
first time that I ever saw AI spam intersect directly
with the news, and it was interesting because in that
group they were talking about the All Eyes on Rafah
image and how viral it went, and were like, maybe
(15:19):
we can reverse engineer this, maybe we can do this
over and over and over again. A month after that,
I figured out how people were making money with these
images and why and the whole strategy behind it.
Speaker 2 (15:33):
Who's behind these posts and how do they make money.
We'll get into that after the break. One of the
disconnects that I've been seeing just in my own reporting
(15:54):
is that I think there is an awareness growing among
some people that AI generated images are a thing, and
that it's not always obvious if something is AI. But
I think a lot of people don't necessarily realize why
this is being made, what motivation there might be.
Speaker 5 (16:16):
For a very long time, one thing that I, and
also the experts I spoke to, couldn't figure
out was: why are they doing this? Like what
is the scam here? Because there was no obvious way
that it was being monetized. For months, I thought that
people were doing this and trying to make money off
(16:36):
of Facebook's platform in some way. I thought they were
trying to hack people. I thought they were trying to
push them off the website to steal their credit card
information or something like that. But all along they were
just getting paid directly by Facebook because these images were
going viral.
Speaker 2 (16:54):
These creators don't need to scam anybody. Having a viral
video is enough to make a lot of money.
Speaker 5 (17:00):
Facebook has this thing called the Creator Bonus Program, and
it basically pays people a sliver of ad revenue for
posts that get engagement.
Speaker 2 (17:14):
How much money can you make? Meta doesn't publish those numbers,
but I asked a few influencers and recently the rate
seemed to be around one hundred to one hundred and
twenty dollars per million views.
Speaker 5 (17:25):
It's become a job to be totally honest with you.
It's pretty crazy because there's like this one example
of a guy who quit his job in India. He
was like working in the medical industry and he wasn't
making a lot of money and he was supporting his
family and he was like, I used to work like
(17:47):
sixteen hour days and then I learned about spamming Facebook.
And he posted this one image that was a train
made out of leaves. It's like just a passenger train,
but the train was made entirely out of leaves. It
was an AI image and he made four hundred dollars
from that image. And he was like, I was only
(18:10):
making two hundred dollars a month at my other job. Wow,
And so I just do this now.
Speaker 2 (18:17):
Wow. And there's the incentive. You can make pretty serious
money here, which brings us back to future rider us.
It's not really possible to know exactly how much money
they're making, but if we go back to that estimate
of about one hundred bucks per million views, let's compare
that to their most successful day. In a roughly twenty
(18:38):
four hour stretch, starting on January tenth, they posted seven
videos which collectively got about ninety four million views. Theoretically,
that could work out to nine thousand, four hundred dollars
in one day. And that's not the only way the
account makes money. They also sell a guide so that
you yourself can start your own AI slop business for
(19:01):
just nineteen ninety nine.
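For what it's worth, that back-of-envelope math is easy to check. Here's a minimal sketch in Python, assuming the informal rate of roughly one hundred dollars per million views that the influencers quoted earlier; the function name and rate are illustrative, not published Meta figures.

```python
# Rough back-of-envelope estimate of the payout described above.
# The $100-per-million-views rate is an informal figure reported by
# influencers, not a published Meta number -- treat it as an assumption.

def estimated_payout(views: int, rate_per_million: float = 100.0) -> float:
    """Estimate creator-bonus earnings for a given view count."""
    return views / 1_000_000 * rate_per_million

# Seven videos totaling roughly ninety-four million views in one day:
daily = estimated_payout(94_000_000)
print(f"${daily:,.0f}")  # prints $9,400
```

At the higher end of the quoted range ($120 per million), the same day would come out closer to $11,280, which is why the "five thousand dollars a month" pitch in the course ad isn't implausible on its face.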
Speaker 6 (19:02):
Want to grow your Instagram TikTok YouTube fast, get thousands
of followers, create viral reels, and earn five thousand dollars
plus every month with the Viral Reels Guide it's easier
than ever. Step by step instructions, proven strategies, the perfect
for beginners, no experience needed, Ready to start. Grab the
(19:24):
guide today for just nineteen dollars and ninety nine cents.
Speaker 2 (19:28):
This is a key part of the new AI slop economy.
Speaker 5 (19:32):
So there is this entire network of influencers. Almost all
of them are in India, Vietnam, Pakistan, a couple in
a few African countries and Central America, but for the
most part they are in developing countries who have YouTube channels,
(19:53):
who are like, here is how you can make AI spam,
and here's how to put it on Facebook, and here's
how to monetize it.
Speaker 2 (20:01):
So as I went further down this Future writer us
rabbit hole, I got so curious that I realized I
needed to know what was in this course they were selling,
So for journalism's sake, I bought it. It's a ZIP
file with two files. The first one is this really
short PDF which is pretty clearly ChatGPT generated, and
(20:23):
the summary of it is it basically gives you these instructions.
First look online for what's already trending at the moment. Second,
type that into Sora dot Ai to generate a video
about that thing, add some music, and three, post the
video online. Then just repeat that multiple times a day.
The second file is just a seven minute iPhone screen
(20:44):
recording of them generating a prompt and then uploading that online.
Speaker 5 (20:49):
It's almost like a pyramid scheme, where it's like, well,
this one person made a lot of money and here's
a guide to doing it, so I'm gonna try as well.
You just have like tens of thousands of people who
are trying this, and the end result is the entire
platform gets spammed.
Speaker 2 (21:05):
So who's the audience for this stuff?
Speaker 5 (21:09):
I mean, there's definitely some strategic thinking about will people
in the United States, the United Kingdom or Canada care
about this stuff? That's the strategy. And I know that's
the strategy because a lot of the videos about how
to make this stuff talk about here's what people in
the United States care about. Like I watched one video
that basically was like Americans love babies and they love pets,
(21:32):
and so make AI images about babies and pets. I've
seen guides that are like, here's what you need to
know about Jesus because you may be Hindu and you
don't know so like, if you're making AI spam about Jesus,
here are words to type in that will get you Jesus.
Because a lot of Americans are Christian. You want people
(21:54):
in the United States and Canada to look at these
because the way that online advertising works is if you
are in a richer country, the ad rates are higher,
page views from developed countries.
Speaker 7 (22:09):
Are worth more.
Speaker 5 (22:11):
Yeah, but I think the audience is an algorithm. I
think that the audience is like what works well
in the algorithm, because the goal is not to create
an amazing image or an amazing video that people are
(22:32):
going to resonate with. The goal is to make people
linger on any given image or video long enough to
send a signal to the algorithm that this is something
that a human being spent time looking at, spent time
engaging with. You know, if it's something totally absurd, you're
going to get people in the comment saying, hey, this
(22:54):
isn't real, and then you're going to have fights back
and forth, and all of those signals send to the
algorithm that this is something that's worth surfacing in someone
else's feed. And so I really don't think that the
audience for this is real human beings.
Speaker 2 (23:13):
You might have heard that last bit and thought of
something called the dead Internet theory. It's well, you could
kind of call it a conspiracy theory, but basically it
refers to the idea that the Internet has so many
bots and so much algorithmically generated content that human interaction is actually
a minority of the traffic online. The majority of traffic
(23:33):
is just bots spamming each other back and forth.
Speaker 5 (23:36):
And the way that you would apply this to say,
shrimp Jesus, for example, is like, well, there's a bot
that posted this AI image, and then maybe all the
people liking it are bots, and maybe all the people
commenting on it are bots, and therefore none of it
is real.
Speaker 2 (23:56):
But Jason has a slightly different theory.
Speaker 5 (23:58):
I think that's too reductive. I just I don't think
that that's what's happening, and I know that's not what's happening,
because there are human beings in the loop. I'm certain
of it. Like they're human beings who are prompting these
AI images and are posting them. They may not be
monitoring the accounts very closely, but they're making it and
(24:21):
they're posting these images. And then there are definitely bots
who are liking and commenting on some of these images.
But these images and these photos and these you know,
videos are showing up in people's feeds. So I've called
it the zombie Internet, where it's like a mix of human
(24:43):
beings and bots, where you have human beings arguing in
the comments with bots. You have human beings in the
comments arguing with other human beings. You have bots in
the comments arguing with other bots. And to me,
that's even worse than dead Internet, where everyone is a bot,
because you have all of these real humans who can't
(25:04):
tell that they're commenting on an AI generated image and
they're arguing about it, and they're spending tons of time,
like tons of like you know, human hours, engaging with it.
Speaker 2 (25:20):
Yeah, like we humans are moving amongst the dead, and
you end up with something that's only kind of half
alive and half dead and somewhere in the middle there. So,
while humans might not be the target audience for these
images and videos, real people are being fed this content
and you'd think that that would piss them off, but
we're starting to find out that that's not necessarily the case.
Speaker 5 (25:43):
So there was Hurricane Helene that hit, you know,
North Carolina, Georgia, the Southeast, and in the aftermath
there were a lot of horrible images coming out of it,
as comes out of any natural disaster, and there was
an AI generated image of a three year old girl
crying and I think she was like sitting on a
(26:05):
raft or you know, there was like a flood around her.
And this image is very clearly AI generated, but it
went really viral and it was shared by a few
Republican politicians and they were sort of talking about how
this is what FEMA and Joe Biden's poor response to
(26:27):
the hurricane has done to people, right and there was
a moment where everyone was like, this is fake, you're stupid.
And the response from the politicians was like, I don't
care if it's real or not. Something like this is
happening there.
Speaker 2 (26:43):
Yeah, I mean, I'll even read it here. This is
Amy Kremer, national committeewoman for the Republican National Committee, who said,
and I'm reading this verbatim: There are people going through much
worse than what is shown in this pic. So I'm
leaving it because it is emblematic of the trauma and
pain people are living through right now. That is something
(27:03):
that I've seen a lot when I look at the
comment sections of fake AI generated images that are related
to something that's actually happening in the real world, you'll
see people in the comments who were, I think, in
their own way, trying to make the Internet a better
place by telling people, hey, this is fake and trying
to educate people. And then some of those responses though,
(27:25):
are well, I don't care if this is fake. It
doesn't matter because I'm sure something like this is happening.
This is just a depiction of it. Yeah, that I
find really interesting.
Speaker 5 (27:39):
It's really interesting, and I'll tell you like I used
to be more optimistic that social media can be fixed
and like our information ecosystems can be fixed and things
like that, and a lot of my first articles about
this were about people can't tell that they're not real,
(28:02):
and therefore it's bad that people can't tell that it's
not real. And now it's a mix of people not
being able to tell that these things are not real
and people not caring that they're not real. And the
second one is almost worse, where you know it's fake,
but you share it anyway because it captures some sort
of vibe, like if it verifies their worldview, then it's
(28:26):
useful to them in some way. Making it very unclear
what is real and what is fake is part of
the point of that entire project, where the truth is unknowable.
Speaker 2 (28:39):
What are social media platforms doing about this? And is
there anything that we can do about it? That's after
the break, So this is where I have to ask
about what the platforms are doing, because this sounds like
(29:03):
it would make the experience really not fun. You get
on Facebook, you get on Instagram, you get on TikTok,
and everything is fake. Platforms can't possibly want this. So
you've talked to platforms, what's been the response of platforms, especially
like Facebook?
Speaker 5 (29:21):
Yeah, they don't care, like they like it, and I
know that they like it because Mark Zuckerberg has talked
about it in quarterly earnings reports, like Q three of last year.
Speaker 7 (29:35):
Another part that I haven't talked about quite as much
yet is the opportunity for AI to help people create content.
And I think we're going to add a whole new
category of content, which is AI generated or AI summarized content.
And I think that that's going to be just very
exciting for Facebook and Instagram and maybe Threads
(30:01):
or other kind of feed experiences over time, and.
Speaker 5 (30:04):
I've also talked to Facebook comms people and said, like,
do you want this type of stuff on your platform?
Are you going to delete it? And you know, they
will delete some of the really grotesque things if it
violates other parts of their content policies, but they will
not delete anything just because it is AI generated or
(30:25):
because it's spam. And at the same time, Meta is
developing its own artificial intelligence. You can make AI slop
for lack of a better term, using Meta's own tools
and then post it to Facebook. And then most recently
they said, you know, we're going to hopefully create
(30:47):
tools that will allow users to make their own AI
generated profiles of fake people. And we imagine a future
where like a lot of the content on these platforms
is generated by AI.
Speaker 2 (31:03):
Jason, why? Like, this seems bad. All this
stuff seems bad.
Speaker 5 (31:08):
Yeah, I've tried to figure out exactly why this is happening,
Like I've tried to put myself in Mark Zuckerberg's shoes
and be like, Okay, what is like the bullish argument here?
And so one, it's like they're spending billions and billions
of dollars on AI data centers, so they need to
push this on people because they think that artificial intelligence
is the future of work, it's the future of the Internet,
(31:31):
it's the future of humanity, and so they are desperate
to find use cases for this in some way. The
other thing that I've been thinking about is that the
way that all of Meta's platforms work is it tries
to learn as much about you as possible so that
you stay on their platforms as long as possible so
(31:51):
that they can deliver targeted ads to you. And right
now they have billions of people posting on their platforms
all sorts of different types of content, and then they
need to rely on their algorithm to categorize that content
in some way and deliver it to people who they
think will like it. And what I think is happening
(32:12):
here is they want to use artificial intelligence to create
hyper specific types of content so that you get on
their platforms and you don't stop scrolling. It's like, if
you are really into, like a specific type of sports
car and you only want to see like amazing videos
(32:34):
of that sports car driving around, there might not be
human beings creating enough of that content to like satisfy
you and keep you on that platform. Long enough, Whereas
with artificial intelligence, they can just make millions of different
variations of the specific thing that you're into, feed it
to you over and over and over again, and then
(32:56):
you know, target you even more closely with ads
and keep you on the platform longer. They want to
trap people into like even more specific algorithmic silos where
hyper specific, artificially intelligent content is fed to you endlessly.
Speaker 2 (33:14):
Which brings us back to our guy or gal. I
actually don't know, future rider us. I'd seen all these
angry comments on their posts, and I decided I just
needed to see what the person behind the account was thinking,
so I DMed them. They told me they were Russian,
and at first they were mostly bragging about how many
views they were getting. But when I pointed out that
(33:36):
a lot of people were either angry about their posts
or had no idea that it was fake, they started
getting defensive. Future rider us said that they didn't see
the problem because they'd added an AI tag to their videos,
and well, they have a point. Instagram does have a
feature that allows uploaders to voluntarily tag their posts as
AI generated. The trouble is that this label doesn't show
(34:00):
up when you're watching the video normally. You only see
it if you look at the bottom, where there's a
See more tag, and you tap that, and even then
space is prioritized for the song title, so sometimes that
tag is pushed off the screen and all it says
is AI info in small text in the bottom right,
and if you tap that, it shows you some more
(34:22):
information AI info, not AI warning, not AI caution, just
AI info. Why would you ever tap that? Meta has
a page on their site that makes a big deal
about the introduction of this tag, and they primarily show
what it looks like in the grid view. The issue
(34:43):
is that it's even more imperceptible there. So when you
first look at a post in the grid, you see
the location tag, then you see the music title, and
that scrolls into view for a few moments. Only
after that does the text AI info appear. By then
you're watching the video; you're not looking at tiny text
scrolling in the upper corner of your screen. And also,
(35:06):
Meta never notified users that they were rolling this feature out.
Why would anyone know that this exists? And by the way,
I did reach out to Meta when I was first
reporting on this, and I asked them about their policy
on AI on their platform, specifically, if there's any obligation
for a user to more clearly label a post as
AI other than that small badge, or why the desktop
(35:28):
doesn't show the badge at all. They never responded, and
this is where I have to kind of agree with
future rider us. They said that if people don't notice
the interface's AI tag, it's not the poster's fault, it's Instagram's,
and it's the platform's responsibility to make those tags bigger
or more noticeable. That doesn't seem to be happening, and
(35:50):
since both the AI slop posters and the platforms are
making money, there's no real incentive for anyone to stop.
But it does get weird when this stuff is happening
in a place that you live. You've been reporting on
AI slop a lot, and I'm sure that when you
saw the Hollywood Sign on fire thing, you knew exactly what
(36:12):
that was. Was there anything about the AI-generated slop
that was coming from the LA fires that was surprising
to you?
Speaker 5 (36:21):
I wasn't surprised that it happened, like I wasn't surprised
that people were making slop about it. I was surprised
that it upset me, like I was surprised, as someone
who lives in Los Angeles, who knows people who lost
their homes, who saw the fires like you know, the
first day as they were growing. I was surprised that
(36:44):
I was upset that this was happening and that people
were making money off of it. And I think I
was also upset because there were some really brave journalists
from the LA Times, New York Times, a lot of
local spots who were going there taking photos, taking videos,
risking their lives to share this stuff. And then many
(37:08):
of the images and videos that were going really viral
were just like the Hollywood Sign is on fire. And
I was surprised because it was like the first time
that AI slop had impacted anything that had any personal meaning
to me, and I found it to be like pretty upsetting.
Speaker 2 (37:24):
It's interesting that you say that about people risking their
lives because the creator, when people started calling them out
a lot on their most viral reel, they posted a
response and they say, I'm Marita here. In this video,
I aimed to shed light on the reality of what's happening.
The problems are very real. Animals are dying, homes are
(37:47):
being destroyed, and firefighters are risking their lives to save others.
They don't have the time to produce visually stunning and
powerful footage to raise awareness about these issues. That's why
I took the initiative to create something that could help
people see and truly think about these tragedies. So basically,
people are risking their lives, they don't have time to
make these really well produced things, so I made this
(38:09):
for you. Which, of course, all breaks down when,
right after the fires stop being in the news as much,
they go on to making completely unrelated AI-generated content.
Speaker 5 (38:21):
I find it to be offensive, more or less. It's
just, like, I think it's kind of tasteless. I
don't think that that matters anymore on the Internet. But
you know, there were plenty of images coming out of
the LA fires, and we don't need an unlimited, infinite
(38:42):
visual gallery of everything that is happening. And I think
that's one of the biggest problems with AI slop more generally:
you can find any news that you want,
anything that confirms your worldview, because of
social media. So it's like, if you think that the
(39:04):
LA fires were an act of God punishing gay people
in Los Angeles, which I've seen AI videos where that's
what they're about, Like God is striking back against Hollywood's
gay people. There's a video for you. If you think
that it was like a space laser, there's a video
for you. Like if you think that it was just
(39:24):
like climate change, there's a video for you. If you
only care about the animals that lost their homes, like,
there's a video for you.
Speaker 2 (39:32):
Yeah, it just feels like something we're gonna see more
and more. You know, every time there's some new advancement,
some new engine has dropped, some new AI technology
has dropped, people look at it, and the
really excited people will post, oh my gosh, look at
this, and just think what we're able to accomplish today.
But this is the worst it'll ever be. What does
(39:55):
that mean for the future of AI slop and the
future of how we're going to experience reality on the internet.
Speaker 5 (40:05):
Yeah, So two things. One that's absolutely correct. It's like
I've watched this stuff evolve in real time, and AI
generated videos especially have gotten far more realistic in the
last month and way way way better than they were
a year ago. And two, people were saying, like,
we can use artificial intelligence to improve special effects for Hollywood,
(40:31):
to improve productivity for you know, writers or whatever like that.
All the productivity gains that are going to happen in
sort of like a best case use of artificial intelligence,
that very well might be true. But alongside that,
you're going to have so many more people using these
tools to spam the Internet, to abuse people, to make
(40:55):
deep fakes of you know, celebrities and people that they
know in high school and things like that. And when
you're just like using the Internet as a consumer like that,
that's a lot of what you're going to experience because
there's a lot more people trying to make a few
bucks on Facebook than there are Hollywood studios that are
(41:15):
trying to make like a triple A film.
Speaker 2 (41:19):
So one last thing before I let Jason go, I
asked if he could read me this one paragraph from
his shrimp Jesus article, and hopefully it'll make sense why I asked.
Speaker 5 (41:29):
There are AI-generated pages full of AI-deformed women,
breastfeeding tiny cows, celebrities with amputations that they do not
have in real life. Jesus as a shrimp, Jesus as
a collection of Fanta bottles, Jesus as a sand sculpture, Jesus
as a series of ramen noodles, Jesus as a shrimp
(41:49):
mixed with Sprite bottles and ramen noodles, Jesus made of
plastic bottles and posing with large-breasted AI-generated female soldiers,
Jesus on a plane with AI-generated sexy female flight attendants,
giant gold Jesus being evacuated from a river, golden helicopter
Jesus, banana Jesus, coffee Jesus, goldfish Jesus, rice Jesus, any
(42:11):
number of AI generated female soldiers on a page called
beautiful Military, a page called everything skull, which is exactly
what it sounds like. Malnourished dogs, indigenous identity pages, beautiful landscapes,
flower arrangements, weird cakes, et cetera. I should write like
that more often, where it's just like, here's a bunch
(42:33):
of shit I saw.
Speaker 2 (42:36):
I mean, just the breadth and the absolutely unhinged quality
of it. I think it gives a wild snapshot
of where we were. What was this, last year? That
was the state of AI back then: slop was
kind of funny. And now, not that long after this,
(43:01):
we're now in the zone where AI slop is being used
to make people afraid. It's being used to make people
think that things are happening that are obviously not happening.
And this is affecting people who otherwise think that they're
very smart and informed and engaged and knowledgeable about the world.
So we've made a jump, a really quick jump, from ha ha ha,
(43:25):
isn't this funny, all of us smart nerds on the
internet get to laugh at our parents' generation, to wait
a second, this is affecting our perception
of a literal natural disaster that happened where you and
I live.
Speaker 5 (43:43):
Yeah, I mean almost everything is trying to scare people.
It's trying to talk about the news, it's trying to
confuse people. And yeah, there's definitely a part of me
where I'm like, can we go back to Jesus with
the hot flight attendants? Like, what are we doing here?
Speaker 2 (44:03):
And here we are?
Speaker 5 (44:05):
Yeah, And like I feel like I can still generally
tell what's AI generated and what's not, but I'm using
like way more of my brain power like to try
to decipher what's real and what's not because it is
getting way better.
Speaker 2 (44:21):
Thank you so much for listening to Kill Switch, and
let us know what you think. If there's something you
want us to cover, let us know that too. You
can hit us at killswitch at kaleidoscope dot NYC,
or you can find me at dexdigi on the
Gram or Bluesky if that's more your flavor. And
also if you can leave us a review, it helps
(44:42):
other people find the show, which helps us keep doing
our thing. This show is hosted by me Dexter Thomas.
It's produced by Seno Ozaki, Darluk Potts, and Kate Osborne.
The theme song is by Kyle Murdoch, who also mixed
the show. From Kaleidoscope, our executive producers are Oz
Woloshyn, Mangesh Hattikudur, and Kate Osborne. From iHeart,
(45:05):
our executive producers are Katrina Norvell and Nikki Ettore. See
you all in the next one.