Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
I don't want to see us in a position where
like we're just flooded by low quality, inaccurate misinformation. That's
just like brain candy.
Speaker 2 (00:19):
There Are No Girls on the Internet. As a production
of iHeartRadio and Unbossed Creative, I'm Bridget Todd, and this is
There Are No Girls on the Internet. So recently, I've
been deeply fascinated by this noticeable uptick in AI generated
low effort content all over social media. On Facebook, maybe
(00:43):
you saw the picture of the elderly twins who are
celebrating their one hundred and twentieth birthday, posted alongside the
caption prompting you to leave them a little birthday love,
or that bizarre image of Jesus made entirely out of
shrimp asking for an amen. Now, on Facebook, it does
tend to trend a little bit more saccharine. But over
(01:05):
on TikTok, creators are flooding the platform with low effort
videos that push truly ridiculous conspiracy theories, things like the
government is holding a vampire hostage. And not only that,
but because of the way TikTok's creativity program works, spreading
these ridiculous lies equals a payday. It sounds really silly
(01:27):
and maybe even kind of harmless, but disinformation researcher and
friend of the show, Abby Richards, who has been tracking
the spread of this kind of content for Media Matters, says,
if garbage like this is incentivized, it makes our entire
Internet ecosystem worse. I usually start by asking my guests
to like introduce themselves, give us their title, all of that,
(01:48):
but I feel like by now people know you. You are
a certified friend of the show, Abby Richards, Welcome back.
We're so happy you are here.
Speaker 1 (01:56):
Oh my god, Bridget, you know I would do absolutely
anything for you. Do you need a kidney? You
can have one.
Speaker 2 (02:02):
I feel like the last time I saw you, I was
pretty drunk in...
Speaker 1 (02:05):
Berlin and we had a great time.
Speaker 2 (02:11):
So the reason I wanted to talk to you today
is because you're sort of my, I don't know, resident
TikTok expert. I feel like nobody knows the platform better
than you. And when I say knowing the platform, I
mean the good, the way that it can connect people,
educate people, bring people together, give people access to community
they didn't know they had, and also the bad, because
as much as I know you like TikTok, I don't
(02:34):
know that I know many people that are as honest
about the platform and as critical of the platform as you.
Speaker 1 (02:40):
Yeah, you know, I really approach TikTok from two angles,
which are the two hats that I always wear. My
first hat is as a researcher, like studying misinformation and
extremism on the platform. But then my other hat is
a content creator and like a user of TikTok, someone
who has found great joy there at times. And
I like situations where there's complexity, you know, like we
can understand that things can be simultaneously amazing and also terrible,
and we can hold space for both of those things
at the same time.
Speaker 2 (03:18):
I'm glad that you were doing that, because I feel
like so often when we're talking about technology, it's either
this technology is harming everybody and it should be... like,
I think there's a temptation to talk about
it in a binary, either like all good or all bad.
And I'm happy that people like you can really bring
the nuance of like, well, here's all the good things.
(03:39):
I want to be honest about all the good things
it's brought to me, but let's also be honest about
the places that are not so good and the things
that need to be fixed on the platform.
Speaker 1 (03:46):
Yeah, because also, how are we going to move forward
and create better digital spaces if we can't pinpoint what's working,
what's good, what do we like, and also what's not
working, what's harmful? Like, I'm all for creating healthy digital
spaces where people can thrive and like evolve and learn
and grow and laugh and build community and love. Like, yeah, amazing,
(04:10):
but I want to avoid creating spaces that are
exclusively prioritizing profit, right, and in doing that creating some
like really toxic spaces.
Speaker 2 (04:22):
Well, to that end, let's talk about some of the
reporting that you did around something happening on TikTok. So
you initially pointed this out with Media Matters back in February.
We're talking months later. But the gist is that folks had
been exploiting TikTok's Creativity Program by pumping out viral conspiracy
theory content using AI generated images and voices for profit.
(04:44):
So I want to start with, what exactly is TikTok's
Creativity Program, for folks who aren't familiar with that?
Speaker 1 (04:49):
Yeah. So TikTok's Creativity Program, it was
technically called the TikTok Creativity Program Beta because it
was in beta, and it was a program designed to
compensate creators for the content that they produce, which is amazing,
that's great. We do want to be paying people for
(05:09):
their labor on those platforms. And the criteria for eligibility
for the Creativity Program Beta was that you had an
account based in countries where it was available, you had
to be at least eighteen years old, you had to
have at least ten thousand followers, you had to have
at least one hundred thousand video views in the last
(05:30):
thirty days, and the videos that could be monetized
had to be at least sixty seconds long.
So that was replaced. In March, they ended the Creativity
Program Beta and officially launched the Creator Rewards Program, which
is very very similar in structure, essentially the same requirements, but
(05:51):
is no longer in beta.
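For reference, the eligibility rules Abby just listed can be summarized as a simple check. The sketch below is a minimal, hypothetical illustration under the criteria described in the episode; the function names, field names, and data structure are assumptions for illustration only, not TikTok's actual API or enforcement logic.

```python
# A minimal sketch of the eligibility criteria described in the episode.
# All names and structures here are hypothetical illustrations;
# TikTok's real enforcement logic and API are not public.
from dataclasses import dataclass

@dataclass
class CreatorAccount:
    country_eligible: bool   # account based in a country where the program is offered
    age: int                 # account holder's age in years
    followers: int           # current follower count
    views_last_30_days: int  # total video views over the last thirty days

def is_program_eligible(account: CreatorAccount) -> bool:
    """Check the account-level criteria as described in the episode."""
    return (
        account.country_eligible
        and account.age >= 18
        and account.followers >= 10_000
        and account.views_last_30_days >= 100_000
    )

def is_video_monetizable(duration_seconds: float) -> bool:
    """Per the episode, only videos of at least sixty seconds could be monetized."""
    return duration_seconds >= 60

# Example: an account that clears every threshold, posting a 61-second video.
acct = CreatorAccount(country_eligible=True, age=25,
                      followers=12_000, views_last_30_days=150_000)
print(is_program_eligible(acct) and is_video_monetizable(61))  # True
```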
Speaker 2 (05:54):
So because of this program, these TikToks can rack up
millions and millions and millions of views, which makes whoever
posted them probably a pretty good bit of money. What
kind of conspiracy theories are you seeing in these TikToks?
Speaker 1 (06:05):
Oh, we had the seemingly AI Joe Rogan predicting Texas
will secede, starting a civil war, which had eight hundred
thousand views. The Simpsons predicted that the Baltimore Bridge would collapse.
That has two point seven million views. It is also
a minute and one second. Seemingly AI Joe Rogan saying,
about three weeks ago, a man named Blake Dawson embarked
(06:28):
on a mission to uncover the hidden depths of the
Denver International Airport. He discovered a vast laboratory underneath, et cetera,
et cetera. Now he's missing. That had just under four
hundred thousand views. There was a seemingly AI Joe
Rogan and seemingly AI Matt Rife talking about SeaWorld hiding
(06:48):
the world's last megalodon. Oh my God, that had just
under nine hundred thousand views. And there was a seemingly
AI Elon Musk talking about flat Earth, and that had
one point eight million views, and it was one minute and
one second long.
Speaker 2 (07:03):
Something that strikes me about this, like, there's that undercurrent
of like the kind of guy who likes Elon Musk,
Joe Rogan, and Matt Rife. It's like, if
that guy is you, you should be offended, because they're like, oh,
stupid dumbasses who will believe absolutely anything, that is who
we're targeting. That is you. Like, they're almost insulting you
(07:24):
to your face in a kind of way.
Speaker 1 (07:26):
Again, I think you have a different read on the situation than
Speaker 2 (07:30):
they do. Like, are you a gullible sucker? So with
the current iteration of this program, are you still seeing
the kind of conspiracy theory TikToks that you saw
on the platform when the program was in beta?
Speaker 1 (07:46):
Yeah. So let's walk back and I'll explain like what
was even showing up on the platform at the time.
What we were seeing was this entire niche of AI
generated conspiracy theory content on TikTok. And it used
seemingly AI generated images, sometimes mixed with like non AI
generated images, and AI generated text to speech to like
(08:10):
pump out conspiracy theory content at really high volumes. And
it performed really well in TikTok's algorithm, right? We've been
seeing that for years. We know that conspiracy theories
often perform well in these algorithms that are engagement driven.
They're driven by watch time. And so these videos were
pretty formulaic a lot of the time. So they started
(08:30):
with some sort of unhinged statement at the top to
hook you in. So they'd be like, the US government
discovered a vampire and they're keeping it a secret, or
scientists just discovered the last megalodon and they're keeping it
a secret. And they would use
(08:54):
AI generated images, this very
fast paced editing style, captions, the text to speech, and
like spooky background music that really like hooks you in
and like stimulates all of your senses at once.
And then they'd create this background story, essentially just to
take up the remaining sixty seconds of the video, presumably
(09:18):
so that they could qualify for TikTok's Creativity Program at
the time. And so yeah, like I said, there
were very unhinged statements at the top. Oftentimes, and I
really liked this bit, they would use like a fictional
researcher slash explorer character who had just discovered something, and
(09:39):
it was always a guy. It was never a woman,
because why would it be a woman? Women can't be explorers.
And they'd be like, I don't know, like, Eric Smith
was on an expedition to Antarctica just three weeks ago.
And somehow he'd always end up dead at the end.
So we were like, how do we know this?
Speaker 2 (09:58):
You're really bringing a lot of
like logical questions to this, of like, then how do
we get this? How do we get the story out?
Speaker 1 (10:07):
Yeah, how did we get the story out if he
died on the expedition? I just want to know. I'm
just asking questions. So I spent a lot of time
deep diving into these. And there were also content
creator gurus who were creating videos like teaching people how
to make these too. So I was watching their videos,
(10:29):
learning how to make them.
Speaker 2 (10:30):
Why do you think so many of them
are like about Godzilla or asteroids, almost sort of science
fiction-y?
Speaker 1 (10:37):
Well, I think to some extent, and
again this is purely speculation, they're slightly safer
to play with. Like, mythical creatures are probably less
likely to get you in trouble with TikTok moderation than
like a conspiracy theory about the Rothschilds. I do sometimes
(10:59):
see ones that are, you know, more explicitly harmful. Like,
we saw a lot of like space is fake,
history has been rewritten, history is fake, flat Earth kind
of stuff, and that stuff definitely is harmful. But I
do think that a lot of like the mythical creature
stuff is kind of in that gray area where
(11:24):
it's not some super nefarious, clearly antisemitic conspiracy theory,
right? Like, we're talking more about like mythical creatures. A
lot of the time the stories seem to reflect like
horror more than even like conspiracy theories at times. They
use a lot of like horror tropes, trying to
(11:45):
evoke that like eerie feeling.
Speaker 2 (11:48):
And even though we're talking about wacky horror stuff, usually
there is some undercurrent of like, oh, the government is
lying to you. You're not being told the full truth.
They know something they're not telling us, but I'm telling you.
Speaker 1 (12:01):
And that's so appealing to us, right? Like, we want
to feel like we're in on some special knowledge.
There's nothing we love more than feeling like we know
something that other people don't. So again, it's
very appealing to... sorry, I think my cat just screamed.
Did you hear that? The girls are fighting.
Speaker 2 (12:24):
The girls are fighting? Maybe they're like chiming in.
They know something we don't.
Speaker 1 (12:29):
They do, they're always planning conspiracy theories against me. Or conspiracies.
We love feeling like we're in on some special knowledge.
And that's, you know, one of many many reasons why
conspiracy theories continue to be popular time and time again.
Speaker 3 (12:51):
Let's take a quick break. We'll be right back.
Speaker 2 (13:06):
I should say, a common tactic of these low effort
AI generated conspiracy theory TikToks is making the content look
as if Joe Rogan is discussing it on his podcast.
They'll actually use AI generated content of Joe Rogan to
make it seem as if Rogan actually did discuss the
government covering up that they found an alien or a
(13:26):
vampire or whatever. But Joe Rogan has not actually
entertained these conspiracy theories, and it's just another way to
credential whatever outlandish claim their content is making by visually
associating it with Joe Rogan. I did notice something else
about a couple of these videos, and I don't know
if you have thoughts about what's going on, but
(13:48):
they have been designed to make it look like they
are specifically being discussed on Joe Rogan's podcast. So it's
like an image of Joe Rogan, and it's like him
being like, oh, did you know that we're all gonna
die because scientists found an asteroid? Pull this up. And
then it makes it seem as though they've discussed this
on the podcast, even though they haven't, as far
(14:08):
as I know.
Speaker 1 (14:10):
Yeah, and quite cleverly, the captions are always put
right over his mouth so that you can't easily tell
it's dubbed and how bad that dubbing is. Yeah,
that was a really interesting part of this research that
I wasn't necessarily expecting, but so much of this content
(14:32):
used Joe Rogan. I got a lot of videos
where Joe Rogan did the exact same formula, right?
He'd be like, three weeks ago, a man called Blake
Dawson embarked on a mission to uncover the hidden depths of
the Denver International Airport, and he discovered a vast laboratory underneath.
That's like yada yada yada, literally verbatim. I wrote that down. Yeah,
(14:56):
like, I got another one of Joe Rogan predicting that
Texas was going to secede from the US, starting
a civil war on May fourteenth. It is now May
twenty first, and that hasn't happened.
Speaker 2 (15:09):
The date has come and gone. As far as I know,
Texas is still part of the United States, last
Speaker 1 (15:15):
Time I checked. And one thing I also do I
want to stress too, because I didn't save this up
at the top, but like does provide some context, is
that these videos are going mega viral, like very very viral. Like.
We were seeing these types of videos with like, you know,
twenty thirty million views at times, you know, ten million
(15:39):
views on some just millions and millions of views. We
found one account that, yeah, there was one account that
actually only say this. So we identified a network of
seemingly affiliated accounts that we're all posting these exact same
type of AI generated conspiracy theory content, a lot of
megaladon content on this account for whatever, on these accounts,
(16:02):
and they were posting these videos in English, Spanish and French,
using like a translated version of the same user name
and the same profile picture, which also seemed AI generated.
It's hard to say for sure, but and so they
had received at the time that we looked at it,
over three hundred and forty two million views on this
(16:25):
account just posting this sort of content, and then the
Spanish language the Spanish language account had received three hundred
and twenty nine million views.
Speaker 2 (16:35):
Those numbers are shocking, but also, from what I know about
Spanish language content, I feel like people who are interested
in flooding digital spaces with junk are definitely going
to translate that junk into Spanish, because they know we
don't have as many, you know, really good Spanish language
content platforms as we should. And so it's like there's
(16:57):
a gap there, and they're like, oh, we can
fill that gap with nonsense and make a quick buck.
Speaker 1 (17:03):
Yeah, yeah, absolutely.
Speaker 2 (17:05):
Why do you think these videos are going viral like they are?
Speaker 1 (17:08):
Where to start? First of all, we love a good story,
and we love a good conspiracy story. Like, these are
candy for our brains. We love mythical creatures. We love
to think that, you know, the government's hiding some sort
of knowledge from us. Like, there's a reason why Men
in Black is still a classic movie. We fundamentally
love these stories. And then you combine it with
(17:34):
this kind of attention hijacking approach to creating content, where
we know that that sort of content plays really well
on these algorithms where you're just trying to get watch time,
and the more people watch it, the more that that
video will be recommended to other people, and it'll then
go viral. And then on top of that, you're able
(17:55):
to just create such a high volume of this content.
It's really easy to pump this content out, because you
don't need to go create your own art. You can
just get like an AI to do it. You don't
need to read your own text. Like, do you know
how long it takes me, when I film one of
my videos, to get all the words right and
(18:16):
get my delivery right? That's time that you don't
need to spend anymore. You can just plug it into
text to speech software, so you
can really crank this stuff out quite quickly. And I
think that then you can just put a lot out
and see what hits.
Speaker 2 (18:40):
When it comes to AI, we talk a lot about
the obvious dangerous examples of the ways that AI can
disrupt our democracy, but there's also a real danger to
the seemingly more benign AI generated content too, because it
just makes our Internet landscape worse, more stupid, and less trustworthy.
And when it's financially incentivized, not to mention easy and
(19:02):
fast to crank out at scale, it's not really that
difficult to see how it gets us away from the
kind of Internet we actually want, which is not one
full of low effort AI generated lies, scams, and garbage. Yeah,
part of me wonders if that's really the role that
we're seeing AI being utilized for here. It's just, you know,
(19:23):
the speed at which you can crank these out, the
ease at which you can crank these out, and you
can just really flood the space and see what
happens to hit that virality.
Speaker 1 (19:34):
I'm definitely nervous about it. I'm very concerned about
the role of AI here, in just its ability to
create volume, and I'm concerned about the combination of AI
plus algorithms that prioritize watch time over the actual originality
(19:58):
and like high quality of the content. I don't
want to see us in a position where like we're
just flooded by low quality, inaccurate misinformation that's just like
brain candy. That makes me nervous, because
a lot of the times, like, I get asked why
this sort of content is concerning, like, why is it
(20:21):
bad if, you know, people believe the US has vampires,
and like it's clearly silly. But I am worried, more so,
about what it means at scale to be in
digital spaces that are fundamentally prioritizing, like, not just prioritizing, incentivizing,
financially incentivizing content that is low quality and easy to
(20:42):
pump out.
Speaker 2 (20:43):
In what ways do you feel that this TikTok
program is incentivizing junk like this?
Speaker 1 (20:50):
Well, so if you're paying creators, if you're financially rewarding
people for content based on how viral it is, the
end result, it makes sense, is that you're going
to get a lot of people just trying to pump
out as much content that will go as viral as possible,
rather than like actually funding the creators who do their
(21:12):
own original, high quality work. Because I
do want to see them funded, of course I do.
Like, I want the platforms to pay the creators who
are the backbone of the platforms. TikTok
would not be good if it weren't for all the
creators out there making that content. I would like to
see them compensated, but I want to
(21:33):
make sure that we have systems in place
so that they are the ones being compensated and not
people that are just flooding the feeds with sabertooth tiger
conspiracy theories.
Speaker 2 (21:45):
The reason why I liked TikTok to begin with is
that I felt like they were my people, that
it was people who wanted to go deep
on an issue, like, everything you ever wanted
to know about this specific thing. If that is your
jam, rock with me and we'll get into
the nitty gritty minutiae. But getting into the nitty
gritty minutiae of something takes research, it takes time. You
(22:05):
can't just plug it into an AI system and have
it, poof, create that kind of content. It really takes,
as you know, a lot of time, and
I worry that rather than
reward that person who's putting so much time and effort
into making the thoughtful, deep dive content that we all love,
it's so much easier to just throw money at people
(22:26):
who have completely gamified this via AI, just flooding the
space with junk. And I don't know, like you,
I worry about what that says about
our larger information ecosystem.
Speaker 1 (22:37):
Yeah, it is concerning. I think it's also presenting like
a good challenge that we could potentially, you know, figure
out how to climb together. Like, I'm trying to be
inspirational here. I think that there is an opportunity here
to figure out what it looks like to create a
better system. Like, we know what the end result is
(22:59):
of just simply prioritizing engagement over all else,
and so, like, what are we going to do to
try to also incentivize accurate, high quality information?
Speaker 2 (23:13):
One question I do have is, is this against TikTok's
terms of service or rules in some way? Like, my
understanding is that TikTok has rules that prohibit misleading
or inaccurate content, but it seems like they're not
just allowing this kind of content on their platform, but
allowing creators to be profiting off of it.
Speaker 1 (23:35):
Yeah. So I'm a little bit confused on this one.
So here's what I know. I know that we
published this in February, and a TikTok spokesperson told Axios
in late March that conspiracy theories are not eligible to
earn money or to be recommended in For You feeds
(23:57):
on TikTok. I have gone since then and looked, and
found plenty of the exact same formula, right, conspiracy
theories that are just over a minute long, using that
AI generated text to speech and the spooky music and
the editing and the images, like, all of that.
(24:19):
And I've seen videos that were posted since then. That
seems a little inconsistent. The way they've talked about their
newest update to the community guidelines around conspiracy theories doesn't
seem to reflect that. So they said that conspiracy theories
that are unfounded and claim that certain events or situations
are carried out by covert or powerful groups, such as, quote,
(24:42):
the government or a secret society, are not eligible for
For You page recommendation. I don't know where a lot
of these fall on that. It's just
unclear exactly what TikTok is enforcing and what
they're allowing to be monetized.
Speaker 2 (25:03):
And something we've seen time and time again is that
the people who create this, especially if they're doing it
for profit, are so good at creating content that really
skirts that gray area, staying just on the other
side of that line to keep them on the platform
and to keep the money coming in.
Speaker 1 (25:19):
Yeah. So the conspiracy theories that are unfounded
and claim that certain events are being carried out
by secret powerful groups, that's FYP ineligible. What's not allowed
on TikTok are conspiracy theories that name and attack an individual.
Speaker 2 (25:36):
Oh, so it seems like it wouldn't
be that hard to create content that would be eligible,
but still does the same kind of thing in spirit,
just not in specificity.
Speaker 1 (25:46):
Yeah. Like, you know, because, like, the video that I
talked about, with a seemingly AI Joe Rogan
predicting that Texas will secede from the US, starting a
second civil war on May fourteenth, which would then
lead to an all out nuclear war: that was
uploaded on April fifth, and it has over eight hundred
thousand views. It's a concerning claim to be making,
(26:10):
and to be using such a well known figure
that a lot of people do get their information from
to claim that you have like special knowledge from a
time traveler, claiming that you're aware of imminent civil war. Like,
where does that fall within these guidelines?
(26:32):
Where does that fall within the guidelines that TikTok has
set up?
Speaker 2 (26:35):
And I gotta say, not for nothing, if I'm Joe Rogan,
I would be so insulted that this is the kind
of content that people think they can plausibly associate with
me. Unrelated to this whole conversation, I would really be
taking a step back if I were him and be like, wow,
people think that I would be giving them content about
(26:56):
cryptids and stuff, and like that's what people would associate
with me. And people will be like, oh yeah, I
buy that. I would be so insulted and I would
be taking a step back to deeply reflect on the
kind of content I'm known for.
Speaker 1 (27:06):
I think you and Joe Rogan might be different in
certain fundamental ways, Bridget.
Speaker 2 (27:14):
I mean, like, the last
time that we talked about Joe Rogan, one of the
points I made was, like, I'm not saying
I'm such a great podcaster, but, like, come on.
Speaker 1 (27:32):
But okay, the issue, though, of like taking a celebrity figure,
especially someone that is like trusted by a bunch of
people, straight men across America, and, you know, altering
them with this AI voice and putting this information in
their mouths. Like, I want to see much
(27:55):
more of a conversation happening from TikTok, but also from
other platforms, about, like, where does this stand within policy?
Because TikTok's community guidelines only prohibit this sort of
misleading AI generated content of a public figure who is
being degraded or harassed, engaging in criminal or antisocial behavior,
(28:18):
taking a position on political issues, commercial products, or a
matter of public importance, or being politically endorsed or condemned.
So the policy doesn't really seem to account for AI
generated content of Joe Rogan claiming that a time traveler
predicted civil war.
Speaker 2 (28:33):
Yeah, and I think it really exposes some gaps in
policies that these people are exploiting to make a quick buck.
Speaker 1 (28:42):
Yeah. And I mean the more that we're having these
conversations around AI, right, we're learning more ways that it's
going to be used that maybe we hadn't thought of before.
And like a lot of the conversations that I saw,
especially early on around AI and like deep fakes and
deep fakes of celebrities or politicians, was like, oh my god,
they're going to deep fake world leaders and say that
(29:04):
like they're launching nuclear war and it's going to start
nuclear war.
Speaker 2 (29:09):
Right.
Speaker 1 (29:09):
It was like this really high stakes kind of discussion
about how deep fakes can affect us. It's like, okay,
but what about really low effort deep fakes? And I'm
using "deep fakes" loosely, because it's using just regular video of
him with an AI generated voice. What about that? And
like how that affects regular people and not like high
(29:31):
up politicians? Like, how about that? I just want
to see this evolution in our conversations around AI.
Speaker 2 (29:39):
Yeah, that's something that I really see a lot of,
where so often, and I get it, like, I don't
think it's wrong, but so often we're talking about
high level deep fakes, and we're not talking about things
like low effort cheap fakes, which I think are arguably
easier to make. And we're talking about, like, oh, how
will it disrupt this high level system, which is
(30:01):
important to talk about, but we also should be talking
about how it might disrupt the thing that
most people are coming into contact with every day,
which is our digital ecosystem, our information ecosystem.
And I don't know, I think the conversation needs to
cover both: how it is impacting people at
this high level, and also the thing in their hands
(30:23):
where they are most likely to be coming into contact
with these kinds of inaccurate content. Does that
make sense?
Speaker 1 (30:30):
Yeah, of course. Do you have any idea how many
people have fallen victim to those like cheap fake Taylor
Swift is selling cookware ads?
Speaker 2 (30:39):
Yes, it's everywhere, and I don't think
it should be just on the individual to have to
figure out, like, well, is Taylor Swift actually endorsing this
cookware ad or whatever. Like, it's clearly a
bigger institutional problem that is not going to be solved
(31:00):
by just individuals being more savvy.
Speaker 1 (31:04):
Yeah, and that's an issue that we run into a
lot of the time with conversations around media literacy, where,
like, media literacy, digital literacy, they're amazing, they're great. I
highly encourage them in, you know, all aspects of education,
for all ages. But they're a lot of the
time only going to reach some of the most privileged
in society, and you're going to need to be constantly
(31:26):
evolving as, like, digital spaces evolve too, so you're always
in this game of catch-up. And it's also fundamentally just
putting the burden on users, who a lot of the
time straight up don't have the time to go learn
how to spot, you know, GAN generated images, like,
they don't know how to do that, and that's fully okay.
(31:48):
They have lives. Like, that burden should not be on consumers.
That burden should be on the platforms that are hosting
that content and a lot of the time profiting off it.
Speaker 3 (31:56):
More after a quick break. Let's get right back
into it.
Speaker 2 (32:16):
Now, like anything else, this is just a hustle. It's
a grift. People are doing it because it makes them quick,
easy money, and they can also make money from purporting
to teach other people how to do the grift as
well, the grift within the grift. Do you see coordination
with these accounts? Like, I know that in the
piece you figured out that some of these accounts were
(32:38):
sort of associated with each other. Do you think that
these folks are like back channeling and trying to tell
others like, well, here's how you do it, here's how
you can make these kinds of videos, here's what works,
here's the communities that you should be targeting that kind
of thing.
Speaker 1 (32:53):
Oh yeah, there's a whole cottage industry around this sort
of content farming. What I was
seeing was you would have these accounts going viral on
TikTok with a decent number of followers, and they would
have a link, oftentimes to a Discord server, in their
TikTok bio. And so I started joining these Discord servers
(33:14):
and watching them talk about how they create this content.
And typically the way I saw it be run was
that there was like one dude running the Discord server. Again,
haven't seen a woman do it yet. I hope a
girl boss gets in on it, but I haven't seen
it yet. And so one dude's running the Discord server,
he'll maybe make like YouTube videos about how to make
(33:37):
this content, and like weekly videos on like the
next hot niche to get in. Maybe it's conspiracy theories,
maybe it's horror, maybe it's history, maybe it's top three videos,
maybe it's like men's motivational workout content, or just like
men with deep voices saying motivational quotes, and he's teaching
(34:00):
people how to do it. And they often have
like a premium level of some sort. You could pay
them like seventy bucks a month or something to go
and be their protege, and like they just give you
one on one feedback a lot of the time, or
like special access to courses and stuff. And then there's
a back and forth in these communities, where you have
(34:22):
these Discord servers of people sharing tips on, like, how
to make content, or what sort of content's performing well,
what sort of rates, like, how much money are
you guys getting per million views for each type of content,
how can I switch my content so that it targets
a wealthier demographic. That sort of discourse I
(34:47):
saw a lot of the time in these communities.
Speaker 2 (34:49):
It's so interesting to me how there's a component of
it that's like, oh, if you pay me, I'll teach you.
Like, there's always a little bit of an
overlap with, like, a life coaching scam or, like, an
MLM. Like, there's always a little something of that in there,
do you know what I mean? I mean, the grift
just keeps on giving, the grift within the grift.
Speaker 1 (35:14):
The grift on grift on grift on grift. There's like
this affiliate marketing level to it, where it was like,
use this AI service and then if you get people
to use your affiliate code, you'll get X money. And
so there are a lot of layers to it. I
think that when we're dealing with communities that are susceptible
to messaging about get rich quick, you're going to inevitably
(35:36):
end up with a lot of different sorts of grifts
that could emerge.
Speaker 2 (35:40):
Throw some supplements in there, like, ugh.
The grift is deep.
Speaker 1 (35:45):
As you said, the grift runs deep.
Speaker 2 (35:48):
So we were talking a bit about how
this works in a high level way. But I guess
I'm wondering, you know, twenty twenty four is a pretty
big election year, not just in the United States but globally.
I think more countries than ever before in history have
elections this year. I'm wondering, what do you see as
being at stake for all of this as we head into
(36:10):
such an important global election year?
Speaker 1 (36:15):
Oof. Immediately my blood pressure spikes. See, a lot
of these videos weren't inherently what you would say, like,
"political," using air quotes, because, you know, everything's political, but, like,
you know, the government's hiding a vampire isn't really taking a
partisan side necessarily. I did see some that were overtly political,
(36:39):
like one that was like, Biden is, you know, controlling
the water or something, and stuff like that. I think
what I'm worried about is the system at large. Like,
we're coming into this election year, and now we
have a whole group of people who are really skilled
at cranking out AI generated content that is inaccurate, and
(37:05):
we know what performs well. We know that you can
like flood feeds with this stuff, and so I'm very
worried about how easy it is to manufacture this
content and how easy it is to make it really engaging.
Speaker 2 (37:24):
So for folks who are listening and are thinking, like, well,
I use TikTok, but what the heck am I supposed
to do about this? Is there a role for the
regular TikTok user that they should be playing in
combating this, not falling for it, just creating
the kind of Internet ecosystem that we
(37:44):
actually would want to live in?
Speaker 1 (37:46):
I mean, I think that the best piece of advice
I can give, specifically for this sort of content, is
like a strict non engagement policy. If you see it,
and you see Joe Rogan is there and it's badly
dubbed, and maybe he's having a conversation with another, like,
badly dubbed Jordan Peterson or whatever, that's a good time
(38:10):
to, like, report it as inauthentic content, or, you know,
misinformation if it is misinformation, and, like, scroll past, or,
like, say you're not interested in it. I think that
one of the best things that we can do is
just show that we aren't engaging. And go about your
life, and maybe go outside, maybe touch some grass,
(38:30):
maybe smell a flower, give someone a hug.
Speaker 2 (38:33):
So that is great advice for individuals. But what about
institutions like TikTok? Like, one thing I imagine they could
do is provide a little more transparency around this content.
Like, why shouldn't we know whether or not the thing
that we are being shown is making whoever posted it money?
Speaker 1 (38:48):
I should say, too, it's like impossible to know which
of these are monetized. I don't have that insight from
TikTok of, like, which account is monetized versus which isn't. But
they're all still fitting all of the requirements
that we described in that February article that
indicate that they're being used by creators looking to monetize
(39:09):
their content. But I would love from TikTok, potentially, a
way to know. You know, I would like some
more transparency on that front, to know what's being monetized,
how are they enforcing it, right? Like, what does that
actually look like? Because they say that, for the Creator
Rewards Program, all videos, before they're actually monetized, before
(39:31):
they begin that process of being monetized, are approved
by somebody. And how strict is that? Like, what does
that actually look like? How many videos are being rejected,
and, like, on what grounds? I want a lot more
transparency from them around the entire system of monetization that
they are creating.
Speaker 2 (39:50):
And I think we deserve that. Like, when I
am making a podcast, I legally have to say when
I am being paid to say something and
it's an ad, and when not, right? Like, we deserve
to know what the financial situation looks like for
the person who is telling me something. Are they just
telling me because they're making that content and they like it?
Are they telling me that because they've just been monetized
(40:12):
in some way? Like, I don't know, I don't
think that that's a bridge too far, to expect
a little transparency.
Speaker 1 (40:19):
No, I mean, I would argue that we deserve much
more than that. I think that the bar is on
the ground. But I do think that we deserve to
know when there is a very clear financial incentive for
certain types of content. You know, is this person making
flat earth content because they are genuinely a flat earther
and, like, they believe it with their full chest, or
(40:40):
are they making it because they know that it'll get
millions of views and then they'll get money? And, like,
that's a big difference. And I do want to know
where that line is. Like, I just
want to know more, because, I mean, you can
never know people's motivation. There are still lots of grifters
even when financial motives are very clear. But I still
(41:02):
think that that transparency is a very good starting point.
Speaker 2 (41:05):
Absolutely, Abby, thank you so much for being here. It
is always a pleasure.
Speaker 1 (41:11):
I literally love nothing more. Let's hang out.
Speaker 2 (41:16):
I'll see you in Berlin. Hopefully a little more sober
this time. Absolutely not, hopefully more drunk.
Speaker 1 (41:22):
Hopefully if we're doing it right, less sober.
Speaker 2 (41:32):
If you're looking for ways to support the show, check
out our merch store at tangoti dot com slash store.
Got a story about an interesting thing in tech, or
just want to say hi? You can reach us at
hello at tangoti dot com. You can also find transcripts
for today's episode at tangoti dot com. There Are
No Girls on the Internet was created by me, Bridget
Todd. It's a production of iHeartRadio and Unbossed Creative, edited
(41:53):
by Joey pat. Jonathan Strickland is our executive producer. Tari
Harrison is our producer and sound engineer. Michael was our
contributing producer. I'm your host, Bridget Todd. If you want
to help us grow, rate and review us
Speaker 3 (42:04):
on Apple Podcasts.
Speaker 2 (42:06):
For more podcasts from iHeartRadio, check out the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.